\section{Introduction} Let $\alpha>0$. For $1\le p<\infty$, we denote by $L\pla$ the space of all measurable functions $f$ on $\C$ such that \[ \|f\|_{L\pla}^p:=\int_\C \left|f(z) e^{-\alpha|z|^{2\ell}/2}\right|^p \,d\nu(z) < \infty, \] where $d\nu$ denotes the Lebesgue measure on $\C$. For $p=\infty$, $L^{\infty,\ell}_{\alpha}$ denotes the space of all measurable functions $f$ on $\C$ such that \[ \|f\|_{L^{\infty,\ell}_{\alpha}}:= \esssup_{z\in\C} |f(z) e^{-\alpha|z|^{2\ell}/2}| < \infty. \] Note that $L\pla = L^p(\C, e^{-p\alpha |z|^{2\ell}/2}\, d\nu)$, $1\le p < \infty$. So, for $1\le p\le\infty$, $(L\pla, \|\,\cdot\,\|_{L\pla})$ is a Banach space and $L\dla$ is a Hilbert space with the inner product \[ \langle f,\,g\rangle\la:=\int_{\C}f(z)\overline{g(z)}\, e^{-\alpha |z|^{2\ell}}\,d\nu(z) \quad(f,g\in L\dla). \] The generalized Fock spaces are defined to be \[ F\pla := H(\C) \cap L\pla \qquad (1\le p \leq \infty), \] where $H(\C)$ denotes the space of entire functions. It is well known that the space of holomorphic polynomials is dense in $F\pla$ for $p<\infty$. If $p=2$, $F\dla$ is a Hilbert space. We will denote by $P\la$ the orthogonal projection from $L\dla$ to $F\dla$, which is an integral operator whose kernel is $K\la$, the Bergman reproducing kernel for $F\dla$. It is also convenient to consider the little Fock space \[ \mathfrak{f}^{\infty,\ell}_{\alpha}:=\bigl\{\,f\in H(\C)\,:\,\lim_{|z|\to\infty}|f(z)|e^{-\alpha|z|^{2\ell}/2}=0\,\bigr\}, \] which is the closure of the space of all holomorphic polynomials in $F^{\infty,\ell}_{\alpha}$. Recall that for $\ell=1$ one obtains the classical Fock spaces $F^p_{\alpha}$ and $\mathfrak{f}^\infty_{\alpha}$. The main goal of this paper is to characterize the boundedness and the compactness of the small Hankel operators $$ \hbla(f):=\overline{P\la(b\overline{f})} $$ on $F\pla$ for the whole range $1\le p<\infty$. 
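Let us briefly recall why these operators are called Hankel operators. For $f=z^m$ and $g=z^k$ we trivially have
\[
\langle fg,\,b\rangle\la=\langle z^{m+k},\,b\rangle\la,
\]
so the matrix of the bilinear form $(f,g)\mapsto\langle fg,b\rangle\la$ with respect to the monomials depends on $(m,k)$ only through the sum $m+k$; that is, it is an (infinite) Hankel matrix, and, as we will see in Proposition \ref{prop:hankelform}, $\hbla$ is the operator associated with this form.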
For the classical case $\ell=1$ and $p=2$, it is well known that, if $b\in {F}^2_{\alpha}$, the small Hankel operator $ \mathfrak{h}^1_{b,\alpha}(f):=\overline{P^{1}_\alpha(b\overline{f})} $ is bounded (compact) from $F^2_\alpha$ to $\overline{F^2_\alpha}$ if and only if $b\in F^\infty_{\alpha/2}$ ($b\in \mathfrak{f}^\infty_{\alpha/2}$). Moreover, $\mathfrak{h}^1_{b,\alpha}$ is a Hilbert-Schmidt operator if and only if $b\in F^2_{\alpha/2}$ (see \cite{janson-peetre-rochberg} and \cite{Zhu2012}). To the best of our knowledge, there are no known results on small Hankel operators for $\ell>1$. This is not the case for the big Hankel operator $\mathcal{H}_{\overline{b}}(f):= \overline{b}f- P^1_\alpha( \overline{b}f)$. In \cite{bommier-youssfi} (see also \cite{constantin--ortega-cerda}) the authors prove that $\mathcal{H}_{\overline{b}}$ is a bounded operator on $F^{2,\ell}_\alpha$ if and only if $b'(z)(1+|z|)^{1-\ell}\in L^\infty$, that is, if and only if $b$ is a polynomial of degree at most $\ell$. It is also worth mentioning \cite{SeiYouJGA2011}, where the bounded, compact and Schatten class big Hankel operators on Hilbert Fock spaces induced by radial rapidly decreasing weights are described. Observe that $(1+|z|)^{1-\ell}\simeq (\Delta |z|^{2\ell})^{-1/2}$, for $|z|\ge 1$. It is well known that in the general theory of Fock spaces $F^{p}_\phi$ the Laplacian of the subharmonic weight $\phi$ plays an important role (see, for instance, the recent papers \cite{constantin-pelaez} and \cite{oliver-pascuas} and the references therein). A natural question arising from those observations, which will be solved by the main results of this paper, is whether or not the boundedness of $\hbla$ on $F\dla$ is described by conditions on $b$ involving $\Delta |z|^{2\ell}$. In order to introduce a natural space of symbols for the study of the small Hankel operator acting on $F\pla$, notice that if $\hbla$ is bounded on $F\pla$ for some $b\in H(\C)$, then $b=\overline {\hbla(1)}\in F\pla$. 
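For the reader's convenience, let us verify the equivalence $(1+|z|)^{1-\ell}\simeq (\Delta |z|^{2\ell})^{-1/2}$ stated above. Writing $\Delta=4\,\partial_z\partial_{\overline z}$, a direct computation gives
\[
\Delta |z|^{2\ell}
=4\,\partial_z\partial_{\overline z}\,(z\overline z)^{\ell}
=4\ell^2\,|z|^{2\ell-2},
\]
and hence $(\Delta|z|^{2\ell})^{-1/2}=(2\ell)^{-1}|z|^{1-\ell}\simeq(1+|z|)^{1-\ell}$, for $|z|\ge1$.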
For the classical case, we have $F^p_\alpha\subset F^\infty_\alpha$ for $1\le p< \infty$. These considerations suggest that the appropriate space of symbols in the classical setting is $F^\infty_\alpha$. When $\ell>1$ the inclusion $F\pla\subset F^{\infty,\ell}_\alpha$ is no longer true (see for instance \cite[Corollary~2]{ConsPelJGA2016}). Instead, for any function $b$ in $F\pla$ the pointwise estimate $$ |b(z)|\lesssim \|b\|_{F\pla} (1+|z|)^{(2\ell-2)/p}e^{\alpha|z|^{2\ell}/2} $$ holds (see \cite[Lemma 19(a)]{marco-massaneda-ortega}). Hence, in the general setting we consider the space of holomorphic symbols given by \begin{equation*} H^{\infty,\ell}_\alpha:=\left\{b\in H(\C):\,|b(z)|=O\bigl((1+|z|)^{2\ell-2}e^{\alpha|z|^{2\ell}/2}\bigr)\right\}. \end{equation*} Note that, since $(2\ell-2)/p\le 2\ell-2$, the above pointwise estimate shows that $F\pla\subset H^{\infty,\ell}_\alpha$, for every $1\le p<\infty$. Assuming $b\in H^{\infty,\ell}_\alpha$, the operator $\hbla$ is well defined on the space $E$ of entire functions of order $\ell$ and finite type, that is, \begin{equation}\label{eqn:E} E:=\{f\in H(\C):\,|f(z)|=O(e^{\beta|z|^\ell}),\quad\text{for some }\,\, \beta>0\}. \end{equation} Since $E$ contains the space of holomorphic polynomials, $E$ is dense in $\mathfrak{f}^{\infty,\ell}_\alpha$ and in $F\pla$, for any $p<\infty$. Our main results are the following. \begin{thm}\label{thm:main1} Let $\alpha>0$, $\ell\in \N$, $b\in H^{\infty,\ell}_\alpha$ and $1\le p<\infty$. Then $\hbla$ is a bounded operator from $F\pla$ to $\overline{F\pla}$ if and only if $b\in F^{\infty,\ell}_{\alpha/2}$. In that case, $ \|\hbla\|_{F\pla}\simeq \|b\|_{F^{\infty,\ell}_{\alpha/2}}. $ Analogously, $\hbla$ is a bounded operator from $\mathfrak{f}^{\infty,\ell}_\alpha$ to $\overline{\mathfrak{f}^{\infty,\ell}_\alpha}$ if and only if $b\in F^{\infty,\ell}_{\alpha/2}$, and in that case $ \|\hbla\|_{\mathfrak{f}^{\infty,\ell}_\alpha}\simeq\|b\|_{F^{\infty,\ell}_{\alpha/2}}.$ \end{thm} Here and throughout the paper $\|\hbla\|_{F\pla}$ denotes the norm of $\hbla$ as an operator from $F\pla$ to $\overline{F\pla}$. 
Since the boundedness of small Hankel operators is equivalent to the boundedness of the corresponding Hankel forms, as an application of Theorem \ref{thm:main1} we obtain: \begin{thm}\label{thm:main3} Let $1< p<\infty$, $\ell\in\N$, $\alpha>0$ and $b\in H^{\infty,\ell}_\alpha$. \begin{enumerate} \item \label{item:main31} Let $\Lambda^\ell_{b,\alpha}$ be the Hankel bilinear form defined by $$ \Lambda^\ell_{b,\alpha}(f,g):=\langle fg,b\rangle\la \qquad(f,g\in E). $$ Then, $\Lambda^\ell_{b,\alpha}$ extends to a bounded bilinear form either on $F^{p,\ell}_\alpha\times F^{p',\ell}_\alpha$ or on $F^{1,\ell}_\alpha\times \mathfrak{f}^{\infty,\ell}_\alpha$ if and only if $b\in F^{\infty,\ell}_{\alpha/2}$. \item \label{item:main32} The space $F^{\infty,\ell}_{\alpha/2}$ coincides with $P\la(L^\infty)$ and also with the dual of $F^{1,\ell}_{2\alpha}$ with respect to the pairing $\langle \cdot,\cdot\rangle\la$. \item \label{item:main33} $F^{p,\ell}_\alpha\odot F^{p',\ell}_\alpha =F^{1,\ell}_\alpha\odot \mathfrak{f}^{\infty,\ell}_\alpha=F^{1,\ell}_\alpha\odot F^{\infty,\ell}_\alpha= F^{1,\ell}_{2\alpha}$. \end{enumerate} \end{thm} Here and throughout the paper, $p'$ denotes the conjugate exponent of $p$. We recall that the weak product $F^{p,\ell}_\alpha\odot F^{p',\ell}_\alpha$ consists of all entire functions $h=\sum_{j=1}^\infty f_j g_j$, with $f_j\in F^{p,\ell}_\alpha$ and $g_j\in F^{p',\ell}_\alpha$, such that $$ \|h\|_{F^{p,\ell}_\alpha\odot F^{p',\ell}_\alpha} :=\inf\left\{\sum_{j=1}^\infty \|f_j\|_{F^{p,\ell}_\alpha}\|g_j\|_{F^{p',\ell}_\alpha}: h=\sum_{j=1}^\infty f_jg_j\right\}<\infty. $$ \begin{thm}\label{thm:main2} Let $\alpha>0$, $\ell\in \N$, $b\in H^{\infty,\ell}_\alpha$ and $1\le p<\infty$. Then $\hbla$ is compact from $F\pla$ to $\overline{F\pla}$ if and only if $b\in \mathfrak{f}^{\infty,\ell}_{\alpha/2}$. 
Similarly, $\hbla$ is compact from $\mathfrak{f}^{\infty,\ell}_{\alpha}$ to $\overline{\mathfrak{f}^{\infty,\ell}_{\alpha}}$ if and only if $b\in \mathfrak{f}^{\infty,\ell}_{\alpha/2}$. \end{thm} As far as we know, the techniques that have been used to prove characterizations of the boundedness and the compactness of small Hankel operators on the classical Fock spaces $F^p_\alpha=F^{p,1}_\alpha$ (see \cite{janson-peetre-rochberg,Zhu2012}) rely strongly on the fact that the Bergman reproducing kernel of $F^2_\alpha$ is given by the neat expression $ K^1_\alpha(z,w)=\frac{\alpha}{\pi}e^{\alpha{z\overline{w}}}, $ which makes it possible to factorize the kernel as \begin{equation}\label{eqn:factorization1} K^1_\alpha(z,w) =\frac{\pi}{\alpha}K^1_\alpha(z/2,w)K^1_\alpha(z/2,w). \end{equation} Thus, the proof is quite simple, since the integral operator with kernel $K^1_\alpha(z/2,\cdot)$ maps a function $f$ in the Fock space to the function $f(\cdot/2)$. However, the general situation on $F^{2,\ell}_\alpha$, $\ell>1$, is much more involved because of the lack of such a simple expression for $K\la$. In this general case we use the factorization \begin{equation}\label{eqn:factorization2} K^\ell_\alpha(w,z) = G_{\alpha,0}(w,z)G_{\alpha,1}(w,z), \end{equation} where \[ G_{\alpha,0}(w,z):= e^{\frac{\alpha}2(w\overline z)^\ell}\quad\text{and}\quad G_{\alpha,1}(w,z):= e^{-\frac{\alpha}2(w\overline z)^\ell}K^\ell_\alpha(w,z), \] which for $\ell=1$ is just \eqref{eqn:factorization1} (indeed, for $\ell=1$ we have $G_{\alpha,0}(w,z)=e^{\frac{\alpha}2 w\overline z}$ and $G_{\alpha,1}(w,z)=\frac{\alpha}{\pi}\,e^{\frac{\alpha}2 w\overline z}$). Note that \eqref{eqn:factorization2}, which is given in terms of analytic functions, is possible because $\ell$ is a positive integer. For other values of $\ell$ it is not clear how to choose a suitable decomposition. Finally, we characterize the membership of $\hbla$ in the class $\mathcal{S}_2(F^{2,\ell}_{\alpha})$ of Hilbert-Schmidt operators from $F^{2,\ell}_{\alpha}$ to $\overline{F^{2,\ell}_{\alpha}}$. 
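Let us make explicit the elementary identity behind this characterization. If $\{e_m\}_{m\ge0}$ denotes the orthonormal basis of normalized monomials of $F^{2,\ell}_{\alpha}$ (see Proposition \ref{prop:kernel} below), then, at least formally,
\[
\|\hbla\|^2_{\mathcal{S}_2(F^{2,\ell}_{\alpha})}
=\sum_{m,k\ge0}|\langle e_m e_k,\,b\rangle\la|^2
=\sum_{m,k\ge0}\frac{|\langle z^{m+k},\,b\rangle\la|^2}
{\|z^m\|^2_{F\dla}\,\|z^k\|^2_{F\dla}},
\]
so the membership of $\hbla$ in $\mathcal{S}_2(F^{2,\ell}_{\alpha})$ is governed by the moments $\langle z^{m+k},b\rangle\la$.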
For $\ell=1$, $\hbla\in \mathcal{S}_2(F^{2}_{\alpha})$ if and only if $b\in F^{2}_{\alpha/2}$ (see \cite{janson-peetre-rochberg} or \cite{Zhu2012}). For $\ell>1$ the characterization is given in terms of the space $F^{2,\ell}_{\alpha,\Delta}$ of all functions $f\in H(\C)$ such that $$ \|f\|^2_{F^{2,\ell}_{\alpha,\Delta}}:=\int_{\C}|f(z) e^{-\frac{\alpha}{2}|z|^{2\ell}}|^2\, (1+|z|)^{2(\ell-1)}\,d\nu(z)<\infty. $$ \begin{thm}\label{thm:main4} Let $\alpha>0$, $\ell\in \N$ and $b\in H^{\infty,\ell}_\alpha$. Then, $\hbla \in \mathcal{S}_2(F^{2,\ell}_{\alpha})$ if and only if $b\in F^{2,\ell}_{\alpha/2,\Delta}$. Moreover, $$\|\mathfrak{h}^{\ell}_{b,\alpha}\|_{\mathcal{S}_2(F^{2,\ell}_{\alpha})}\simeq \|b\|_{F^{2,\ell}_{\alpha/2,\Delta}}.$$ \end{thm} Observe that, while the descriptions of the boundedness and compactness of the small Hankel operators on $F\pla$ obtained in Theorems \ref{thm:main1} and \ref{thm:main2} do not depend on the Laplacian of $|z|^{2\ell}$, this is not the case for Hilbert-Schmidt operators. Indeed, $(1+|z|)^{2(\ell-1)}\simeq\Delta|z|^{2\ell}$, for $|z|\ge1$, while for $\ell=1$ the weight $(1+|z|)^{2(\ell-1)}$ is constant, so that $F^{2,1}_{\alpha/2,\Delta}=F^{2}_{\alpha/2}$ and Theorem \ref{thm:main4} recovers the classical result. Taking into account our results, it seems natural to conjecture analogous ones for weighted Fock spaces induced by weights $e^{-\phi}$, where $\phi$ is a subharmonic function such that $\Delta\phi$ is a doubling measure. The paper is organized as follows. In Section \ref{sect:prelim} we state some useful properties of the Bergman projection, as well as the main properties of the spaces $F\pla$ and of the small Hankel operator. In Section \ref{sect:ProofThm1} we prove Theorem \ref{thm:main1}. In Sections \ref{sect:ProofThm3} and \ref{sect:ProofThm2} we give the proofs of Theorems \ref{thm:main3} and \ref{thm:main2}, respectively. Finally, in Section \ref{sect:ProofThm4} we provide a proof of Theorem \ref{thm:main4}, which follows from the definition of the Hilbert-Schmidt norm. \subsection{Notations} Throughout the paper, $\N$ denotes the set of all positive integers. We denote by $p'$ the conjugate exponent of $p$. 
The letter $C$ will denote a positive constant, which may vary from place to place. The notation $A\lesssim B$ means that there exists a constant $C>0$, which does not depend on the involved variables, such that $A\le C\, B$. We write $A\simeq B$ when $A\lesssim B$ and $B\lesssim A$. We will also say that $\hbla$ is bounded (compact) on $F\pla$ if it is bounded (compact) from $F\pla$ to $\overline{F\pla}$. We denote the norm of this operator by $\|\hbla\|_{F\pla}$. The same notations will be used replacing $F\pla$ by $\mathfrak{f}^{\infty,\ell}_\alpha$. \section{Preliminaries}\label{sect:prelim} \subsection{Properties of the Fock spaces $F^{p,\ell}_{\alpha}$}\quad\par We begin this subsection by recalling some useful embeddings of the generalized Fock spaces. \begin{lem}\label{lem:propertiesF} Let $1\le p,q\le \infty$. If $0<\alpha<\beta<\gamma<\delta$ then we have the embeddings \[ F^{\infty,\ell}_{\alpha}\hookrightarrow F^{p,\ell}_{\beta}\hookrightarrow H^{\infty,\ell}_{\beta}\hookrightarrow \mathfrak{f}^{\infty,\ell}_{\gamma} \hookrightarrow F^{q,\ell}_{\delta}. \] \end{lem} \begin{proof} As we said in the introduction, the embedding $F^{p,\ell}_{\beta}\hookrightarrow H^{\infty,\ell}_{\beta}$ is proved in \cite[Lemma 19(a)]{marco-massaneda-ortega}. The rest follows directly. \end{proof} Since our weights $\alpha|z|^{2\ell}/2$ are radial, suitably normalized dilations $z\mapsto\lambda^{1/(2\ell)} z$, $\lambda>0$, act as isometries from $L^{p,\ell}_{\alpha}$ onto $L^{p,\ell}_{\lambda\alpha}$ and from $F^{p,\ell}_{\alpha}$ onto $F^{p,\ell}_{\lambda\alpha}$, as stated in the following proposition. \begin{prop}\label{prop:dilations:act:isometrically} Let $1\le p\le \infty$, $\alpha,\lambda>0$ and $\ell\in\N$. For any function $f$ on $\C$ we define \begin{equation}\label{eqn:dilations:act:isometrically} \Phi^{\ell}_{\lambda}f(z):=f(\lambda^{1/(2\ell)}z) \qquad(z\in\C). 
\end{equation} Then $\Phi^{p,\ell}_{\lambda}:= \lambda^{1/(p\ell)}\,\Phi^{\ell}_{\lambda}$ is a linear isometry from $L\pla$ onto $L^{p,\ell}_{\lambda\alpha}$ such that $\Phi^{p,\ell}_{\lambda}(F\pla)=F^{p,\ell}_{\lambda\alpha}$ and $\Phi^{p,\ell}_{\lambda}(\mathfrak{f}^{\infty,\ell}_{\alpha})=\mathfrak{f}^{\infty,\ell}_{\lambda\alpha}$. In particular, \[ \langle\Phi^{2,\ell}_{\lambda}f,\, \Phi^{2,\ell}_{\lambda}g\rangle^{\ell}_{\lambda\alpha}= \langle f,\,g\rangle^{\ell}_{\alpha} \qquad(f,g\in L^{2,\ell}_{\alpha}). \] \end{prop} \begin{proof} The first assertion follows by making the change of variable $w=\lambda^{1/(2\ell)}z$. The second assertion is a direct consequence of the first one for $p=2$. \end{proof} \subsection{The Bergman kernel}\label{section:bergmankernel}\quad\par It is well-known that $F\dla$ with the inner product $\langle \cdot,\cdot\rangle\la$ is a Hilbert space such that the pointwise evaluation $f \mapsto f(z)$ is a bounded linear functional on $F\dla$, for any $z\in\C$. Thus $F\dla$ is a reproducing kernel Hilbert space, that is, for any $z\in\C$ there exists a unique function $K^{\ell}_{\alpha,z}$ in $F\dla$ such that $f(z) = \langle f,\,K^{\ell}_{\alpha,z}\rangle\la$, for every $f\in F\dla$. The {\em Bergman kernel} for $F\dla$ is the function \[ K\la(z,w):=K^{\ell}_{\alpha,w}(z)=\overline{K^{\ell}_{\alpha,z}(w)}\qquad(z,w\in\C). \] The following result is well known (see for instance \cite{bommier-englis-youssfi}). \begin{prop}\label{prop:kernel} Let $\alpha>0$ and $\ell\in\N$. Then the sequence of monomials $\{z^m\}_{m\ge0}$ is an orthogonal basis of $F\dla$ and \begin{equation*} \|z^m\|_{F\dla}^2=\frac{\pi}{\ell\alpha^{(m+1)/\ell}}\Gamma\left(\frac{m+1}{\ell}\right). 
\end{equation*} Therefore, the sequence \[\{e_m\}_{m\ge0}:= \left\{\frac{z^m}{\|z^m\|_{F\dla}}\right\}= \left\{\sqrt{\frac{\ell}{\pi}\,\frac{\alpha^{(m+1)/\ell}}{\Gamma\left(\frac{m+1}{\ell}\right)}}\,z^m\right\}_{m\ge0} \] is an orthonormal basis of $F\dla$ and the Bergman kernel for $F\dla$ admits the representation \begin{equation}\label{eqn:kernel} K\la(z,w) =\sum_{m=0}^\infty\frac{z^m\,\overline{w}^m}{\|z^m\|^2_{F\dla}} =\frac{\ell\alpha^{1/\ell}} {\pi}\sum_{m=0}^\infty\frac{\alpha^{m/\ell} z^m\,\overline{w}^m} {\Gamma\left(\frac{m+1}{\ell}\right)}. \end{equation} In particular, \begin{equation}\label{eqn:kernel:change:parameter} K^{\ell}_{\alpha}(z,w)=\alpha^{1/\ell}\, K^{\ell}_1(\alpha^{1/(2\ell)}z,\,\alpha^{1/(2\ell)}w). \end{equation} \end{prop} Formula \eqref{eqn:kernel} shows that the Bergman kernel can be written in terms of the Mittag-Leffler functions. Namely, \begin{equation}\label{eqn:KH} K^{\ell}_{\alpha}(z,w) =H\la(z\overline w)= \frac{\ell \alpha^{1/\ell}}{\pi} E_{\frac 1\ell,\frac 1\ell}(\alpha^{1/\ell}z\overline w) \quad(z,w\in\C), \end{equation} where \[ E_{\frac 1\ell,\frac 1\ell}(\lambda) =\sum_{k=0}^\infty \frac{\lambda^k}{\Gamma\bigl(\frac{k+1}\ell\bigr)} \quad(\lambda\in\C). \] It is known that the Mittag-Leffler function $E_{\frac 1\ell,\frac 1\ell}(\lambda)$ satisfies the following asymptotic expansion as $|\lambda|\to\infty$ (see \cite[Chapter XVIII]{Bateman-Erdelyi}): \begin{equation}\label{eqn:asimpE} E_{\frac 1\ell,\frac 1\ell}(\lambda)= \begin{cases} \ell\lambda^{\ell-1}e^{\lambda^\ell}+O(\lambda^{-1}), &\text{if}\,\,\,|\arg(\lambda)|\le \frac{\pi}{2\ell},\\ O(\lambda^{-1}), &\text{if}\,\,\,|\arg(\lambda)|> \frac{\pi}{2\ell}. \end{cases} \end{equation} Here $\arg(\lambda)$ denotes the principal branch of the argument of $\lambda$, that is, $-\pi<\arg(\lambda)\le\pi$. It is clear that \eqref{eqn:asimpE} implies the following pointwise estimate of the Bergman kernel. 
\begin{prop}\label{prop:pointwise} \[ |K^\ell_\alpha(z,w)|\lesssim (1+|z\overline w|)^{\ell-1} \left(e^{\alpha\Re((z\overline w)^\ell)}+1\right) \quad(z,w\in\C). \] \end{prop} Observe that \eqref{eqn:asimpE} also gives pointwise estimates of $K^\ell_\alpha$ for $\ell$ not necessarily integer. However, in this non-integer case it seems more difficult to obtain a factorization of the Bergman kernel as in \eqref{eqn:factorization2}. Other estimates for more general radial weights are given in \cite{SeiYouJGA2011}. \subsection{The Bergman projection} The orthogonal projection $P^\ell_{\alpha}$ from $L\dla$ onto $F\dla$ admits the integral representation \[ P^\ell_{\alpha}f(z):= \int_{\C}K\la(z,w)\,f(w)\,e^{-\alpha|w|^{2\ell}}\,d\nu(w) \quad(f\in L\dla,\,z\in\C). \] Note that if $f\in L^{p,\ell}_\beta$, $1\le p<\infty$, $0<\beta<2\alpha$, then $P^\ell_{\alpha}f$ is well defined, that is, for any $z\in\C$, the function \[ F_z(w)=K^\ell_{\alpha,z}(w)\,f(w)\,e^{-\alpha|w|^{2\ell}} =K^\ell_{\alpha}(w,z)\,f(w)\,e^{-\alpha|w|^{2\ell}} \] is integrable on $\C$. Indeed, by Proposition \ref{prop:pointwise}, $|F_z(w)|\le C_z G_z(w) H_z(w)$, where $G_z(w):=|f(w)|\,e^{-\beta|w|^{2\ell}/2}$ and $H_z(w):=(1+|w|)^{\ell-1}e^{\alpha |z|^\ell|w|^\ell}e^{-(\alpha-\beta/2)|w|^{2\ell}/2}$. Since $G_z\in L^p(\C)$ and $H_z\in L^{p'}(\C)$, H\"{o}lder's inequality gives that $$ \int_\C |F_z(w)|d\nu(w)\le C_z \|f\|_{L^{p,\ell}_\beta}. $$ Hence $u(f):=\int_\C F_z\,d\nu$ defines a bounded linear form on $L^{p,\ell}_\beta$. Since $u(P)=P(z)$, for every holomorphic polynomial $P$, and the holomorphic polynomials are dense in $F^{p,\ell}_\beta$, it turns out that \begin{equation}\label{eqn:reproducing:property} b(z)=\int_\C K^\ell_{\alpha} (z,w)\,b(w)\,e^{-\alpha|w|^{2\ell}} d\nu(w), \end{equation} for any $b\in F^{p,\ell}_{\beta}$. 
In particular, since by Lemma \ref{lem:propertiesF}, $H^{\infty,\ell}_{\alpha}\subset F^{p,\ell}_{\beta}$, for $\beta>\alpha$, \eqref{eqn:reproducing:property} also holds for any $b\in H^{\infty,\ell}_{\alpha}$. \begin{prop} \label{prop:Interp-dual} For $\ell\ge 1$ and $\alpha>0$ we have: \begin{enumerate} \item\label{item:Interp-dual1} If $1\le p\le\infty$, then $ P\la$ is a bounded projection from $ L\pla$ onto $F\pla$. \item\label{item:Interp-dual3} If $1\le p<\infty$, then $(F^{p,\ell}_\alpha)^*\equiv F^{p',\ell}_\alpha$, with respect to the pairing $ \langle\cdot,\cdot\rangle\la$. \item\label{item:Interp-dual4} $(\mathfrak{f}^{\infty,\ell}_{\alpha})^*\equiv F^{1,\ell}_\alpha$, with respect to the pairing $ \langle\cdot,\cdot\rangle\la$. \end{enumerate} \end{prop} \begin{proof} The proof of the first two assertions can be found, for instance, in~{\cite[Theorem 13 and Corollary 14]{constantin-pelaez}} and~{\cite[Theorems 3.1 and 3.6]{oliver-pascuas}}, so we only have to prove the last one. First note that if $b\in F^{1,\ell}_\alpha$ then $\langle\cdot,b\rangle\la\in (\mathfrak{f}^{\infty,\ell}_\alpha)^*$ and $\|\langle\cdot,b\rangle\la\|_{(\mathfrak{f}^{\infty,\ell}_\alpha)^*} \lesssim\|b \|_{F^{1,\ell}_\alpha}$. Conversely, given $u\in(\mathfrak{f}^{\infty,\ell}_\alpha)^*$, we are going to prove that there is $b\in F^{1,\ell}_\alpha$ such that $u=\langle\cdot,b\rangle\la$ and $\|b\|_{F^{1,\ell}_\alpha}\lesssim\|u\|_{(\mathfrak{f}^{\infty,\ell}_\alpha)^*}$. Pick $\alpha/2<\beta<\alpha$. Then, by Lemma~{\ref{lem:propertiesF}}, we have the embedding $F^{2,\ell}_{\beta}\hookrightarrow \mathfrak{f}^{\infty,\ell}_\alpha$ and so the restriction of $u$ to $F^{2,\ell}_{\beta}$ is a bounded linear form on this space. It follows that there is $g\in F^{2,\ell}_{\beta}$ such that $u(f)=\langle f,g\rangle^\ell_\beta$, for every $f\in E$. 
Now Proposition~{\ref{prop:dilations:act:isometrically}} for $\lambda=\alpha/\beta$ shows that $b:=\Phi^{2,\ell}_{\lambda^2}g\in F^{2,\ell}_{\alpha^2/\beta}$ satisfies \[ u(f)=\langle f,g\rangle^\ell_\beta= \langle\Phi^{2,\ell}_{\lambda}f, \,\Phi^{2,\ell}_{\lambda}g\rangle^{\ell}_{\alpha} \stackrel{(*)}{=} \langle f,\,b\rangle^{\ell}_{\alpha}, \quad\mbox{ for every $f\in E$.} \] (Note that $(*)$ holds because both functions $f$ and $g$ are entire.) Thus it only remains to prove that $\|b\|_{L^{1,\ell}_{\alpha}} \lesssim\|u\|_{(\mathfrak{f}^{\infty,\ell}_\alpha)^*}$. Recall that, by duality, \[ \|b\|_{L^{1,\ell}_\alpha}=\sup_{\substack{f\in C_c(\C)\\\|f\|_{L^\infty}=1}} \left|\int_\C f(z)e^{-\frac{\alpha}2|z|^{2\ell}}\,\overline{b(z)}\,d\nu(z)\right| =\sup_{\substack{ f\in C_c(\C)\\\|f\|_{L^\infty}=1}} |\langle T_{\alpha}f,b\rangle\la|, \] where $T_{\alpha}f(z):=f(z)e^{\frac{\alpha}2|z|^{2\ell}}$. Note that $b=P^{\ell}_{\alpha}(b)$, because $b\in F^{2,\ell}_{\alpha^2/\beta}$ and $\alpha^2/\beta<2\alpha$. Therefore, for any $f\in C_c(\C)$, we have that \[ \langle T_{\alpha} f,b\rangle\la= \langle T_{\alpha} f,P_{\alpha}^{\ell}b\rangle\la\stackrel{(1)}{=} \langle P_{\alpha}^{\ell}(T_ {\alpha} f),b\rangle\la\stackrel{(2)}{=} u(P_{\alpha}^{\ell}(T_ {\alpha} f)), \] where $(1)$ follows from Fubini's theorem and $(2)$ holds since $P_{\alpha}^{\ell}(T_ {\alpha} f)\in E$. Hence \[ |\langle T_{\alpha} f,b\rangle\la|\le \|u\|_{(\mathfrak{f}^{\infty,\ell}_\alpha)^*} \|P^{\ell}_{\alpha}\|_{L^{\infty,\ell}_{\alpha}} \|T_ {\alpha} f\|_{L^{\infty,\ell}_{\alpha}}= \|u\|_{(\mathfrak{f}^{\infty,\ell}_\alpha)^*} \|P^{\ell}_{\alpha}\|_{L^{\infty,\ell}_{\alpha}} \|f\|_{L^{\infty}}, \] which gives that $\|b\|_{L^{1,\ell}_\alpha}\lesssim \|u\|_{(\mathfrak{f}^{\infty,\ell}_\alpha)^*}$. \end{proof} The last result of this subsection states that the dilation operators $\Phi^{\ell}_{\lambda}$, defined by~{\eqref{eqn:dilations:act:isometrically}}, ``commute'' with the Bergman projections. 
\begin{prop}\label{prop:dilations:conmute:Bergman:projection} Let $1\le p\le\infty$, $\ell\in\N$ and $\alpha,\beta,\lambda>0$ such that $\beta<2\alpha$. Then \begin{equation*} \Phi^{\ell}_{\lambda}(P^{\ell}_{\alpha}f)= P^{\ell}_{\lambda\alpha}(\Phi^{\ell}_{\lambda}f) \qquad(f\in L^{p,\ell}_{\beta}). \end{equation*} \end{prop} \begin{proof} Let $f\in L^{p,\ell}_{\beta}$. Then \[ \Phi^{\ell}_{\lambda}(P^{\ell}_{\alpha}f)(z)= \int_{\C}K^{\ell}_{\alpha}(\lambda^{1/(2\ell)}z,w) f(w) e^{-\alpha|w|^{2\ell}}d\nu(w). \] By making the change of variable $w=\lambda^{1/(2\ell)}v$ and taking into account that \[ \lambda^{1/\ell}K\la(\lambda^{1/(2\ell)}z,\lambda^{1/(2\ell)}v)= K^{\ell}_{\lambda\alpha}(z,v), \] which follows from~{\eqref{eqn:kernel}}, we conclude that $\Phi^{\ell}_{\lambda}(P^{\ell}_{\alpha}f)(z)=P^{\ell}_{\lambda\alpha}(\Phi^{\ell}_{\lambda}f)(z)$. \end{proof} \subsection{The small Hankel operator on $F\pla$, $1\le p<\infty$}\quad\par The next lemma gives some properties of the subspace of entire functions $E$ defined in \eqref{eqn:E}. \begin{lem}\label{lem:spaceE} The space $E$ satisfies the following properties: \begin{enumerate} \item $E\cdot E\subset E$. \item $E\subset F^{1,\ell}_\alpha$, for any $\alpha>0$. \item $E$ contains the space of all holomorphic polynomials. \item $E$ contains the space $\mathrm{span}\{ K^{\ell}_{\alpha,z}:z\in\C\}$, i.e., the set of finite linear combinations of functions $K^{\ell}_{\alpha,z}$. \item $E$ is dense in $\mathfrak{f}^{\infty,\ell}_{\alpha}$ and in $F\pla$, for any $1\le p<\infty$. \end{enumerate} \end{lem} \begin{proof} The first three assertions are a consequence of the definition of $E$ and the fact that $e^{\beta|w|^\ell-\gamma |w|^{2\ell}}\in L^1$, for any $\beta,\gamma>0$. The fourth assertion is a consequence of Proposition \ref{prop:pointwise}. The density of $E$ in $F\pla$ follows from the fact that the holomorphic polynomials are dense in $F\pla$ (see \cite[Theorem~28]{ConsPelJGA2016}), and the density of $E$ in $\mathfrak{f}^{\infty,\ell}_{\alpha}$ follows since, by definition, $\mathfrak{f}^{\infty,\ell}_{\alpha}$ is the closure of the holomorphic polynomials in $F^{\infty,\ell}_{\alpha}$. 
\end{proof} In order to define the small Hankel operator for a large class of symbols we consider the space $X^{\infty,\ell}_\alpha$ of all measurable functions $\varphi$ on $\C$ such that \[ \|\varphi\|_{X^{\infty,\ell}_\alpha}:=\esssup_{z\in\C}|\varphi(z)|(1+|z|)^{2-2\ell}e^{-\frac{\alpha}2|z|^{2\ell}} <\infty. \] Observe that $H^{\infty,\ell}_\alpha=H(\C)\cap X^{\infty,\ell}_\alpha.$ Let $\varphi$ be a function in $X^{\infty,\ell}_\alpha$. Since $X^{\infty,\ell}_\alpha\subset L^{1,\ell}_{\beta}$, for any $\beta>\alpha$, the {\em small Hankel operator} $\hvla$ with symbol $\varphi$ is well defined on $E$ by \begin{equation}\label{eqn:def:small:Hankel} \mathfrak{h}^\ell_{\varphi,\alpha}(f)(z):=\overline{P\la(\overline{f} \varphi)(z)}= \int_{\C}K\la(w,z)f(w)\,\overline{\varphi(w)}\,e^{-\alpha|w|^{2\ell}}\,d\nu(w). \end{equation} The next proposition states the relationship between the small Hankel operator $\hvla$ and the corresponding Hankel bilinear form defined by \[ \Lambda^\ell_{\varphi,\alpha}(f,g):= \langle fg,\varphi\rangle\la\qquad(f,g\in E). \] \begin{prop}\label{prop:hankelform} If $f, g\in E$ and $\varphi\in X^{\infty,\ell}_\alpha$, then we have \begin{equation}\label{eqn:adjointh} \Lambda^\ell_{\varphi,\alpha}(f,g)=\langle g,\overline{\hvla(f)}\rangle\la =\langle f,\overline{\hvla(g)}\rangle\la. \end{equation} Moreover, if $b=P\la(\varphi)\in H^{\infty,\ell}_\alpha$, then $\Lambda^\ell_{b,\alpha}(f,g)= \Lambda^\ell_{\varphi,\alpha}(f,g)$ and $\hvla(f)=\mathfrak{h}^\ell_{b,\alpha}(f)$. \end{prop} \begin{proof} Formula~{\eqref{eqn:adjointh}} follows from Fubini's theorem and the fact that \begin{equation*} \Psi_{f,g,\varphi}(z,w):=K\la(w,z)f(w)\,\overline{\varphi}(w)\, e^{-\alpha|w|^{2\ell}}g(z) \,e^{-\alpha|z|^{2\ell}} \end{equation*} is in $L^1(\C\times\C)$. This is a consequence of Proposition~{\ref{prop:pointwise}}. 
Indeed, if $\lambda>0$ we have that \begin{align*} |\Psi_{f,g,\varphi}(z,w)| &\lesssim \|\varphi\|_{X^{\infty,\ell}_\alpha} (1+|w|)^{3\ell-3}|f(w)|(1+|z|)^{\ell-1}|g(z)| e^{\alpha|z|^\ell|w|^\ell-\frac{\alpha}2|w|^{2\ell}-\alpha|z|^{2\ell}}\\ &\lesssim \|\varphi\|_{X^{\infty,\ell}_\alpha} e^{\beta |w|^\ell} e^{\beta |z|^\ell} e^{\frac{\alpha}2(\frac1{\lambda^2}-1)|w|^{2\ell} +\alpha(\frac{\lambda^2}2-1)|z|^{2\ell}} \end{align*} for some $\beta>0$. Therefore by choosing $1<\lambda<\sqrt{2}$ we see that $\Psi_{f,g,\varphi}\in L^1(\C\times\C)$. By Lemma~{\ref{lem:spaceE}}, if $f,\, g\in E$ then $fg\in E\subset F^{1,\ell}_\alpha$, and so $fg=P\la(fg)$, by Proposition~{\ref{prop:Interp-dual}}. Therefore \begin{align*} \Lambda^\ell_{\varphi,\alpha}(f,g) &=\int_\C P\la(fg)(w)\overline{\varphi(w)}e^{-\alpha|w|^{2\ell}} d\nu(w)\\ &=\int_\C \int_\C(fg)(z)K\la(w,z)e^{-\alpha|z|^{2\ell}}d\nu(z) \overline{\varphi(w)}e^{-\alpha|w|^{2\ell}} d\nu(w). \end{align*} Since $$ (fg)(z)K\la(w,z)e^{-\alpha|z|^{2\ell}} \overline{\varphi(w)}e^{-\alpha|w|^{2\ell}} =\Psi_{1,fg,\varphi}(z,w)\in L^1(\C\times\C), $$ Fubini's theorem gives $\Lambda^\ell_{b,\alpha}(f,g)=\Lambda^\ell_{\varphi,\alpha}(f,g)$ and $\hvla(f)=\mathfrak{h}^\ell_{b,\alpha}(f)$ for any $f,\,g\in E$. \end{proof} As a consequence of the above proposition and Proposition \ref{prop:Interp-dual}\eqref{item:Interp-dual3}-\eqref{item:Interp-dual4} we obtain: \begin{cor}\label{cor:hankelform} \quad\par \begin{enumerate} \item If $1<p<\infty$, the Hankel operator $\hvla$ defined on the space $E$ extends to a bounded operator on $F\pla$, also denoted by $\hvla$, if and only if the bilinear form $\Lambda^\ell_{\varphi,\alpha}$ defined on $E\times E$ extends to a bounded bilinear form on $F\pla\times F^{p',\ell}_\alpha$. Moreover, $\|\hvla\|_{F\pla}\simeq \|\Lambda^\ell_{\varphi,\alpha}\|_{F\pla\times F^{p',\ell}_\alpha}$. 
\item The Hankel operator $\hvla$ defined on the space $E$ extends to a bounded operator, also denoted by $\hvla$, either on $F^{1,\ell}_\alpha$ or on $\mathfrak{f}^{\infty,\ell}_{\alpha}$ if and only if the bilinear form $\Lambda^\ell_{\varphi,\alpha}$ defined on $E\times E$ extends to a bounded bilinear form on $F^{1,\ell}_\alpha\times\mathfrak{f}^{\infty,\ell}_{\alpha}$. Moreover, $\|\hvla\|_{F^{1,\ell}_\alpha}\simeq \|\Lambda^\ell_{\varphi,\alpha}\|_{F^{1,\ell}_\alpha\times\mathfrak{f}^{\infty,\ell}_{\alpha}}$. \item The adjoint (in the sense of \eqref{eqn:adjointh}) of $\hvla:F\pla\to\overline{F\pla}$, $1<p<\infty$, is $\hvla:F^{p',\ell}_\alpha\to\overline{F^{p',\ell}_\alpha}$ and the adjoint of $\hvla:\mathfrak{f}^{\infty,\ell}_{\alpha}\to\overline{\mathfrak{f}^{\infty,\ell}_{\alpha}}$ is $\hvla:F^{1,\ell}_\alpha\to\overline{F^{1,\ell}_\alpha}$. \end{enumerate} \end{cor} The last result of this subsection shows that the dilation operators $\Phi^{\ell}_{\lambda}$, defined by~{\eqref{eqn:dilations:act:isometrically}}, ``commute'' with the small Hankel operators. \begin{prop}\label{prop:dilations:small;Hankel:operator} Let $1\le p<\infty$, $\alpha,\lambda>0$ and $\ell\in\N$. Then: \begin{enumerate} \item \label{item:dilations:small;Hankel:operator:1} $\Phi^{\ell}_{\lambda}(X^{\infty,\ell}_\alpha)= X^{\infty,\ell}_{\lambda\alpha}$ and $\Phi^{\ell}_{\lambda}(E)=E$. \item \label{item:dilations:small;Hankel:operator:2} If $\varphi\in X^{\infty,\ell}_\alpha$ and $\psi=\Phi^{\ell}_{\lambda}\varphi$ then $\Phi^{\ell}_{\lambda}(\hvla f)= \mathfrak{h}^\ell_{\psi,\lambda\alpha} (\Phi^{\ell}_{\lambda}f)$, for every $f\in E$, and so $\|\hvla\|_{F\pla}= \|\mathfrak{h}^\ell_{\psi,\lambda\alpha} \|_{F^{p,\ell}_{\lambda\alpha}}$ and $\|\hvla\|_{\mathfrak{f}^{\infty,\ell}_{\alpha}}= \|\mathfrak{h}^\ell_{\psi,\lambda\alpha} \|_{\mathfrak{f}^{\infty,\ell}_{\lambda\alpha}}$. 
\end{enumerate} \end{prop} \begin{proof} The proof of~{\eqref{item:dilations:small;Hankel:operator:1}} is straightforward. Part~{\eqref{item:dilations:small;Hankel:operator:2}} follows from Proposition~{\ref{prop:dilations:act:isometrically}} and Proposition~{\ref{prop:dilations:conmute:Bergman:projection}}. Indeed, \begin{align*} \Phi^{\ell}_{\lambda}(\hvla f) &=\Phi^{\ell}_{\lambda} (\overline{P^{\ell}_{\alpha}(f\overline{\varphi})})= \overline{\Phi^{\ell}_{\lambda} (P^{\ell}_{\alpha}(f\overline{\varphi}))}\\ &\stackrel{(*)}{=} \overline{P^{\ell}_{\lambda\alpha}(\Phi^{\ell}_{\lambda} (f\overline{\varphi}))}= \overline{P^{\ell}_{\lambda\alpha} ((\Phi^{\ell}_{\lambda}f)\,\overline{\psi})}= \mathfrak{h}^{\ell}_{\psi,\lambda\alpha} (\Phi^{\ell}_{\lambda}f), \end{align*} for every $f\in E$, where $(*)$ holds by Proposition~{\ref{prop:dilations:conmute:Bergman:projection}}. Then the above identity and Proposition~{\ref{prop:dilations:act:isometrically}} directly imply that $\|\hvla\|_{F\pla}= \|\mathfrak{h}^\ell_{\psi,\lambda\alpha} \|_{F^{p,\ell}_{\lambda\alpha}}$ and that $\|\hvla\|_{\mathfrak{f}^{\infty,\ell}_{\alpha}}= \|\mathfrak{h}^\ell_{\psi,\lambda\alpha} \|_{\mathfrak{f}^{\infty,\ell}_{\lambda\alpha}}$. \end{proof} \section{Proof of Theorem~{\ref{thm:main1}}} \label{sect:ProofThm1} \subsection{Proof of the sufficiency} \begin{lem}\label{lem:sufcond} If $\varphi\in L^\infty$ and $1\le p\le \infty$, then $\hvla$ is bounded on $F\pla$ and $\|\hvla\|\lesssim \|\varphi\|_{L^\infty}$. \end{lem} \begin{proof} If $\varphi\in L^\infty$ and $f\in F\pla$, then $\varphi f\in L\pla$ and $\|\varphi f\|_{L\pla}\le\|\varphi\|_{L^\infty}\|f\|_{F\pla}$. By Proposition~{\ref{prop:Interp-dual}\eqref{item:Interp-dual1}}, $P\la$ is bounded on $L\pla$, so we conclude that \[ \|\hvla(f)\|_{L\pla}=\|P\la(\varphi\overline{f})\|_{F\pla} \lesssim\|P\la\|_{L\pla}\,\|\varphi\|_{L^\infty}\|f\|_{F\pla}.\qedhere \] \end{proof} The following result is a corollary of Proposition~{\ref{prop:Interp-dual}\eqref{item:Interp-dual1}}. \begin{prop}\label{prop:PtaLinf} The projection $P\la$ is bounded from $L^\infty$ onto $F^{\infty,\ell}_{\alpha/2}$. 
Moreover, $\inf\{\|\varphi\|_{L^\infty}: \varphi\in L^{\infty}, P\la \varphi=f\}\le 2^{1/\ell}\|f\|_{F^{\infty,\ell}_{\alpha/2}}$, for every $f\in F^{\infty,\ell}_{\alpha/2}$. \end{prop} \begin{proof} It is clear that $T_{\alpha}\varphi(z):=\varphi(z)e^{\alpha|z|^{2\ell}}$ defines a linear isometry from $L^{\infty}$ onto $L^{\infty,\ell}_{2\alpha}$. Then, for any $\varphi\in L^\infty$, we have that \begin{align*} P\la\varphi(z) &=\int_\C K\la(z,w)\, T_{\alpha}\varphi(w)\, e^{-2\alpha|w|^{2\ell}} d\nu(w)\\ &\stackrel{(1)}{=}2^{-1/\ell}\int_\C K^{\ell}_{2\alpha}(2^{-1/\ell}z,w) \, T_{\alpha}\varphi(w)\, e^{-2\alpha|w|^{2\ell}} d\nu(w) \\ &=2^{-1/\ell}\,P^{\ell}_{2\alpha}(T_{\alpha}\varphi)(2^{-1/\ell}z) \stackrel{(2)}{=} 2^{-1/\ell}\,\Phi^{\ell}_{1/4}(P^{\ell}_{2\alpha}(T_{\alpha}\varphi))(z), \end{align*} where $(1)$ and $(2)$ follow from~{\eqref{eqn:kernel}} and~{\eqref{eqn:dilations:act:isometrically}}, respectively. In other words, the projection $P\la$ on $L^{\infty}$ is the composition of the following three bounded and surjective linear operators: \begin{enumerate} \item $T_{\alpha}:L^{\infty}\to L^{\infty,\ell}_{2\alpha}$; \item $P^{\ell}_{2\alpha}:L^{\infty,\ell}_{2\alpha}\to F^{\infty,\ell}_{2\alpha}$; \item $\Psi:=2^{-1/\ell}\,\Phi^{\ell}_{1/4}:F^{\infty,\ell}_{2\alpha}\to F^{\infty,\ell}_{\alpha/2}$. \end{enumerate} It directly follows that $P^{\ell}_{\alpha}$ is bounded from $L^{\infty}$ onto $F^{\infty,\ell}_{\alpha/2}$.
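For instance, in the classical case $\ell=1$, where the reproducing kernel is the classical Fock (Bargmann) kernel $K^{1}_{\alpha}(z,w)=\frac{\alpha}{\pi}\,e^{\alpha z\overline{w}}$, the scaling identity $(1)$ can be verified directly:
\[
2^{-1}K^{1}_{2\alpha}\bigl(2^{-1}z,w\bigr)
=\frac{1}{2}\cdot\frac{2\alpha}{\pi}\,e^{2\alpha(z/2)\overline{w}}
=\frac{\alpha}{\pi}\,e^{\alpha z\overline{w}}
=K^{1}_{\alpha}(z,w).
\]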
Moreover, since $P^{\ell}_{2\alpha}$ is a projection from $L^{\infty,\ell}_{2\alpha}$ onto $F^{\infty,\ell}_{2\alpha}$ (by Proposition~{\ref{prop:Interp-dual}\eqref{item:Interp-dual1}}) and the operator $\Psi:=2^{-1/\ell}\,\Phi^{\ell}_{1/4}:F^{\infty,\ell}_{2\alpha}\to F^{\infty,\ell}_{\alpha/2}$ is an isomorphism such that $\Psi^{-1}=2^{1/\ell}\Phi^{\ell}_4$ satisfies $\|\Psi^{-1}(f)\|_{F^{\infty,\ell}_{2\alpha}}= 2^{1/\ell}\|f\|_{F^{\infty,\ell}_{\alpha/2}}$, for every $f\in F^{\infty,\ell}_{\alpha/2}$, (by Proposition~{\ref{prop:dilations:act:isometrically}}) we conclude that \[ \inf\{\|\varphi\|_{L^\infty}: \varphi\in L^{\infty}, P\la \varphi=f\}\le 2^{1/\ell}\|f\|_{F^{\infty,\ell}_{\alpha/2}}, \quad\mbox{for every $f\in F^{\infty,\ell}_{\alpha/2}$.}\qedhere \] \end{proof} \begin{prop}\label{prop:sufcond} Let $b\in F^{\infty,\ell}_{\alpha/2}$. \begin{enumerate} \item \label{item:sufcond1} If $1\le p< \infty$, then $\hbla$ extends to a bounded operator on $F\pla$. \item \label{item:sufcond2} $\hbla$ extends to a bounded operator on $\mathfrak{f}^{\infty,\ell}_\alpha$. \end{enumerate} Moreover, $\|\hbla\|_{F^{p,\ell}_{\alpha}}\lesssim\|b\|_{F^{\infty,\ell}_{\alpha/2}}$, for any $1\le p<\infty$, and $\|\hbla\|_{\mathfrak{f}^{\infty,\ell}_{\alpha}}\lesssim \|b\|_{F^{\infty,\ell}_{\alpha/2}}$. \end{prop} \begin{proof} In order to prove \eqref{item:sufcond1}, we show that for $1\le p\le\infty$, \begin{equation}\label{eq:suf2} \|\hbla(f)\|_{L\pla}\lesssim \|b\|_{F^{\infty,\ell}_{\alpha/2}}\|f\|_{F\pla} \qquad(f\in E). \end{equation} By Proposition \ref{prop:PtaLinf}, $b=P\la(\varphi)$ for some $\varphi\in L^\infty$ such that $\|\varphi\|_{L^\infty}\le3\|b\|_{F^{\infty,\ell}_{\alpha/2}}$. If $f\in E$, Proposition \ref{prop:hankelform} gives $\hbla(f)=\hvla(f)$, so Lemma~{\ref{lem:sufcond}} implies \eqref{eq:suf2}. Since $E$ is dense in $F\pla$, for $1\le p<\infty$, estimate \eqref{eq:suf2} shows that $\hbla$ extends to a bounded operator on $F\pla$ with $\|\hbla\|_{F\pla}\lesssim\|b\|_{F^{\infty,\ell}_{\alpha/2}}$, which proves \eqref{item:sufcond1}.
Taking into account \eqref{eq:suf2} for $p=\infty$, the proof of \eqref{item:sufcond2} will follow after checking $\hbla(E)\subset \mathfrak{f}^{\infty,\ell}_\alpha$. Indeed, by Proposition \ref{prop:pointwise}, for $f\in E$ and $0<\lambda<1$, we have \begin{align*} |\hbla(f)(z)| &\lesssim \int_\C (1+|z|)^{\ell-1}(1+|w|)^{\ell-1}e^{\alpha|z|^\ell|w|^\ell} e^{\beta|w|^\ell} e^{\alpha|w|^{2\ell}/4} e^{-\alpha|w|^{2\ell}}d\nu(w)\\ & \lesssim (1+|z|)^{\ell-1}\int_\C e^{\alpha\lambda^2|z|^{2\ell}/2} e^{\alpha|w|^{2\ell}/(2\lambda^2)} e^{2\beta|w|^\ell} e^{-3\alpha|w|^{2\ell}/4}d\nu(w)\\ &\lesssim (1+|z|)^{\ell-1} e^{\alpha\lambda^2 |z|^{2\ell}/2} \int_\C e^{2\beta |w|^\ell} e^{-\alpha(3/2-1/\lambda^2)|w|^{2\ell}/2} d\nu(w). \end{align*} Choosing $\sqrt{2/3}<\lambda<1$, the last integral is finite and we get \begin{equation*} \lim_{|z|\to \infty} |\hbla(f)(z)|e^{-\alpha|z|^{2\ell}/2}=0.\qedhere \end{equation*} \end{proof} \subsection{Proof of the necessity}\quad\par \label{subsec:proof:necessity:boundedness} In order to prove the necessity we need some technical results. The first one is a simple consequence of Stirling's formula. \begin{lem}\label{lem:Stirling} Let $\delta$ be a positive number. Then \begin{enumerate} \item\label{item:Stirling1} $\Gamma(s+t)\simeq s^t\,\Gamma(s)\qquad(s\ge 2\delta,\,|t|\le \delta).$ \vspace*{3pt} \item\label{item:Stirling2} Let $a$ be a real number. Then \[ \sum_{k=0}^{\infty}\frac{s^k}{k!}\frac1{(k+1)^a}\simeq \frac{e^s}{(1+s)^a} \qquad(s\ge0). \] \end{enumerate} All the constants in the above equivalences only depend on $\delta$ and $a$. \end{lem} \begin{proof}\eqref{item:Stirling1} Stirling's formula gives \[ \Gamma(x)\simeq x^{x-1/2}e^{-x}\qquad(x\ge \delta), \] so \[ \Gamma(s+t)\simeq (s+t)^{s+t-1/2}e^{-s-t} \simeq (s+t)^t (s+t)^{s-1/2} e^{-s}. \] Since $\frac{s}{2}\le s+t\le 2s$ and $|t|\le\delta$, we have $(s+t)^t\simeq s^t$ and $(s+t)^{s-1/2}\simeq s^{s-1/2}$.
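In particular, taking $t=\tfrac12$ in~\eqref{item:Stirling1} one recovers the familiar asymptotic relation
\[
\Gamma\bigl(s+\tfrac12\bigr)\simeq \sqrt{s}\,\Gamma(s)\qquad(s\ge1).
\]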
\noindent \eqref{item:Stirling2} Note that both sides of the estimate are positive continuous functions of $s\ge0$. So it is clear that we only have to prove that \begin{equation}\label{eqn:exponential:estimate} f_a(s):=\sum_{k=0}^{\infty}\frac{s^k}{k!}\frac{s^a}{(k+1)^a} \simeq e^s \qquad(s\ge1). \end{equation} But \[ f_{a+1}(s) =\sum_{k=0}^{\infty}\frac{s^{k+1}}{(k+1)!}\frac{s^a}{(k+1)^a} =\sum_{k=1}^{\infty}\frac{s^k}{k!}\frac{s^a}{k^a} \simeq \sum_{k=1}^{\infty}\frac{s^k}{k!}\frac{s^a}{(k+1)^a} \simeq f_a(s), \] and so we may assume that $0\le a<1$. Let $s\ge1$ and let $j\in\N$ be its integer part. Then \[ f_a(s)=\sum_{k=0}^{j-1}\frac{s^k}{k!}\left(\frac{s}{k+1}\right)^a +\sum_{k=j}^{\infty}\frac{s^k}{k!}\left(\frac{s}{k+1}\right)^a. \] Now \[ 1\le\left(\frac{s}{k+1}\right)^a\le\frac{s}{k+1} \qquad(0\le k<j) \] and \[ \frac{s}{k+1}\le\left(\frac{s}{k+1}\right)^a\le1 \qquad(j\le k). \] It follows that \[ \sum_{k=0}^{j-1}\frac{s^k}{k!} +\sum_{k=j}^{\infty}\frac{s^{k+1}}{(k+1)!}\le f_a(s)\le \sum_{k=0}^{j-1}\frac{s^{k+1}}{(k+1)!} +\sum_{k=j}^{\infty}\frac{s^k}{k!}, \] and therefore \[ e^s\left(1-\frac2{e}\right)\le e^s\left(1-\frac{(j+1)^j}{e^j\,j!}\right)\le e^s-\frac{s^j}{j!}\le f_a(s)\le 2e^s, \] since the sequence $c_j=\frac{(j+1)^j}{e^j\,j!}$ is decreasing. Hence \eqref{eqn:exponential:estimate} holds. \end{proof} The following lemma is an essential tool to prove the necessity. \begin{lem} \label{lem:estimate} For $\ell\in\N$, $a,b> 0$ and $c\ge 0$, let \[ {\mathcal I}^{\ell}_{a,b,c}(z):= \int_\C\bigl|e^{a(z\overline{w})^\ell}\bigr|^2 e^{-b|w|^{2\ell}} (1+|w|)^{c} d\nu(w)\qquad(z\in\C). \] Then \begin{equation}\label{eqn:estimateI} {\mathcal I}^{\ell}_{a,b,c}(z) \simeq e^{a^2|z|^{2\ell}/b} (1+|z|)^{c+2-2\ell} \qquad(z\in\C). \end{equation} \end{lem} \begin{proof} It is enough to prove the estimate \eqref{eqn:estimateI} for $|z|\ge 1$, since both sides of \eqref{eqn:estimateI} are positive continuous functions of $z$.
Observe that \[ {\mathcal I}^{\ell}_{a,b,c}(z) \simeq {\mathcal J}^{\ell}_{a,b,0}(z)+{\mathcal J}^{\ell}_{a,b,c}(z), \] where \[ {\mathcal J}^{\ell}_{a,b,c}(z):= \int_\C\bigl|e^{a(z\overline{w})^\ell}\bigr|^2 e^{-b|w|^{2\ell}} |w|^{c} d\nu(w). \] Thus we only have to show that \[ {\mathcal J}^{\ell}_{a,b,c}(z) \simeq e^{a^2|z|^{2\ell}/b} |z|^{c+2-2\ell} \qquad(|z|\ge 1). \] Indeed, by integrating in polar coordinates and orthogonality, \begin{align*} {\mathcal J}^{\ell}_{a,b,c}(z) &\simeq\sum_{k=0}^\infty \int_0^\infty \frac{a^{2k}|z|^{2k\ell}}{(k!)^2}e^{-br^{2\ell}}r^{2k\ell+c+1}dr\\ &\simeq\sum_{k=0}^\infty \frac{a^{2k}|z|^{2k\ell}}{b^{k+(c+2)/(2\ell)}} \frac1{(k!)^2}\int_0^\infty e^{-t}t^{(2k\ell+c+2)/(2\ell)-1}dt\\ &\simeq\sum_{k=0}^\infty \frac{a^{2k}|z|^{2k\ell}}{b^{k}} \frac{\Gamma(k+(c+2)/(2\ell))}{(k!)^2}. \end{align*} Therefore Lemma \ref{lem:Stirling} completes the proof: \[ {\mathcal J}^{\ell}_{a,b,c}(z)\simeq \sum_{k=0}^\infty \frac{a^{2k}|z|^{2k\ell}}{b^{k}} \frac{1}{k!(k+1)^{(2\ell-2-c)/(2\ell)}} \simeq e^{a^2|z|^{2\ell}/b} |z|^{c+2-2\ell}.\qedhere \] \end{proof} \begin{proof}[Proof of the necessity] Let $1\le p\le \infty$ and $b\in H^{\infty,\ell}_{\alpha}$. Suppose that $\hbla:(E,\|\cdot\|_{F\pla})\to L\pla$ is bounded. We want to prove that $b\in F^{\infty,\ell}_{\alpha/2}$ and that $\|b\|_{F^{\infty,\ell}_{\alpha/2}}\lesssim\|\hbla\|_{F\pla}$. First of all, by Proposition~{\ref{prop:dilations:small;Hankel:operator}} we may assume that $\alpha=1$. Now \eqref{eqn:reproducing:property} gives that \begin{equation} \label{eqn:reproducing:property:bis} \overline{b(z)}=\int_\C K^\ell_1(w,z)\,\overline{b(w)}\,e^{-|w|^{2\ell}}\,d\nu(w) =\langle K^{\ell}_1(\cdot,z),\,b\rangle^{\ell}_1.
\end{equation} We decompose the Bergman kernel as \begin{equation*}\label{eqn:bergmankernel:decomposition} K^\ell_1(w,z) = G_0(w,z)G_1(w,z), \end{equation*} where \begin{equation}\label{eqn:defnG0G1} G_0(w,z):= e^{\frac{(w\overline z)^\ell}{2}}\quad\text{and}\quad G_1(w,z):= e^{-\frac{(w\overline z)^\ell}{2}}K^\ell_1(w,z). \end{equation} By Proposition \ref{prop:pointwise}, $G_{0}(\cdot,z), G_1(\cdot,z)\in E$, and so \eqref{eqn:reproducing:property:bis} and Proposition~{\ref{prop:hankelform}} show \begin{equation}\label{eqn:symbol:representation} \overline{b(z)}= \langle G_1(\cdot,z),\, \overline{\mathfrak{h}^{\ell}_{b,1}(G_0(\cdot,z))} \rangle^{\ell}_1. \end{equation} Therefore the boundedness of $\mathfrak{h}^{\ell}_{b,1}$ implies that \begin{equation}\label{eqn:estimate:boundedness} |b(z)| \lesssim \|\mathfrak{h}^{\ell}_{b,1}\|_{F^{p,\ell}_1} \| G_0(\cdot,z)\|_{F^{p,\ell}_{1} } \|G_{1}(\cdot,z)\|_{F^{p',\ell}_{1}}. \end{equation} We claim that: \begin{eqnarray} \| G_{0}(\cdot,z)\|_{F^{p,\ell}_{1} }&\simeq& (1+|z|)^{2(1-\ell)/p}\,e^{|z|^{2\ell}/8} \label{eqn:G0:estimate}\\ \| G_{1}(\cdot,z)\|_{F^{p',\ell}_{1} }&\lesssim& (1+|z|)^{2(\ell-1)/p}\,e^{|z|^{2\ell}/8} \label{eqn:G1:estimate} \end{eqnarray} These norm-estimates together with~{\eqref{eqn:estimate:boundedness}} give $|b(z)|\lesssim \|\hbla\|_{F^{p,\ell}_1}\, e^{|z|^{2\ell}/4}$. Now, for $1\le p<\infty$, \eqref{eqn:G0:estimate} is a consequence of Lemma \ref{lem:estimate}: \begin{align*} \| G_{0}(\cdot,z)\|^p_{F^{p,\ell}_{1} }&= \int_\C\left|e^{p(z\overline w)^\ell/4}\right|^2e^{-p|w|^{2\ell}/2} d\nu(w) ={\mathcal I}^{\ell}_{p/4,p/2,0}(z)\\ &\simeq (1+|z|)^{2(1-\ell)}\,e^{p|z|^{2\ell}/8}. \end{align*} If $p=\infty$, using the identity \begin{equation}\label{eqn:completarquadrats} \Re((z\overline{w})^\ell)-|w|^{2\ell}=-|w^\ell-z^\ell/2|^2+|z|^{2\ell}/4, \end{equation} we obtain \[ \| G_{0}(\cdot,z)\|_{F^{\infty,\ell}_{1} }=\sup_{w\in\C}|e^{(z\overline w)^\ell/2}|e^{-|w|^{2\ell}/2}= \,e^{|z|^{2\ell}/8}. 
\] On the other hand, by Proposition \ref{prop:pointwise} \begin{align*} |G_1(w,z)| &\lesssim (1+|zw|)^{\ell-1}\left( e^{\Re((z\overline w)^\ell)/2}+ e^{-\Re((z\overline w)^\ell)/2} \right) \\ & \lesssim (1+|z|)^{\ell-1}(1+|w|)^{\ell-1} \left( e^{\Re((z\overline w)^\ell)/2}+ e^{-\Re((z\overline w)^\ell)/2} \right). \end{align*} Therefore, for $1\le p'<\infty$, we have \[ \| G_{1}(\cdot,z)\|_{F^{p',\ell}_{1} }\lesssim J_1(z)^{1/p'}+J_2(z)^{1/p'}, \] where \[ J_1(z):= (1+|z|)^{p'(\ell-1)}\int_\C\left|e^{p'(z\overline w)^\ell/4}\right|^2e^{-p'|w|^{2\ell}/2} (1+|w|)^{p'(\ell-1)}d\nu(w) \] and \[ J_2(z):= (1+|z|)^{p'(\ell-1)}\int_\C\left|e^{-p'(z\overline w)^\ell/4}\right|^2e^{-p'|w|^{2\ell}/2} (1+|w|)^{p'(\ell-1)}d\nu(w). \] By Lemma \ref{lem:estimate}, \[ J_1(z) =(1+|z|)^{p'(\ell-1)}\,{\mathcal I}^{\ell}_{p'/4,p'/2,p'(\ell-1)}(z)\\ \simeq (1+|z|)^{2(p'-1)(\ell-1)}\,e^{p'|z|^{2\ell}/8}. \] Since $J_2(z)=J_1(e^{i\pi/\ell}z)$, we obtain the estimate \eqref{eqn:G1:estimate}. If $p'=\infty$, by using \eqref{eqn:completarquadrats}, we have \[ \| G_{1}(\cdot,z)\|_{F^{\infty,\ell}_{1} }=\sup_{w\in\C}|G_1(w,z)|e^{-|w|^{2\ell}/2}\lesssim (1+|z|)^{2(\ell-1)}\,e^{|z|^{2\ell}/8}.\qedhere \] \end{proof} \section{Proof of Theorem \ref{thm:main3}} \label{sect:ProofThm3} The next proposition will be used to prove Theorem \ref{thm:main3}. \begin{prop}\label{prop:duall12a} The dual of $F^{1,\ell}_{2\alpha}$ with respect to the pairing $\langle\cdot,\cdot\rangle\la$ is $F^{\infty,\ell}_{\alpha/2}$. 
\end{prop} \begin{proof} By Proposition \ref{prop:Interp-dual}, if $\Phi\in \left(F^{1,\ell}_{2\alpha}\right)^*$, there exists a unique $h\in F^{\infty,\ell}_{2\alpha}$ such that $$ \Phi(f)=\langle f,h\rangle^\ell_{2\alpha}=\langle f(z),h(z)e^{-\alpha|z|^{2\ell}}\rangle\la,\quad\mbox{for any $f\in E$.} $$ Since $\varphi(z)=h(z)e^{-\alpha|z|^{2\ell}}\in L^\infty$, Proposition \ref{prop:PtaLinf} gives $g=P\la(\varphi)\in F^{\infty,\ell}_{\alpha/2}$, so $$ \Phi(f)=\langle f,g \rangle\la,\quad\mbox{for any $f\in E$.} $$ Conversely, if $g\in F^{\infty,\ell}_{\alpha/2}$, by Proposition \ref{prop:PtaLinf} there exists $\varphi\in L^\infty$ such that $P\la(\varphi)=g$ and $\|\varphi\|_{L^\infty}\simeq \|g\|_{F^{\infty,\ell}_{\alpha/2}}$. Thus, for $f\in E$, we have $$ |\langle f,g\rangle\la|=|\langle f,\varphi\rangle\la|\le \|\varphi\|_{L^\infty}\|f\|_{F^{1,\ell}_{2\alpha}}. $$ This ends the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:main3}] The proof of this result follows from standard arguments used in the setting of classical spaces of holomorphic functions. We only include a sketch of the proof for the sake of completeness. \begin{enumerate} \item It is a consequence of Corollary \ref{cor:hankelform} and Theorem \ref{thm:main1}. \item It is a consequence of Propositions \ref{prop:PtaLinf} and \ref{prop:duall12a}. \item First we consider the case $1<p<\infty$. By~{\eqref{item:main32}}, in order to show that $F^{p,\ell}_\alpha\odot F^{p',\ell}_\alpha=F^{1,\ell}_{2\alpha}$, it is enough to prove that the dual of $F^{p,\ell}_\alpha\odot F^{p',\ell}_\alpha$ with respect to the pairing $\langle \cdot,\,\cdot\rangle^{\ell}_{\alpha}$ is $F^{\infty,\ell}_{\alpha/2}$. By~{\eqref{item:main31}}, if $b\in F^{\infty,\ell}_{\alpha/2}$ then $\Lambda^{\ell}_{b,\alpha}$ defines a bounded bilinear form on $F^{p,\ell}_\alpha\times F^{p',\ell}_\alpha$, so $h\mapsto \Lambda^{\ell}_{b,\alpha}(h,1)$ is a bounded linear form on $F^{p,\ell}_\alpha\odot F^{p',\ell}_\alpha$. 
Conversely, it is clear that any bounded linear form $\Phi$ on $F^{p,\ell}_\alpha\odot F^{p',\ell}_\alpha$ defines a bounded linear form on $F^{p,\ell}_\alpha$. Thus, by Proposition~{\ref{prop:Interp-dual}\eqref{item:Interp-dual3},} there exists $b\in F^{p',\ell}_\alpha$ such that $\Phi(h)=\Lambda^{\ell}_{b,\alpha}(h,1)$, for any $h\in E$. Since the space $E$ is dense in $F\pla$ and in $F^{p',\ell}_{\alpha}$, the bilinear form $\Lambda^{\ell}_{b,\alpha}$ extends boundedly to $F^{p,\ell}_\alpha\odot F^{p',\ell}_\alpha$. Thus, by part~{\eqref{item:main31}}, $b\in F^{\infty,\ell}_{\alpha/2}$. Similar arguments, using Proposition~{\ref{prop:Interp-dual}\eqref{item:Interp-dual4},} prove that $F^{1,\ell}_\alpha\odot \mathfrak{f}^{\infty,\ell}_\alpha=F^{1,\ell}_{2\alpha}$. Since $F^{1,\ell}_\alpha\odot F^{\infty,\ell}_\alpha\subset F^{1,\ell}_{2\alpha}$, we have $$ F^{1,\ell}_{2\alpha}= F^{1,\ell}_\alpha\odot\mathfrak{f}^{\infty,\ell}_\alpha\subset F^{1,\ell}_\alpha\odot F^{\infty,\ell}_\alpha\subset F^{1,\ell}_{2\alpha}, $$ which ends the proof. \qedhere \end{enumerate} \end{proof} \section{Proof of Theorem~{\ref{thm:main2}}} \label{sect:ProofThm2} In order to prove Theorem~{\ref{thm:main2}} we will use a standard technique based on the following lemma. \begin{lem}\label{lem:weakly:convergence} Let $1<p\le\infty$, $\ell\in\N$ and $\alpha>0$. Let $\{g_n\}_{n\in\N}$ be a sequence of functions in $E$. Then the following conditions are equivalent: \begin{enumerate} \item \label{item:weakly:convergence1} $g_n\to 0$ weakly in $F^{p,\ell}_{\alpha}$, if $p<\infty$, and in $\mathfrak{f}^{\infty,\ell}_{\alpha}$, if $p=\infty$. \item \label{item:weakly:convergence2} $g_n\to 0$ uniformly on compact subsets of $\C$ and ${\displaystyle\sup_{n\in\N}\|g_n\|_{F^{p,\ell}_{\alpha}}<\infty}$. \end{enumerate} \end{lem} \begin{proof} Assume that~{\eqref{item:weakly:convergence1}} holds.
Then it is well known that $\sup_{n\in\N}\|g_n\|_{F^{p,\ell}_{\alpha}}<\infty$, so $\{g_n\}$ is uniformly bounded on compact subsets of $\C$. Moreover, since $g_n\to 0$ weakly in $F^{p,\ell}_{\alpha}$, for each $z\in\C$, \[ g_n(z)= \langle g_n, K^{\ell}_{\alpha,z} \rangle^{\ell}_{\alpha}\to 0, \quad \mbox{as $n\to \infty$.}\] Consequently, $g_n\to 0$ uniformly on compact subsets of $\C$, by Montel's theorem. \par Conversely, assume that~{\eqref{item:weakly:convergence2}} holds. By Proposition~{\ref{prop:Interp-dual}\eqref{item:Interp-dual3}-\eqref{item:Interp-dual4}}, we have to show that $\langle f,\,g_n\rangle^{\ell}_{\alpha}\to 0$, as $n\to\infty$, for every $f\in F^{p',\ell}_{\alpha}$. Let $f\in F^{p',\ell}_{\alpha}$. Then, for every $R>0$, we have \[ \langle f,\,g_n\rangle^{\ell}_{\alpha}= \biggl\{\int_{|w|\le R}+\int_{|w|>R}\biggr\} f(w)\overline{g_n(w)}\,e^{-\alpha|w|^{2\ell}}d\nu(w)=I_n(R)+J_n(R). \] Since $p'<\infty$, we have that $\int_{|w|>R}|f(w)e^{-\alpha|w|^{2\ell}/2}|^{p'}\,d\nu(w)\to0$, as $R\to\infty$, so \[ \lim_{R\to\infty}\sup_{n\in\N}|J_n(R)|=0, \] by H\"{o}lder's inequality and the fact that $\sup_{n\in\N}\|g_n\|_{F^{p,\ell}_{\alpha}}<\infty$. Moreover, since $g_n\to 0$ uniformly on compact subsets of $\C$, $I_n(R)\to0$, as $n\to\infty$, for every $R>0$. Hence $\langle f,\,g_n\rangle^{\ell}_{\alpha}\to 0$, as $n\to\infty$, and the proof is complete. \end{proof} \begin{proof}[Proof of Theorem~{\ref{thm:main2}}] By Proposition~{\ref{prop:dilations:small;Hankel:operator}} we only have to prove Theorem~{\ref{thm:main2}} for $\alpha=1$. First we prove that, if either $\mathfrak{h}^{\ell}_{b,1}: F^{p,\ell}_1\to \overline{F^{p,\ell}_1}$ , $1<p<\infty$, or $\mathfrak{h}^{\ell}_{b,1}: \mathfrak{f}^{\infty,\ell}_1\to \overline{\mathfrak{f}^{\infty,\ell}_1}$ is compact, then $b\in \mathfrak{f}^{\infty,\ell}_{1/2}$.
Suppose that $\mathfrak{h}^{\ell}_{b,1}: F^{p,\ell}_1\to L^{p,\ell}_1$ is compact. We want to prove that $b\in \mathfrak{f}^{\infty,\ell}_{1/2}$. Let $G_0, G_1$ be the functions defined by~{\eqref{eqn:defnG0G1}}. Since $\mathfrak{h}^{\ell}_{b,1}: F^{p,\ell}_1\to L^{p,\ell}_1$ ($\mathfrak{h}^{\ell}_{b,1}:\mathfrak{f}^{\infty,\ell}_1\to L^{\infty,\ell}_1$) is bounded, the proof of the necessity in Theorem~{\ref{thm:main1}} (see~{\S\ref{subsec:proof:necessity:boundedness}}) implies that~{\eqref{eqn:symbol:representation}} holds, and so \begin{align*} |b(z)|&\lesssim \|\mathfrak{h}^{\ell}_{b,1}(G_{0}(\cdot,z))\|_{L^{p,\ell}_{1}}\, \|G_{1}(\cdot,z)\|_{F^{p',\ell}_{1}}\\ &= \|\mathfrak{h}^{\ell}_{b,1}(g_{0}(\cdot,z))\|_{L^{p,\ell}_{1}}\, \|G_{0}(\cdot,z)\|_{F^{p,\ell}_{1}} \|G_{1}(\cdot,z)\|_{F^{p',\ell}_{1}}, \end{align*} where $g_0(w,z)=G_0(w,z)/\|G_{0}(\cdot,z)\|_{F^{p,\ell}_{1}}$. Then~{\eqref{eqn:G0:estimate}} and~{\eqref{eqn:G1:estimate}} show that \[ |b(z)|e^{-|z|^{2\ell}/4}\lesssim \|{\mathfrak h}^{\ell}_{b,1}(g_{0}(\cdot,z))\|_{L^{p,\ell}_{1}}. \] It is easy to check that $g_0(\cdot,z)\to 0$ uniformly on compact subsets of $\C$, as $|z|\to\infty$. By Lemma~{\ref{lem:weakly:convergence}} it follows that $g_0(\cdot,z)\to 0$ weakly in $F^{p,\ell}_1$, as $|z|\to\infty$. Note that, if $p=\infty$, the same arguments show that $g_0(\cdot,z)\to 0$ weakly in $\mathfrak{f}^{\infty,\ell}_1$. Then the compactness of $\mathfrak{h}^{\ell}_{b,1}: F^{p,\ell}_1\to L^{p,\ell}_1$, $1<p<\infty$, ($\mathfrak{h}^{\ell}_{b,1}:\mathfrak{f}^{\infty,\ell}_1\to L^{\infty,\ell}_1$, respectively) shows that \begin{equation}\label{eqn:norm:convergence:to:0} \lim_{|z|\to \infty} \|\mathfrak{h}^{\ell}_{b,1}(g_0(\cdot,z))\|_{L^{p,\ell}_{1}}=0. \end{equation} Together with the preceding estimate, \eqref{eqn:norm:convergence:to:0} gives that $|b(z)|e^{-|z|^{2\ell}/4}\to0$, as $|z|\to\infty$, that is, $b\in \mathfrak{f}^{\infty,\ell}_{1/2}$. Now we consider the case $p=1$.
By Corollary \ref{cor:hankelform}, the operator $\mathfrak{h}^{\ell}_{b,1}: F^{1,\ell}_1\to \overline{F^{1,\ell}_1}$ is the adjoint of $\mathfrak{h}^{\ell}_{b,1}:{\mathfrak f}^{\infty,\ell}_1\to \overline{{\mathfrak f}^{\infty,\ell}_1}$. Thus the compactness of the first operator implies the compactness of the second operator and, as we have just shown, this implies that $b\in \mathfrak{f}^{\infty,\ell}_{1/2}$. Now assume that $b\in\mathfrak{f}^{\infty,\ell}_{1/2}$ and $1\le p<\infty$. Then, by Theorem~{\ref{thm:main1}}, $\mathfrak{h}^{\ell}_{b,1}$ is a bounded operator from $F^{p,\ell}_1$ to $\overline{F^{p,\ell}_1}$. Moreover, since $\mathfrak{f}^{\infty,\ell}_{1/2}$ is the closure of the polynomials in $F^{\infty,\ell}_{1/2}$, there is a sequence of polynomials $\{P_n\}_{n\in\N}$ such that $\|P_n-b\|_{F^{\infty,\ell}_{1/2}}\to0$. Therefore $\|\mathfrak{h}^{\ell}_{b,1} -\mathfrak{h}^{\ell}_{P_n,1}\|_{F^{p,\ell}_1}\to0$, because \[ \|\mathfrak{h}^{\ell}_{P_n,1} -\mathfrak{h}^{\ell}_{b,1}\|_{F^{p,\ell}_1} =\|\mathfrak{h}^{\ell}_{P_n-b,1}\|_{F^{p,\ell}_1}\lesssim \|P_n-b\|_{F^{\infty,\ell}_{1/2}}, \] by Theorem~{\ref{thm:main1}} again. Since $\{\mathfrak{h}^{\ell}_{P_n,1}\}_{n\ge0}$ is a sequence of finite rank operators, it follows that $\mathfrak{h}^{\ell}_{b,1}:F^{p,\ell}_{1}\to\overline{F^{p,\ell}_{1}}$ is compact. \par Note that the above argument also works by replacing the space $F^{p,\ell}_{1}$ by $\mathfrak{f}^{\infty,\ell}_1$, and hence the proof of Theorem~{\ref{thm:main2}} is complete. \end{proof} \section{Proof of Theorem~{\ref{thm:main4}}} \label{sect:ProofThm4} \subsection{The small Hankel operator on $F\dla$}\quad\par By Proposition~{\ref{prop:dilations:small;Hankel:operator}} it is enough to prove the result for $\alpha=1$, that is, to prove \begin{equation}\label{eqn:main:estimate} \|\mathfrak{h}^{\ell}_{b,1}\|_{\mathcal{S}_2(F^{2,\ell}_1)}^2 \simeq\|b\|^2_{F^{2,\ell}_{1/2,\Delta}}. 
\end{equation} In order to do that, first we estimate $\|\mathfrak{h}^{\ell}_{b,1}\|_{\mathcal{S}_2(F^{2,\ell}_1)}^2$ and $\|b\|^2_{F^{2,\ell}_{1/2,\Delta}}$ in terms of the Taylor coefficients of $b$. \begin{lem}\label{lem:coefficients} Let $\ell\in \N$ and let $b(z)=\sum_{m=0}^{\infty}c_mz^m$ be a function in $H^{\infty,\ell}_1$. Then \begin{equation}\label{eqn:coefficients:eq1} \|\mathfrak{h}^{\ell}_{b,1}\|_{\mathcal{S}_2(F^{2,\ell}_1)}^2 \simeq\sum_{m=0}^{\infty}|c_m|^2\,\, \Gamma\Bigl(\frac{m+1}{\ell}\Bigr)^2\, \sum_{k=0}^m\frac1{\Gamma\bigl(\frac{k+1}{\ell}\bigr) \Gamma\bigl(\frac{m-k+1}{\ell}\bigr)}, \end{equation} and \begin{equation}\label{eqn:coefficients:eq2} \|b\|^2_{F^{2,\ell}_{1/2,\Delta}}\simeq \sum_{m=0}^{\infty}|c_m|^2\,\,2^{m/\ell}\, \Gamma\Bigl(\frac{m}{\ell}+1\Bigr). \end{equation} \end{lem} \begin{proof} We begin proving \eqref{eqn:coefficients:eq1}. Let $e_n(z)=z^n/\|z^n\|_{F^{2,\ell}_1}$, $n=0,1,\cdots$. It is easy to check that \begin{align*} \overline{\mathfrak{h}^{\ell}_{b,1}(e_n)(z)} =\sum_{m=0}^\infty c_{n+m}\frac{\|w^{m+n}\|_{F^{2,\ell}_1}^2}{\|w^{m}\|_{F^{2,\ell}_1}\|w^{n}\|_{F^{2,\ell}_1}}\,e_m(z). \end{align*} Thus \begin{align*} \|\mathfrak{h}^{\ell}_{b,1}\|^2_{S_2(F^{2,\ell}_1)} &=\sum_{n=0}^\infty\|\mathfrak{h}^{\ell}_{b,1}(e_n)\|^2_{F^{2,\ell}_1} =\sum_{n=0}^\infty \sum_{m=0}^\infty |c_{n+m}|^2\frac{\|w^{m+n}\|_{F^{2,\ell}_1}^4}{\|w^{n}\|_{F^{2,\ell}_1}^2\|w^{m}\|_{F^{2,\ell}_1}^2}\\ &=\sum_{m=0}^\infty \sum_{n=0}^m |c_{m}|^2\frac{\|w^{m}\|_{F^{2,\ell}_1}^4}{\|w^{n}\|_{F^{2,\ell}_1}^2\|w^{m-n}\|_{F^{2,\ell}_1}^2}\\ &=\sum_{m=0}^{\infty}|c_m|^2\,\, \Gamma\Bigl(\frac{m+1}{\ell}\Bigr)^2\, \sum_{k=0}^m\frac1{\Gamma\bigl(\frac{k+1}{\ell}\bigr) \Gamma\bigl(\frac{m-k+1}{\ell}\bigr)}. 
\end{align*} Next we prove \eqref{eqn:coefficients:eq2}: \begin{align*} \|b\|_{F^{2,\ell}_{1/2,\Delta}}^2&=\sum_{m=0}^\infty |c_m|^2 \int_\C |z|^{2m}e^{-|z|^{2\ell}/2}(1+|z|^{2\ell-2})\,d\nu(z)\\ &\simeq\sum_{m=0}^\infty |c_m|^2 \int_0^\infty r^{2m+1}(1+r^{2\ell-2})\,e^{-r^{2\ell}/2} dr\\ &\simeq \sum_{m=0}^\infty |c_m|^2\, 2^{m/\ell}\, \Bigl\{\Gamma\Bigl(\frac{m+1}{\ell}\Bigr)+ \Gamma\Bigl(\frac{m}{\ell}+1\Bigr)\Bigr\}\\ &\simeq \sum_{m=0}^\infty |c_m|^2\, 2^{m/\ell}\,\Gamma\bigl(\frac{m}{\ell}+1\bigr).\qedhere \end{align*} \end{proof} From Lemma~{\ref{lem:coefficients}} it is clear that~{\eqref{eqn:main:estimate}} is equivalent to \[ \Gamma\Bigl(\frac{m+1}{\ell}\Bigr)^2\, \sum_{k=0}^m\frac1{\Gamma\bigl(\frac{k+1}{\ell}\bigr) \Gamma\bigl(\frac{m-k+1}{\ell}\bigr)} \simeq \,2^{m/\ell}\, \Gamma\Bigl(\frac{m}{\ell}+1\Bigr)\qquad(m\ge0), \] which can be written as \begin{equation}\label{eqn:estimate:coefficients} \sum_{k=0}^m \frac{\Gamma\bigl(\frac{m+2-\ell}{\ell}\bigr)} {\Gamma\bigl(\frac{k+1}{\ell}\bigr) \Gamma\bigl(\frac{m-k+1}{\ell}\bigr)} \simeq \,2^{m/\ell}\, \frac{\Gamma\Bigl(\frac{m+2-\ell}{\ell}\Bigr) \Gamma\Bigl(\frac{m}{\ell}+1\Bigr)} {\Gamma\bigl(\frac{m+1}{\ell}\bigr)^2}\qquad(m\ge8\ell). \end{equation} Now, by Stirling's formula, \[ \frac{\Gamma\Bigl(\frac{m+2-\ell}{\ell}\Bigr) \Gamma\Bigl(\frac{m}{\ell}+1\Bigr)} {\Gamma\bigl(\frac{m+1}{\ell}\bigr)^2}\simeq1 \qquad(m\ge8\ell). \] Hence~{\eqref{eqn:estimate:coefficients}} follows from the following lemma. \begin{lem}\label{lem:estimate:coefficients} \[ \sum_{k=0}^m \frac{\Gamma\bigl(\frac{m+2-\ell}{\ell}\bigr)} {\Gamma\bigl(\frac{k+1}{\ell}\bigr) \Gamma\bigl(\frac{m-k+1}{\ell}\bigr)}\simeq 2^{m/\ell} \qquad(m\ge8\ell). \] \end{lem} The key ingredient to prove Lemma~{\ref{lem:estimate:coefficients}} is the following important inequality. 
\begin{Chernoff:inequality}[{\cite[(1.3.10) p.16]{dudley}}] \[ \sum_{0\le i\le n/4}\binom{n}{i}\le 2^n e^{-{n/8}}, \quad\mbox{ for every $n\ge0$.} \] \end{Chernoff:inequality} \begin{proof}[Proof of Lemma~{\ref{lem:estimate:coefficients}}] Let $m=n\ell+r$, where $n\ge8$ and $0\le r<\ell$. Then we may decompose the sum $S(m)$ of the statement as \begin{eqnarray*} S(m) &=& \sum_{j=0}^{n-1}\sum_{s=0}^{\ell-1} \frac{\Gamma\bigl(\frac{m+2-\ell}{\ell}\bigr)} {\Gamma\bigl(\frac{j\ell+s+1}{\ell}\bigr) \Gamma\bigl(\frac{m-(j\ell+s)+1}{\ell}\bigr)} +\sum_{s=0}^r \frac{\Gamma\bigl(\frac{m+2-\ell}{\ell}\bigr)} {\Gamma\bigl(\frac{n\ell+s+1}{\ell}\bigr) \Gamma\bigl(\frac{m-(n\ell+s)+1}{\ell}\bigr)}\\ &=& \sum_{s=0}^{\ell-1}\sum_{j=0}^{n-1} \frac{\Gamma\bigl(n-1+\frac{r+2}{\ell}\bigr)} {\Gamma\bigl(j+\frac{s+1}{\ell}\bigr) \Gamma\bigl(n-j+\frac{r-s+1}{\ell}\bigr)} +\sum_{s=0}^r \frac{\Gamma\bigl(n-1+\frac{r+2}{\ell}\bigr)} {\Gamma\bigl(n+\frac{s+1}{\ell}\bigr) \Gamma\bigl(\frac{r-s+1}{\ell}\bigr)}\\ &=& \sum_{s=0}^{\ell-1} \frac{\Gamma\bigl(n-1+\frac{r+2}{\ell}\bigr)} {\Gamma\bigl(\frac{s+1}{\ell}\bigr) \Gamma\bigl(n+\frac{r-s+1}{\ell}\bigr)} + \sum_{s=0}^r \frac{\Gamma\bigl(n-1+\frac{r+2}{\ell}\bigr)} {\Gamma\bigl(n+\frac{s+1}{\ell}\bigr) \Gamma\bigl(\frac{r-s+1}{\ell}\bigr)}\\ & & + \sum_{s=0}^{\ell-1} \Bigl\{\sum_{1\le j\le\frac{n}4}+ \sum_{\frac{n}4<j<\frac{3n}4}+ \sum_{\frac{3n}4\le j\le n-1} \Bigr\} \frac{\Gamma\bigl(n-1+\frac{r+2}{\ell}\bigr)} {\Gamma\bigl(j+\frac{s+1}{\ell}\bigr) \Gamma\bigl(n-j+\frac{r-s+1}{\ell}\bigr)}\\ &=& S_1(m)+S_2(m)+S_3(m)+S_4(m)+S_5(m). \end{eqnarray*} In order to estimate the above five sums we recall that $\Gamma$ is an increasing function on $[2,\infty)$. Then, since $\frac{r+2}{\ell}\le2$, we have that \begin{equation}\label{estimate:numerator} \Gamma\bigl(n-1+\tfrac{r+2}{\ell})\le\Gamma(n+1). 
\end{equation} On the other hand, since \[ \frac2{\ell}-1\le\frac{r-s+1}{\ell}\le1 \quad\mbox{ and }\quad \frac1{\ell}\le\frac{s+1}{\ell}\le1 \qquad(0\le s<\ell), \] we also have that \begin{equation}\label{estimate:denominator1} \Gamma(n-j-1)\le\Gamma\bigl(n-j+\tfrac{r-s+1}{\ell}\bigr) \qquad(0\le s<\ell,\,0\le j\le n-3), \end{equation} and \begin{equation}\label{estimate:denominator2} \Gamma(j)\le\Gamma\bigl(j+\tfrac{s+1}{\ell}\bigr) \qquad(2\le j,\,0\le s<\ell). \end{equation} Now~{\eqref{estimate:numerator}} and~{\eqref{estimate:denominator1}} imply that \begin{equation}\label{estimate:S1} S_1(m)\lesssim\frac{\Gamma(n+1)}{\Gamma(n-1)}=n(n-1) \lesssim 2^n\simeq 2^{m/\ell}, \end{equation} and, in particular, \begin{equation}\label{estimate:S2} S_2(m) =\sum_{s=0}^r \frac{\Gamma\bigl(n-1+\frac{r+2}{\ell}\bigr)} {\Gamma\bigl(n+\frac{r-s+1}{\ell}\bigr) \Gamma\bigl(\frac{s+1}{\ell}\bigr)} \le S_1(m)\lesssim 2^{m/\ell}. \end{equation} Moreover, by~{\eqref{estimate:numerator}}, \eqref{estimate:denominator1} and~{\eqref{estimate:denominator2}} we have that \begin{eqnarray*} S_3(m)+S_5(m) &\lesssim& \sum_{1\le j\le\frac{n}4} \frac{\Gamma(n+1)}{\Gamma(j)\Gamma(n-j+1)} +\sum_{\frac{3n}4\le j\le n-1} \frac{\Gamma(n+1)}{\Gamma(j)\Gamma(n-j+1)}\\ &=& \sum_{1\le j\le\frac{n}4} \frac{\Gamma(n+1)}{\Gamma(j)\Gamma(n-j+1)} +\sum_{1\le j\le\frac{n}4} \frac{\Gamma(n+1)}{\Gamma(n-j)\Gamma(j+1)}\\ &=& \sum_{1\le j\le\frac{n}4} \frac{\Gamma(n+1)}{\Gamma(j)\Gamma(n-j+1)}\, \left(1+\frac{n-j}{j}\right)\\ &=&n\sum_{1\le j\le\frac{n}4} \frac{\Gamma(n+1)}{\Gamma(j+1)\Gamma(n-j+1)} \le n\sum_{0\le j\le\frac{n}4}\binom{n}{j}. \end{eqnarray*} So Chernoff's inequality gives \begin{equation}\label{estimate:S3-S5} S_3(m)+S_5(m)\lesssim n\,e^{-n/8}\,2^n\lesssim 2^n \simeq 2^{m/\ell}. 
\end{equation} To estimate $S_4(m)$ we apply Lemma~{\ref{lem:Stirling}} and obtain that \begin{eqnarray*} S_4(m) &\simeq& \sum_{s=0}^{\ell-1} \sum_{\frac{n}4<j<\frac{3n}4} \frac{(n-1)^{\frac{r+2}{\ell}}} {j^{\frac{s+1}{\ell}}(n-j)^{\frac{r-s+1}{\ell}}}\, \frac{\Gamma(n-1)}{\Gamma(j)\Gamma(n-j)}\\ &\simeq& \sum_{\frac{n}4<j<\frac{3n}4} \frac{\Gamma(n-1)}{\Gamma(j)\Gamma(n-j)} \simeq \sum_{\frac{n}4<j<\frac{3n}4} \binom{n}{j}=2^n-2\sum_{0\le j\le\frac{n}4}\binom{n}{j}. \end{eqnarray*} Therefore Chernoff's inequality shows that \begin{equation}\label{estimate:S4} S_4(m)\simeq 2^n\simeq 2^{m/\ell}. \end{equation} By \eqref{estimate:S1}, \eqref{estimate:S2}, \eqref{estimate:S3-S5} and \eqref{estimate:S4} we conclude that $S(m)\simeq 2^{m/\ell}$, and the proof is complete. \end{proof} \subsection{The small Hankel operator on $L\dla$}\quad\par In this section we characterize the membership of $\hvla$ in the Hilbert-Schmidt class $\mathcal{S}_2(L^{2,\ell}_{\alpha})$ of $L^{2,\ell}_{\alpha}$. Let $L^2_\Delta:=L^2(\C, (1+|z|)^{2(\ell-1)}\,d\nu)$. Then we have: \begin{thm}\label{thm:HSL2} $\mathfrak{h}^{\ell}_{\varphi,\alpha} \in \mathcal{S}_2(L^{2,\ell}_{\alpha})$ if and only if $\varphi \in L^2_\Delta$. Moreover, \[ \|\mathfrak{h}^{\ell}_{\varphi,\alpha}\|_{\mathcal{S}_2(L^{2,\ell}_{\alpha})} \simeq\| \varphi\|_{L^2_\Delta}. \] In particular, if $\varphi \in L^2_\Delta$, then $\mathfrak{h}^{\ell}_{\varphi,\alpha}\in\mathcal{S}_2(F^{2,\ell}_{\alpha})$ and $\|\mathfrak{h}^{\ell}_{\varphi,\alpha}\|_ {\mathcal{S}_2(F^{2,\ell}_{\alpha})}\lesssim \| \varphi\|_{L^2_\Delta}.$ \end{thm} \begin{proof} Note that \[ \mathfrak{h}^\ell_{\varphi,\alpha}(f)(z)=\overline{P\la(\overline{f} \varphi)(z)}= \int_{\C}K\la(w,z)f(w)\,\overline{\varphi}(w)\,e^{-\alpha|w|^{2\ell}}\,d\nu(w), \] so $\mathfrak{h}^{\ell}_{\varphi,\alpha}$ is an integral operator, with respect to the positive measure $e^{-\alpha|w|^{2\ell}}\,d\nu(w)$, whose integral kernel is $K^\ell_\alpha(w,z)\overline{\varphi(w)}$.
So it is well known (see~{\cite[Theorem~3.5]{Zhu2007}}, for example) that \begin{equation*}\begin{split} \|\mathfrak{h}^{\ell}_{\varphi,\alpha}\|_{\mathcal{S}_2(L^{2,\ell}_{\alpha})}^2 &=\int_{\C}\biggl(\int_{\C} |K^\ell_\alpha(w,z)|^2|\varphi(w)|^2\,e^{-\alpha|w|^{2\ell}}\,d\nu(w)\biggr) e^{-\alpha|z|^{2\ell}}\,d\nu(z) \\ & =\int_{\C}|\varphi(w)|^2 \biggl(\int_{\C}|K^\ell_\alpha(w,z)|^2e^{-\alpha|z|^{2\ell}}\,d\nu(z)\biggr) e^{-\alpha|w|^{2\ell}}\,d\nu(w)\\ &= \int_{\C} |\varphi(w)|^2K\la(w,w)e^{-\alpha|w|^{2\ell}}\,d\nu(w)\\ &\simeq \int_{\C} |\varphi(w)|^2(1+|w|)^{2(\ell-1)}\,d\nu(w), \end{split}\end{equation*} where the last equivalence follows from $ K\la(w,w)=H\la(|w|^2)\simeq (1+|w|)^{2(\ell-1)}e^{\alpha|w|^{2\ell}}$ (see \eqref{eqn:KH} and \eqref{eqn:asimpE}). This completes the proof. \end{proof} Finally we show that the space of Hilbert-Schmidt symbols for $F\dla$ is just the projection of the space of Hilbert-Schmidt symbols for $L\dla$. \begin{prop}\label{prop:Ptap} The projection $P\la$ is bounded from $L^2_\Delta$ onto $F^{2,\ell}_{\alpha/2,\Delta}$. \end{prop} \begin{proof} Let $\{e_m\}_{m\in\N}$ be an orthonormal basis of $F\dla$ and let $\{u_m\}_{m\in\N}$ be an orthonormal basis of the orthogonal complement of $F\dla$ in $L\dla$. By Theorems \ref{thm:HSL2} and \ref{thm:main4}, we have, for any $\varphi\in L^2_\Delta$, \begin{align*} \|\varphi\|_{L^2_\Delta}^2\simeq\|\hvla\|_{\mathcal{S}_2(L\dla)}^2 &=\sum_{m=1}^\infty \|\hvla(e_m)\|_{L\dla}^2+ \sum_{m=1}^\infty \|\hvla (u_m)\|_{L\dla}^2\\ &\ge \sum_{m=1}^\infty \|\mathfrak{h}^{\ell}_{P\la(\varphi),\alpha}(e_m)\|_{F\dla}^2\simeq \|P\la(\varphi)\|_{F^{2,\ell}_{\alpha/2,\Delta}}^2. \end{align*} So we have just proved that $P\la:L^2_\Delta \to F^{2,\ell}_{\alpha/2,\Delta}$ is bounded. Let $b\in F^{2,\ell}_{\alpha/2,\Delta}$. Since $F^{2,\ell}_{\alpha/2,\Delta}\subset F^{2,\ell}_{\alpha/2}$, we have $b=P_{\alpha/2}^\ell(b)$, by \eqref{eqn:reproducing:property}.
By \eqref{eqn:kernel}, $K_{\alpha/2}^\ell(z,w)=2^{-1/\ell} K^\ell_\alpha(z,2^{-1/\ell}w)$, so \begin{align*} b(z)&=2^{-1/\ell} \int_\C K^\ell_\alpha\bigl(z,2^{-1/\ell}w\bigr) b(w) e^{-\frac{\alpha}2|w|^{2\ell}}d\nu(w)\\ &=2^{1/\ell} \int_\C K^\ell_\alpha(z,u) b(2^{1/\ell}u) e^{-2\alpha|u|^{2\ell}}d\nu(u)=P\la(\varphi)(z), \end{align*} where $\varphi(u)=2^{1/\ell} b(2^{1/\ell}u) e^{-\frac{\alpha}4|2^{1/\ell}u|^{2\ell}}$, which clearly belongs to $L^2_\Delta$. \end{proof}
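As a numerical aside, the fact underlying Theorem~\ref{thm:HSL2}, namely that the Hilbert-Schmidt norm of an integral operator equals the $L^2$ norm of its kernel (cf.~\cite[Theorem~3.5]{Zhu2007}), can be illustrated on a discretized kernel. The following is a generic linear-algebra sketch with a random sampled kernel and equal-weight quadrature, not a computation with the actual kernel $K^\ell_\alpha$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretize an integral operator (Tf)(x) = int K(x, y) f(y) dmu(y) on n
# quadrature points of equal weight h: in an orthonormal basis the matrix
# entries are sqrt(h) * K_ij * sqrt(h) = K_ij * h.
n, h = 200, 0.01
K = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
T = K * h

# Hilbert-Schmidt norm^2 = sum of squared singular values = squared Frobenius
# norm, which is exactly the double Riemann sum discretizing
# int int |K(x, y)|^2 dmu(x) dmu(y).
hs_sq = np.sum(np.linalg.svd(T, compute_uv=False) ** 2)
kernel_l2_sq = np.sum(np.abs(K) ** 2) * h * h

assert np.isclose(hs_sq, kernel_l2_sq)
```

Here the Frobenius norm of the matrix plays the role of the Hilbert-Schmidt norm; the identity is basis-independent, which is what the proof of Theorem~\ref{thm:HSL2} exploits.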
\section{Introduction} \label{intro} Essentially all correlated electron high temperature superconductors display an anomalous metallic state at temperatures above the superconducting critical temperature at optimal doping \cite{Keimer15,Matsuda2010,SK2011}. This metallic state has a `strange' linearly-increasing dependence of the resistivity, $\rho$, on temperature, $T$; it can also exhibit bad metal behavior with a resistivity much larger than the quantum unit $\rho \gg h/e^2$ (in two spatial dimensions) \cite{Emery95}. More recently, strange metals have also been demonstrated to have a remarkable linear-in-$B$ magnetoresistance, with the crossover between the linear-in-$T$ and linear-in-$B$ behavior occurring at $\mu_BB \sim k_B T$ \cite{Hayes2016,Giraldo2017}. \change{ This paper will present a model of a strange metal which exhibits the above linear-in-$T$ {\it and\/} linear-in-$B$ behavior. The model builds on a lattice array of quantum `dots' or `islands', each of which is described by a Sachdev-Ye-Kitaev (SYK) model of fermions with random all-to-all interactions \cite{SY93,kitaev2015talk}. The SYK models are 0+1 dimensional quantum theories which exhibit a `local criticality'. They have drawn a great deal of interest for a variety of reasons: \begin{itemize} \item The SYK models are the simplest solvable models without quasiparticle excitations. They can also be used as fully quantum building blocks for theories of strange metals in non-zero spatial dimensions \cite{PG98,Balents2017}. \item The SYK models exhibit many-body chaos \cite{kitaev2015talk,Maldacena2016}, and saturate the lower bound on the Lyapunov time to reach chaos \cite{Maldacena2016a}. So they are ``the most chaotic'' quantum many-body systems. The presence of maximal chaos is linked to the absence of quasiparticle excitations, and the proposed \cite{ssbook} lower bound of order $\hbar /(k_B T)$ on a `dephasing time'. 
It is important to note here that the co-existence of many-body chaos and solvability is quite remarkable: essentially all other solvable models ({\em e.g.\/} integrable lattice models in one dimension) do not exhibit many-body chaos. \item Related to their chaos, the SYK models exhibit \cite{Sonner17} eigenstate thermalization (ETH) \cite{Deutsch91,Srednicki94}, and yet many aspects are exactly solvable. \item The SYK models are dual to gravitational theories in $1+1$ dimensions which have a black hole horizon. The connection between the SYK models and black holes with a near-horizon AdS$_2$ geometry was proposed in Refs.~\cite{SS10,SS10b}, and made much sharper in Refs.~\cite{kitaev2015talk,nearlyads2,kitaev2017}. This connection has been used to examine aspects of the black hole information problem \cite{Maldacena2017}. \end{itemize} } More specifically, a single SYK site is a 0+1 dimensional non-Fermi liquid in which the imaginary-time ($\tau$) fermion Green's function has the low $T$ `conformal' form \cite{SY93,PG98,Faulkner09,Sachdev2015} \begin{equation} G(\tau) \sim \left( \frac{T}{\sin(\pi T \tau)} \right)^{1/2} e^{-2 \pi \mathcal{E} T \tau}\,, \quad 0 < \tau < 1/T \,, \label{Glocal} \end{equation} where $\mathcal{E}$ is a parameter controlling the particle-hole asymmetry. \change{ In frequency space, this correlator is $G(\omega) \sim 1/\sqrt{\omega}$ for $\omega \gg T$, and this implies non-Fermi liquid behavior.} \rchange{A Fermi liquid has the exponent 1/2 in Eq.~(\ref{Glocal}) replaced by unity, and a constant density of states with $G(\omega)$ frequency independent. The Green's function in Eq.~(\ref{Glocal})} implies \cite{SY93} a `marginal' \cite{Varma89} susceptibility, $\chi$, with a real part which diverges logarithmically with vanishing frequency ($\omega$) or $T$. 
Specifically, in the all-to-all limit of the SYK model, vertex corrections are sub-dominant, and \rchange{the Fourier transform of} $\chi (\tau) = - G(\tau) G(-\tau)$ leads to the spectral density \begin{equation} \mbox{Im}\, \chi (\omega) \sim \tanh \left( \frac{\omega}{2 T} \right)\,, \label{chitanh} \end{equation} whose Hilbert transform leads to the noted logarithmic divergence. \rchange{In contrast, a Fermi liquid has $\mbox{Im}\, \chi (\omega) \sim \omega$.} The form in Eq.~(\ref{chitanh}) is consistent with recent electron scattering observations \cite{Abbamonte17}. A linear-in-$T$ resistivity now follows upon considering itinerant fermions scattering off such a local susceptibility, and the itinerant fermions realize a marginal Fermi liquid (MFL) with an $\omega \ln \omega$ self-energy \cite{Varma89,SY93,SS10,Faulkner2013}. \change{ We now review previous approaches to building a finite-dimensional non-Fermi liquid from the $0+1$ dimensional SYK model. An early model} for a bulk strange metal in finite spatial dimensions was provided by Parcollet and Georges \cite{PG98}. They considered a doped Mott insulator described by a random $t$-$J$ model at hole density $\delta$, where $t$ is the root-mean-square (r.m.s.) electron hopping, and $J$ is the r.m.s. exchange interaction. At low doping with $\delta t \ll J$, they found strange metal behavior in the intermediate $T$ regime $E_c < T < J$, where the coherence energy $E_c = (\delta t)^2/J$. \change{ In this intermediate energy range, they found that the electron Green's function had the local form of the SYK model in Eq.~(\ref{Glocal}). Moreover, this metal had `bad metal' resistivity with $\rho \sim (h/e^2) (T/E_c) \gg (h/e^2)$. We will refer to such a strange metal as an `incoherent metal' (IM). 
This IM is to be contrasted with an MFL, which we will describe below; the MFL does not appear in the model of Parcollet and Georges.} Another finite-dimensional model of an IM appeared in the recent work of Song {\it et al.} \cite{Balents2017}. They considered a lattice of SYK sites, with r.m.s. on-site interaction $U$, and r.m.s. inter-site hopping $t$. \change{Each site was a quantum island with $N$ orbitals, and had random on-site interactions with typical magnitude $U$. Electrons were allowed to hop between nearest-neighbor states, with a random matrix element of magnitude $t$. Although this is a model with strong interactions, the remarkable fact is that the random nature of the interactions renders it exactly solvable. As in Ref.~\onlinecite{PG98}, Song {\it et al.}} found an IM in the intermediate regime $E_c < T < U$, with a local electron Green's function as in Eq.~(\ref{Glocal}), and a bad metal resistivity $\rho \sim (h/e^2) (T/E_c)$. Their coherence scale was $E_c = t^2/U$. (This lattice SYK model should be contrasted with earlier studies \cite{Gu2017,Sachdev2017}, which only had fermion interaction terms between neighboring SYK sites: the latter models realize disordered metallic states without quasiparticle excitations as $T \rightarrow 0$, but have a $T$-independent resistivity.) \change{Although these models \cite{PG98,Balents2017} reproduce bad metal resistivity, we will show here that they are unable to describe the experimentally observed large magnetoresistance noted earlier \cite{Hayes2016,Giraldo2017}. The random nature of the hopping between the sites, and the associated absence of a Fermi surface, results in negligible magnetoresistance. Significant orbital magnetoresistance only appears in models which have fermions with non-random hopping and a well-defined Fermi surface. 
Note that the existence of a Fermi surface does not directly imply the presence of well-defined quasiparticles: it is possible to have a sharp Fermi surface in momentum space (where the inverse fermion Green's function vanishes) while the quasiparticle spectral function is broad in frequency space.} \begin{figure} \begin{center} \includegraphics[height=2.1in]{Fig1a.pdf} ~~~~~ \includegraphics[height=2.1in]{Fig1b.pdf} \end{center} \caption{(a) A cartoon of our microscopic model. \rchange{Itinerant} conduction electrons (green) hop around on a lattice (black). At each lattice site, they interact locally and randomly with SYK \rchange{quantum dots} (blue) through an interaction (orange) that independently conserves the numbers of conduction and island electrons. (b) Finite-temperature regimes of the model. When the conduction electron bandwidth is large enough, it realizes a disordered marginal-Fermi liquid (MFL) for the conduction electrons for all temperatures $T\ll J$ (Sec.~\ref{infiniband}). For a finite bandwidth, there can be a finite-temperature crossover to an `incoherent metal' (IM), in which all notion of electron momentum is lost, if the coupling $g$ is large enough (Sec.~\ref{dialup}). Note that we always have $J\gg T$ and $J\gtrsim g$.} \label{Modelfig} \end{figure} \change{With the aim of obtaining a well-defined Fermi surface of itinerant electrons, in this paper we consider a lattice of SYK islands coupled to a separate band of itinerant \rchange{conduction} electrons \rchange{as illustrated in Fig.~\ref{Modelfig}}.} Our model is in the spirit of effective Kondo lattice models which have been proposed as models of the physics of the disordered, single-band Hubbard model \cite{MSB89,BF92,senthilLM}. Other two band models of itinerant electrons coupled to SYK excitations have been considered in Refs.~\onlinecite{BGG01,McGreevy2017}. Our model exhibits MFL behavior as $T \rightarrow 0$, with a linear-in-$T$ resistivity, and a $T \ln T$ specific heat. 
For an appropriate range of parameters, there is a crossover at higher $T$ to an IM regime, also with a linear-in-$T$ resistivity. The itinerant electrons have a {\it non\/}-random hopping $t$, the SYK sites have a random interaction with r.m.s. strength $J$, and these two sub-systems interact with a random Kondo-like exchange of r.m.s. strength $g$: see Fig.~\ref{Modelfig}a for a schematic illustration. Fig.~\ref{Modelfig}b illustrates the regimes of MFL and IM behavior in our model. \change{In the MFL regime, our model exhibits a well-defined Fermi surface, albeit of damped quasiparticles.} The magnetotransport properties of this model will be a significant focus of our analysis. \change{We will show that the MFL regime with a Fermi surface indeed has a sizeable magnetoresistance, with characteristics in accord with observations.} We find that the longitudinal and Hall conductivities, \change{of the MFL regime}, can be written as scaling functions of $B/T$, as shown in Eq.~(\ref{eq:Bscale}). In contrast, the $B$ dependence is much less singular in the IM regime. \change{Although a $B/T$ scaling is obtained in the MFL in this computation, the magnetoresistance does not increase linearly with $B$, and instead saturates at large $B$. To obtain a non-saturating magnetoresistance we} consider a macroscopically disordered sample with domains of MFLs with varying electron densities; employing earlier work on classical electrical transport in inhomogeneous ohmic conductors \cite{Dykhne1971,Stroud1975,Parish2003,Parish2005,Guttal2005,Song2015,Ramakrishnan2017}, we obtain the observed linear-in-$B$ magnetoresistance with a crossover scale at $B \sim T$. This paper is organized as follows: In Sec.~\ref{basemodel}, we introduce our basic microscopic model of a disordered MFL, and determine its single-electron properties and finite-temperature crossovers in Sec.~\ref{mflandim}. 
In Sec.~\ref{Transport}, we solve for transport and magnetotransport properties of this basic model exactly in various analytically-tractable regimes. In Sec.~\ref{EMARRN}, we introduce the effective-medium approximation and apply it to a macroscopically disordered sample containing domains of the basic model, obtaining analytical results for the global magnetotransport properties for certain simplified considerations of macroscopic disorder. We summarize our results and place them in the context of recent experiments in Sec.~\ref{discuss}. \section{Microscopic model} \label{basemodel} We consider $M$ flavors of conduction electrons, $c$, hopping on a lattice that are coupled locally and randomly to SYK islands on each lattice site (Fig.~\ref{Modelfig}a). The islands contain $N$ flavors of valence electrons, $f$, which interact among themselves in such a way that they realize SYK models. The Hamiltonian for our system is given by \begin{align} &H = -t\sum_{\langle rr^\prime\rangle;~i=1}^M (c^\dagger_{ri} c_{r^\prime i} + \mathrm{h.c.}) - \mu_c \sum_{r;~i=1}^M c^\dagger_{ri} c_{ri} - \mu \sum_{r;~i=1}^N f^\dagger_{ri} f_{ri} \nonumber \\ &+ \frac{1}{NM^{1/2}}\sum_{r;~i,j=1}^N \sum_{k,l=1}^M g^r_{ijkl} f^\dagger_{ri}f_{rj}c^\dagger_{rk} c_{rl} + \frac{1}{N^{3/2}}\sum_{r;~i,j,k,l=1}^NJ^r_{ijkl} f^\dagger_{ri}f^\dagger_{rj}f_{rk}f_{rl}. \label{ham} \end{align} We will take the limits of $M=\infty$ and $N=\infty$, but we will be interested in values of $M/N$ that are at most $\mathcal{O}(1)$. We choose $J^r_{ijkl}$ and $g^r_{ijkl}$ as independent complex Gaussian random variables, with $\ll J^r_{ijkl} J^{r^\prime}_{lkij}\gg = (J^2/8)\delta_{rr^\prime}$ and $\ll g^r_{ijkl} g^{r^\prime}_{jilk} \gg = g^2 \delta_{rr^\prime}$ and all other $\ll..\gg$'s being zero, where $\ll..\gg$ denotes disorder-averaging. 
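For concreteness, one disorder realization with these statistics can be generated as follows. This is an illustrative \texttt{numpy} sketch with arbitrary small tensor sizes; it enforces the Hermiticity constraint $g^r_{jilk}=(g^r_{ijkl})^*$ required by $H=H^\dagger$, under which the stated covariance reduces to $\ll g^r_{ijkl}g^{r}_{jilk}\gg=\ll|g^r_{ijkl}|^2\gg=g^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, g = 16, 8, 0.7   # illustrative flavor counts and coupling strength

# Draw an unconstrained complex Gaussian tensor with <|a|^2> = 2 g^2 ...
a = g * (rng.standard_normal((N, N, M, M))
         + 1j * rng.standard_normal((N, N, M, M)))
# ... and symmetrize so that g_{jilk} = conj(g_{ijkl}), which makes the term
# g_{ijkl} f^dag_i f_j c^dag_k c_l Hermitian and halves the variance back to
# <|g_{ijkl}|^2> = g^2.
gt = 0.5 * (a + np.conj(a.transpose(1, 0, 3, 2)))

# Hermiticity constraint holds exactly:
assert np.allclose(gt, np.conj(gt.transpose(1, 0, 3, 2)))
# <g_{ijkl} g_{jilk}> = <|g_{ijkl}|^2> is close to g^2 (statistical check):
assert abs(np.mean(np.abs(gt) ** 2) - g**2) < 0.1 * g**2
```

The same construction, with the additional index symmetrizations, applies to the $J^r_{ijkl}$ tensor; only the covariances stated above are fixed by the model, so the overall normalization here is a choice.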
\change{Note that $t$ is non-random, and this will lead to a Fermi surface for the $c$ fermions.} The disorder-averaged action then is \begin{align} &S = \int_0^\beta d\tau \left[ \sum_{r;~i=1}^M c^\dagger_{ri}(\tau)(\partial_\tau -\mu_c)c_{ri}(\tau)-t\sum_{\langle rr^\prime \rangle;~i=1}^M (c^\dagger_{ri}(\tau) c_{r^\prime i}(\tau) + \mathrm{h.c.})+\sum_{r;~i=1}^N f^\dagger_{ri}(\tau)(\partial_\tau-\mu)f_{ri}(\tau)\right] \nonumber \\ &-M\frac{g^2}{2}\sum_r\int_0^\beta d\tau d\tau^\prime G^c_r(\tau-\tau^\prime)G^c_r(\tau^\prime-\tau) G_r(\tau-\tau^\prime)G_r(\tau^\prime-\tau) \nonumber \\ &-N\frac{J^2}{4}\sum_r \int_0^\beta d\tau d\tau^\prime G_r^2(\tau-\tau^\prime)G_r^2(\tau^\prime-\tau) -N\sum_r \int_0^\beta d\tau d\tau^\prime \Sigma_r(\tau-\tau^\prime)\left(G_r(\tau^\prime-\tau)+\frac{1}{N}\sum_{i=1}^Nf^\dagger_{ri}(\tau)f_{ri}(\tau^\prime)\right) \nonumber \\ &-M\sum_r\int_0^\beta d\tau d\tau^\prime \Sigma^c_r(\tau-\tau^\prime)\left(G^c_r(\tau^\prime-\tau)+\frac{1}{M}\sum_{i=1}^Mc^\dagger_{ri}(\tau)c_{ri}(\tau^\prime)\right), \end{align} where we have followed the usual strategy for SYK models~\cite{Sachdev2015,Sachdev2017} and introduced the auxiliary fields $G,\Sigma,G^c,\Sigma^c$ corresponding to Green's functions and self-energies of the $f$ and $c$ fermions respectively at each lattice site. In the $M,N=\infty$ limit, the integrals over the $\Sigma,\Sigma^c$ fields enforce the definitions of $G,G^c$ at each lattice site $r$. 
The large $M$, $N$ saddle-point equations are obtained by varying the action with respect to these $G$ and $\Sigma$ fields after integrating out the fermions \begin{align} &\Sigma_r(\tau-\tau^\prime) = \Sigma(\tau-\tau^\prime) = - J^2 G_r^2(\tau-\tau^\prime)G_r(\tau^\prime-\tau) - \frac{M}{N} g^2 G_r(\tau-\tau^\prime)G_r^c(\tau-\tau^\prime)G_r^c(\tau^\prime-\tau) \nonumber \\ &= - J^2 G^2(\tau-\tau^\prime)G(\tau^\prime-\tau) - \frac{M}{N} g^2 G(\tau-\tau^\prime)G^c(\tau-\tau^\prime)G^c(\tau^\prime-\tau), \nonumber \\ &G(i\omega_n) = \frac{1}{i\omega_n + \mu - \Sigma(i\omega_n)}, \label{Dysonsaddle0} \end{align} \change{and} \begin{align} &\Sigma^c_r(\tau-\tau^\prime) = \Sigma^c(\tau-\tau^\prime) = -g^2G^c_r(\tau-\tau^\prime)G_r(\tau-\tau^\prime)G_r(\tau^\prime-\tau) = -g^2G^c(\tau-\tau^\prime)G(\tau-\tau^\prime)G(\tau^\prime-\tau), \nonumber \\ &G^c(i\omega_n) = \int \frac{d^dk}{(2\pi)^d} \frac{1}{i\omega_n - \epsilon_k + \mu_c - \Sigma^c(i\omega_n)} \equiv \int \frac{d^dk}{(2\pi)^d}G^c(k,i\omega_n). \label{Dysonsaddle} \end{align} \change{The last expression shows that the $c$ fermions have a dispersion $\epsilon_k$ and an associated Fermi surface; the lifetime of the Fermi surface excitations will be determined by the frequency dependence of $\Sigma^c$, which will be computed in the next section.} We define chemical potentials such that half-filling occurs when $\mu=\mu_c=0$. The islands are not capable of exchanging electrons with the Fermi sea, so there is no reason {\it a priori} to have $\mu=\mu_c$, or even for islands at different sites to have the same $\mu$. However, for convenience we will keep the $\mu$ of all the islands the same. \change{The} real system \change{would} operate at fixed {\it densities}, and $\mu$ and $\mu_c$ will appropriately renormalize as the mutual coupling $g$ is varied, in order to keep the densities of $c$ and $f$ individually fixed, \change{as the interaction between $c$ and $f$ conserves their numbers individually}. 
However, as we shall find, the half-filled case always corresponds to $\mu=\mu_c=0$ regardless of $g$. We will always have $J\gg T$ in this work, and also $J\gtrsim g$. A sketch of the phases realized by our model as a function of temperature is shown in Fig.~\ref{Modelfig}b. \section{Fate of the conduction electrons} \label{mflandim} \subsection{The case of infinite bandwidth} \label{infiniband} We first consider the case of infinite bandwidth, or equivalently $t \gg g, J \gg T$. The precise value of $\mu_c$ doesn't matter as long as its magnitude is not infinite, as the conduction electrons float on an effectively infinitely deep Fermi sea. Then, we can use the standard trick for evaluating integrals about a Fermi surface, and we have \begin{equation} G^c(i\omega_n) = \int \frac{d^dk}{(2\pi)^d} \frac{1}{i\omega_n - \epsilon_k + \mu_c - \Sigma^c(i\omega_n)} \rightarrow \nu(0)\int_{-\infty}^\infty\frac{d\varepsilon}{2\pi}\frac{1}{i\omega_n - \varepsilon - \Sigma^c(i\omega_n)}, \end{equation} where $\nu(0)$ is the density of states at the Fermi energy. We take the lattice constant $a$ to be $1$. This makes $k$ dimensionless by redefining $ka$ to be $k$. The energy dimension of $\epsilon_k$ then comes from the inverse band mass. The density of states $\nu(0)$ then has the dimension of 1/(energy) (on a lattice $\nu(0)\sim1/t\sim1/\Lambda$, where $\Lambda$ is the bandwidth). We will also have $\mathrm{sgn}(\mathrm{Im}[\Sigma^c(i\omega_n)]) = -\mathrm{sgn}(\omega_n)$, so \begin{equation} G^c(i\omega_n) = -\frac{i}{2}\nu(0)\mathrm{sgn}(\omega_n),~~ G^c(\tau) = -\frac{\nu(0)T}{2\sin(\pi T\tau)},~~-\beta \le \tau \le \beta, \end{equation} with other intervals obtained by applying the Kubo-Martin-Schwinger (KMS) condition $G^c(\tau+\beta)=-G^c(\tau)$. At $T=0$, we have \begin{equation} G^c(\tau,T=0) = -\frac{\nu(0)}{2\pi\tau}. \label{gc0} \end{equation} We consider $M/N=0$ to begin with. 
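As an aside, the Matsubara transform pair for $G^c$ above relies on the conditionally convergent sum $\sum_{\omega_n>0}\sin(\omega_n\tau)$; a quick numerical check with an Abel convergence factor (generic numerics, unrelated to the codes cited later):

```python
import numpy as np

nu0, T, tau = 1.0, 0.5, 0.6      # DOS, temperature, imaginary time in (0, 1/T)

# Matsubara sum G^c(tau) = T * sum_n G^c(i w_n) exp(-i w_n tau) with
# G^c(i w_n) = -(i/2) nu0 sgn(w_n).  Pairing the +w_n and -w_n terms gives
# -nu0 * T * sum_{w_n > 0} sin(w_n tau); the sum is only conditionally
# convergent, so a factor exp(-eps w_n) with small eps regularizes it.
eps = 1e-4
n = np.arange(1_000_000)
wn = (2 * n + 1) * np.pi * T
g_tau = -nu0 * T * np.sum(np.sin(wn * tau) * np.exp(-eps * wn))

exact = -nu0 * T / (2 * np.sin(np.pi * T * tau))
assert abs(g_tau - exact) < 1e-3 * abs(exact)
```

The regularization error is $\mathcal{O}(\epsilon^2)$, so the agreement with $-\nu(0)T/(2\sin(\pi T\tau))$ is essentially exact at this tolerance.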
Then, the $f$ electrons are not affected by the $c$ electrons, and their Green's functions are exactly \change{of the incoherent form} of the SYK model, which, in the low-energy limit, are given by~\cite{Sachdev2015} \begin{equation} G(\tau) = -\frac{\pi^{1/4}\cosh^{1/4}(2\pi\mathcal{E})}{J^{1/2}\sqrt{1+e^{-4\pi\mathcal{E}}}}\left(\frac{T}{\sin(\pi T \tau)}\right)^{1/2}e^{-2\pi\mathcal{E}T\tau},~~0\le \tau < \beta \label{sykconf} \end{equation} where $\mathcal{E}$ is a function of $\mu$ with $\mathcal{E} \propto -\mu/J$ for small $\mu/J$. Other intervals are again obtained by the KMS condition $G(\tau+\beta)=-G(\tau)$. The zero-temperature limit of this, and similar expressions appearing later, can be straightforwardly taken~\cite{Sachdev2015} \begin{equation} G(\tau>0,T=0) = -\frac{\cosh^{1/4}(2\pi\mathcal{E})}{\pi^{1/4}J^{1/2}\sqrt{1+e^{-4\pi\mathcal{E}}}}\frac{1}{\tau^{1/2}},~~G(\tau<0,T=0) = \frac{\cosh^{1/4}(2\pi\mathcal{E})}{\pi^{1/4}J^{1/2}\sqrt{1+e^{4\pi\mathcal{E}}}}\frac{1}{|\tau|^{1/2}} \label{sykconf0} \end{equation} \change{ Now we can compute the self energy of the $c$ fermions, which is} \begin{equation} \Sigma^c(\tau) = -g^2G^c(\tau)G(\tau)G(-\tau) = -\frac{\pi^{1/2}g^2\nu(0)T^2}{4J\cosh^{1/2}(2\pi\mathcal{E})\sin^2(\pi T \tau)},~~0\le \tau < \beta. \end{equation} Fourier transforming with a cutoff of $\tau$ at $J^{-1}\ll T^{-1}$ and $\beta-J^{-1}$ gives \begin{equation} \Sigma^c(i\omega_n) = \frac{ig^2\nu(0)T}{2J\cosh^{1/2}(2\pi\mathcal{E})\pi^{3/2}}\left(\frac{\omega_n}{T} \ln \left(\frac{2\pi Te^{\gamma_E -1}}{J}\right)+\frac{\omega_n}{T} \psi\left(\frac{\omega_n}{2 \pi T}\right)+\pi \right), \label{sigf} \end{equation} where $\psi$ is the digamma function and $\gamma_E$ is the Euler-Mascheroni constant. As foreseen, this satisfies $\mathrm{sgn}(\mathrm{Im}[\Sigma^c(i\omega_n)]) = -\mathrm{sgn}(\omega_n)$ on the fermionic Matsubara frequencies. 
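As a check on the digamma formula above, one can verify numerically that the bracket grows as $(\omega_n/T)\ln\bigl(|\omega_n|e^{\gamma_E-1}/J\bigr)$ once $|\omega_n|\gg T$; a sketch assuming \texttt{scipy} is available (the overall prefactor, common to both expressions, is dropped):

```python
import numpy as np
from scipy.special import digamma

# Large-frequency behavior of the bracket in the self-energy formula:
#   (w/T) ln(2 pi T e^{gamma_E - 1}/J) + (w/T) psi(w/(2 pi T)) + pi
# reduces to (w/T) ln(|w| e^{gamma_E - 1}/J) for |w| >> T, using
# psi(x) ~ ln(x) - 1/(2x); the -1/(2x) piece exactly cancels the +pi term.
gamma_E = np.euler_gamma
T, J = 1.0, 1e3

for w in [50.0, 200.0, 1000.0]:      # all >> T
    full = (w / T) * (np.log(2 * np.pi * T * np.exp(gamma_E - 1) / J)
                      + digamma(w / (2 * np.pi * T))) + np.pi
    asym = (w / T) * np.log(w * np.exp(gamma_E - 1) / J)
    assert abs(full - asym) < 0.01 * abs(asym)
```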
For $|\omega_n|\gg T$ \begin{equation} \Sigma^c(i\omega_n) \rightarrow \frac{ig^2\nu(0)}{2J\cosh^{1/2}(2\pi\mathcal{E})\pi^{3/2}}\omega_n\ln\left(\frac{|\omega_n|e^{\gamma_E-1}}{J}\right). \label{sigf0} \end{equation} \change{Note the MFL form of the itinerant $c$ fermion self energy, $\sim \omega \ln \omega$}. Since the large $N$ and $M$ limits are taken at the outset, this \change{MFL} is stable even as $T\rightarrow0$. For finite $N$ and $M$, the coupling $g$ is irrelevant in the infrared (IR)~\cite{McGreevy2017}, and the model reduces to a theory of non-interacting electrons as $T\rightarrow0$, with the MFL existing only above a temperature scale whose magnitude is suppressed in $N$ and the zero-temperature entropy going to zero. Upon analytically continuing $i\omega_n\rightarrow \omega+i0^+$, we get the inverse lifetime for the conduction electrons defined by \begin{equation} \gamma \equiv -2\,\mathrm{Im}[\Sigma^c_R(0)] = -2\,\mathrm{Im}[\Sigma^c(i\omega_n\rightarrow 0+i0^+)] = \frac{g^2\nu(0)T}{J\cosh^{1/2}(2\pi\mathcal{E})\pi^{1/2}}. \label{dumprate} \end{equation} Since the coupling of the conduction electrons to the SYK islands is spatially disordered, this rate also represents the transport scattering rate up to a constant numerical factor. The scattering of $c$ electrons off the islands requires the $f$ electrons inside the islands to move between orbitals. Hence $\gamma$ vanishes when the islands are flooded or drained by sending $\mathcal{E}\rightarrow\mp \infty$ respectively, say, by doping them. If we do not have $M/N=0$, the SYK Green's function will be affected as there is a back-reaction self-energy to the SYK islands. 
To see what this does when we perturbatively turn on $M/N$, we compute it with the $M/N=0$ Green's functions with a cutoff of $\tau$ at $J^{-1}$ and $\beta-J^{-1}$ \begin{equation} \tilde{\Sigma}(\tau) = -\frac{M}{N}g^2G(\tau)G^c(\tau)G^c(-\tau) \approx -\frac{M\pi^{1/4}\cosh^{1/4}(2\pi\mathcal{E})g^2\nu^2(0)T^{5/2}e^{-2\pi\mathcal{E}T\tau}}{4NJ^{1/2}\sqrt{1+e^{-4\pi\mathcal{E}}}\sin^{5/2}(\pi T \tau)}. \end{equation} If $\mathcal{E}=0$, then $\tilde{\Sigma}(i\omega_n) \propto i(M/N)g^2\nu^2(0)\omega_n$ as $T,\omega_n\rightarrow 0$, which is sub-leading to $\Sigma(i\omega_n)|_{M/N=0}\sim (J\omega_n)^{1/2}$, so the SYK character of the islands survives in the IR. \change{Now we consider the case of particle-hole symmetry breaking with a non-zero spectral asymmetry, $\mathcal{E}$ in Eq.~(\ref{Glocal}); we will find that the basic structure of the results described above persists.} If $\mathcal{E}\neq0$ but is small, then for $T\rightarrow0$, $\tilde{\Sigma}(i\omega_n\rightarrow0) \sim -(M/N)g^2\nu^2(0)J\mathcal{E} \propto (M/N)g^2\nu^2(0)\mu+\mathcal{O}(i\omega_n)$. In contrast $\Sigma(i\omega_n\rightarrow0)|_{M/N=0} \sim \mu + \mathcal{O}(\omega_n^{1/2})$. Therefore the frequency-dependent part of $\tilde{\Sigma}$ is still subleading. Hence, in the IR we may still assume that all that happens to the SYK islands is that their chemical potential $\mu$ gets renormalized. By solving $\mathrm{Re}[\Sigma(i\omega_n\rightarrow0,T=0)]=\mu$, we obtain the corrected $\mathcal{E}\leftrightarrow\mu$ relation. At small $\mu/J$, this is \begin{equation} \mathcal{E} \approx -\frac{\mu/J}{\pi^{1/4}\sqrt{2}\left(1+ \displaystyle \frac{g^2\nu^2(0)M}{6\pi^{3/2}N}\right)}. \label{newemu} \end{equation} The total particle number on each island, $\mathcal{N}_r = \sum_i f^\dagger_{ir}f_{ir}$, commutes with $H$. 
Since the SYK particle density $\mathcal{Q}=\mathcal{N}/N$ is a universal function of $\mathcal{E}$, independent of $\mu$ and $J$, (\ref{newemu}) just implies a renormalization of the nonuniversal UV parts of the SYK Green's function and the island chemical potential, while the particle density remains fixed. Similarly, the vanishing of the zero-frequency real part of (\ref{sigf}) regardless of $\mathcal{E}$ implies that there is no renormalization of either the density or chemical potential of the conduction electrons in this infinite-bandwidth limit, since their number is independently conserved as well. For a finite bandwidth, the chemical potential of the conduction electrons renormalizes in such a way that their density remains fixed. In Appendix~\ref{pairhop}, we consider the effects of adding a `pair-hopping' term to~(\ref{ham}), \begin{equation} H \rightarrow H + \frac{1}{NM^{1/2}}\sum_{r;~i,j=1}^N\sum_{k,l=1}^M\left[\eta^r_{ijkl}f^\dagger_{ri}f^\dagger_{rj}c_{rk}c_{rl}+\mathrm{h.c.}\right], \label{hph} \end{equation} with $\ll |\eta^r_{ijkl}|^2 \gg = \eta^2/8$, and $J\gtrsim\eta$. This term has identical power-counting to the $f^\dagger f c^\dagger c$ term, but can trade $c$ electrons for $f$ electrons and vice-versa. Since the numbers of $c$ and $f$ electrons are no longer independently conserved in this case, there is only one chemical potential, and $\mu_c=\mu$. We find that this term also generates an MFL as long as the bandwidth of the $c$ electrons is large. As is well known, the marginal-Fermi liquid self-energy we obtained (\ref{sigf},~\ref{sigf0}) also leads to the leading low-temperature contribution to the specific heat \change{coming from the itinerant electrons} scaling as $C_V^{\mathrm{MFL}} \sim M g^2 (\nu(0))^2 (T/J)\ln (J/T)$~\cite{Crisan1996}. 
Note that the entropy has a non-vanishing $T \rightarrow 0 $ limit from the contribution of the SYK islands in the limit of $N\rightarrow\infty$ \cite{GPS01}, but this does not contribute to the specific heat. \change{The contribution to the specific heat coming from the SYK islands scales linearly in $T$ as $T\rightarrow0$~\cite{Sachdev2017}, which is subleading to the $T\ln T$ contribution of the itinerant electrons}. \subsection{The case of a finite bandwidth} \label{dialup} \change{ This subsection will show that a finite bandwidth does not modify the basic structure of the low-temperature MFL phase described above. However, if interactions \change{between $c$ and $f$} are strong enough, a crossover into an IM phase is possible at higher temperatures. Readers not interested in the details of the arguments can move ahead to the next section.} If the bandwidth (and hence Fermi energy) of the conduction electrons is sizeable compared to the couplings, then the \change{momentum-integrated} local Green's function $G^c(i\omega_n)$ is no longer independent of the details of the self energy $\Sigma^c(i\omega_n)$. We consider two spatial dimensions, with the isotropic dispersion $\varepsilon_k = k^2/(2m)-\Lambda/2$, and a bandwidth $\varepsilon_k^{\mathrm{max}}-\varepsilon_{k=0} = \Lambda$. Since $k$ is dimensionless, the band mass $m$ has dimensions of $1/(\mathrm{energy})$. The density of states is then just $\nu(\varepsilon)=\nu(0)=m$, at all energies $\varepsilon$, and we implicitly make use of this fact while simplifying and rewriting certain expressions. On a lattice, $m\sim \nu(0) \sim 1/t\sim 1/\Lambda$. The momentum-integrated conduction electron Green's function is \begin{equation} G^c(i\omega_n) = \frac{\nu(0)}{2\pi}\left[\ln(\Lambda +2\mu_c+2i\omega_n-2\Sigma^c(i\omega_n))-\ln(2\mu_c-\Lambda+2i\omega_n-2\Sigma^c(i\omega_n))\right]. \label{fbwarctan} \end{equation} We still expect $\mathrm{sgn}(\mathrm{Im}[\Sigma^c(i\omega_n)])=-\mathrm{sgn}(\omega_n)$. 
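The closed form in Eq.~(\ref{fbwarctan}) follows from the flat density of states; it can be checked against a direct radial momentum integral. A self-contained numerical sketch with illustrative parameter values (here $z$ stands for $i\omega_n+\mu_c-\Sigma^c(i\omega_n)$; any complex value with $\mathrm{Im}\,z\neq0$ will do):

```python
import numpy as np

# Check the momentum-integrated Green's function: for the 2D dispersion
# eps_k = k^2/(2m) - Lambda/2 with nu(0) = m, the integral of 1/(z - eps_k)
# over the band equals (nu0/(2 pi)) [ln(Lambda + 2z) - ln(2z - Lambda)].
m, Lam = 1.3, 2.0
z = 0.17 + 0.45j
nu0 = m
kmax = np.sqrt(2 * m * Lam)          # band top: eps at kmax is +Lambda/2

# Angular integration gives 2 pi/(2 pi)^2 = 1/(2 pi); do the radial integral
# by the trapezoid rule on a fine grid (no pole, since Im z != 0).
k = np.linspace(0.0, kmax, 400001)
f = (k / (2 * np.pi)) / (z - (k**2 / (2 * m) - Lam / 2))
numeric = np.sum(0.5 * (f[1:] + f[:-1])) * (k[1] - k[0])

closed = nu0 / (2 * np.pi) * (np.log(Lam + 2 * z) - np.log(2 * z - Lam))
assert abs(numeric - closed) < 1e-6 * abs(closed)
```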
The chemical potential $\mu_c$ must now take an appropriate value to reproduce the correct density of conduction electrons. The conduction band filling is given by \begin{equation} \mathcal{Q}_c = \frac{2\pi G^c(\tau=0^-)}{\nu(0)\Lambda}, \end{equation} for the exact solution to $G^c$, which can be found by the imaginary-time \verb|MATLAB| code \verb|ggc.m|~\cite{Code} (the low-energy `conformal-limit' solutions described below are not valid at the short time $\tau=0^-$, and do not display this property). In general, the Dyson equations can now only be solved numerically; this is done by the imaginary-time \verb|MATLAB| code \verb|ggc.m|~\cite{Code} and the real-time \verb|MATLAB| code \verb|ggcrealtime.m|~\cite{CodeRT}, albeit by holding the chemical potentials $\mu$ and $\mu_c$, rather than the densities, fixed. In an extreme limit where $|i\omega_n+\mu_c-\Sigma^c(i\omega_n)|$ far exceeds the bandwidth for all $\omega_n$, which can happen only at $T\neq0$, we have a simplification of (\ref{fbwarctan}), obtained by expanding in $\Lambda$, \begin{equation} G^c(i\omega_n) = \frac{\Lambda \nu(0)}{2\pi(i\omega_n+\mu_c-\Sigma^c(i\omega_n))}. \label{sfinc} \end{equation} This then leads to an SYK solution in the low-energy conformal limit for both $G$ and $G^c$, realizing a fully incoherent metal. We use the trial solutions \begin{equation} G^c(\tau) = - \frac{C_c}{\sqrt{1+e^{-4\pi\mathcal{E}_c}}}\left(\frac{T}{\sin(\pi T\tau)}\right)^{1/2}e^{-2\pi\mathcal{E}_cT\tau},~~G(\tau) = - \frac{C}{\sqrt{1+e^{-4\pi\mathcal{E}}}}\left(\frac{T}{\sin(\pi T\tau)}\right)^{1/2}e^{-2\pi\mathcal{E}T\tau},~~0\le \tau < \beta. \label{gfincconf} \end{equation} $\mathcal{E}_c$ is universally related to the conduction band filling, with $\mathcal{E}_c=0$ at half filling, and $\mathcal{E}_c\rightarrow\mp\infty$ when the band is full or empty respectively. When $M/N = 0$, there is no back-reaction to the islands, and $G$ is given by (\ref{sykconf}). 
We use the conditions $\mathrm{Re}[\Sigma^c(i\omega_n\rightarrow0,T=0)]=\mu_c$ and $G^c(i\omega_n\rightarrow0,T=0)=\Lambda\nu(0)/(2\pi(\mu_c-\Sigma^c(i\omega_n\rightarrow0,T=0)))$ to determine $C_c$, and also $\mu_c$ in terms of the fixed $\mathcal{E}_c$. Cutting off $\tau$ integrals in the Fourier transforms at a distance $\alpha_{\mathrm{UV}}^{-1}$ from singularities, we have \begin{equation} C_c = \frac{\cosh^{1/4}(2\pi\mathcal{E})}{2^{1/2}\pi^{1/4}J_{\mathrm{IM}}^{1/2}},~~J_{\mathrm{IM}}\equiv \frac{g^2}{J\Lambda\nu(0)}~~\mathrm{and}~~\mathcal{E}_c \approx -\frac{\pi^{1/4}\cosh^{1/4}(2\pi\mathcal{E})\mu_c}{g\Lambda^{1/2}\nu^{1/2}(0)}\sqrt{\frac{J}{\alpha_{\mathrm{UV}}}}~~(\mathrm{At~small}~\mu_c/g), \end{equation} with no feedback on the SYK islands. For~(\ref{sfinc}) to derive from~(\ref{fbwarctan}), this requires $|\mu_c-\Sigma^c(i \omega_n\rightarrow0)|\gg \Lambda$ or \begin{equation} T\gg T_{\mathrm{inc}} \equiv \frac{\Lambda J}{\nu(0)g^2}. \label{tinc} \end{equation} Furthermore, for (\ref{sykconf}) and (\ref{gfincconf}) to hold, we also need $J\gg T_{\mathrm{inc}}$ and $J_{\mathrm{IM}}\gg T_{\mathrm{inc}}$, implying $g^2\gg \Lambda J$. For $T\ll T_{\mathrm{inc}}$, we go back to the MFL, which now has a UV cutoff of $T_{\mathrm{inc}}$ instead of $J$, with its self energy going as $\Sigma^c(i\omega_n)\sim (g^2\nu(0)/J)i\omega_n\ln(|\omega_n|/T_{\mathrm{inc}})$. The choice of the UV cutoff $\alpha_{\mathrm{UV}}$ in the IM only affects the nonuniversal $\mathcal{E}_c\leftrightarrow\mu_c$ relation. An appropriate choice of the cutoff is $\alpha_{\mathrm{UV}}\sim J_{\mathrm{IM}} \lesssim J$. Turning on a small but finite $M/N$, we have to additionally use the conditions $\mathrm{Re}[\Sigma(i\omega_n\rightarrow0,T=0)]=\mu$ and $G(i\omega_n\rightarrow0,T=0)=1/(\mu-\Sigma(i\omega_n\rightarrow0,T=0))$ simultaneously to determine a renormalized $C$ and renormalized $\mu$, while keeping $\mathcal{E}$ fixed as before. 
We again cut off $\tau$ integrals in the Fourier transforms at a distance $\alpha_{\mathrm{UV}}^{-1}$ from singularities. This gives \begin{equation} C = \cosh^{1/4}(2\pi\mathcal{E})\frac{\pi^{1/4}}{J^{1/2}}\left(1-\frac{M}{N}\frac{\Lambda\nu(0)}{2\pi}\frac{\cosh(2\pi\mathcal{E})}{\cosh(2\pi\mathcal{E}_c)}\right)^{1/4},~~C_c = \frac{\cosh^{1/2}(2\pi\mathcal{E})\Lambda^{1/2}\nu^{1/2}(0)}{2^{1/2}Cg}, \label{ccfmn} \end{equation} and we do not show the nonuniversal $\mathcal{E},\mathcal{E}_c\leftrightarrow\mu,\mu_c$ relations because they are rather uninsightful and the physics is better described in terms of $\mathcal{E},\mathcal{E}_c$ which universally represent the conserved densities. If $M/N$ is increased to approach $(2\pi\cosh(2\pi\mathcal{E}_c))/(\Lambda \nu(0)\cosh(2\pi\mathcal{E}))$, the condition for incoherence that $|i\omega_n+\mu_c-\Sigma^c(i\omega_n)|$ exceed the bandwidth \change{for all $\omega_n$} becomes harder to fulfill, and larger and larger values of the coupling $g$ are required to achieve the IM phase at high temperatures. When $M/N> (2\pi\cosh(2\pi\mathcal{E}_c))/(\Lambda \nu(0)\cosh(2\pi\mathcal{E}))$, we still recover the MFL deep enough in the IR, due to the back-reaction self energy $\tilde{\Sigma}$ being irrelevant, and the conduction electron self energy $\Sigma^c$ also vanishing at the lowest energies. 
However, at values of the coupling $g$ large enough that the effects of the conduction electron bandwidth may be ignored above a certain temperature, we find a crossover into a different IM phase, with local Green's functions given by (at half-filling) \begin{equation} G^c(\tau) \sim \left(\frac{T}{\sin(\pi T\tau)}\right)^{\Delta_c},~~G(\tau) \sim \left(\frac{T}{\sin(\pi T\tau)}\right)^{1-\Delta_c},~~0<\Delta_c<1/2, \label{GIM2} \end{equation} with $\Delta_c$ given by the solution to the equation \begin{equation} \left(\frac{\Delta_c}{1-\Delta_c}\right)\cot^2\left(\frac{\pi\Delta_c}{2}\right)=\frac{M}{N}\frac{\Lambda\nu(0)}{2\pi}, \end{equation} which has the property that $\Delta_c\rightarrow0$ as $M/N\rightarrow\infty$ and $\Delta_c\rightarrow1/2$ as $M/N\rightarrow 2\pi/(\Lambda \nu(0))$. These Green's functions may be derived by solving the Dyson equations (\ref{Dysonsaddle0}, \ref{Dysonsaddle}) while ignoring both the conduction electron dispersion and the coupling $J$. Indeed, with the scalings in (\ref{GIM2}), the term proportional to $J^2$ in the expression for $\Sigma(\tau)$ is irrelevant compared to the other term. This phase has a resistivity that scales as $T^{2(1-\Delta_c)}$. Since we are only interested in models with linear-in-$T$ resistivities, we will henceforth assume that $M/N$ is small enough to avoid this regime. Since $\nu(0)\sim1/\Lambda\sim 1/t$ on a lattice, fine-tuning $g\sim J\sim \Lambda \gg T$ makes the scattering rate~(\ref{dumprate}) `Planckian', i.e. an $\mathcal{O}(1)$ number times $T$, since it is given by {\it ratios} of large quantities. The MFL doesn't break down if we do this; in~(\ref{fbwarctan}), $|\Sigma^c(i(\omega_n\sim T))|\sim T\ln(J/T) \ll \Lambda$, so the infinite-bandwidth result~(\ref{dumprate}) is still applicable.
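As a brief aside, the transcendental equation for $\Delta_c$ above is easily solved by bracketed root-finding, and the two quoted limits can be checked at the same time. A minimal sketch (the paper's companion codes~\cite{Code,CodeRT} are in MATLAB; Python is used here purely for illustration):

```python
import numpy as np
from scipy.optimize import brentq

def lhs(dc):
    # left-hand side (Δc/(1-Δc)) cot²(πΔc/2) of the equation fixing Δc
    return (dc / (1.0 - dc)) / np.tan(np.pi * dc / 2.0) ** 2

def delta_c(r):
    # solve lhs(Δc) = r, with r ≡ (M/N) Λν(0)/(2π), on the physical interval (0, 1/2);
    # lhs decreases monotonically from ∞ at Δc → 0⁺ down to 1 at Δc = 1/2
    return brentq(lambda dc: lhs(dc) - r, 1e-9, 0.5 - 1e-12)

# Δc → 1/2 exactly when (M/N) Λν(0)/(2π) → 1, since cot²(π/4) = 1:
assert abs(lhs(0.5) - 1.0) < 1e-12
# Δc shrinks toward 0 as M/N grows:
assert 0.0 < delta_c(20.0) < delta_c(2.0) < 0.5
```

The bracket $(0,1/2)$ is the physical range quoted in (\ref{GIM2}); for $r<1$, i.e. $M/N<2\pi/(\Lambda\nu(0))$, there is no root in this interval, consistent with this IM phase existing only above that threshold.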
The crossover to the IM doesn't occur either, since $T\ll T_\mathrm{inc}$. Finally, the part of the back-reaction self-energy on the SYK islands that does not renormalize their chemical potentials, $|\tilde{\Sigma}(i(\omega_n\sim T))| \sim (M/N)(g\nu(0))^2 T$, is much smaller than the corresponding part of their internal self-energy, $|\Sigma(i(\omega_n\sim T))| \sim (JT)^{1/2}$, as long as $M/N$ is not $\gg 1$; the SYK character of the islands therefore also survives. In the IM regime, since both the conduction and island electrons have local SYK Green's functions, the specific heat scales as $C_V^\mathrm{IM} \sim M T/J_{\mathrm{IM}} + N T/J$, with no logarithmic corrections~\cite{Sachdev2017}. \section{Transport in a single domain} \label{Transport} \change{In this section we consider transport in two spatial dimensions, with the isotropic dispersion $\varepsilon_k = k^2/(2m)-\Lambda/2$. We will find that many aspects of the transport can be computed in a traditional Boltzmann transport computation, due to the large $N$ and $M$ limits. In particular, quantum corrections to transport, of the type leading to quantum interference and localization, are suppressed by the local disorder, the non-quasiparticle nature of the charge carriers, and the large number of fermion flavors.} In our double large $N$ and $M$ limit, if $M/N=0$, the only vertex corrections to the uniform conductivities that aren't trivially killed by this limit are the ones that involve uncrossed vertical ladders of $f^\dagger_if_j$ propagators in the current-current correlator bubbles (First diagram of Fig.~\ref{Goodbadugly}b).
However, since the $f$ propagators are purely local and independent of momentum, these diagrams vanish due to averaging of the vector velocity in the current vertices over the closed fixed-energy contours in momentum space, as the scattering of the conduction electrons is isotropic, just like in the textbook problem of the non-interacting disordered metal~\cite{Efros2012}. Unlike the non-interacting disordered metal, there is no localization in two dimensions as the crossed-ladder `Cooperon' diagrams are suppressed by the large $M$ limit. Hence, the relaxation-time-like approximation of keeping only self-energy corrections is valid. If $M/N$ is nonzero but $\mathcal{O}(1)$ or smaller, then certain 3-loop and higher order ladder insertions (Such as Fig.~\ref{Goodbadugly}c) also contribute extensively in $M$ to the current-current correlation. However, these diagrams again vanish due to the averaging of the vector velocity mentioned above. All this happens regardless of the values of $g,J,\Lambda,\mu_c$, and for both energy and electrical currents. \begin{figure} \begin{center} \includegraphics[height=2.0in]{Fig2.pdf} \end{center} \caption{(a) The uniform current-current correlation bubble used to compute conductivities. The current vertices are black squares and the black lines are conduction electron ($c$) propagators. (b) and (c) Additional diagrams forming ladder series, with ladder units of up to $3$ loops, that contribute to the conductivities and are not immediately suppressed by the large $N$ and $M$ limits. The red lines are island fermion ($f$) propagators that do not carry momentum. The dashed blue lines carry momentum and come from disorder averaging of the non-translationally invariant coupling $g^x_{ijkl}$. 
These diagrams, however, vanish upon momentum integration in the loops containing the current vertices, for reasons mentioned in the main text.} \label{Goodbadugly} \end{figure} \subsection{Marginal Fermi liquid} \label{mfltransport} We first discuss \change{a Boltzmann transport approach in} the MFL regime. For simplicity, we consider infinite bandwidth and an infinitely deep Fermi sea. The uniform current-current correlation bubble (Fig.~\ref{Goodbadugly}a) is given, for an isotropic Fermi surface, by \begin{equation} \langle I_x I_x \rangle(i\Omega_m) = - M\frac{v_F^2}{2}\nu(0)T\sum_{\omega_n}\int_{-\infty}^\infty\frac{d\varepsilon}{2\pi}\frac{1}{i\omega_n-\varepsilon-\Sigma^c(i\omega_n)}\frac{1}{i\omega_n+i\Omega_m-\varepsilon-\Sigma^c(i\omega_n+i\Omega_m)}, \end{equation} where $v_F = k_F/m$ is the Fermi velocity (on a lattice $v_F\sim t$, since the lattice constant $a$ is set to $1$). Using the spectral representation, this can be converted to give the DC conductivity \begin{equation} \sigma_0^{\mathrm{MFL}} = M\frac{v_F^2\nu(0)}{16T}\int_{-\infty}^\infty\frac{dE_1}{2\pi}\mathrm{sech}^2\left(\frac{E_1}{2T}\right)\frac{1}{|\mathrm{Im}\Sigma_R^c(E_1)|}. \end{equation} Inserting the self-energy, we can scale out $T$ and numerically evaluate the integral, giving \begin{equation} \sigma_0^{\mathrm{MFL}} = 0.120251\times MT^{-1}J\times\left(\frac{v_F^2}{g^2}\right)\cosh^{1/2}(2\pi\mathcal{E}). \label{s0MFL} \end{equation} If we want $\sigma_0^{\mathrm{MFL}}/M \ll 1$, we must have $T\gg T_{\mathrm{inc}}$, implying a crossover into the IM regime. Thus the MFL is never a true bad metal, but its resistivity can still numerically exceed the quantum unit $h/e^2$, depending on parameters.
The `open-circuit' thermal conductivity $\kappa_0^{\mathrm{MFL}}$, which is defined under conditions where no electrical current flows, is given by \begin{equation} \kappa_0^{\mathrm{MFL}} = \bar{\kappa}_0^{\mathrm{MFL}} - \frac{(\alpha_0^{\mathrm{MFL}})^2T}{\sigma_0^{\mathrm{MFL}}}, \end{equation} where $\bar{\kappa}_0^{\mathrm{MFL}}$ is the `closed-circuit' thermal conductivity in the presence of electrical current, and $\alpha_0^{\mathrm{MFL}}$ is the thermoelectric conductivity. The thermoelectric conductivity vanishes when the temperature is much smaller than the bandwidth and Fermi energy, due to effective particle-hole symmetry about the Fermi surface, so $\kappa_0^{\mathrm{MFL}} = \bar{\kappa}_0^{\mathrm{MFL}}$. The Lorenz ratio is then given by \begin{equation} L ^{\mathrm{MFL}}= \frac{\kappa_0^{\mathrm{MFL}}}{\sigma_0^{\mathrm{MFL}}T} =\frac{\bar{\kappa}_0^{\mathrm{MFL}}}{\sigma_0^{\mathrm{MFL}}T} = \frac{\int_{-\infty}^\infty\frac{dE_1}{2\pi}E_1^2\mathrm{sech}^2\left(\frac{E_1}{2}\right)\frac{1}{|\mathrm{Im}[E_1\psi(-iE_1/(2\pi))+i\pi]|}}{\int_{-\infty}^\infty\frac{dE_1}{2\pi}\mathrm{sech}^2\left(\frac{E_1}{2}\right)\frac{1}{|\mathrm{Im}[E_1\psi(-iE_1/(2\pi))+i\pi]|}} = 0.713063\times L_0, \end{equation} which is smaller than $L_0=\pi^2/3$ for a Fermi liquid. In the presence of a uniform transverse magnetic field, we can use the following improved relaxation-time linearized Boltzmann equation (which incorporates an off-shell distribution function) for a temporally slowly-varying and spatially uniform applied electric field~\cite{Kamenev2011,Nave2007}, since there are no Cooperons in the large-$M$ limit, and hence none of the typical localization-related corrections~\cite{Altshuler1980} to the conductivity tensor. 
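Before writing down the Boltzmann equation, we note that the Lorenz ratio quoted above is easy to reproduce independently: all constant prefactors of the scattering rate cancel between the two integrals, so only the dimensionless combination $|\mathrm{Im}[E_1\psi(-iE_1/(2\pi))+i\pi]|$ matters. Moreover, using the standard digamma identity $\mathrm{Im}\,\psi(-ix) = -1/(2x)-(\pi/2)\coth(\pi x)$ (a mathematical identity, not a statement from the text), this combination reduces to $(\pi E_1/2)\coth(E_1/2)$. A short Python check:

```python
import numpy as np
from scipy.special import digamma

def gamma_mfl(E):
    # dimensionless MFL scattering rate |Im[E ψ(-iE/(2π)) + iπ]|, E in units of T;
    # analytically this equals (πE/2) coth(E/2)
    return np.abs(np.imag(E * digamma(-1j * E / (2.0 * np.pi)) + 1j * np.pi))

# even point count: the grid avoids E = 0, where ψ itself is singular
# (though the full combination is regular there, with value π)
E = np.linspace(-60.0, 60.0, 400000)
trap = lambda y: np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(E))  # trapezoid rule

w = np.cosh(E / 2.0) ** -2 / gamma_mfl(E)
L_over_L0 = trap(E**2 * w) / trap(w) / (np.pi**2 / 3.0)
assert abs(L_over_L0 - 0.713063) < 1e-3
```

Since the rate grows linearly in $|E_1|$ at large $|E_1|$, the large-energy tails are suppressed relative to a constant rate, which is why the ratio comes out below the Fermi-liquid value $L_0$.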
The Boltzmann equation reads (here, $t$ is time, not the hopping amplitude, and $\mathcal{B}$ is a dimensionless version of the magnetic field $B$ which will be explained below) \begin{equation} (1-\partial_\omega\mathrm{Re}[\Sigma^c_R(\omega)])\partial_t\delta n(t,k,\omega) + v_F \hat{k}\cdot \mathbf{E}(t)~n_f^\prime(\omega) + v_F (\hat{k}\times \mathcal{B}\hat{z})\cdot\nabla_k \delta n(t,k,\omega) = 2\delta n(t,k,\omega) \mathrm{Im}[\Sigma^c_R(\omega)], \label{BE1} \end{equation} where $n_f(\omega)=1/\left(e^{\omega/T}+1\right)$ is the Fermi distribution, $\delta n$ is the change in the distribution due to the applied electric field, the conduction electrons are negatively charged, and the magnetic field points out of the plane of the system. This equation is derived in Appendix~\ref{BZE} from the Dyson equation on the Keldysh contour, and can be solved by the ansatz $\delta n(t,k,\omega) = k\cdot\varphi(t,\omega) = k_i \varphi_i(t,\omega)$. In the DC limit, the effective mass enhancement $(1-\partial_\omega\mathrm{Re}[\Sigma^c_R(\omega)])$ does not matter~\cite{Nave2007} (the effective mass enhancement is important for AC magnetotransport and affects the frequency at which the cyclotron resonance occurs; it shifts the cyclotron resonance from the cyclotron frequency defined by the bare mass to the one defined by the effective mass. The enhanced effective mass also appears in the specific heat~\cite{Crisan1996} and Lifshitz-Kosevich formula~\cite{Pelzer1991} of MFLs). We then have \begin{equation} v_F \hat{k}\cdot \mathbf{E}~n_f^\prime(\omega) +v_F (\hat{k}\times \mathcal{B}\hat{z})\cdot\nabla_k \delta n(k,\omega) = 2\delta n(k,\omega) \mathrm{Im}[\Sigma_R^c(\omega)]. \label{IBE} \end{equation} We note that in~(\ref{IBE}), $\mathcal{B}$ is dimensionless in our choice of units.
Since the quantities we set to $1$ were the magnitude of the electron charge $e$, the lattice constant $a$, and $\hbar$ and $k_B$, we have \begin{equation} \mathcal{B} = \frac{eBa^2}{\hbar}, \label{BcalB} \end{equation} i.e. the flux per unit cell in units of $\hbar/e$. Substituting $\delta n(k,\omega) = k_i \varphi_i(\omega)$ into (\ref{IBE}), we obtain \begin{equation} \varphi_i(\omega) = \frac{v_F}{k_F}n_f^\prime(\omega)\left(2\mathrm{Im}[\Sigma^c_R(\omega)]\delta_{ij}+\epsilon_{ij}\mathcal{B}\frac{v_F}{k_F}\right)^{-1}_{ij}E_j. \end{equation} Using the current density \begin{equation} I_i = -M\nu(0)\int_0^{2\pi}\frac{d\theta}{2\pi}\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}v_F\hat{k}_i \delta n(k_F\hat{k},\omega), \end{equation} we get the longitudinal and Hall conductivities \begin{align} &\sigma_L^{\mathrm{MFL}} = M\frac{v_F^2\nu(0)}{16T}\int_{-\infty}^{\infty}\frac{dE_1}{2\pi} \mathrm{sech}^2\left(\frac{E_1}{2T}\right)\frac{-\mathrm{Im}[\Sigma^c_R(E_1)]}{\mathrm{Im}[\Sigma^c_R(E_1)]^2+(v_F/(2k_F))^2\mathcal{B}^2}, \nonumber \\ &\sigma_H^{\mathrm{MFL}} = -M\frac{v_F^2\nu(0)}{16T}\int_{-\infty}^{\infty}\frac{dE_1}{2\pi} \mathrm{sech}^2\left(\frac{E_1}{2T}\right)\frac{(v_F/(2k_F))\mathcal{B}}{\mathrm{Im}[\Sigma^c_R(E_1)]^2+(v_F/(2k_F))^2\mathcal{B}^2}. \label{mfllcmag} \end{align} Note that, \change{given the scaling of (\ref{sigf})}, these can be immediately written as \begin{equation} \sigma_L^{\mathrm{MFL}}\sim T^{-1}s_L((v_F/k_F)(\mathcal{B}/T)),~~\sigma_H^{\mathrm{MFL}}\sim -\mathcal{B}T^{-2}s_H((v_F/k_F)(\mathcal{B}/T)). \label{eq:Bscale} \end{equation} The asymptotic forms of the functions $s_L$ and $s_H$ are \begin{equation} s_{L,H}(x\rightarrow\infty) \propto 1/x^2,~~s_{L,H}(x\rightarrow 0) \propto x^0. \end{equation} \change{So we have obtained the advertised $B/T$ scaling in the MFL regime. However, with the asymptotic forms noted above, it is not difficult to see that the magnetoresistance, $\rho_{xx}$ saturates at large $B$. 
Nevertheless, the results above will be useful as inputs into our consideration of the effects of macroscopic disorder in Section~\ref{EMARRN}: we will show there that the $B/T$ scaling survives, and the macroscopic disorder leads to a linear in $B$ magnetoresistance.} \change{We now show that the numerical scale of the $B/T$ crossover is in general accord with the observations.} In (\ref{mfllcmag}), for the `Planckian' choice of parameters described at the end of Sec.~\ref{dialup}, $B$ becomes `large' (i.e., the cyclotron term in the denominators overwhelms $\mathrm{Im}[\Sigma^c_R(E_1)]$ for $|E_1|\lesssim T$, causing $\sigma_H^{\mathrm{MFL}}$ to start decreasing with increasing $B$), when $eBa^2/\hbar\gtrsim k_BT/t$. Using reasonable values of the lattice constant $a = 3.82~\mathrm{\AA}$ and the hopping $t = 0.25~\mathrm{eV}$, the above inequality can also roughly be written as $\mu_B B \gtrsim k_B T$, where $\mu_B$ is the Bohr magneton, since $a^2e t/\hbar \approx 0.96 \mu_B$ for these parameters. In the analysis of the IM regime to follow, there is no such notion of `large' magnetic fields; regardless of the value of $B$, the field-dependent corrections to the conductivity tensor remain much smaller than its zero-field value. \subsection{Incoherent metal} \label{incmettransport} \change{ This subsection considers transport in the IM phase discussed earlier, in which the Fermi surface is washed out, and shows quantitatively that the orbital effects of a magnetic field on charge transport are strongly suppressed irrespective of the strength of the field. The physical reason for this effect is that the effective mean-free-path of the electrons in the IM is less than a lattice spacing, with conduction occurring locally and incoherently across individual lattice bonds. The effect of the Lorentz force on the electrons is thus negligible. 
If the reader is uninterested in the details of the following computations, they may move on to the next section.} In the IM regime we have \begin{equation} \sigma_0^{\mathrm{IM}} = \frac{M\Lambda^2}{32\pi T}\int_{-\infty}^{\infty}\frac{dE_1}{2\pi}\mathrm{sech}^2\left(\frac{E_1}{2T}\right)(A^c(k,E_1))^2. \end{equation} The spectral function is independent of $k$ in the IM, and we decoupled the momentum integral implicit in the above equation, generating a prefactor of $\Lambda\nu(0)/(2\pi)$. For simplicity we consider $M/N=0$ in this subsection. A small finite $M/N$ only rescales $G^c$, as shown by (\ref{ccfmn}, \ref{gfincconf}), and hence leads to no qualitative difference in any of the following results. We have \begin{align} &A^c(k,E_1) \equiv \frac{2\pi}{\Lambda\nu(0)}A^c(E_1) \equiv -\frac{4\pi}{\Lambda\nu(0)}\mathrm{Im}[G^c(i\omega_n\rightarrow E_1+i0^+)] \nonumber \\ &=-2\mathrm{Im}\Bigg[\frac{i(-1)^{3/4}\pi^{1/4}(i+e^{2\pi\mathcal{E}_c})J^{1/2}\cosh^{1/4}(2\pi\mathcal{E})}{gT^{1/2}\Lambda^{1/2}\nu^{1/2}(0)\sqrt{1+e^{4\pi\mathcal{E}_c}}} \frac{\Gamma \left(\frac{1}{4}-\frac{i(E_1-2\pi\mathcal{E}_cT)}{2 \pi T}\right)}{\Gamma \left(\frac{3}{4}-\frac{i(E_1-2\pi\mathcal{E}_cT)}{2 \pi T}\right)}\Bigg], \end{align} and we get \begin{equation} \sigma_0^{\mathrm{IM}} = (\pi^{1/2}/8)\times MT^{-1}J\times\left(\frac{\Lambda}{\nu(0) g^2}\right)\frac{\cosh^{1/2}(2\pi \mathcal{E})}{\cosh(2\pi\mathcal{E}_c)}. \label{s0IM} \end{equation} Due to the IM existing only at temperatures above $T_{\mathrm{inc}}$, given by (\ref{tinc}), we always have $\sigma_0^{\mathrm{IM}}/M \ll 1$, which makes the IM a bad metal. 
\change{Note that the slope of the resistivity $\rho_0(T)=1/\sigma_0(T)$ vs temperature in the IM generically differs from that in the MFL by an $\mathcal{O}(1)$ number, as can be seen by comparing (\ref{s0MFL}) and (\ref{s0IM}).} The Lorenz ratio in the IM is (here, the thermoelectric conductivity $\alpha_0^\mathrm{IM}$ does not vanish, so $\kappa_0^{\mathrm{IM}}$ and $\bar{\kappa}_0^{\mathrm{IM}}$ are distinct quantities) \begin{equation} L ^{\mathrm{IM}}= \frac{\int_{-\infty}^\infty\frac{dE_1}{2\pi}E_1^2\mathrm{sech}^2\left(\frac{E_1}{2}\right)(A^c(E_1))^2-\frac{[\int_{-\infty}^\infty\frac{dE_1}{2\pi}E_1\mathrm{sech}^2\left(\frac{E_1}{2}\right)(A^c(E_1))^2]^2}{\int_{-\infty}^\infty\frac{dE_1}{2\pi}\mathrm{sech}^2\left(\frac{E_1}{2}\right)(A^c(E_1))^2}}{\int_{-\infty}^\infty\frac{dE_1}{2\pi}\mathrm{sech}^2\left(\frac{E_1}{2}\right)(A^c(E_1))^2} = \frac{3}{8}\times L_0,~~\mathrm{regardless~of}~\mathcal{E},\mathcal{E}_c. \label{lorenzIM} \end{equation} This result was also obtained by a different method for the IM of Ref.~\onlinecite{Balents2017}, although they only analyzed the particle-hole symmetric case equivalent to $\mathcal{E}_c=0$. Another dimensionless ratio that is interesting is the thermopower, i.e. the ratio of the thermoelectric to electrical conductivities, \begin{equation} \mathcal{S}_0^{\mathrm{IM}}=\frac{\alpha_0^{\mathrm{IM}}}{\sigma_0^{\mathrm{IM}}} = \frac{\int_{-\infty}^\infty\frac{dE_1}{2\pi}E_1\mathrm{sech}^2\left(\frac{E_1}{2}\right)(A^c(E_1))^2}{\int_{-\infty}^\infty\frac{dE_1}{2\pi}\mathrm{sech}^2\left(\frac{E_1}{2}\right)(A^c(E_1))^2} = 2\pi\mathcal{E}_c. \label{dsdqIM} \end{equation} This relationship between the thermopower and the spectral asymmetry $\mathcal{E}_c$ was also found in a different \change{model of coupled SYK islands} realized in Ref.~\onlinecite{Sachdev2017}. 
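Both dimensionless ratios can be reproduced numerically from the conformal expression for $A^c$ quoted above: all positive real prefactors cancel in the ratios, so only the complex phase structure matters (we set $T=1$). A short Python sketch:

```python
import numpy as np
from scipy.special import gamma as cgamma

def A_c(E, Ec):
    # conformal-limit IM spectral function, up to positive constant prefactors (T = 1);
    # (-1)^{3/4} is taken on the principal branch, e^{3iπ/4}
    x = (E - 2.0 * np.pi * Ec) / (2.0 * np.pi)
    pref = 1j * np.exp(3j * np.pi / 4.0) * (1j + np.exp(2.0 * np.pi * Ec)) \
           / np.sqrt(1.0 + np.exp(4.0 * np.pi * Ec))
    return -np.imag(pref * cgamma(0.25 - 1j * x) / cgamma(0.75 - 1j * x))

Ec = 0.05
E = np.linspace(-60.0, 60.0, 200001)
trap = lambda y: np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(E))  # trapezoid rule

w = np.cosh(E / 2.0) ** -2 * A_c(E, Ec) ** 2
S = trap(E * w) / trap(w)                                  # thermopower
L = (trap(E**2 * w) - trap(E * w)**2 / trap(w)) / trap(w)  # Lorenz ratio
assert abs(S - 2.0 * np.pi * Ec) < 1e-3                    # reproduces (dsdqIM)
assert abs(L - np.pi**2 / 8.0) < 1e-3                      # (3/8) L_0 = π²/8
```

Changing the value of $\mathcal{E}_c$ shifts the thermopower proportionally while leaving the Lorenz ratio at $\pi^2/8$, in line with the ``regardless of $\mathcal{E},\mathcal{E}_c$'' statement above.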
The ratios (\ref{lorenzIM}) and (\ref{dsdqIM}) hold even for a small finite $M/N$, since its only effect is a rescaling of the Green's function $G^c$. Let us now describe the fate of magnetotransport in the IM regime. On a lattice, we have $\Lambda\nu(0)\sim1$. Then $J_{\mathrm{IM}} = g^2/J$, and the conduction electron self-energy is $\sim \sqrt{J_{\mathrm{IM}}T}$. We have $J_{\mathrm{IM}}T\gg t^2 \sim \Lambda^2$, so, to leading order, we can neglect the dispersion in the fermion propagators. Then, there is nothing for the magnetic field to couple to, and consequently no magnetotransport. To illustrate this, let us compute the correlator of currents in perpendicular directions in real space on a square lattice. The uniform current operators are \begin{align} &I_x(\tau) \equiv \frac{1}{V^{1/2}}\sum_r I_{rx}(\tau) \equiv -\frac{it}{2V^{1/2}}\sum_{r;~i=1}^M c^\dagger_{r+\hat{x},i}(\tau)c_{ri}(\tau) + \mathrm{h.c.}, \nonumber \\ &I_y(\tau) \equiv \frac{1}{V^{1/2}}\sum_r I_{ry}(\tau) \equiv -\frac{it}{2V^{1/2}}\sum_{r;~i=1}^M c^\dagger_{r+\hat{y},i}(\tau)c_{ri}(\tau)e^{i\phi(r)} + \mathrm{h.c.}, \end{align} where we have used a gauge with the magnetic vector potential $\mathbf{A}_r$ pointing along the $y$ direction, giving rise to the phase factors $e^{i\phi(r)}$ on bonds in the $y$ direction. The system volume in units of the unit cell volume is $V$.
We then have \begin{align} &\mathcal{T}_\tau\langle I_x(\tau)I_y(\tau^\prime) \rangle = -M\frac{t^2}{4V}\sum_{rr^\prime}\Big[\mathcal{T}_\tau\langle c^\dagger_{r+\hat{x}}(\tau)c_r(\tau)c^\dagger_{r^\prime+\hat{y}}(\tau^\prime)c_{r^\prime}(\tau^\prime) e^{i\phi(r^\prime)}\rangle-\mathcal{T}_\tau\langle c^\dagger_{r+\hat{x}}(\tau)c_r(\tau)c^\dagger_{r^\prime}(\tau^\prime)c_{r^\prime+\hat{y}}(\tau^\prime) e^{-i\phi(r^\prime)}\rangle \nonumber \\ &-\mathcal{T}_\tau\langle c^\dagger_{r}(\tau)c_{r+\hat{x}}(\tau)c^\dagger_{r^\prime+\hat{y}}(\tau^\prime)c_{r^\prime}(\tau^\prime) e^{i\phi(r^\prime)}\rangle+ \mathcal{T}_\tau\langle c^\dagger_{r}(\tau)c_{r+\hat{x}}(\tau)c^\dagger_{r^\prime}(\tau^\prime)c_{r^\prime+\hat{y}}(\tau^\prime) e^{-i\phi(r^\prime)}\rangle\Big], \label{localII} \end{align} where we have dropped the sum over flavor indices in favor of a global factor of $M$, and $\mathcal{T}_\tau$ denotes time-ordering. To leading order in $t$, since the $c$ Green's functions are completely local, \begin{equation} \mathcal{T}_\tau\langle c_r(\tau)c^\dagger_{r^\prime}(\tau^\prime)\rangle = \delta_{rr^\prime}G^c(\tau-\tau^\prime), \end{equation} none of the terms in (\ref{localII}) can be nonzero. Similarly, at $\mathcal{O}(t^2)$, there is no field-dependent correction to the $\langle I_x I_x\rangle$ correlator. Perturbing in $t$, for (\ref{localII}) to be nonzero we need to insert hopping vertices that close the 4-point correlation functions of the $c$'s. To lowest order in $t$, this requires the insertion of two hopping vertices into each of the 4-point correlation functions in (\ref{localII}), so that the connected contractions of $c$'s and $c^\dagger$'s into local $c$ Green's functions go around a single plaquette of the lattice. Again, due to our choice of gauge, hopping vertices along bonds in the $y$ direction come with phase factors.
But we obtain, as we should, a gauge-invariant answer for the connected part, which is of interest to us here (the electrons are negatively charged, and $\mathcal{B}$ is defined in terms of $B$ as in Sec.~\ref{mfltransport}) \begin{equation} \langle I_xI_y \rangle(i\Omega_m) = -iM\sin(\mathcal{B})t^4T\sum_{\omega_n}[(G^c(i\omega_n))^3(G^c(i\omega_n+i\Omega_m)-G^c(i\omega_n-i\Omega_m))]. \label{localIIw} \end{equation} At $\mathcal{O}(t^4)$, vertex corrections from the coupling $g$ to this leading contribution vanish due to the non-correlation of $g$ between distinct lattice sites, i.e. $\ll g^r_{ijkl} g^{r^\prime}_{jilk} \gg = g^2 \delta_{rr^\prime}$. The DC Hall conductivity follows, \begin{align} &\sigma_H^{\mathrm{IM}} = -\lim_{\omega\rightarrow0}\frac{1}{i\omega}\left[\langle I_x I_y\rangle(i\Omega_m\rightarrow \omega+i0^+)-\langle I_x I_y\rangle(i\Omega_m\rightarrow 0+i0^+)\right] \nonumber \\ &=2M\sin(\mathcal{B})t^4\mathcal{P}\int_{-\infty}^{\infty}\frac{dE_1}{2\pi}\frac{dE_2}{2\pi}A_3^c(E_1)A^c(E_2)\frac{n_f(E_2)-n_f(E_1)}{(E_2-E_1)^2}, \end{align} where $\mathcal{P}$ denotes the Cauchy principal value, and \begin{equation} A_3^c(E_1) \equiv -2\mathrm{Im}[(G^c(i\omega_n\rightarrow E_1+i0^+))^3] = \mathrm{Im}\left[\frac{(i-1)(i+e^{2\pi\mathcal{E}_c})^3\cosh^{3/4}(2\pi\mathcal{E})}{2^{5/2}\pi^{9/4}J_{\mathrm{IM}}^{3/2}T^{3/2}(1+e^{4\pi\mathcal{E}_c})^{3/2}}\frac{\Gamma^3 \left(\frac{1}{4}-\frac{i(E_1-2\pi\mathcal{E}_cT)}{2 \pi T}\right)}{\Gamma^3 \left(\frac{3}{4}-\frac{i(E_1-2\pi\mathcal{E}_cT)}{2 \pi T}\right)}\right], \end{equation} is the spectral function of $(G^c(i\omega_n))^3$. If $\mathcal{E}_c=0$, then the Hall conductivity vanishes due to the evenness of the spectral functions $A^c$ and $A^c_3$. This corresponds to half-filling the square lattice, so this is expected. 
Scaling out $T$ and evaluating the integral numerically gives \begin{equation} \sigma_H^{\mathrm{IM}} = -M\sin(\mathcal{B})\frac{t^4\cosh(2\pi\mathcal{E})}{J_{\mathrm{IM}}^2T^2}\Xi^\mathrm{IM}_H(\mathcal{E}_c), \end{equation} where $\Xi^\mathrm{IM}_H(\mathcal{E}_c)$ is odd in $\mathcal{E}_c$, positive for positive $\mathcal{E}_c$, and vanishes when $\mathcal{E}_c = 0,\pm\infty$. This is a very small contribution regardless of $B$; the already small flux per unit cell $\mathcal{B}$ is further multiplied by a small parameter $t^4/(J_{\mathrm{IM}}^2T^2)$. Note that we consider $\cosh(2\pi\mathcal{E})$ to be $\mathcal{O}(1)$. If $|\mathcal{E}|$ is very large, then the conduction electrons do not scatter effectively off the islands, as discussed before, and our perturbative expansion in hopping is no longer valid, and in that case the system is once again described by the MFL. For the Hall conductivity to be comparable to the longitudinal conductivity $\sigma^{\mathrm{IM}}_0\sim t^2/(J_{\mathrm{IM}}T)$, we need $\sin(\mathcal{B})\sim J_{\mathrm{IM}} T/t^2 \gg 1$, which is not even mathematically possible. 
Similarly, the field-dependent correction to the $I_x$-$I_x$ correlator is \begin{equation} \Delta_B\left[\langle I_x I_x \rangle (i\Omega_m)\right] = -Mt^4\cos(\mathcal{B})T\sum_{\omega_n}(G^c(i\omega_n))^2(G^c(i\omega_n+i\Omega_m))^2, \label{localIIxB} \end{equation} leading to the field-dependent correction to the longitudinal conductivity \begin{equation} \Delta_B[\sigma_L^{\mathrm{IM}}] = \frac{M}{8}\frac{t^4}{T}\cos(\mathcal{B})\int\frac{dE_1}{2\pi}(A_2^c(E_1))^2\mathrm{sech}^2\left(\frac{E_1}{2T}\right), \end{equation} where \begin{equation} A_2^c(E_1) \equiv -2\mathrm{Im}[(G^c(i\omega_n\rightarrow E_1+i0^+))^2] = -\mathrm{Im}\left[i\frac{(i+e^{2\pi\mathcal{E}_c})^2\cosh^{1/2}(2\pi\mathcal{E})}{2\pi^{3/2}J_{\mathrm{IM}}T(1+e^{4\pi\mathcal{E}_c})}\frac{\Gamma^2 \left(\frac{1}{4}-\frac{i(E_1-2\pi\mathcal{E}_cT)}{2 \pi T}\right)}{\Gamma^2 \left(\frac{3}{4}-\frac{i(E_1-2\pi\mathcal{E}_cT)}{2 \pi T}\right)}\right] \end{equation} is the spectral function of $(G^c(i\omega_n))^2$. Scaling out $T$ and evaluating the integral numerically gives \begin{equation} \Delta_B[\sigma_L^{\mathrm{IM}}] = M\frac{t^4\cosh(2\pi\mathcal{E})}{J_{\mathrm{IM}}^2T^2}\cos(\mathcal{B})\Xi^\mathrm{IM}_L(\mathcal{E}_c), \end{equation} where $\Xi^\mathrm{IM}_L(\mathcal{E}_c)$ is even in $\mathcal{E}_c$, positive, nonzero for $\mathcal{E}_c=0$, and vanishes as $\mathcal{E}_c \rightarrow \pm\infty$. The longitudinal conductivity is thus reduced when a field is applied, as is usually the case. It is thus similarly not possible to obtain a field-dependent correction to $\sigma^{\mathrm{IM}}_L$ that is comparable to its zero-field value. We shall therefore not consider the IM regime further for magnetotransport, since, unlike in the MFL regime, there is no qualitative difference between the regimes of `large' and small $B$. For completeness, the plots of $\Xi^{\mathrm{IM}}_{H,L}(\mathcal{E}_c)$ are shown in Fig.~\ref{Xiplots}.
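The stated parity properties of $\Xi^{\mathrm{IM}}_L$ can be checked numerically from the expression for $A_2^c$, with the square of $A_2^c$ in the conductivity integrand as dictated by the two $(G^c(i\omega_n))^2$ factors in (\ref{localIIxB}) and by the quoted $t^4/(J_{\mathrm{IM}}^2T^2)$ scaling (positive prefactors are dropped and $T=1$):

```python
import numpy as np
from scipy.special import gamma as cgamma

def A2_c(E, Ec):
    # spectral function of (G^c)² in the conformal limit, up to positive prefactors (T = 1)
    x = (E - 2.0 * np.pi * Ec) / (2.0 * np.pi)
    pref = 1j * (1j + np.exp(2.0 * np.pi * Ec)) ** 2 / (1.0 + np.exp(4.0 * np.pi * Ec))
    return -np.imag(pref * (cgamma(0.25 - 1j * x) / cgamma(0.75 - 1j * x)) ** 2)

def Xi_L(Ec):
    # ∝ Ξ_L^IM(E_c): the sech²-weighted integral of (A_2^c)², via the trapezoid rule
    E = np.linspace(-80.0, 80.0, 200001)
    y = np.cosh(E / 2.0) ** -2 * A2_c(E, Ec) ** 2
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(E))

assert Xi_L(0.0) > 0.0                                   # nonzero at half filling
assert abs(Xi_L(0.3) - Xi_L(-0.3)) < 1e-8 * Xi_L(0.3)    # even in E_c
assert Xi_L(2.0) < 0.05 * Xi_L(0.0)                      # vanishes as |E_c| grows
```

The evenness follows analytically from $A_2^c(-E,-\mathcal{E}_c) = -A_2^c(E,\mathcal{E}_c)$, so the squared integrand is invariant under $\mathcal{E}_c\rightarrow-\mathcal{E}_c$ together with $E\rightarrow-E$.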
\begin{figure} \begin{center} \includegraphics[height=1.7in]{Fig3a.pdf}~~~~\includegraphics[height=1.7in]{Fig3b.pdf} \end{center} \caption{Plots of (a) $\Xi^{\mathrm{IM}}_{H}(\mathcal{E}_c)$ and (b) $\Xi^{\mathrm{IM}}_{L}(\mathcal{E}_c)$. Both functions vanish in the limits of the fully filled and empty lattice ($\mathcal{E}_c = \mp \infty$, respectively), as they should.} \label{Xiplots} \end{figure} Before we close this section, let us comment on the controllability of the hopping expansion used to compute the nonzero field-dependent conductivity corrections. Clearly, this hopping expansion must break down when $t$ is large enough, as the MFL has a very different conductivity tensor. Going from (\ref{localII}) to (\ref{localIIw}) and (\ref{localIIxB}), we only kept those $r^\prime$ relative to $r$ that resulted in $\mathcal{O}(t^4)$ corrections for the shortest closed paths from $r$ to $r^\prime$ and back. For arbitrary $r^\prime$, one can draw infinitely many paths that go from $r$ to $r^\prime$ and back, and these paths may also intersect themselves in general. For a path of length $l$, there are $< 4^l$ paths at large $l$, since at each step one has $4$ choices of direction, and not all possibilities result in the formation of a closed path from $r$ to $r^\prime$ and back. Each step multiplies the amplitude by an additional local Green's function and a factor of $t$, i.e. roughly by a factor of $\sim t/(J_{\mathrm{IM}}T)^{1/2}\ll 1$. Therefore, the total weight of paths of length $l$ should be $< (4t/(J_\mathrm{IM}T)^{1/2})^l$. The total weight of all paths between $r,r^\prime$ is then $< \sum_{l=l_{\mathrm{min}}}^\infty (4t/(J_\mathrm{IM}T)^{1/2})^l = (4t/(J_\mathrm{IM}T)^{1/2})^{l_\mathrm{min}}/(1-4t/(J_\mathrm{IM}T)^{1/2})$, where $l_\mathrm{min}$ is the length of the shortest closed path between $r,r^\prime$, which scales as the lattice distance between $r$ and $r^\prime$.
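The estimate above amounts to bounding the path sum by a geometric series; schematically (with illustrative parameter values not taken from any particular fit):

```python
# Bound the total weight of closed lattice paths of length >= l_min by the
# geometric series sum_{l >= l_min} r^l = r^{l_min} / (1 - r), where each
# step costs r ~ 4 t / sqrt(J_IM * T): 4 directions, one factor of t and
# one local Green's function per step.  Illustrative: t² / (J_IM T) = 0.01.
def path_weight_bound(l_min, r):
    assert 0.0 < r < 1.0          # convergence requires t << sqrt(J_IM * T)
    return r ** l_min / (1.0 - r)

r = 4.0 * 0.1                     # 4 t / sqrt(J_IM T) with t/sqrt(J_IM T) = 0.1
# the closed-form bound agrees with the (truncated) series ...
series = sum(r ** l for l in range(4, 400))
assert abs(series - path_weight_bound(4, r)) < 1e-12
# ... and decays exponentially with the minimal path length (lattice distance)
assert path_weight_bound(8, r) < path_weight_bound(6, r) < path_weight_bound(4, r)
```
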
Thus, for $t/(J_{\mathrm{IM}}T)^{1/2}\ll 1$, the expansion is well behaved: as $r^\prime$ gets further away from $r$, the terms are exponentially suppressed in the distance between $r$ and $r^\prime$, whereas the number of $r^\prime$'s a given distance away from $r$ grows only linearly in that distance in two dimensions. Unsurprisingly, this is just the condition $T\gg T_\mathrm{inc}$ we obtained earlier for the crossover into the IM regime. \section{Macroscopic transport via Effective-medium/Random-resistor theory} \label{EMARRN} \change{We now return to the MFL with $B/T$ scaling that was described in Section~\ref{mfltransport}. We will show here that adding macroscopic disorder leads to a linear-in-$B$ magnetoresistance at large $B$, while preserving the $B/T$ scaling. We will treat the inhomogeneity in a classical transport framework. The quantum computation in Section~\ref{mfltransport} is used to compute a local $\sigma_{xx}$ and $\sigma_{xy}$, which are then input into a computation of global transport in a disordered sample by composing resistivities using Ohm's and Kirchhoff's laws.} \subsection{Setup} We seek to understand the effects of additional macroscopic disorder on the transport of charge in the MFL at `large' magnetic fields $B$, in two spatial dimensions. This additional macroscopic disorder leads to the variation of the local conductivity tensor $\mathbf{\sigma}(\mathbf{x})$ across the sample.
Since the conduction electrons in our model interact with the valence electrons in the islands through a microscopically non-translationally invariant interaction, the Navier-Stokes equation of hydrodynamics that describes the dynamics of a nearly-conserved macroscopic momentum~\cite{Hartnoll2016} is not applicable to us, as it requires microscopic equilibration of the electron fluid through {\it momentum-conserving} interactions (the effects of weak disorder on the magnetoresistance of a generic electron fluid with macroscopic momentum were studied in Ref.~\onlinecite{Patel2017}; they did not find any regimes of linear magnetoresistance, instead finding that the magnetoresistance was quadratic with a prefactor controlled by the fluid viscosity). Thus, at the coarse-grained level, we just have the equation for charge conservation, and Ohm's law \begin{equation} \nabla\cdot\mathbf{I}(\mathbf{x}) = 0,~~\mathbf{I}(\mathbf{x}) = \mathbf{\sigma}(\mathbf{x})\cdot \mathbf{E}(\mathbf{x}),~~\mathbf{E}(\mathbf{x}) = -\nabla\Phi(\mathbf{x}). \label{basic} \end{equation} The effective local electric field $\mathbf{E}(\mathbf{x})$ (which includes the effects of Coulomb potentials generated due to charge inhomogeneities~\cite{Lucas2016}) fluctuates spatially due to the macroscopic disorder, but equals the applied external electric field $\mathbf{E}_0=\langle\mathbf{E}(\mathbf{x})\rangle \equiv \frac{1}{V}\int d^2\mathbf{x}~\mathbf{E}(\mathbf{x})$ on spatial average. We define the global conductivity tensor $\mathbf{\sigma}^e$ through the relation $\langle\mathbf{I}(\mathbf{x})\rangle = \mathbf{\sigma}^e \cdot \mathbf{E}_0$, and parameterize the deviation $\mathbf{\sigma}(\mathbf{x}) -\mathbf{\sigma}^e = \delta\mathbf{\sigma}(\mathbf{x})$. The condition $\langle \mathbf{I}(\mathbf{x})-\langle \mathbf{I}(\mathbf{x}) \rangle\rangle = 0$ then gives $\langle\mathbf{\chi}(\mathbf{x})\rangle\cdot\mathbf{E}_0 = 0$, where $\mathbf{\chi}(\mathbf{x})$ is defined through $\mathbf{\chi}(\mathbf{x})\cdot \mathbf{E}_0 \equiv \delta\mathbf{\sigma}(\mathbf{x})\cdot \mathbf{E}(\mathbf{x})$.
Following Ref.~\onlinecite{Stroud1975}, without making any additional approximations, the solution of these equations can be formally cast in the form \begin{equation} \Phi(\mathbf{x}) = -\mathbf{E}_0\cdot\mathbf{x} + \int d^2\mathbf{x}^\prime~\mathcal{G}(\mathbf{x},\mathbf{x}^\prime)\nabla^\prime\cdot(\delta\mathbf{\sigma}(\mathbf{x}^\prime)\cdot\nabla^\prime\Phi(\mathbf{x}^\prime)), \end{equation} where the Green's function satisfies $\nabla\cdot(\mathbf{\sigma}^e\cdot\nabla\mathcal{G}(\mathbf{x},\mathbf{x}^\prime))=-\delta(\mathbf{x}-\mathbf{x}^\prime)$, $\mathcal{G}(\mathbf{x},\mathbf{x}^\prime) = \mathcal{G}(\mathbf{x}^\prime,\mathbf{x})$, and $\mathcal{G}(\mathbf{x},\mathbf{x}^\prime\in\partial V)=0$, for the system boundary $\partial V$, which we take to infinity. Taking a gradient on both sides, we get \begin{align} &\mathbf{E}(\mathbf{x}) = \mathbf{E}_0 - \int d^2\mathbf{x}^\prime~[(\delta\mathbf{\sigma}(\mathbf{x}^\prime)\cdot\mathbf{E}(\mathbf{x}^\prime))\cdot\nabla^\prime]\cdot\nabla\mathcal{G}(\mathbf{x},\mathbf{x}^\prime),~~\mathrm{or} \nonumber \\ &\mathbf{\chi}(\mathbf{x}) = \delta\mathbf{\sigma}(\mathbf{x})- \delta\mathbf{\sigma}(\mathbf{x})\cdot\int d^2\mathbf{x}^\prime~\mathcal{K}(\mathbf{x},\mathbf{x}^\prime)\cdot\mathbf{\chi}(\mathbf{x}^\prime), \end{align} where the second line follows from the first by left-multiplying both sides by $\delta\mathbf{\sigma}(\mathbf{x})$, and then demanding that it hold for any $\mathbf{E}_0$, and $\mathcal{K}_{ij}(\mathbf{x},\mathbf{x}^\prime) = \partial_i\partial^\prime_j\mathcal{G}(\mathbf{x},\mathbf{x}^\prime)$. We now assume that the disorder divides the sample into macroscopic domains whose size is much smaller than the sample size, but much bigger than the smaller of the electron mean-free path and electron cyclotron radius, and that the tensors $\mathbf{\chi}$ and $\delta\mathbf{\sigma}$ take on constant values in a given domain.
For a given domain $p$, we can write \begin{equation} \mathbf{\chi}^p = \delta\mathbf{\sigma}^p- \delta\mathbf{\sigma}^p\cdot\int_p d^2\mathbf{x}^\prime~\mathcal{K}(\mathbf{x}\in p,\mathbf{x}^\prime)\cdot\mathbf{\chi}^p - \delta\mathbf{\sigma}^p\cdot\sum_{p^\prime\neq p}\int_{p^\prime} d^2\mathbf{x}^\prime~\mathcal{K}(\mathbf{x}\in p,\mathbf{x}^\prime)\cdot\mathbf{\chi}^{p^\prime}. \label{EMApre} \end{equation} For the second integral, over domains other than the given domain, we replace $\mathbf{\chi}^{p^\prime}$ with its spatial average $\langle \mathbf{\chi}\rangle$. This is the `effective-medium' approximation~\cite{Stroud1975}: the equivalent conductivity of each domain is controlled in part by a `mean-field' of the domains surrounding it. However, since our conventions are set up so that $\langle \mathbf{\chi}\rangle = 0$, this second term drops out. Then, spatially averaging both sides, we obtain \begin{equation} \sum_p V^p \mathbf{\chi}^p = 0~~\Rightarrow~~\sum_p V^p(\mathbb{I}+\delta\mathbf{\sigma}^p\cdot\mathcal{M}^p)^{-1}\cdot\delta{\mathbf{\sigma}}^p = 0, \label{EMA} \end{equation} where $V^p$ is the volume fraction of domain $p$ and $\mathcal{M}^p_{ij} = \oint_{\partial^\prime p}\partial_i\mathcal{G}(\mathbf{x},\mathbf{x}^\prime)\hat{n}^{\prime p}_j$, where the integral is over the primed coordinate, and $\hat{\mathbf{n}}^{\prime p}$ is the outward-pointing unit normal vector on the boundary of $p$, varying with the primed coordinate. If the local conductivity tensor $\mathbf{\sigma}(\mathbf{x})$ is known in all domains, (\ref{EMA}) can then be solved for $\mathbf{\sigma}^e$. In our two-dimensional electron problem, we expect $\mathbf{\sigma}^e_{ij} = \delta_{ij}\sigma^e_L-\epsilon_{ij}\sigma^e_H$, where $\sigma^e_L$ is even in $B$ and $\sigma^e_H$ is odd in $B$ because of Onsager reciprocity, so we obtain the Green's function $\mathcal{G}(\mathbf{x},\mathbf{x}^\prime) = -\ln(|\mathbf{x}-\mathbf{x}^\prime|0^+)/(2\pi\sigma^e_L)$.
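As a quick numerical check, the boundary integral $\mathcal{M}^p_{ij}$ can be evaluated for a circular domain with this Green's function, placing $\mathbf{x}$ at the domain center; the result should be $\delta_{ij}/(2\sigma^e_L)$. This is a sketch, with $\sigma^e_L=1$ and unit radius as illustrative choices:

```python
import numpy as np

def M_circle(sigma_L=1.0, R=1.0, n=20000):
    """M_ij = (contour integral of) dl' (d_i G)(x=0, x') n'_j over a circle
    of radius R, with G = -ln|x - x'| / (2 pi sigma_L); the expected
    result is delta_ij / (2 sigma_L)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    xp = R * np.stack([np.cos(theta), np.sin(theta)], axis=1)  # boundary points
    grad_G = xp / (2.0 * np.pi * sigma_L * R**2)  # gradient of G, evaluated at x = 0
    normal = xp / R                               # outward unit normal on the circle
    dl = 2.0 * np.pi * R / n                      # arc-length element
    return dl * np.einsum('ki,kj->ij', grad_G, normal)

print(np.round(M_circle(), 6))  # close to [[0.5, 0], [0, 0.5]]
```

The off-diagonal components vanish by symmetry, and the diagonal value $1/(2\sigma^e_L)$ is independent of the radius $R$, consistent with the self-consistency of the circular-domain approximation discussed next.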
Then, for circular domains, $\mathcal{M}^p_{ij} = \delta_{ij}/(2\sigma^e_L)$ is indeed independent of $\mathbf{x}$. This makes (\ref{EMApre}) and (\ref{EMA}) self-consistent~\cite{Stroud1975}. For other domain shapes, there are corrections when $\mathbf{x}$ is near the domain boundary. For an analytically solvable toy model, we assume that $\mathbf{\sigma}(\mathbf{x})$ can take either of two possible values $\mathbf{\sigma}^a$ and $\mathbf{\sigma}^b$ in circular domains that are spatially randomly distributed over the sample~\cite{Guttal2005,Dykhne1971} (Fig.~\ref{Redpillgreenpill}a). As far as the asymptotic low- and high-field magnetoresistance goes, this already yields the same qualitative behavior at large and small fields as a more complicated model with a distribution of different types of domains~\cite{Ramakrishnan2017}. Furthermore, the `mean-field'-like effective-medium approximation has also been shown to produce results for the magnetoresistance equivalent to exact numerical solutions of (\ref{basic}) in random-resistor network models~\cite{Ramakrishnan2017,Parish2005,Parish2003}. In the simplified two-type scenario, (\ref{EMA}) then simplifies to~\cite{Guttal2005} \begin{equation} V^a\left(\mathbb{I}+\frac{\mathbf{\sigma}^a-\mathbf{\sigma}^e}{2\sigma^e_L}\right)^{-1}\cdot(\mathbf{\sigma}^a-\mathbf{\sigma}^e) + (1-V^a)\left(\mathbb{I}+\frac{\mathbf{\sigma}^b-\mathbf{\sigma}^e}{2\sigma^e_L}\right)^{-1}\cdot(\mathbf{\sigma}^b-\mathbf{\sigma}^e) = 0. \label{EMAsimple} \end{equation} If $V^a=1/2$, this yields an unsaturating high-field linear magnetoresistance~\cite{Guttal2005}. For the model with a distribution of domains, the equivalent condition is that the distribution is symmetric about its mean~\cite{Ramakrishnan2017}.
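Equation (\ref{EMAsimple}) is also straightforward to solve numerically for $\sigma^e_L$ and $\sigma^e_H$. The sketch below (illustrative conductivity values, SciPy's `fsolve` as the root finder) checks that at $B=0$ and $V^a=1/2$ the solution reduces to Dykhne's exact result $\sigma^e_L = \sqrt{\sigma^a_L\sigma^b_L}$~\cite{Dykhne1971}:

```python
import numpy as np
from scipy.optimize import fsolve

def tensor(sL, sH):
    """2x2 conductivity tensor sigma_ij = delta_ij*sL - eps_ij*sH."""
    return np.array([[sL, -sH], [sH, sL]])

def ema_residual(x, sig_a, sig_b, Va=0.5):
    """Left-hand side of the two-component effective-medium equation,
    reduced to its two independent components (the residual inherits
    the (sL, sH) tensor structure)."""
    sL, sH = x
    sig_e = tensor(sL, sH)
    res = np.zeros((2, 2))
    for V, sig in ((Va, sig_a), (1.0 - Va, sig_b)):
        d = sig - sig_e
        res += V * np.linalg.inv(np.eye(2) + d / (2.0 * sL)) @ d
    return [res[0, 0], res[1, 0]]

def solve_ema(sig_a, sig_b):
    x0 = [0.5 * (sig_a[0, 0] + sig_b[0, 0]),  # arithmetic-mean initial guess
          0.5 * (sig_a[1, 0] + sig_b[1, 0])]
    return fsolve(ema_residual, x0, args=(sig_a, sig_b))

# B = 0, V^a = 1/2: the effective medium reduces to the geometric mean.
sL, sH = solve_ema(tensor(1.0, 0.0), tensor(4.0, 0.0))
print(round(float(sL), 6))  # 2.0
```

With nonzero Hall components in $\mathbf{\sigma}^{a,b}$, the same solver gives the field-dependent $\sigma^e_{L,H}$ used below.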
For $V^a$ detuned from $1/2$, the magnetoresistance saturates, but there is an intermediate regime of fields in which the magnetoresistance is approximately linear, and the saturation field becomes arbitrarily large as $V^a$ approaches $1/2$~\cite{Guttal2005}. The rough reasoning behind the saturation appears to be that, if one type of domain is far more common than the other, the current flowing through the sample mainly finds paths involving only one type of domain, and hence the global magnetoresistance behaves like that of a single domain, which saturates at high fields~\cite{Parish2005}. We will do our analysis with the symmetric distribution $V^a = 1- V^a = 1/2$. A physical picture for the high-field linear magnetoresistance was provided in Ref.~\onlinecite{Parish2003}, and involves the contribution of the local Hall resistance (which is linear in $B$) to the global longitudinal resistance due to the distortion in current paths arising from spatial fluctuations of the local Hall resistance: In a uniform sample, charge accumulation at the edges of the sample parallel to the applied electric field produces a global Hall electric field perpendicular to the applied electric field that cancels out Hall currents throughout the sample. On the other hand, if the sample has a disordered local conductivity tensor, the global Hall electric field no longer cancels out local Hall currents throughout the sample. Thus, the global longitudinal resistance becomes dependent on the local Hall resistances. \subsection{Application} We note that in (\ref{mfllcmag}), the $\mathrm{sech}$ is strongly peaked near $E_1=0$, whereas for a finite temperature, $\mathrm{Im}[\Sigma^c_R(E_1)]$ does not vary drastically with $E_1$ near $E_1=0$ over the range in which the $\mathrm{sech}$ is appreciable. We can thus replace $\mathrm{Im}[\Sigma^c_R(E_1)]$ with $\gamma/2$ from (\ref{dumprate}).
Regardless of this approximation, we note from (\ref{mfllcmag}) that $\sigma^{\mathrm{MFL}}_L \sim T/B^2$ and $\sigma^{\mathrm{MFL}}_H \sim 1/B$ at large $B$, which is what the effective-medium theory needs in order to produce linear magnetoresistance at large $B$. This asymptotic scaling holds even if we have multiple MFL bands, in which case their conductivity tensors add to give the appropriate local conductivity tensor. \begin{figure} \begin{center} \includegraphics[height=2.5in]{Fig4a.pdf}~~~~~~~~~~\includegraphics[height=2.5in]{Fig4b.pdf} \end{center} \caption{(a) A cartoon of a two-dimensional sample with a random distribution of approximately equal fractions of two types of domains, for which an exact analytic solution of the effective-medium equations for magnetotransport is possible. The magnetic field $B$ points out of the plane of the sample. (b) Plots of the normalized change in global longitudinal resistance due to dimensionless magnetic field $\mathcal{B}$ (orange) and due to temperature $T$ (blue), obtained from (\ref{mtp2}). We use $E^F_b/E^F_a=0.8$ and $\gamma_b/\gamma_a=0.8$. The dimensionless magnetic field $\mathcal{B}$ is the flux per unit cell $Ba^2$ in units of $\hbar/e$ (\ref{BcalB}). We use $m=0.005\sim 1/E^F_{a,b}$. The orange ($\mathcal{B}$) curve is evaluated at $T=1.0$ and $\gamma_a=0.1$ and the blue ($T$) curve is evaluated at $\mathcal{B}=0.0025$ and $\gamma_a=0.1 T$. The curves are slightly offset for visualization, but actually lie on top of each other, demonstrating a scaling between magnetic field and temperature.
Both the $\mathcal{B}$ and $T$ dependencies are quadratic at small fields \change{or} temperatures and cross over to linear at large fields \change{or} temperatures.} \label{Redpillgreenpill} \end{figure} We thus input the following conductivity tensors into the effective-medium calculation (we take the band mass $m=k_F/v_F$ to be the same in both types of electron-like domains $a$ and $b$): \begin{equation} \sigma^{a,b}_{ij} = \frac{\sigma^{\mathrm{MFL}}_{0a,b}}{1+\mathcal{B}^2/(m\gamma_{a,b})^2}\left(\delta_{ij} + \epsilon_{ij} \frac{\mathcal{B}}{m\gamma_{a,b}}\right). \label{mflsigma} \end{equation} The scattering rate $\gamma$ can fluctuate across domains due to fluctuations in $g$, induced by fluctuations in the densities of islands, and the base conductivity $\sigma^{\mathrm{MFL}}_{0a,b}$ can fluctuate across domains due to fluctuations in both $g$ and in the electron density. Then, solving~(\ref{EMAsimple}) for $V^a=1-V^a=1/2$, we get the global longitudinal and Hall resistances respectively, \begin{align} &\rho^e_L \equiv \frac{\sigma^e_L}{\sigma^{e2}_L+\sigma^{e2}_H} = \frac{\sqrt{(\mathcal{B}/m)^2\left(\gamma_a \sigma^{\mathrm{MFL}}_{0a}-\gamma _b \sigma^{\mathrm{MFL}}_{0b}\right)^2+\gamma _a^2 \gamma _b^2 \left(\sigma^{\mathrm{MFL}}_{0a}+\sigma^{\mathrm{MFL}}_{0b}\right)^2}}{\gamma _a \gamma _b (\sigma^{\mathrm{MFL}}_{0a}\sigma^{\mathrm{MFL}}_{0b})^{1/2} \left(\sigma^{\mathrm{MFL}}_{0a}+\sigma^{\mathrm{MFL}}_{0b}\right)}, \nonumber \\ &\rho^e_H \equiv -\frac{\sigma^e_H/\mathcal{B}}{\sigma^{e2}_L+\sigma^{e2}_H} = \frac{\gamma _a+\gamma _b}{m \gamma _a \gamma _b \left(\sigma^{\mathrm{MFL}}_{0a}+\sigma^{\mathrm{MFL}}_{0b}\right)}. \label{mtp1} \end{align} The magnetoresistance $\rho^e_L(\mathcal{B})-\rho^e_L(0)$ is thus linear as promised at high fields, and is quadratic at low fields. 
Considering the isotropic parabolic dispersion $\varepsilon_k = k^2/(2m)-\Lambda/2$, and using (\ref{s0MFL}), (\ref{dumprate}), and $\nu(0)=m$, we can write $\sigma^{\mathrm{MFL}}_{0a,b}= M w_\sigma E_F^{a,b}/\gamma_{a,b}$, where $w_\sigma=0.135689$ and $E_F^{a,b} = m v_{Fa,b}^2/2$ are the Fermi energies. We can then rewrite (\ref{mtp1}) as \begin{equation} w_\sigma\rho^e_L = \frac{\left(\gamma_a^2+\left(\frac{\mathcal{B}}{m}\right)^2\frac{\left(1-E^F_b/E^F_a\right)^2}{\left(\gamma_b/\gamma_a+E^F_b/E^F_a\right)^2}\right)^{1/2}}{M(\gamma_a/\gamma_b)^{1/2}(E^F_aE^F_b)^{1/2}},~~w_\sigma\rho^e_H = \frac{(1+\gamma_b/\gamma_a)}{M mE^F_a(E^F_b/E^F_a+\gamma_b/\gamma_a)}. \label{mtp2} \end{equation} Plots of the normalized change in $\rho^e_L$ due to $\mathcal{B}$ and $T$ are shown in Fig.~\ref{Redpillgreenpill}b. This simplified model with two types of domains thus leads to a global longitudinal resistance that adds $T$ and $B$ in quadrature\footnote{Holographic realizations of a variety of magnetoresistance scalings, including quadrature, were found in Ref.~\onlinecite{Kiritsis2017}.}, as seen in the experiment of Ref.~\onlinecite{Hayes2016}. A continuous Gaussian distribution of electron densities across the domains will also yield a qualitatively similar scaling function to the above quadrature function~\cite{Ramakrishnan2017}. In general, the zero-field linear-in-$T$ and high-field linear-in-$B$ behavior (as well as the scaling between $B$ and $T$) will emerge universally from such resistor-network models, but the interpolation between the two regimes is sensitive to the distribution of the local conductivity tensors. The Hall resistance $\rho^e_H$ is sensitive to the disorder distribution and thus is not trivially controlled by the average carrier density $\propto Mm(E^F_b+E^F_a)/2$ even for the isotropic Fermi surfaces we consider, unless $\gamma_a=\gamma_b$. In this simplified version of the problem, $\rho^e_H$ is independent of temperature.
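The quadrature form of (\ref{mtp2}) makes the advertised $B$--$T$ scaling explicit. A minimal numerical sketch, using the parameter choices of Fig.~\ref{Redpillgreenpill}b ($E^F_b/E^F_a=0.8$, $\gamma_b/\gamma_a=0.8$, $m=0.005$, $\gamma_a=0.1\,T$; $E^F_a=1$ and $M=1$ are assumed normalizations, not values from the text):

```python
import numpy as np

def rho_L(B, T, m=0.005, EF_ratio=0.8, g_ratio=0.8, EFa=1.0, M=1.0):
    """w_sigma * rho^e_L from (mtp2), with gamma_a = 0.1*T as the
    linear-in-T MFL rate; EFa = 1 and M = 1 are illustrative choices."""
    gamma_a = 0.1 * T
    field = (B / m)**2 * (1.0 - EF_ratio)**2 / (g_ratio + EF_ratio)**2
    return (np.sqrt(gamma_a**2 + field)
            / (M * np.sqrt(1.0 / g_ratio) * np.sqrt(EFa * EF_ratio * EFa)))

# rho_L(B, T) / T depends only on B / T (quadrature scaling):
print(np.isclose(rho_L(0.01, 1.0) / 1.0, rho_L(0.02, 2.0) / 2.0))  # True
```

At fixed $T$, the same function is quadratic in $B$ at small fields and asymptotically linear at large fields, matching the two limits quoted above.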
However, we expect that more complicated disorder distributions generically give rise to some temperature dependence of $\rho^e_H$, which would depend on the disorder distribution even at a qualitative level. A detailed analysis of such effects is beyond the scope of the present work, and will be considered in the future. Since $\gamma_{a,b} \propto T$, the crossover from quadratic to linear magnetoresistance occurs at a field scale proportional to temperature. Additionally, if we use the `Planckian' choice of parameters, and if the disorder distribution is such that $|1-E^F_b/E^F_a|/(\gamma_b/\gamma_a+E^F_b/E^F_a)$ is an $\mathcal{O}(1)$ number, the crossover occurs at a field scale given by $\mu_B B \sim k_B T$, as discussed at the end of Sec.~\ref{mfltransport}. While this is most definitely a fine-tuned situation, and would require substantial variation in the charge densities between domains, it is within the scope of our theory. Alternatively, if $\gamma_a(\gamma_b/\gamma_a+E^F_b/E^F_a)/(k_B T |1-E^F_b/E^F_a|)$ is an $\mathcal{O}(1)$ quantity (but $\gamma_a\propto T$ is much smaller than $k_B T$), then $\rho^e_L$ can still be controlled by the approximate scaling function $\sqrt{1+(\mu_B B)^2/(k_B T)^2}$ for much smaller variations in the charge densities between domains. The effective-medium theory is applicable when the domain sizes are much greater than the smaller of the electron mean free path and electron cyclotron radius in a single domain. At low temperatures and weak fields, electrons can move through a domain without significant loss or deflection of momentum, and the effects of scattering off the boundaries between domains then become important, adding a temperature-independent residual resistivity to the result of the above computation. In our analysis, we have neglected the effects of the feedback of heat currents on charge transport. 
In general, one would have an additional set of equations, analogous to (\ref{basic}), for heat currents and temperature gradients in place of charge currents and electric fields. Since there is no concept of bulk fluid motion due to translational symmetry breaking at the microscopic level, the equations for heat currents and charge currents would only be coupled if the local thermoelectric tensor $\mathbf{\alpha}(\mathbf{x})$ were nonzero. However, in the MFL, with $T\ll E^F_{a,b}$, $\mathbf{\alpha}(\mathbf{x})$ is negligible as discussed in Sec.~\ref{mfltransport}, and our decoupled analysis of charge currents is hence still applicable. Somewhere in the crossover region between the MFL and the IM, a regime may exist where both $\mathbf{\alpha}(\mathbf{x})$ and the effects of magnetic fields on the local conductivity tensors are simultaneously significant, and there may be a significant feedback of thermoelectric effects on the charge magnetotransport. We leave a detailed study of such effects for future work. \section{Discussion} \label{discuss} The strange metal phases of the cuprate and pnictide high-$T_c$ superconductors occur at finite dopings, and consequently display significant amounts of disorder. Experimentally, there is direct evidence for disorder at (i) microscopic levels, due to irregular placements of dopant atoms~\cite{McElroy2005}, and (ii) meso- and macroscopic levels, due to a variety of factors ranging from crystalline imperfections to charge puddles caused by impurities and non-isovalent dopants~\cite{Hanaguri2007,Kohsaka2012}. Additionally, due to these materials being layered, with relatively poor interlayer conductivities, imperfections in a layer may further induce heterogeneities in the charge distributions of adjacent layers through Coulomb forces.
We have attempted to paint an impressionist picture of transport and magnetotransport in a strange metal by developing a solvable model that incorporates disorder at both microscopic and macroscopic levels. At the microscopic level, we built off remarkable recent developments~\cite{Gu2017,Sachdev2017,Balents2017,Zhang2017,Haldar2017,McGreevy2017} in realizing \rchange{solvable} field-theoretic descriptions of extended non-Fermi liquid phases using SYK models. \rchange{These models couple together SYK quantum islands without quasiparticle excitations, and show how this can lead to non-Fermi liquid transport in an extended finite-dimensional phase. In our model we} locally and randomly couple mobile conduction electrons to immobile \rchange{quantum} islands described by SYK models in a particular way. \rchange{In this manner we realized a disordered marginal Fermi liquid} (MFL) phase \change{at low temperatures} with a linear-in-$T$ resistivity, \change{and an identifiable Fermi surface.} We determined the two-point functions, conductivities, and magnetotransport properties of this phase exactly in two spatial dimensions, finding a scaling between magnetic field and temperature in the conductivity tensor. Additionally, we showed that nearly-local `incoherent-metal' (IM) phases, \change{with no identifiable Fermi surface}, are also realized in our model \change{at higher temperatures} in certain parameter regimes; \change{these} IMs can also have linear-in-$T$ resistivities, but have very weak effects of magnetic fields on their charge transport properties, making them unlikely candidates for a description of the strange metals seen in experiments at lower temperatures, \change{ which is where the large linear-in-$B$ magnetoresistances are also observed}. However, the IMs may still be the correct concept at high temperatures, due to strong bad-metallic behavior displayed through their large resistivities, \change{as is seen in experiments. 
It should also be noted that the large linear magnetoresistances are {\it not} observed in experiments performed at high temperatures where the system is a bad metal, with a zero-field resistivity much larger than the quantum unit $h/e^2$~\cite{Hayes2016,Giraldo2017}, which is consistent with the behavior of an IM.} \change{While the MFL regime of our model does indeed have a linear-in-$T$ resistivity, and also a $B/T$ scaling at approximately the observed $B$ scale, it yields a magnetoresistance which saturates at large $B$. To obtain a non-saturating magnetoresistance, we argued for the importance of macroscopic disorder in the MFL regime. To model such effects,} we applied the effective-medium approximation to a sample containing domains of our disordered linear-in-$T$ MFLs with varying \change{electron} densities. While the effective-medium approximation is a mean-field theory at the level of Kirchhoff's and Ohm's laws for current flow, it has been shown to be equivalent to exact numerical simulations of random-resistor networks for magnetotransport~\cite{Ramakrishnan2017}, and has also had remarkable successes in describing experimentally observed magnetoresistances in other two-dimensional disordered materials~\cite{Ramakrishnan2017,Ping2014,Ramakrishnan2015}. For certain simplified disorder distributions, the effective-medium equations for magnetotransport are analytically solvable. These exactly solvable equations yield, in our case, a magnetoresistance that is quadratic in field at low fields, crosses over to linear in field at high fields, and is controlled by a scaling function between field and temperature, as seen in recent experiments on the pnictide and cuprate strange metals~\cite{Hayes2016,Giraldo2017}.
On the experimental front, the anomalous high-field linear magnetoresistance in the cuprate and pnictide strange metals is already known to be dependent on the component of the magnetic field perpendicular to the sample plane~\cite{AnalytisPrivate}, a feature that our model reproduces, since it is based on orbital effects of the magnetic field on charge transport. Furthermore, a strong linear component of the high-field magnetoresistance is seen even away from the critical doping at which the zero-field resistance is almost exactly linear-in-$T$~\cite{Hayes2016,Giraldo2017}. The disorder-based mechanism considered by us would be consistent with this observation, as the zero-field linear-in-$T$ behavior is not a prerequisite for high-field disorder-induced linear magnetoresistance; all that is required is that the local conductivity tensor behaves like (\ref{mflsigma}) as a function of magnetic field. On the theoretical front, we have been able to analytically calculate non-trivial magnetotransport properties in a somewhat contrived, but solvable, model of a disordered non-Fermi liquid. Studies along the lines of Refs.~\onlinecite{MSB89,BF92,senthilLM} could show how such models emerge naturally as effective theories of realistic, disordered, single-band Hubbard models. We hope that our study motivates further investigations into the interplay of disorder and strong interactions in the transport properties of the strange metal phases of the pnictides and cuprates. \section*{Acknowledgements} We thank J. G. Analytis, L. Balents, J. C. S\'eamus Davis, E.-A. Kim, A. Lucas and B. Ramshaw for useful discussions. AAP was supported by the NSF Grant PHY-1125915 via a KITP Graduate Fellowship. This research was also supported by the NSF Grant DMR-1664842, by the MURI grant W911NF-14-1-0003 from ARO, and by funds provided by the U.S. Department of Energy (DOE) under cooperative research agreement DE-SC0009919.
Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research and Innovation. SS also acknowledges support from Cenovus Energy at Perimeter Institute and from the Hanna Visiting Professor program at Stanford University. As this work was nearing completion, we learned of related but independent work by Chowdhury {\it et al.} on realizing translationally-invariant microscopic models of non-Fermi liquids using SYK models~\cite{Chowdhury2018}.
\section{Introduction} A key feature of the upcoming 5G technology is the support for Ultra-Reliable and Low Latency Communication (URLLC) \cite{carvalho2016random}. URLLC may be supported either through the 5G new air interface \cite{ji2017introduction} or through the integration of different existing communication technologies \cite{andrews2014will} \cite{monserrat2015metis}. URLLC will enable the support of new use cases with required packet delivery success probability as high as 5-nines ($1\!-\!10^{-5}$) to 9-nines ($1\!-\!10^{-9}$), while at the same time the acceptable latency may be at the sub-second level or even down to a few milliseconds \cite{ratasuk2015recent}. There are proposals for how to decrease the latency in future cellular systems, e.g., by reducing the \ac{TTI} \cite{lahetkangas2014achieving,tullberg2014towards}, fast uplink access \cite{3GPPTR-36881}, or by puncturing URLLC resources on top of eMBB \cite{ji2017introduction}. While 5G with URLLC support (rel. 16) is still several years from deployment, URLLC can already be achieved through integration of multiple communication technologies. The use of multiple communication technologies is conceptually very similar to many existing multipath protocols that increase end-to-end reliability~\cite{qadir2015exploiting}. However, low latency requirements exclude reactive protocols that rely on, e.g. retransmission or backup paths. For low latency, we consider \emph{interface diversity}, which is in fact a type of path diversity \cite{apostolopoulos2000reliable}, where each path must use a different communication interface. The closest examples of related work that we have identified are the following. In \cite{yap2012making,yap2013scheduling}, the authors demonstrate the use of Software Defined Networking to distribute application packets across multiple available interfaces to increase application throughput.
In \cite{singh2016optimal}, the authors consider fairness optimized multi-link aggregation in heterogeneous wireless systems. Candidate architectures for enabling multi-connectivity and high reliability in 3GPP cellular systems are studied in \cite{michalopoulos2016user} and \cite{ravanshid2016multi}. Most recently, in \cite{wolf2017diversity}, the authors present a physical layer analysis of outage probability in multi-connectivity scenarios. While the use of multiple interfaces, based on different technologies and potentially using independent paths, clearly improves reliability, in this work we study how latency can also be reduced using this technique. If the payload is split in parts and different parts are sent over each interface, it is possible to trade off latency and reliability according to the targeted application. We demonstrated this principle very simply in previous work \cite{nielsen2016latency}, and in the present paper we explore the principle in more detail. Specifically, we extend our previous analyses as follows: 1) we demonstrate how coding can be exploited to enable flexible splitting of payload across interfaces; 2) we focus the analysis on $N$ independent wireless interfaces, whereas the previous work focused on a specific scenario with only two wireless interfaces; 3) we formulate the optimal payload splitting problem as well as the generic evaluation method and present corresponding numerical results; 4) we provide an analytic solution for the optimal split of data between two interfaces that minimizes the expected latency; and 5) finally, we use experimental latency data to validate the proposed methodology. We initially present the system model and transmission strategies in sec.~\ref{sec:system_model}. The methodology for calculating reliability of the considered strategies is presented together with the optimization problem in sec. \ref{sec:reliability_miftx}. In the following sec.
\ref{sec:analysis} we provide an analytical solution to the subproblem of splitting between two interfaces. Numerical results are given and discussed in sec. \ref{sec:results}, after which an experimental validation is presented in sec. \ref{sec:exp_results}. Conclusions are given in sec. \ref{sec:conclusion}. \section{System model}\label{sec:system_model} We consider a Machine-to-Machine (M2M) device, equipped with $N$ wireless communication interfaces, that communicates critical information, e.g. sensor measurements or alarm messages, to a remote host. The model is depicted in Fig. \ref{fig:network_diagram}. In this work we assume that interface failures occur independently and that measurements of end-to-end delay and packet loss are available for the considered interfaces, e.g. through continual network monitoring. \begin{figure}[htb] \centering \includegraphics[width=\linewidth]{network_diagram} \caption{Multiple paths between M2M device (left) and remote host (right).} \label{fig:network_diagram} \vspace{-9pt} \end{figure} \subsection{Transmission Strategies}\label{sec:strategies} For transmitting the stream of messages from the M2M device to the end-host, we consider the following strategies (see Fig.~\ref{fig:strategies}): \subsubsection{Cloning} In this simple approach, the source device sends a full copy of each message through each of the $N$ available interfaces. Since only one copy is needed at the receiver to decode the message, cloning makes the communication robust at the expense of $N$-fold redundancy. \subsubsection{Splitting} Instead of sending a full copy on each interface, with this strategy only a fraction of the message is sent on each interface. This allows trading off reliability and latency through the selection of the fraction sizes. We assume that the payload is encoded, such that we can generate a desired number of coded fragments to be sent through different interfaces.
This can be achieved using for example rateless codes \cite{mackay2005fountain} or Reed-Solomon codes \cite{wicker1999reed}. The receiver will be able to decode the encoded message with very high probability as long as it receives coded fragments corresponding to approximately $100 (1+\epsilon) \%$ of the initial message size. A typical value is $\epsilon=0.05$ \cite{mackay2005fountain} and we denote this threshold as $\gamma_\text{d}=1.05$. The coded fragments of a message that are to be sent over the same interface are grouped together in a single packet to avoid excess protocol overhead. For a specific payload message, we let the code in use (e.g. rateless or Reed-Solomon based) generate coded fragments of a relatively small size, e.g. 10 bytes. When nonuniform, \emph{weighted} splitting is used, the challenge is to determine how many fragments to assign to each interface. Depending on whether identical or different types of interfaces are used, splitting can be realized through either $\bm{k}$-out-of-$\bm{N}$ splitting or weighted splitting, respectively: \begin{description} \item [{$\bm{k}$-out-of-$\bm{N}$}] splitting generates $N$ equally sized coded fragments from the payload and the receiver needs to receive at least $k$ of them in order to decode the message. This strategy allows trading off reliability and latency, since large redundancy leads to higher reliability but longer transmission times, whereas small redundancy offers lower error protection but shorter transmission times. \item [{Weighted}] splitting divides the payload across interfaces so that the size of the per-interface packet is optimized according to a specific objective. That objective could be to minimize the expected overall transmission latency or to maximize the reliability for a given latency constraint. The optimal solution is, however, not trivial, as our analysis shows.
\end{description} \begin{figure}[t] \centering \subfigure[Cloning]{\includegraphics[width=0.4\textwidth]{strategies_cloning}} ~ \subfigure[2-out-of-3]{\includegraphics[width=0.4\textwidth]{strategies_2-out-of-3}} \subfigure[Weighted]{\includegraphics[width=0.4\textwidth]{strategies_weighted}} \caption{Transmission strategies, with 2-out-of-3 as example of $k$-out-of-$N$. The time instant $\tau$ is when the payload can be successfully decoded.} \label{fig:strategies} \vspace{-9pt} \end{figure} \subsection{Latency-reliability Function} Typically, the duration of a packet transmission depends on the packet size $B$. As a result, we specify the latency-reliability function of interface $i$ as $F_i(x,B)$. This gives the probability of being able to transmit a data packet of $B$ bytes from a source to a destination via interface $i$ within a latency deadline $x$. In other words, the value of $F_i(x,B)$ is the achievable reliability $P(X \leq x)$ for a latency $x$ and payload size $B$. In the following, let $\gamma_i$ specify the fraction of coded payload assigned to interface $i$, where $\gamma_i\in[0,\gamma_\text{d}]$. Also, let $P_\text{e}$ refer to the long-term error or packet loss probability of an interface, as defined in references \cite{strom20155g,nielsen2016latency}. \section{Reliability of interface diversity}\label{sec:reliability_miftx} This section presents the proposed methodologies for achieving reliability through interface diversity. Generally, we assume that the interfaces fail independently, i.e. that the interfaces do not have common error causes. \subsection{Evaluating reliability for weight assignment}\label{sec:evaluating_weight} The general approach to evaluating the latency-reliability function for a specific transmission strategy is to consider, for each possible outcome (in terms of packet losses), whether enough payload has been received to decode the message, and then to sum up the success probabilities according to the law of total probability.
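The procedure just outlined, enumerating all $2^N$ packet-loss outcomes, keeping the decodable ones, and summing their probabilities, can be sketched directly. In the example call, constant success probabilities stand in for measured latency-reliability functions:

```python
from itertools import product

def F_weighted(x, gammas, B, F_list):
    """Latency-reliability of weighted splitting via outcome enumeration.
    F_list[i](x, size) is the latency-reliability function of interface i;
    gammas[i] is the coded-payload fraction assigned to interface i."""
    total = 0.0
    for outcome in product([0, 1], repeat=len(gammas)):  # 1 = packet received
        # include only outcomes whose received fractions allow decoding
        if sum(c * g for c, g in zip(outcome, gammas)) < 1.0:
            continue
        p = 1.0
        for c, g, F in zip(outcome, gammas, F_list):
            Fi = F(x, g * B)
            p *= Fi if c == 1 else (1.0 - Fi)
        total += p
    return total

# Two interfaces, each delivering within the deadline with probability 0.9;
# gamma = 1 on both reduces to cloning: 1 - 0.1^2.
const = lambda x, size: 0.9
print(F_weighted(1.0, [1.0, 1.0], 100, [const, const]))
```

The enumeration costs $O(2^N)$, which is unproblematic for the small number of interfaces considered here.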
The steps to do this are explained in the following. Note that payload assignments where $\sum_{i=1}^N \gamma_i < \gamma_\text{d}$ should be avoided, since in such cases the coded packets can never be decoded. For the enumeration of all possible events, let $\mathbf{C}$ be a $2^N \times N$ matrix listing all possible outcomes for the $N$ interfaces, where a 1 or a 0 denotes the successful or failed reception, respectively, of a packet from the interface of that column: \begin{equation} \textbf{C} = \begin{bmatrix} 0 & \cdots & 0\\ 0 & \cdots & 1\\ \vdots & \vdots & \vdots \\ 1 & \cdots & 1 \end{bmatrix}. \end{equation} The element $c_{h,i}$ in the $h$th row and $i$th column of $\mathbf{C}$ refers to the $i$th interface in the $h$th outcome. For a specific choice of $\bm{\gamma}$, we use the law of total probability to evaluate the resulting latency-reliability function by summing the probabilities of all successful events. The successful events are the outcomes in which the received coded packets can be decoded. The resulting latency-reliability function is: \begin{equation}\label{eq:weighted_eval} F_\text{weighted}(x,\bm{\gamma},B) = \sum\limits_{h=1}^{2^N} d_h \prod\limits_{i=1}^N G_i(x,\gamma_i B) \end{equation} where \begin{equation} d_h = \left\{ \begin{array}{lr} 1, & \text{if } \sum_{i=1}^N c_{h,i} \cdot \gamma_i \geq 1\\ 0, & \text{otherwise} \end{array}\right. \end{equation} ensures that we only include outcomes in which at least the minimal number of payload fragments needed to decode the payload is received. Further, $G_i(x,\gamma_i B)$ is defined as: \begin{equation} G_i(x,\gamma_i B) = \left\{ \begin{array}{ll} F_i(x,\gamma_i B), & \text{if } c_{h,i} = 1\\ 1-F_i(x,\gamma_i B), & \text{if } c_{h,i} = 0 . \end{array}\right. \end{equation} \subsection{Cloning} For transmissions using packet cloning over $N$ interfaces that can justifiably be considered independent, e.g.
cellular connections to different eNBs or cellular connections through different operators, we can either use the method presented above or the simpler, traditional parallel-systems method \cite{rausand2004system} to combine the latency-reliability functions as: \begin{equation} F_\text{$N$-clon}(x,\bm{\gamma}, B) = 1-\prod\limits_{i=1}^N (1-F_i(x,\gamma_i B))\label{eq:f_k_par}. \end{equation} In either case $\gamma_i=1$ for $i=1, \ldots, N$. \subsection{$k$-out-of-$N$ splitting} While the $k$-out-of-$N$ splitting strategy is only optimal for the case of identical interfaces, it can in principle be used in any case, with the best results in situations where the properties of the available interfaces are comparable. Generally, we can evaluate the latency-reliability function using the method in sec. \ref{sec:evaluating_weight}, with $\gamma_i = \sfrac{1}{k}$ for $i=1, \ldots, N$. In the special case of $N$ identical interfaces, the resulting latency-reliability function can be calculated as: \begin{equation} F_\text{$k$-of-$N$}(x,\gamma B) = \sum\limits_{r=k}^N \binom{N}{r} F(x,\gamma B)^r (1-F(x,\gamma B))^{N-r} \end{equation} where $\gamma=\sfrac{1}{k}$ and $F(x,\gamma B)$ is the latency-reliability function that represents the identical interfaces. \subsection{Weighted splitting} The challenge of the weighted splitting scheme is to determine how many coded fragments to send on each interface so as to optimize a given utility function. This problem has $N$ degrees of freedom in the form of the payload allocation vector $\bm{\gamma}=\{\gamma_1, \ldots, \gamma_N\}$.
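The outcome-enumeration evaluation of sec. \ref{sec:evaluating_weight} and its $k$-out-of-$N$ special case can be sketched in a few lines of Python. This is an illustrative sketch, not code from the paper; `F_list[i]` plays the role of $F_i(x,B)$ and is assumed to be any callable returning a probability:

```python
from itertools import product
from math import comb

def f_weighted(x, gammas, B, F_list):
    """Eq. (weighted_eval): sum, over the 2^N rows of the outcome matrix C,
    the probability of each decodable outcome. A row entry of 1 means the
    packet on that interface arrived within the deadline x."""
    total = 0.0
    for outcome in product((0, 1), repeat=len(gammas)):  # rows of C
        # d_h: keep only outcomes whose received coded fractions sum to >= 1
        if sum(c * g for c, g in zip(outcome, gammas)) < 1.0:
            continue
        p = 1.0
        for c, g, F in zip(outcome, gammas, F_list):
            Fi = F(x, g * B)
            p *= Fi if c == 1 else 1.0 - Fi
        total += p
    return total

def f_k_of_n(x, k, N, F, B):
    """Identical-interface special case: at least k of N i.i.d. transmissions
    of gamma*B = B/k bytes must arrive within the deadline x."""
    p = F(x, B / k)
    return sum(comb(N, r) * p**r * (1.0 - p) ** (N - r)
               for r in range(k, N + 1))
```

With $\gamma_i = 1$ for all interfaces, `f_weighted` reduces to the cloning expression of eq. \eqref{eq:f_k_par}.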
Formally, this optimization problem can be phrased in the following way: \begin{equation}\label{eq:opt_weighted} \begin{array}{rl} \underset{\bm{\gamma}}{\max} & \sum\limits_{r=1}^R F_\text{weighted}(l_r,\bm{\gamma},B) \cdot w_r \\ \text{s.t.} & \gamma_i \leq \gamma_\text{d} \\ & \sum\limits_{i=1}^N \gamma_i \geq \gamma_\text{d}\\ \end{array} \end{equation} where $F_\text{weighted}(l_r,\bm{\gamma},B)$ is evaluated using eq. \eqref{eq:weighted_eval} and the vectors $\mathbf{l}=\{ l_1, \ldots, l_R\}$ and $\mathbf{w}=\{ w_1, \ldots, w_R\}$ specify the latency targets at which reliability is to be maximized and their corresponding importance weights, respectively. For example, $\mathbf{l}=\{ 0.2, 0.5\}$ and $\mathbf{w}=\{ 1, 10\}$ would mean that reliability at 0.5 s is ten times more important than reliability at 0.2 s. Assuming that the optimization is solved using a brute-force search, the search space grows as $\left(\sfrac{1}{\delta_\gamma}\right)^N$, where $\delta_\gamma$ is the step size between $\gamma$-values. In practice, the computational tractability of a brute-force search is therefore limited by the number of interfaces $N$ and the choice of step size $\delta_\gamma$. The problem in eq. \eqref{eq:opt_weighted} does not immediately have an analytical solution, since the payload assignment weights in $\bm{\gamma}$ do not translate linearly into specific reliability values. Specifically, when increasing the $\gamma$ value of an interface and thereby increasing its amount of coded payload, the reliability at a specific latency will at some point decrease due to the increasing packet size. However, at the same time, the increasing $\gamma$-values of two or more interfaces can add up to $\gamma_\text{d}$ and thereby improve the overall reliability, even if the reliability of each individual interface decreases as its $\gamma$ goes up.
This behavior, where the overall reliability decreases before it suddenly jumps up, combined with the fact that the $\gamma$ value must be adjusted for each interface individually, narrows the possibilities for analytical solutions. Therefore, for the numerical results, we include results from a brute-force search that tries out all combinations of $\gamma$-values on the different interfaces, with a step size that is coarse enough to make the search computationally tractable. While we have not managed to solve the whole optimization problem in eq. \eqref{eq:opt_weighted} analytically, we present in the following section an analytical solution to a subproblem of eq. \eqref{eq:opt_weighted}. Specifically, we consider how to optimally split coded payload between two interfaces A and B, so that the expected latency is minimized. \section{Analysis of splitting between two interfaces}\label{sec:analysis} In this optimization problem, we assume that the latencies of the two interfaces are represented by the Gaussian random variables $ X_{A} \sim \mathcal{N} (\mu_{A}, \sigma_{A}^{2})$ and $ X_{B} \sim \mathcal{N} (\mu_{B}, \sigma_{B}^{2})$. In the following we assume that $\sigma_{A}$ and $\sigma_{B}$ are constant and independent of $\mu_{A}$ and $\mu_{B}$. When splitting the payload between two interfaces, the latency is determined by the time at which the last fragment is received. The expected latency is thus the expectation of $\max (X_{A},X_{B})$, i.e., the first moment of the random variable $\max (X_{A},X_{B})$.
By using the approximation of the expectation of the maximum of two normal random variables from \cite{clark1961greatest}, we obtain \begin{align} L = \mathbb{E}[ \max (X_{A},X_{B}) ] = \mu_{A} \Phi (\eta) + \mu_{B} \Phi (-\eta) + \xi \phi (\eta) \end{align} where $\phi(x)\!=\!\frac{1}{\sqrt{2 \pi}} e^{ -\frac{x^2}{2} }$, $\Phi (x)\!=\!\int_{-\infty}^{x} \phi (t) \mathrm{d}t $, $\eta\!=\!\frac{ \mu_{A}-\mu_{B} }{ \xi }$, and $ \xi\!=\!\sqrt{ \sigma_{A}^{2} + \sigma_{B}^{2} } $. To find the minimum of the expected latency, we differentiate $L$ with respect to $\gamma$: \scalebox{0.825}{\parbox{.5\linewidth}{% \begin{align} \frac{\mathrm{d}L}{\mathrm{d}\gamma} &= \frac{\mathrm{d}\mu_{A}}{\mathrm{d}\gamma} \Phi (\eta) + \mu_{A} \phi (\eta) \frac{\mathrm{d}\eta}{\mathrm{d}\gamma} + \frac{\mathrm{d}\mu_{B}}{\mathrm{d}\gamma} \Phi (-\eta) - \mu_{B} \phi (-\eta) \frac{\mathrm{d}\eta}{\mathrm{d}\gamma} + \xi \phi^{\prime} (\eta) \frac{\mathrm{d}\eta}{\mathrm{d}\gamma} \notag \\ &= \frac{\mathrm{d}\mu_{A}}{\mathrm{d}\gamma} \Phi (\eta) + \frac{\mathrm{d}\mu_{B}}{\mathrm{d}\gamma} \Phi (-\eta) + (\mu_{A} \phi (\eta) - \mu_{B} \phi (-\eta) + \xi \phi^{\prime} (\eta)) \frac{\mathrm{d}\eta}{\mathrm{d}\gamma}\notag. \end{align}}} Since $\phi(-\eta) = \phi(\eta)$ and $\phi^{\prime}(\eta) = -\eta \phi(\eta)$, we have $\mu_{A} \phi (\eta) - \mu_{B} \phi (-\eta) + \xi \phi^{\prime} (\eta) = \phi(\eta) ( \mu_{A} - \mu_{B} - \xi \eta ) = 0$, and by using the definition of $\mu$ from eq. \eqref{eq:latency_B} we obtain: \begin{equation} \frac{\mathrm{d}L}{\mathrm{d}\gamma} = \frac{\mathrm{d}\mu_{A}}{\mathrm{d}\gamma} \Phi (\eta) + \frac{\mathrm{d}\mu_{B}}{\mathrm{d}\gamma} \Phi (-\eta) = \frac{\alpha_{A}}{2} \Phi (\eta) - \frac{\alpha_{B}}{2} \Phi (-\eta). \end{equation} In order to obtain the optimal solution, $\frac{\mathrm{d}L}{\mathrm{d}\gamma} = 0$ must hold.
This yields the solution: \begin{align} \left\{ \begin{array}{lcc} \Phi (-\eta) = \frac{\alpha_{A}}{\alpha_{A}+\alpha_{B}}, & \mbox{if} & \eta \geq 0 \notag \\ \Phi (\eta) = \frac{\alpha_{B}}{\alpha_{A}+\alpha_{B}}, & \mbox{if} & \eta < 0 \notag \end{array} \right. \end{align} which is equivalent to: \begin{align}\label{eq:analytic_splitting} \left\{ \begin{array}{lcc} \gamma= \frac{\alpha_{B} + \beta_{B} - \beta_{A} - 2 \xi \Phi^{-1} (\frac{\alpha_{A}}{\alpha_{A}+\alpha_{B}}) }{ \alpha_{A} + \alpha_{B} }, & \mbox{if} & \mu_{A} \geq \mu_{B} \\ \gamma= \frac{\alpha_{B} + \beta_{B} - \beta_{A} + 2 \xi \Phi^{-1} (\frac{\alpha_{B}}{\alpha_{A}+\alpha_{B}}) }{ \alpha_{A} + \alpha_{B} }, & \mbox{if} & \mu_{A} < \mu_{B}. \end{array} \right. \end{align} \section{Numerical results} \label{sec:results} For the numerical results we will consider the different scenarios specified in Table \ref{tab:scenarios}. The considered technologies use the reliability specifications shown in Table \ref{tab:pl_to_rss_and_reliability}.
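As a numerical cross-check of the derivation in sec. \ref{sec:analysis}, the expected-latency formula and the closed-form split of eq. \eqref{eq:analytic_splitting} can be sketched in Python. This is an illustrative sketch, not code from the paper, and it follows the form of eq. \eqref{eq:analytic_splitting}, in which the payload size $B$ is normalized to 1:

```python
import math
from statistics import NormalDist

def clark_expected_max(mu_a, mu_b, sigma_a, sigma_b):
    """E[max(X_A, X_B)] for independent normals, following Clark (1961):
    L = mu_A*Phi(eta) + mu_B*Phi(-eta) + xi*phi(eta)."""
    xi = math.hypot(sigma_a, sigma_b)
    eta = (mu_a - mu_b) / xi
    phi_eta = math.exp(-eta * eta / 2) / math.sqrt(2 * math.pi)
    Phi = NormalDist().cdf
    return mu_a * Phi(eta) + mu_b * Phi(-eta) + xi * phi_eta

def analytic_split(alpha_a, alpha_b, beta_a, beta_b, xi):
    """Optimal payload fraction gamma for interface A, per the second branch
    of eq. (analytic_splitting); the first branch gives the same value since
    Phi^{-1}(a/(a+b)) = -Phi^{-1}(b/(a+b))."""
    inv = NormalDist().inv_cdf
    return (alpha_b + beta_b - beta_a
            + 2 * xi * inv(alpha_b / (alpha_a + alpha_b))) / (alpha_a + alpha_b)
```

For two identical interfaces ($\alpha_A=\alpha_B$, $\beta_A=\beta_B$) the split is $\gamma=0.5$, as symmetry requires, and increasing $\alpha_A$ (a slower interface A) shifts payload away from A.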
\begin{table}[bt] \centering \caption{Linear regression parameters and reliability values.} \label{tab:pl_to_rss_and_reliability} \begin{tabular}{lccccc} \toprule & GPRS & EDGE & UMTS & HSDPA & LTE \\ \cmidrule{2-6} $\alpha$ & 0.70 & 0.46 & 0.43 & 0.35 & 0.0067 \\ $\beta$ & 400 & 230 & 200 & 178 & 41 \\ $P_\text{e}$& 0.984 & 0.983 & 0.982 & 0.981 & 0.980 \\ \bottomrule \end{tabular} \vspace{-9pt} \end{table} \begin{table*}[bt] \centering \caption{Interface and parameter specifications of scenarios $\mathcal{A}$, $\mathcal{B}$, and $\mathcal{C}$.} \label{tab:scenarios} \begin{tabular}{ccccccccccc} \toprule & IF1 & IF2 & IF3 & IF4 & IF5 & & $B$ & & $\bm{l}$ & $\bm{w}$ \\ \cmidrule{2-6} \cmidrule{8-8} \cmidrule{10-11} $\mathcal{A}$ & UMTS & GPRS & - & - & - & & 1500 bytes & & $[0 \ldots 1]$ s & $[0 \ldots 1]$ \\ $\mathcal{B}$ & LTE & HSDPA & UMTS & EDGE & GPRS & & 1500 bytes & & $[0.1, 0.4, 0.9^*]$ s & $[1, 10, 100^*]$ \\ $\mathcal{C}$ & HSDPA & HSDPA & GPRS & GPRS & GPRS & & 1500 bytes & & $[0.5]$ s & $[1]$ \\ \bottomrule \end{tabular} \vspace{-9pt} \end{table*} While the distribution of latency measurements is usually long-tailed \cite{borella1997self,jacko2000effect}, we will for simplicity use the normal probability distribution to generate latency distributions in the numerical results. While the chosen probability distribution influences the specific results, the methods and general tendencies presented in this paper do not change. Specifically, we assume that the latency of transmissions of packet size $\gamma B$ through a specific interface/path is Gaussian distributed with mean $\mu$ defined as: \begin{equation}\label{eq:latency_B} \mu = \frac{\alpha \cdot \gamma B + \beta}{2} [ms] \end{equation} and due to lack of information about the distribution, we assume $\sigma = \frac{\mu}{10}~[ms]$. The parameters $\alpha$ and $\beta$ characterize the assumed linear relationship between packet size and delay for an interface.
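Under these assumptions, each interface's latency-reliability function can be generated directly from its $(\alpha,\beta)$ pair. The following is an illustrative sketch, not the paper's code; in particular, treating $P_\text{e}$ as a cap on the achievable long-term reliability is our assumption:

```python
from statistics import NormalDist

def make_interface(alpha, beta, p_e=0.0):
    """Latency-reliability function F(x, size) under the Gaussian model of
    eq. (latency_B): mu = (alpha*size + beta)/2 ms and sigma = mu/10, with a
    long-term loss probability p_e limiting the achievable reliability."""
    def F(x_ms, size_bytes):
        mu = (alpha * size_bytes + beta) / 2.0
        return (1.0 - p_e) * NormalDist(mu, mu / 10.0).cdf(x_ms)
    return F
```

For example, with the UMTS parameters of Table \ref{tab:pl_to_rss_and_reliability} ($\alpha=0.43$, $\beta=200$) and a 1500-byte packet, $\mu = 422.5$~ms, so the reliability at the mean latency is 0.5.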
The values of $\alpha$ and $\beta$ are shown in Table \ref{tab:pl_to_rss_and_reliability}. The values are derived from field measurements conducted by Telekom Slovenije within the SUNSEED project \cite{sunseed2014web}. Initially, we study the simple scenario $\mathcal{A}$, for which we solved the weighted splitting between two interfaces analytically in sec. \ref{sec:analysis}. That is, we used eq. \eqref{eq:analytic_splitting} to determine the optimal splitting threshold $\gamma$. Notice that $\bm{l}$ and $\bm{w}$ are parametrized so that the numerical optimization effectively minimizes the expected latency, matching the objective of the analytical optimization. The results are shown in Fig. \ref{fig:scenarioA}, and show a visually good correspondence between the analytical result and the brute-force search. The brute-force search achieves a slightly lower expected latency, due to a different weight assignment. We attribute this minor difference to the use of the approximation of $\mathbb{E}[ \max (X_{A},X_{B}) ]$ from \cite{clark1961greatest}. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{scenario5_anl} \caption{Reliability results for scenario $\mathcal{A}$.} \label{fig:scenarioA} \vspace{-9pt} \end{figure} In relation to the general idea of splitting, the most important question we seek to answer is whether it makes sense to spend the additional effort required to find the optimal $\gamma$-values for a weighted splitting, or whether it suffices to use one of the simpler $k$-out-of-$N$ strategies. It is intuitively clear that if the technologies used are all identical, then a $k$-out-of-$N$ strategy will be optimal. But how much better is a weighted scheme in a heterogeneous scenario? To answer this, we study the three different scenarios specified in Table \ref{tab:scenarios}. The results for scenario $\mathcal{B}$ in Fig.
\ref{fig:scenarioB} show two examples of latency-reliability trade-offs, obtained by either excluding or including the starred $l$ and $w$ values in Table \ref{tab:scenarios}. In both cases the weighted strategy achieves some reliability in the low-latency region ($x<0.2$~s), similar to the 1-out-of-5 strategy, and it matches the reliability of the 2-out-of-5 strategy around $x=0.4$~s. The difference between the two results is that the latter transmits more redundant data and thus achieves higher reliability in the $x>0.4$~s region. \begin{figure}[htb] \centering \includegraphics[width=\linewidth]{scenario2} \caption{Reliability results for scenario $\mathcal{B}$. Note: the target latency $l_2=0.9$~s only applies to the last strategy.} \label{fig:scenarioB} \vspace{-9pt} \end{figure} The results for scenario $\mathcal{C}$, shown in Fig. \ref{fig:scenarioC}, are interesting since they demonstrate a mixed data allocation. This results in the reliability at $x=0.5$~s being 0.9999, which is an order of magnitude better than any of the $k$-out-of-$N$ strategies, which only reach 0.999. \begin{figure}[htb] \centering \includegraphics[width=\linewidth]{scenario3} \caption{Reliability results for scenario $\mathcal{C}$.} \label{fig:scenarioC} \vspace{-9pt} \end{figure} \section{Experimental validation}\label{sec:exp_results} In addition to the theoretical and model-based results presented above, we have also validated the proposed methods using traces of latency measurements for different communication technologies. Such traces were obtained by sending small (128~bytes) UDP packets every 100~ms between a pair of GPS time-synchronized devices through the considered interface (LTE, HSPA, or Wi-Fi) during the course of a work day at the Aalborg University campus. Each trace file can thus be used to play back a time sequence of one-way end-to-end latencies.
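The per-interface curves used in this validation are empirical latency-reliability functions computed from such traces. A minimal sketch (illustrative only, not the authors' tooling):

```python
def latency_reliability_curve(latencies_ms, deadlines_ms):
    """Empirical latency-reliability function from a trace of one-way
    latencies: for each deadline x, the fraction of probe packets with
    latency <= x. Lost packets can be encoded as float('inf') entries."""
    n = len(latencies_ms)
    return [sum(1 for t in latencies_ms if t <= x) / n for x in deadlines_ms]
```

Evaluating this function over a grid of deadlines yields exactly the kind of per-interface CDF curves discussed below.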
Our experimental results for multi-interface transmissions are obtained by playing back the three trace files at the same time in a simulation, where for each 100 ms interval, the outcome of each considered strategy is recorded. When the playback simulation is done, a latency-reliability curve is calculated for each strategy as the CDF of the recorded outcomes in each 100 ms timestep. This is shown with crosses in Fig. \ref{fig:experimental_lcdfs}. The validation consists of comparing these results to the results that are obtained by using the curves in Fig. \ref{fig:experimental_ifs} to compute the resulting latency-reliability curves with the methods described in sec. \ref{sec:reliability_miftx}. Those results are shown as lines in Fig. \ref{fig:experimental_lcdfs}. When considering the latency-reliability curves of the interfaces in Fig. \ref{fig:experimental_ifs}, it is interesting to note that HSPA actually performs better than LTE. We believe that this is because the majority of current mobile devices connect through LTE if it is available. Thus, the collocated HSPA network experiences a lighter load and allows for quicker access. Another interesting observation is that the Wi-Fi network delivers very low latencies, down to below 4~ms for 60\% of packets. However, its 99th percentile latency of 75~ms is higher than that of both HSPA and LTE. \begin{figure}[htb] \centering \includegraphics[width=0.9\linewidth]{experimental_IFs} \caption{Interfaces' latency-reliability curves. Wi-Fi is IEEE 802.11n.} \label{fig:experimental_ifs} \vspace{-9pt} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=0.9\linewidth]{experimental_lcdfs} \caption{Resulting performance of considered strategies. The lines show the results computed using the method presented in sec. \ref{sec:reliability_miftx}, whereas the crosses show the results of playback-simulation.} \label{fig:experimental_lcdfs} \vspace{-9pt} \end{figure} From the results in Fig.
\ref{fig:experimental_lcdfs}, we see how the 1-out-of-3 strategy is able to outperform any individual interface, as expected. The plot does not include any result for the weighted scheme, since the small payload size does not allow for any gain through payload splitting. The lines that represent the theoretical calculation of performance practically coincide with the crosses representing the experimental results. This shows that the methods for calculating the resulting performance from the latency-reliability curves of the interfaces, as described in sec. \ref{sec:reliability_miftx}, indeed produce accurate results when used with actual traffic traces. \section{Conclusions and Outlook} \label{sec:conclusion} One of the most demanding modes in the upcoming 5G systems will be Ultra-Reliable Low Latency Communication (URLLC). In many cases it should be provided by taking advantage of the fact that multiple communication interfaces are available at the devices. In this work we have studied the concept of interface diversity, where multiple communication interfaces and paths are used simultaneously to communicate between two end devices. The use of coding allows us to assign an arbitrary amount of coded payload data to each interface, making it possible to trade off latency and reliability. We have formulated the optimization problem of finding the payload allocation weights (denoted $\bm{\gamma}$) that maximize the reliability at specific target latency values. We have provided and validated an analytic solution to the subproblem of splitting between two interfaces so that the expected latency is minimized. By considering different scenarios and numerically solving the full optimization problem for specific target latencies, we have found that optimized strategies can significantly outperform $k$-out-of-$N$ strategies, where the latter do not account for the characteristics of the different interfaces.
Finally, we have experimentally validated the proposed method of computing the resulting performance, and demonstrated the practical gains of interface diversity. \section*{Acknowledgment} This work is partially funded by the EU under Grant Agreement no. 619437. The SUNSEED project is a joint undertaking of 9 partner institutions, and their contributions are fully acknowledged. The work was also supported in part by the European Research Council (ERC Consolidator Grant no. 648382 WILLOW) within the Horizon 2020 Program. \bibliographystyle{IEEEtran}
\section{Introduction \label{Introduction}} Supernova remnants (SNRs) are the visible manifestation of the interaction between material ejected in a supernova (SN) explosion and the surrounding circumstellar and interstellar medium. SNRs radiate across the entire electromagnetic spectrum, from radio wavelengths to $\gamma$-rays. They provide the working surface where the elements produced in stars and supernovae (SNe) and the kinetic energy of SN explosions mix with and stir the interstellar medium (ISM). Shocks in SNRs are believed to be responsible for accelerating most Galactic cosmic rays. SNRs are heterogeneous. The observational appearance of a SNR depends in a complex manner upon local factors such as the nature of the SN explosion, the presence or absence of an active pulsar, the time since the explosion, the mass loss history of the progenitor, the presence or absence of earlier SNe, and the density and complexity of the surrounding medium. Their appearance also depends on external factors such as the amount of absorption along the line of sight and the distance to the object. Samples of SNRs provide an instantaneous picture of where stars are exploding in galaxies. It is only a partial picture though, because many SNe explode in young massive star clusters, where other SNe have gone off recently. Superbubbles, the emission nebulae created by the collective interaction of stellar winds and multiple SNe from young star clusters with the ISM \cite{chu90}, are excluded from this discussion, and usually, though not always, have different observational characteristics. For the purpose of this review, we define a SNR as the remnant of a single SN explosion. Although two SNRs, the Crab Nebula and Kepler's SNR, were identified earlier \cite{minkowski64}, the study of SNRs really began with the advent of radio astronomy, as it became clear that a significant number of bright sources in the plane of the Galaxy were indeed SNRs.
Today, there are about 300 identified Galactic SNRs, most within 90 degrees of the Galactic Center, and thus affected by interstellar absorption \cite{green14}. Most were first identified through their radio properties. The first extragalactic SNRs were identified in the Magellanic Clouds in the 1960's and 1970's \cite{mathewson63,mathewson73} through a combination of radio and optical techniques. Since then it has become possible to assemble large samples of SNRs in galaxies out to a distance of about 10 Mpc. Today there are about 59 SNRs and SNR candidates identified in the Large Magellanic Cloud (LMC) at a distance of 50 kpc \cite{maggi15}, 217 in M33 at 812 kpc \cite{long10,lee_m33}, nearly 300 in M83 at 4.6 Mpc \cite{blair12,blair15}, and 93 in M101 at 6.7 Mpc \cite{matonick97}. The total number of SNRs and credible SNR candidates in nearby galaxies exceeds 1200, four times the Galactic sample (see, e.g., the compilation of {Vu{\v c}eti{\'c}} $et\: al.$\ \cite{vucetic15} and Table \ref{table_extragalactic}). With the exception of SNRs in the Magellanic Clouds, nearly all of the extragalactic SNRs have first been identified optically. The goals of research on SNRs are to understand what factors cause SNRs, individually and collectively, to appear as they do, and to separate the environmental factors from the astrophysics, such as the nature of the SN explosion and the effects of the explosion on the ISM as a whole. Both the Galactic and the extragalactic samples are important in this regard. Because the SNRs in the Galactic and Magellanic Cloud samples are nearby and bright, they provide the most direct confrontations of observations and theory.
For example, in Cas A, where spectra of the light echoes from the explosion show the SN to have been of type IIb \cite{krause08}, Doppler imaging has allowed 3D reconstruction of the ejecta at IR, optical, and X-ray wavelengths \cite{fesen06, delaney10}, and the spatial distribution of radioactive $^{44}$Ti from the explosion has been mapped \cite{grefenstette14}. And in SN1006, where spatially resolved X-ray images were used to show that emission from the bright radio rims was synchrotron dominated, and hence that SN shocks are capable of accelerating electrons to TeV energies \cite{koyama95}, high spatial resolution X-ray images obtained with {\em Chandra}\ are being used to limit the magnetic field amplification in the shock precursor \cite{winkler14}. And, with a few exceptions, only in Galactic SNRs is it possible to identify pulsars and pulsar wind nebulae within a SNR \cite{gaensler06}, as seen, for example, in G292+1.8 \cite{park07}. Extragalactic samples are also important: First, all of the SNRs observed in an external galaxy are effectively at the same distance, and thus it is straightforward to translate observed fluxes and angular sizes into the physically more relevant quantities, luminosity and diameter. Second, the effects of line-of-sight absorption on the appearance of a SNR are generally less severe and less variable than in the Galaxy, because one can choose to study external galaxies that are relatively face-on. Third, it is easier, at least in principle, to account for observational selection effects in extragalactic samples, because one can often conduct studies of SNRs in external galaxies with a single instrument at one time.
The SNRs in the Magellanic Clouds merit special mention in terms of their utility; they are all at about the same distance along lines of sight with relatively little interstellar absorption, so that it is fairly straightforward to examine them as a class, and close enough that detailed multi-wavelength studies can be carried out of individual objects. The purpose of this article is to describe how the SNRs in the Galaxy and in external galaxies have been, and continue to be, found, and to discuss the degree to which these samples are actually helping to address the goals of research on SNRs. We will conclude that we have accumulated much useful information about SNRs as a class of objects, but that simple interpretations of the data, especially those that use diameter as a proxy for effective age, are naive. Multifrequency studies, involving X-ray, optical, IR, and radio observations, of galaxies where SNR samples already exist are the best hope for gaining a more complete picture of SNRs as a class of objects. \input{table.tex} \begin{figure}[b] \includegraphics[scale=.6]{fig1.eps} \caption{An example of a SNR in M33 as described by Long $et\: al.$\ \cite {long10}. From left to right, the panels show the field of the SNR as observed in X-rays with {\em Chandra}, and in {H$\alpha$}, [S II] and the V-band continuum as observed in ground-based images from the Local Group Galaxy Survey of Massey $et\: al.$\ \cite{massey06}. Stars have been subtracted from the emission line images. Notice that the H II region seen in the lower left corner of the {H$\alpha$}\ image fades relative to the SNR in the [S II] image. Had this object not been known as a SNR as a result of the optical observations, it would have been discovered as an X-ray SNR due to its soft spectrum and spatial extent in the X-ray image.
} \label{fig_m33_example} \end{figure} \section{Techniques for finding SNRs and SNR candidates} Most of the SNRs in the Galaxy were initially identified as extended radio sources with non-thermal radio spectra. However, most extragalactic SNRs and SNR candidates, an example of which is shown in Fig.\ \ref{fig_m33_example}, have been identified optically using narrow-band imaging. Progress has been rapid due to the development of CCD detectors, which, coupled with the angular resolution of optical telescopes, allowed one to isolate SNR candidates from H II regions. X-ray and radio discovery of SNRs in external galaxies has largely, though not exclusively, been limited to the Magellanic Clouds, where the limitations associated with angular resolution and sensitivity are less severe. Some progress in detecting SNRs in X-rays has been made with the launch of {\em Chandra}\ and XMM-Newton, and in the radio with the increasing sensitivity of the Jansky Very Large Array (JVLA) and the Multi Element Radio Linked Interferometer Network (MERLIN). The most useful studies of SNRs, especially in galaxies beyond the Magellanic Clouds, will be those that involve observations in at least these three wavebands, so it is important to pursue each of them vigorously. \subsection{Optical Identification of SNRs \index{Optical Identification of SNRs} \label{optical}} \begin{figure}[b] \includegraphics[scale=.4]{fig2.eps} \caption{The spectra of a typical SNR candidate and a bright H II region in M83 as observed by Blair \& Long \cite{blair04}. The SNR shows much more \MULT{[S II]}{6717,6731} compared to {H$\alpha$}\ than the H II region, as well as emission from \MULT{[O I]}{6300,6363} and \MULT{[N II]}{6549,6583}. The quality of the spectra is also fairly typical of those that observers try to obtain to confirm line ratios from imaging observations.
} \label{fig_opt} \end{figure} Optically, SNRs are extended sources, which must be distinguished from the other type of emission nebulae -- H II regions -- that exist in galaxies. In SNRs, optical emission normally arises from shocks, most commonly from radiative shocks driven into relatively dense clouds in the ISM by the primary shock wave. These secondary shocks, with typical velocities $v$ of 200 $\:{\rm km\:s^{-1}}$, heat the post-shock gas to a temperature of order 500,000 $(v/200 \:{\rm km\:s^{-1}})^2$ K, ionizing it to a degree which depends on the shock velocity. However, at these temperatures, the plasma radiates very efficiently. As a result, gas cools behind the shock, increasing further in density and recombining to the neutral state on a timescale that is short compared to the cloud crossing time. As a consequence, models predict \cite{dopita77,raymond79,allen08} and observations show optical spectra containing forbidden lines from a wide range of ionization states, including, in the optical, \MULT{[O III]}{4959,5007}, \MULT{[O I]}{6300,6363}, \MULT{[N II]}{6549,6583}, and \MULT{[S II]}{6717,6731}. Unlike SNRs, the optical emission in H II regions arises from gas photoionized by UV photons from hot stars. In H II regions, most of the optical emission is produced by recombination and emerges in the Balmer lines. Most of the material in H II regions is too highly ionized to produce the forbidden lines of O I, S II, and N II. Furthermore, at least in bright H II regions, there is a sharp boundary between fully ionized gas inside the so-called Str{\"o}mgren sphere and un-ionized gas outside the sphere, so there is relatively little gas at intermediate ionization states. As a result, as shown in Fig.\ \ref{fig_opt}, the spectra of SNRs and H II regions differ.
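The post-shock temperature scaling quoted above can be written as a one-line function. This is purely an illustrative sketch of the $T \propto v^2$ scaling, using the normalization quoted in the text; the function name is ours:

```python
def postshock_temp_K(v_kms):
    """Order-of-magnitude post-shock temperature for a radiative cloud shock,
    using the scaling quoted in the text: T ~ 5e5 * (v / 200 km/s)^2 K."""
    return 5.0e5 * (v_kms / 200.0) ** 2
```

A 200 km/s cloud shock thus heats gas to roughly $5\times10^5$ K, while a 400 km/s shock reaches about $2\times10^6$ K.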
First suggested as a technique by Mathewson \& Clark \cite{mathewson73}, essentially all SNRs that have been identified optically in external galaxies have been identified as emission nebulae with elevated [S II]:{H$\alpha$}\ ratios compared to H II regions. In bright H II regions, the [S II]:{H$\alpha$}\ ratio is typically about 0.1, whereas in SNRs the ratio is typically 0.4 or greater. Searches are conducted using interference filter imaging, with filters centered on {H$\alpha$}\ (often also including a contribution from [N II]), [S II] and a continuum band. One inspects these images for emission nebulae that show elevated [S II] compared to {H$\alpha$}, designating as candidates extended objects with [S II]:{H$\alpha$} $>0.4$. (Occasionally, slightly lower or higher values have been used.) An example of a SNR discovered in this way is shown in Fig.\ \ref{fig_m33_example}. The SNR is recognized by the fact that it is relatively much brighter in the [S II] image than the H II region in the lower left hand corner. Often, follow-up spectroscopy is carried out, which not only confirms the [S II]:{H$\alpha$}\ ratios, but also allows searches for additional SNR indicators, usually [O I] emission, and in rare cases, velocity broadening of the lines. The technique works best for isolated SNRs, where one can measure the [S II]:{H$\alpha$}\ ratio without the diluting effect of an adjacent or underlying H II region, and for high surface brightness nebulae. Lower surface brightness H II regions tend to have higher [S II]:{H$\alpha$}\ ratios. In particular, Blair \& Long \cite{blair97} found that in NGC7793, the [S II]:{H$\alpha$}\ ratio often exceeds 0.5 in nebulae with surface brightnesses of less than \POW{-15}{erg~cm^{-2} s^{-1}arcsec^{-2}}.
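The imaging criterion described above amounts to a simple flux-ratio cut. A schematic sketch (the 0.4 threshold is from the text; the function name and interface are ours, and a real pipeline would also require the source to be extended and account for the surface-brightness caveats discussed here):

```python
def is_snr_candidate(s2_flux, halpha_flux, threshold=0.4):
    """Flag an extended emission nebula as a SNR candidate when its
    [S II]:Halpha flux ratio meets the canonical cut (bright H II regions
    are typically near 0.1; radiative SNR shocks are >= 0.4)."""
    return s2_flux / halpha_flux >= threshold
```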
The Str{\"o}mgren sphere is simply not as well defined in low density H II regions, and the ionization level drops more slowly with distance from the ionizing stars than in higher density regions, so that there is an extended region where ions such as S II are prevalent. Partly to address these problems, many observers eliminate from consideration nebulae with high [S II]:{H$\alpha$}\ ratios that show obvious evidence of a concentration of blue stars. The advantage of this strategy is that it makes it more likely that an object identified as a SNR actually is a SNR. The disadvantage is that SNe do explode in regions with blue stars, and one's candidate list is less complete. Observers also have to decide whether to include or exclude nebulae which satisfy the [S II]:{H$\alpha$}\ test, but which are larger than expected from a single SN with a typical explosion energy. Many of these objects are, as argued recently by Franchetti $et\: al.$\ \cite{franchetti12}, superbubbles or collections of SNRs. Consequently, some observers have excluded objects larger than (typically) 100 pc from SNR candidate lists \cite{lee_m33,lee_m31}. Others have retained them \cite{matonick97,long10}, feeling that any particular diameter cut is arbitrary and arguing that over time the reality or not of any particular candidate will be determined by future observations. Although the optical emission from most SNRs arises from radiative shocks in gas with near interstellar abundances, several other types of optical emission are observed less commonly in SNRs: (a) \index{Balmer-dominated SNRs} A small number of SNRs exist, notably SN1006 and Tycho's SNR, which radiate only in the Balmer lines \cite{chevalier78,raymond10,winkler14}. In these SNRs, the optical emission arises from a so-called non-radiative shock, in which a fast shock, typically $>$1,000 $\:{\rm km\:s^{-1}}$, encounters a partially neutral ISM.
In these situations, the cooling time behind the shock is long compared to the age of the SNR, and the only optical radiation arises as the plasma is ionizing. The surface brightness of the optical emission from these Balmer-dominated SNRs is low compared to those that emit via radiative shocks, and the spectra are not easy to distinguish from H II regions. Consequently, the only extragalactic SNRs of this type to have been identified are in the LMC \cite{tuohy82}, objects which were first detected as X-ray sources \cite{long81}. A few remnants of this type continue to be discovered in the Galaxy, including recently G70.0-21.5 \cite{fesen15}. In principle, such objects could be discovered in other galaxies if observed with sufficient spectral resolution to detect large velocity broadening; in practice, it is more likely that a Balmer-dominated SNR will be identified first in another wavelength range. (b) SNRs also exist in which line emission arises from interactions with the ejecta from core-collapse SNe, as is the case for the Galactic SNRs Cas A \cite{kirshner77} and G292.0+1.8 \cite{goss79}. Optical emission from the ejecta of such SNe is characterized by very strong emission from forbidden lines of O II and O III, which are very efficient coolants for a plasma with abundances expected in the ejecta of core-collapse objects \cite{dopita84}. A number of searches for SNRs of this type have been carried out. A few objects have been found, e.g. E0102-72.3 in the SMC \cite{finkelstein06} and the remnants of some very young SNe, such as SN1957D in M83 \cite{long89} and the very bright SNR in NGC4449 \cite{kirshner80}. The numbers are, however, very small, and all of these objects were first discovered by other means. Optical searches for these SNRs are difficult because they are expected to be small diameter objects, easy to confuse with planetary nebulae and certain stars with strong emission lines.
(c) Finally, there are some SNRs, often referred to as pulsar wind nebulae, \index{Pulsar wind nebulae} where optical line emission arises from circumstellar material/ejecta photoionized by synchrotron radiation due ultimately to the active pulsar. Unlike the photoionization produced by thermal emission from hot stars, the hard power law synchrotron spectrum is capable of leaving the plasma in a large variety of ionization states, which results in emission line spectra that look significantly different from those of a normal H II region. In principle, such SNRs could be discovered through measurements of the [S II]:{H$\alpha$}\ ratio, or could be buried in existing catalogs of extragalactic planetary nebulae. To date, none has been recognized beyond the Magellanic Clouds, with the possible exception of SN1957D in M83 \cite{long12}. \begin{figure}[b] \includegraphics[scale=.2,angle=-90,origin=c]{fig3a.eps} \includegraphics[scale=.2,angle=-90,origin=c]{fig3b.eps} \caption{X-ray spectra of the two brightest X-ray SNRs in M33 as observed with {\em Chandra}\ by Long $et\: al.$\ \cite{long10}. The spectra show clear evidence of the line emission expected from a shocked thermal plasma.} \label{fig_m33_spectra} \end{figure} \subsection{Radio identification of SNRs \index{Radio identification of SNRs} \label{radio}} At radio wavelengths, SNRs in the Galaxy are extended, non-thermal radio sources. Shell-like SNRs in particular typically have radio spectral indices $\alpha$ (with flux density $\propto \nu^{-\alpha}$) of about 0.5, though with considerable dispersion (see, e.g. Fig.\ 6 of Dubner \& Giacani \cite{dubner15}). H II regions, which are also extended sources at radio wavelengths, are the main source of confusion. These are thermal radio sources, radiating primarily by free-free emission, which has a spectral index of 0.1. This means that shell-like SNRs can in principle be separated from H II regions if the spectral index can be measured.
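The spectral-index discriminant amounts to comparing flux densities at two frequencies. The sketch below is illustrative only (the frequencies and flux densities are hypothetical, chosen to reproduce the canonical indices of 0.5 and 0.1 quoted above):

```python
import math

def spectral_index(s1, nu1, s2, nu2):
    """Estimate alpha, defined by S(nu) proportional to nu**(-alpha),
    from flux densities s1, s2 measured at frequencies nu1, nu2
    (any consistent units; only the ratios matter)."""
    return math.log(s1 / s2) / math.log(nu2 / nu1)

# Hypothetical flux densities (mJy) at 1.4 and 5 GHz:
alpha_shell = spectral_index(10.0, 1.4, 5.3, 5.0)   # steep: shell-type SNR
alpha_hii   = spectral_index(10.0, 1.4, 8.8, 5.0)   # flat: thermal H II region
print(round(alpha_shell, 2), round(alpha_hii, 2))   # prints: 0.5 0.1
```

In practice the measurement is limited by the factors discussed next: resolution, confusion with diffuse emission, and background sources.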
The pulsar-dominated SNRs, like the Crab Nebula, have flatter spectral indices, from 0.0 to 0.3 \cite{kargaltsev15}, and are harder to identify on this basis, but these constitute a relatively small portion of the Galactic sample, and would presumably be a similarly small portion of any complete extragalactic sample as well. Not surprisingly, the first extragalactic radio SNRs to be identified/detected are located in the Magellanic Clouds \cite{mathewson63,mathewson73}. Indeed, with the availability of the Australia Telescope Compact Array (ATCA), all known SNRs in the Large and Small Magellanic Clouds have been detected at radio wavelengths \cite{maggi15,filipovic05}. Identification at radio wavelengths of SNRs in more distant galaxies has been hampered by a number of factors. First, until very recently, radio observations did not generally have the combination of sensitivity and angular resolution necessary to detect and measure the spectral indices of potential SNR candidates in more distant galaxies. Secondly, SNRs are often found in regions with other diffuse emission, and this can dilute the spectral index of putative SNRs, especially in the absence of multi-frequency maps with the same spatial resolution. Finally, as surveys have grown more sensitive, contamination from background sources has become an issue, particularly in Local Group galaxies, which have substantial angular diameters. Consequently, most of the radio-detected SNR candidates are sources which were identified as SNR candidates optically and then detected as radio sources (see, e.g. Gordon $et\: al.$\ \cite{gordon99} for the case of M33). There have been some searches, particularly in galaxies outside the Local Group, in which observers have identified non-thermal radio sources with associated {H$\alpha$}\ emission as SNRs (see, e.g. \cite{lacey01,chomiuk_search}).
This mitigates the background source problem, and is positive in the sense that it does not depend on an optical SNR identification, but it also introduces spatial biases into the sample that are difficult to quantify. Objects such as RXJ1713-39 \cite{pfeffermann96} and RX J0852.0-4622 (also known as Vela Junior) \cite{aschenbach98}, which have no associated {H$\alpha$}\ emission, and the historical SNRs SN1006 and Tycho, which are very faint in {H$\alpha$}, would almost certainly be missed. Lacey \& Duric \cite{lacey01} used this approach to identify 35 radio point sources in NGC6946 as SNRs, almost none of which were in Matonick \& Fesen's \cite{matonick97} list of 27 optical SNR candidates. The radio sample in NGC6946 has radio fluxes corresponding to 0.1-2 times that of Cas A, and is systematically brighter than the radio-detected optical sample in M33. The radio candidates are more closely associated with bright H II regions and with the spiral arms than the optically identified SNRs, which Lacey \& Duric suggest is at least partially due to observational biases associated with the identification of optical SNR candidates. They suspect that SNRs in the radio sample in NGC6946 are evolving in denser interstellar environments than SNRs in the optical sample.\index{NGC6946} \subsection{X-ray identification of SNRs \index{X-ray identification of SNRs} \label{xray}} \begin{figure}[b] \includegraphics[scale=.38]{fig4a.eps} \includegraphics[scale=.38]{fig4b.eps} \caption{Predicted {\em Chandra}\ count rates in 3 bands (0.35-1.1 keV, 1.1-2.6 keV, and 2.6-8 keV) for SNRs at a distance of 1 Mpc assuming they are in the Sedov phase and assuming a line of sight absorption of \EXPU{5}{20}{cm^{-2}}.
All of the curves terminate at a kT of about 0.09 keV (because {\sc XSPEC} models do not exist at lower kT); this, however, is well before the beginning of the radiative phase.} \label{fig_xray_sensitivity} \end{figure} SNRs, as indicated in the leftmost panel of Fig.\ \ref{fig_m33_example}, are also extended sources at X-ray wavelengths. Most have soft, line dominated X-ray spectra, arising from hot \EXPN{1}{6} to \EXPU{5}{7}{K} gas produced by the reverse shock interaction with SN ejecta or the primary shock interaction with the ISM. Even if the shock speeds are high enough to produce a plasma hotter than this, the spectra look as if they have temperatures in this range due to ionization equilibration effects. {\em Chandra}\ spectra obtained by Long $et\: al.$\ \cite{long10} of the two brightest SNRs in M33 are shown in Fig.\ \ref{fig_m33_spectra}. A small number of SNRs, those powered by pulsars, such as the Crab and 3C58 in the Galaxy, and a few young synchrotron-dominated SNRs, such as RXJ1713-39 and Vela Jr, have power law spectra. To give an indication of what one expects to see from an X-ray SNR in a nearby galaxy, we show, in Fig.\ \ref{fig_xray_sensitivity}, estimated {\em Chandra}\ count rates for SNRs in the Sedov phase at a distance of 1 Mpc as a function of age and size, as calculated with the program {\sc XSPEC}, a routine used widely in the astrophysics community to fit X-ray spectra \cite{xspec}. The three sets of curves are for SNRs expanding into ISM with densities of 10, 1, and 0.1 cm$^{-3}$. These rates are indicative of a number of important ``facts'' about the expected detectability of SNRs in X-rays. SNRs brighten through much of their Sedov phase and are easiest to detect at ages of 10,000-20,000 years. In the early Sedov phase, SNRs are relatively faint because they have not swept up enough material; in the late Sedov phase, they fade because the post-shock plasma temperature has dropped.
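The qualitative behavior described above can be illustrated with the standard Sedov-Taylor relations. This is a textbook sketch, not the {\sc XSPEC} calculation used for the figure; the explosion energy, ambient density, and mean molecular weight assumed below are illustrative defaults:

```python
import math

# Physical constants (cgs)
M_H = 1.6726e-24   # proton mass [g]
KEV = 1.602e-9     # erg per keV
YR = 3.156e7       # seconds per year
PC = 3.086e18      # cm per parsec

def sedov(age_yr, n0=1.0, e51=1.0, mu=0.6):
    """Sedov-Taylor blast wave: return (radius [pc], shock speed [km/s],
    post-shock kT [keV]) for age in years, ambient H density n0 [cm^-3],
    and explosion energy e51 in units of 1e51 erg (assumed values)."""
    rho0 = 1.4 * M_H * n0                          # ambient mass density
    t = age_yr * YR
    r = 1.15 * (e51 * 1e51 * t**2 / rho0) ** 0.2   # shock radius [cm]
    v = 0.4 * r / t                                # shock speed = dR/dt [cm/s]
    kt = (3.0 / 16.0) * mu * M_H * v**2 / KEV      # strong-shock kT [keV]
    return r / PC, v / 1e5, kt

# A ~10,000 yr old SNR in n0 = 1 cm^-3: radius ~12 pc, shock speed ~500 km/s,
# kT a few tenths of a keV -- a soft thermal source, consistent with the
# detectability argument above; at later ages kT drops and the SNR fades.
r_pc, v_kms, kt_kev = sedov(age_yr=1.0e4, n0=1.0)
print(round(r_pc, 1), round(v_kms), round(kt_kev, 2))
```

The $kT \propto t^{-6/5}$ decline built into these relations is why the late-Sedov curves in the figure fall out of the soft X-ray bands.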
SNRs expanding into a dense ISM are brighter, but evolve more rapidly. The spectra are soft, at least at ages greater than 1,000 years. H II regions also contain thermal plasma, but they typically are much less luminous ($<10^{34} \:{\rm ergs\:s^{-1}}$) than SNRs. Giant H II regions do have X-ray luminosities of up to \POW{37}{\:{\rm ergs\:s^{-1}}}, but are easy to isolate based on their {H$\alpha$}\ luminosities and stellar content. Wind-blown bubbles are occasionally not distinguishable from SNRs, especially if one allows large objects ($>$100 pc diameter) in the sample. Although most Galactic SNRs were first identified as radio sources, a number were first detected or suggested as SNRs as a result of their detection as extended X-ray sources in the ROSAT all-sky survey \cite{schaudel02}. These include the aforementioned RXJ1713-39 and Vela Jr, as well as more typical thermal plasma dominated objects, such as G38.7+1.4 \cite{huang14}, G296.7-0.9 \cite{robbins12}, G299.2-2.9 \cite{busser96}, and G308-1.4 \cite{hui12}. It is quite likely that additional Galactic SNRs will be identified as part of the eROSITA all-sky survey \cite{erosita14}, which will be about 30 times more sensitive than ROSAT. In other galaxies, especially those beyond the Local Group, SNRs are relatively faint and difficult or impossible to distinguish from point sources on the basis of spatial extent. Fortunately, most other galactic X-ray sources (neutron star and black hole binaries) and most background sources (AGN and galaxy clusters) have relatively featureless hard spectra that, with adequate spectral resolution and counting statistics, are easy to distinguish from the thermal plasma-dominated spectra of most SNRs, simply on the basis of hardness ratios. However, this still leaves a group of compact sources, the so-called supersoft sources, thought to be white dwarf binaries, that have luminosities as high as \POW{38}{\:{\rm ergs\:s^{-1}}}.
These objects have very soft hardness ratios (corresponding to effective temperatures of \POW{5}{} to \POW{6}{K}) \cite{kahabka97}, which makes them hard to separate from SNRs (given limited source counts). Stiele $et\: al.$\ \cite{stiele11} identify 30 sources in their survey of M31 with XMM as supersoft sources, which is comparable to the number of objects they suggest are SNRs. Many of these supersoft sources are variable, but the fact that this source population exists means that it is dangerous to assume that all soft X-ray sources in a galaxy are SNRs. Consequently, most observers require something other than a hardness ratio to declare an X-ray source a SNR candidate, usually association with an optical or radio source that has the properties of a SNR. There have been a few X-ray only searches for SNRs, where observers have tried to establish a set of X-ray candidates without complementary information at other wavelengths. Leonidaki $et\: al.$\ \cite{leonidaki10} identified 37 objects in archival {\em Chandra}\ data as SNR candidates in six nearby galaxies (NGC2403, NGC3077, NGC4214, NGC4449, NGC4395, and NGC5204) based on their X-ray hardness ratios or colors. Of these, only 7 had been previously suggested as SNR candidates. This is a useful exercise, since one is not biased by the characteristics expected for SNRs at other wavelengths, though it remains to be seen how many of these objects actually turn out to be SNRs as more sensitive observations are carried out. \subsection{IR Identification of SNRs \index{IR identification of SNRs}} Historically, very few SNRs have been identified via observations in the IR. However, the [Fe II] $\lambda\lambda$1.27,1.64 $\mu$m lines are, like [S II], tracers of radiative shocks, and can be used in conjunction with, for example, Pa$\beta$ to separate H II regions from SNRs (see, e.g.
Mouri $et\: al.$\ \cite{mouri00} for a discussion of the shock models and Oliva $et\: al.$\ \cite{oliva89,oliva90} for early IR spectroscopy of Galactic SNRs). Greenhouse $et\: al.$\ \cite{greenhouse97} used Fabry-Perot imaging of the [Fe II] $\lambda$1.64 $\mu$m line to identify 6 sources in M83 which they argued were an older population of SNRs than those identified in the radio. Subsequently, Morel $et\: al.$\ \cite{morel02} imaged 42 [S II]-identified SNRs in M33, detecting about 10, with [Fe II] $\lambda$1.64 $\mu$m luminosities of \EXPU{0.2-27}{35}{\:{\rm ergs\:s^{-1}}}. The advantage of NIR imaging is that line of sight absorption is less of a problem in the IR; the disadvantages are that the night sky is much more of a problem in the IR and that, until recently, IR detector technology lagged behind that of CCDs. However, the picture is changing with improvements in IR detectors. Blair $et\: al.$\ \cite{blair15}, as part of a Hubble Space Telescope (HST) imaging study of M83, have found a number of emission nebulae in M83 coincident with X-ray sources that are apparent in [Fe II] $\lambda$1.64 $\mu$m, but not in [S II], and that are likely to be SNRs. An example is shown in Fig.\ \ref{fig_fe2}. Some impetus for such studies should arise both from the more systematic studies of [Fe II] emission in Galactic SNRs that are currently underway \cite{lee14}, and from assertions that [Fe II] imaging of galaxies can be used as an estimator of SN rates \cite{rosenberg12}. \begin{figure}[b] \includegraphics[scale=.38]{fig5.eps} \caption{A 16" by 20" portion of M83 as observed with HST and {\em Chandra} \cite{blair15}. The upper left panel shows the [Fe II] image; the upper right panel shows a composite {H$\alpha$}, [S II], [O III] image; the lower left panel shows a composite of U, B, V images; and the lower right panel shows the {\em Chandra}\ image.
Objects identified as SNRs from [S II] imaging by Blair $et\: al.$\ \cite{blair12} are shown in green; X-ray sources from the catalog of Long $et\: al.$\ \cite{long14} are shown in red, one of which is the X-ray counterpart to an optical SNR. All of the SNRs have [Fe II] counterparts. The field also contains one other object, identified in yellow, that is bright in [Fe II] and is most likely a SNR behind a dust lane. } \label{fig_fe2} \end{figure} \section{The Samples Today\label{Indiviidual}} \subsection{The Galaxy \index{Galactic SNR sample}} According to Green \cite{green14}, there are now 294 identified SNRs in the Galaxy. Nearly all have been detected at radio wavelengths, while 40\% have been detected in X-rays and 30\% have been detected in the optical (the low fractions being a result of the effects of absorption in the plane of the Galaxy). Of the SNRs, 79\% are shell-like (though in many cases the actual shell structure is quite complex), and 5\% are center-filled (dominated by emission from the ``wind'' of a central pulsar like the Crab). The remaining SNRs have a composite morphology, with evidence for emission from both the central pulsar and a shell. Usually, SNRs with radio shells detected at X-ray wavelengths also show X-ray shells, but there is a group comprising some 20\% of the total that have center-filled X-ray morphologies. Mid-IR emission from SNRs, which arises both from shock-heated dust grains and from IR lines in hot gas, was first detected in the all sky survey conducted with the Infrared Astronomical Satellite (IRAS) at 12, 25, 60, and 100 $\mu$m. Arendt \cite{arendt89} and later Saken $et\: al.$\ \cite{saken92} claimed detections of about 30\% of the SNRs known at the time. They found that the morphologies of SNRs in the mid-IR were similar to those observed at X-ray and radio wavelengths and established that the IR luminosities of SNRs were in some cases comparable to their X-ray luminosities.
With the Spitzer Space Telescope, new surveys of SNRs were undertaken with considerably higher precision. In particular, {Pinheiro Gon{\c c}alves} $et\: al.$\ \cite{pinheiro11} found 39 counterparts to the 121 SNRs contained in the region surveyed with the Multiband Imaging Photometer for Spitzer (MIPS) as part of the MIPSGAL Survey (at 24, 70, and 160 $\mu$m); they argued that the detection rate was primarily limited by confusion in the plane of the Galaxy, that X-ray bright SNRs were found preferentially, and that the IR luminosities of the detected SNRs were comparable to their X-ray luminosities. At the shorter Infrared Array Camera (IRAC) wavelengths (3.6, 4.5, 5.8, and 8.0 $\mu$m), where emission from SNRs can arise from shock-heated dust, atomic fine-structure lines, molecular lines, and occasionally synchrotron emission, Reach $et\: al.$\ \cite{reach06} reported 18 detections in the GLIMPSE survey region containing 95 SNRs. Many of the other SNRs in the survey region could have significant infrared emission but are located along lines of sight with large amounts of emission due to H II regions and atomic and molecular clouds. At least 30 of the Galactic SNRs have been detected in $\gamma$-rays (1-100 GeV) with the Fermi LAT (see Acero $et\: al.$\ \cite{acero15} for a good summary of the current list of detections, and an overall interpretation). \index{Gamma-ray detection of SNRs} The detected SNRs, which have typical $\nu L_{\nu}$ luminosities of \POW{35}{\:{\rm ergs\:s^{-1}}}, fall into two subclasses: ``young SNRs'' in the free expansion or Sedov phases, and SNRs interacting with molecular clouds. $\gamma$-rays can be produced from relativistic electrons either through inverse Compton emission or by bremsstrahlung radiation, or alternatively from relativistic protons (and other hadrons) which create pions that then decay to $\gamma$-rays. Both processes are thought to play a role.
To date, a correlation between the radio and $\gamma$-ray fluxes has not been demonstrated \cite{acero15}. The sample of SNRs in the Galaxy is not really complete, at least in the sense that not all SNRs with certain intrinsic properties in the Galaxy have been discovered. Green \cite{green14_distribution} estimates that the radio sample is approximately complete to a surface brightness limit of \POW{-20}{Watts~m^{-2}Hz^{-1}sr^{-1}}. There are 68 SNRs brighter than this limit, but Green notes that a selection bias still exists. Specifically, it is hard to recognize small diameter, distant SNRs that lie in the Galactic plane along lines of sight near the Galactic Center, where source confusion is likely. The situation is clearly worse at optical and X-ray wavelengths, where absorption is more of a problem. The number of Galactic SNRs should continue to grow with better all sky surveys at X-ray wavelengths, such as eROSITA \cite{erosita14}, and emission line surveys, including the Isaac Newton Telescope Photometric {H$\alpha$}\ Survey (IPHAS) \cite{IPHAS,sabin13}, its southern hemisphere counterpart VPHAS \cite{VPHAS}, and the UKIRT wide field imaging survey of Fe+ (UWIFE) \cite{UWIFE}. The follow-up of detections of $\gamma$-ray sources is also likely to continue to pay dividends in this regard. \subsection{Magellanic Clouds \index{Magellanic Clouds}} Because the Large and Small Magellanic Clouds are nearby (50 and 60 kpc, respectively) and because they lie along lines of sight with very low Galactic and internal absorption, more is known about the SNRs in the Magellanic Clouds as a group than in any other galaxy. Not only is it fairly straightforward to study the SNRs, it is also possible to study the environments around them as a result of the large amount of ancillary data that has been accumulated on the Magellanic Clouds for a variety of other purposes.
The first SNRs in the LMC were detected as non-thermal radio sources by Mathewson \& Healey \cite{mathewson64} and subsequently confirmed as SNRs on the basis of strong [S II]:{H$\alpha$}\ ratios by Westerlund \& Mathewson \cite{westerlund66}. The numbers grew during the 1970s, primarily as a result of work by Mathewson \& Clarke \cite{mathewson73}, as radio and optical instrumentation became more sensitive and as systematic searches for SNRs were carried out. Long, Helfand \& Grabelsky \cite{long81} used {\em Einstein}\ to carry out the first X-ray imaging survey of the LMC; of the 97 X-ray sources detected, they found 26 SNRs, including a number which had not been known previously. According to Maggi $et\: al.$\ \cite{maggi15}, there are currently 59 confirmed SNRs in the LMC. \index{Large Magellanic Cloud} Nearly all have been detected at X-ray, optical, and radio wavelengths. Most of the SNRs have optical spectra. Russell \& Dopita \cite{russell90} (see also \cite{payne08}) have used the spectra to measure ISM abundances in the LMC. Echelle spectra exist for a significant fraction of the SNRs, allowing one to study the expansion velocity of the optical filaments \cite{chu97}. One of these SNRs, B0540-69.3 \cite{mathewson80,brantseg14}, contains an 80 ms pulsar \cite{seward84} which produces $\gamma$-ray pulses 20 times brighter than those of the Crab pulsar \cite{LMC_pulsar}. One young SNR, N49, is coincident with a soft $\gamma$-ray repeater \cite{cline80,kulkarni03,guver12}. Light echoes from the SN explosion have been seen for three SNRs \cite{rest05}. At least one X-ray bright SNR, N132D, has portions of its optical spectrum dominated by emission from the shocked ejecta \cite{danziger76,vogt11}. About 60\% of the SNRs known in the LMC have been detected with Spitzer \cite{seok13} in the NIR, where emission can arise from molecular shocks, synchrotron radiation, ionic lines, or PAH emission, and/or in the MIR (mainly) from shock-heated dust.
Seok $et\: al.$\ \cite{seok13} use these data to argue that LMC SNRs are fainter on average than Galactic SNRs in the IR, presumably due to lower dust to gas ratios in the LMC than in the Galaxy, and that the SNRs of Type Ia SNe are significantly fainter than those arising from core-collapse SNe. The situation is similar for the Small Magellanic Cloud (SMC), \index{Small Magellanic Cloud} although the total number of SNRs (25) is smaller, as one would expect since the SMC is less massive than the LMC. The first optical/radio SNR in the SMC was discovered by Mathewson \& Clarke \cite{mathewson72}; the first X-ray SNR, the second brightest X-ray source in the SMC, was found by Seward \& Mitchell \cite{seward81} as part of the first X-ray imaging survey carried out with the {\em Einstein}\ Observatory. Nearly all of the SMC SNRs have been detected in X-rays \cite{haberl12} with XMM, and most have radio fluxes \cite{filipovic05,payne07} and optical spectra \cite{russell90,payne07}. Many of the SNRs have been studied in detail. The remnant E0102-72.3, discovered with {\em Einstein}, is one of the very small number of SNRs showing emission from ejecta at optical wavelengths \cite{blair00}. One SNR, HFPK 334, appears to be a composite SNR, comprised of an active pulsar inside a shell-like radio source \cite{crawford14}. \subsection{M33 \index{M33} \label{m33}} The first three SNRs in M33 were identified by {D'Odorico}, {Benvenuti}, \& {Sabbadin} \cite{dodorico78} using interference filters and image tube photography, and the numbers grew significantly with the advent of CCDs \cite{long90,gordon98}. A detailed study of SNRs in M33 using a combination of deep {\em Chandra}\ exposures and optical data from the Local Group Galaxy Survey (LGGS) \cite{massey06} was carried out by Long $et\: al.$\ \cite{long10}. They found 137 optical SNR candidates (with [S II]:{H$\alpha$} $>$0.4) in M33, with diameters ranging from 8 to 179 pc.
Of these, 82 were detected in X-rays with 0.35-2 keV luminosities in excess of \EXPU{2}{35}{\:{\rm ergs\:s^{-1}}}, and of these, seven were bright enough for detailed spectral analysis. Based on a spectral analysis of all of the sources detected in M33, Long $et\: al.$\ estimated that they had identified all of the thermal plasma-dominated X-ray SNRs brighter than \EXPU{4}{35}{\:{\rm ergs\:s^{-1}}}, at least in the region covered by the {\em Chandra}\ survey. Subsequently, Lee \& Lee \cite{lee_m33} reexamined the LGGS data and produced a larger sample of 199 optical SNR candidates. Their sample is larger in part because they surveyed a larger region of M33 and pushed to fainter surface brightnesses, and somewhat different because they excluded objects with diameters greater than 100 pc. They argued that objects with these characteristics were unlikely to be SNRs and should be excluded. Long $et\: al.$\ discussed some of these concerns, but felt that excluding such objects as candidates was premature. However, the differences also reflect the subjective aspects of identifying optical SNR candidates, the majority of which are very faint and near the sky background limit in the LGGS and other ground based data. If the two lists are combined, there are 217 optically-identified SNRs and SNR candidates in M33. Of these, 86, all from the list of Long $et\: al.$, have optical spectra. Most recently, Williams $et\: al.$\ \cite{williams15} have described the results of their analysis of a new deep set of XMM observations covering a larger region of M33 than was observed with {\em Chandra}; in addition to recovering most of the SNRs reported as X-ray sources by Long $et\: al.$, they detected 8 new X-ray SNRs, three of which are in the outskirts of the galaxy. {D'Odorico}, Goss \& Dopita \cite{dodorico82} carried out the first successful radio search for SNRs in M33 using the Westerbork Synthesis Radio Telescope (WSRT) at 21 cm.
They reported five certain and three probable detections of sources at the positions of the 12 optically-identified SNRs known at the time. Subsequently, Gordon $et\: al.$\ \cite{gordon99} used a combination of Very Large Array (VLA) and WSRT observations obtained at 6 and 20 cm with an angular resolution of 7", or 30 pc, to construct a catalog of 186 sources in M33. Of these sources, they identified 53 as spatially coincident with one of the 98 optically-identified SNRs known at this later date. The mean radio spectral index of the radio sources identified as SNRs was 0.5, and the summed radio luminosity of SNRs in M33 comprised 2-3\% of the total synchrotron emission in M33. There were a number of other non-thermal sources detected above their surface brightness limit of 0.2 mJy along the line of sight to M33, but they noted that most of these were likely background sources. None of the SNRs identified in M33 is very young. There are no objects whose optical emission is dominated by emission from shocked ejecta, like N132D or E0102-72.3 in the Magellanic Clouds, or even any SNRs with broad optical emission lines. There is an X-ray source with a power-law spectrum coincident with a small-diameter radio source that Long $et\: al.$\ suggest may be a pulsar-wind nebula. \subsection{M31 \index{M31} \label{m31}} The first few SNRs in M31 were identified by Kumar \cite{kumar76}, and subsequent image tube photography by D'Odorico $et\: al.$\ \cite{dodorico80} and by Blair $et\: al.$\ \cite{blair81} expanded the number of spectroscopically confirmed SNRs to 14. Because of its very large size, M31 was actually less well surveyed than several other nearby galaxies for many years. Braun \& Walterbos \cite{braun93} found 52 SNR candidates in the first CCD-based search for SNRs in M31, but surveyed only a portion of the galaxy.
Magnier $et\: al.$\ \cite{magnier95} identified 179 candidates in seventeen fields totaling a square degree of the galaxy, but their selection of candidates was based on morphology in {H$\alpha$}. They did not use the [S II]:{H$\alpha$}\ ratio as a criterion, and as a result, there was no direct evidence that the majority of the nebulae selected by Magnier $et\: al.$\ actually contained shocks. The situation has changed recently, however, as Lee \& Lee \cite{lee_m31} have searched the LGGS survey images of M31 for SNRs, just as they had done for M33. They identified 156 emission nebulae with diameters less than 100 pc as SNRs or SNR candidates on the basis of [S II]:{H$\alpha$} $>$0.4 and circular morphology. Most of the candidates are associated with the spiral arms of M31. Although the first X-ray detection of a SNR in M31 was most likely made by Blair $et\: al.$\ \cite{blair81} using {\em Einstein}, the first reliable characterization of the X-ray properties of SNRs in M31 has required the greater sensitivity of {\em Chandra}\ and especially XMM \cite{pietsch05, stiele11}. According to Sasaki $et\: al.$\ \cite{sasaki12}, there are now 26 confirmed X-ray SNRs in M31, 21 of which had been thought to be SNRs based on earlier observations, and six of which were discovered as a result of the XMM studies. These SNRs are confirmed in the sense that they have both the X-ray and optical characteristics of SNRs. The X-ray luminosities of the SNRs range from \EXPU{2}{35}{\:{\rm ergs\:s^{-1}}} to \EXPU{8}{36}{\:{\rm ergs\:s^{-1}}} in the 0.3-2 keV band. There are also 20 candidate SNRs, objects that either have soft X-ray spectra, but ambiguous evidence from other wavelength bands as to whether the object is a SNR, or hard X-ray spectra, but evidence for a radio source or a nebula with high [S~II]:{H$\alpha$}\ ratios at that position.
The first radio search for SNRs in M31 was carried out by Dickel $et\: al.$\ \cite{dickel82}, who used the VLA at 20 cm and reported the radio detection of 7 SNRs identified earlier by {D'Odorico} $et\: al.$\ \cite{dodorico80}. Although some other efforts to characterize small numbers of SNRs in M31 have taken place since then \cite{braun93,sjouwerman01}, the SNR population of M31 is still not well-characterized at radio wavelengths. Galvin \& Filipovic \cite{galvin14} have published a catalog of 916 point sources in 20 cm radio images of M31 constructed from archival VLA data and compared the positions of these point sources to SNR candidates suggested by others. With a flux limit of about 2 mJy, they find 13 objects whose positions match those contained in the list of optical candidates produced by Lee \& Lee \cite{lee_m31}. Of the 47 SNRs and SNR candidates reported by Sasaki $et\: al.$\ \cite{sasaki12} in X-rays with XMM, they find 11 overlaps. As is true of M33, no very young SNRs have been identified as yet, with the exception of the remnant of SN 1885, which Fesen $et\: al.$\ \cite{fesen15_m31} have imaged in absorption with HST against the stars in the bulge of M31. \subsection{Supernova Remnants Beyond the Local Group} The first 17 SNR candidates in six galaxies beyond the Local Group were identified by {D'Odorico} $et\: al.$\ \cite{dodorico80} using photographic plates. The first large CCD-based searches were carried out by Blair \& Long \cite{blair97}, who identified 56 SNR candidates in the Sculptor Group galaxies NGC300 and NGC7793, and by Matonick \& Fesen \cite{matonick97}, who identified a total of about 400 SNR candidates in NGC5204, NGC5585, NGC6946, M81, and M101. As shown in Table \ref{table_extragalactic}, optical samples of varying depths now exist for more than 20 galaxies within 10 Mpc \cite{vucetic15}.
These include NGC2403 with 150 candidates \cite{leonidaki13}, M83 with nearly 300 candidates \cite{blair12,blair14}, and M101 with 93 candidates \cite{matonick97}. A necessary next step for improving the reliability of these samples is to obtain spectra of as many of these candidates as possible. This is underway for many of these galaxies, including M81, where Lee $et\: al.$\ \cite{lee15_m81} have obtained spectra of 28 of 41 optically identified SNR candidates; they find that 26 of their 28 objects should be retained as candidates. As noted earlier, there have been relatively few dedicated radio searches for SNRs outside of the Local Group. \index{Radio identification of SNRs} In the Sculptor group spiral NGC300, Payne $et\: al.$\ \cite{payne04} used data from the VLA and from ATCA to identify as SNRs 18 non-thermal radio sources associated with {H$\alpha$}\ emission or an X-ray point source in XMM data; five of these were in Blair \& Long's list of optical SNRs, but 13 were new. Of the 18 sources, six were also detected with XMM. In another Sculptor group spiral, NGC7793, Pannuti $et\: al.$\ \cite{pannuti02} identified five radio SNR candidates. Lacey $et\: al.$\ \cite{lacey01}, as discussed earlier, identified 35 objects in the starburst galaxy NGC6946 as radio SNRs, six of which Pannuti $et\: al.$\ \cite{pannuti07} found to be X-ray sources in {\em Chandra}\ images. M82 \index{M82} is an exception in terms of the importance of radio observations. At 3.2 Mpc, M82 is the closest example of a prototypical starburst galaxy, that is, a galaxy undergoing a huge burst of star formation (due in the case of M82 to a near collision with M81). Such galaxies contain a large amount of dust, and this dust, heated by early-type stars, radiates strongly in the FIR. The current star formation rate (SFR) in M82 is about 10 $\MSOL ~yr^{-1}$, much greater than that of the Galaxy.
With this SFR, M82 produces a large number of SNe, one every 10 to 20 years, and hence a large number of very young SNRs. The SNRs are mostly buried behind large amounts of dust and hence primarily accessible at radio wavelengths. The first radio studies of M82 were carried out by Kronberg \& Wilkinson \cite{kronberg85}, and the galaxy has been monitored since that time with the VLA and MERLIN. Huang $et\: al.$\ \cite{huang94} observed the highly reddened galaxy with the VLA at a resolution of 0.2" and identified 50 sources near the center of the galaxy with diameters all less than that of Cas A, and with higher radio surface brightness as well. They argued that the vast majority of these were SNRs expanding into the high pressure ISM of the central region of M82. They found these sources obey a $\Sigma$-D relationship that extrapolates to that of the $\Sigma$-D of Galactic and Magellanic Cloud SNRs. Repeated observations with MERLIN and the VLA and with very long baseline interferometry have allowed one to measure the time evolution of the radio fluxes and, in many cases, the expansion velocities of the SNRs. For example, Fenech $et\: al.$\ \cite{fenech08} used MERLIN to detect about 35 SNRs in M82 ranging in diameter from 0.3 to 6.7 pc, with a mean of 2.9 pc. Most of the sources show shell-like morphologies. They measured expansion velocities ranging from 2200 $\:{\rm km\:s^{-1}}$ to 10,500 $\:{\rm km\:s^{-1}}$ in 10 SNRs. These velocities are significantly larger than predicted by Chevalier \& Fransson \cite{chevalier01}, who suggested that the radio SNRs in M82 were expanding into a dense \POW{3}{cm^{-3}} ISM and were mostly in their radiative phase. The distribution of diameters of the SNRs measured by Fenech $et\: al.$\ suggests that most of the SNRs are in the free expansion phase and that the SNRs are expanding into the region carved out by the winds of a progenitor red giant star.
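The quoted SN interval for M82 follows directly from its SFR. As a rough back-of-the-envelope sketch (the $\sim$100-200 $\MSOL$ of stars formed per core-collapse SN is an assumed, IMF-dependent figure, not a value taken from the text):

```python
def cc_sn_rate(sfr_msun_per_yr, msun_per_sn=150.0):
    """Core-collapse SN rate implied by a star formation rate.

    msun_per_sn is a hypothetical, IMF-dependent figure: the mass of
    stars that must form, on average, to yield one core-collapse SN.
    A value of ~100-200 Msun reproduces the interval quoted for M82.
    """
    return sfr_msun_per_yr / msun_per_sn

# M82: SFR ~ 10 Msun/yr -> one SN roughly every 15 years,
# within the 10-20 year interval quoted in the text.
interval_yr = 1.0 / cc_sn_rate(10.0)
```

The same arithmetic, scaled by the age range of interest, gives the expected number of young SNRs (e.g. a few tens with ages under 1,000 years for this SN rate).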
X-ray identifications of SNR candidates in galaxies beyond the Local Group have mostly proceeded from attempts to identify X-ray sources with nebulae satisfying the optical criteria for SNRs or identified as radio SNRs. Deep observations with {\em Chandra}\ are required to see all but the most luminous SNRs in galaxies beyond the Local Group. \index{X-ray identification of SNRs} Matonick \& Fesen \cite{matonick97} had identified 93 emission nebulae as optical SNR candidates in M101. Franchetti $et\: al.$\ \cite{franchetti12} re-examined the 55 objects in the sample that were contained in archival {H$\alpha$}\ images obtained with HST, ranging in size from 20 to 330 pc, including 16 with diameters greater than 100 pc. They found that 21 of the 55 candidates had X-ray counterparts in very deep (1 Ms) {\em Chandra}\ observations \cite{kuntz10}. Long $et\: al.$\ \cite{long14} analyzed a series of {\em Chandra}\ observations of M83 totaling 729 ks. They found 378 point sources within the D$_{25}$ contour of the galaxy, including 87 sources which appeared to be SNRs based on a combination of their X-ray properties and coincidence with either an optical SNR candidate or a radio source within the galaxy.\index{M83} Smaller numbers of X-ray detected SNRs exist in other galaxies outside the Local Group. Not surprisingly, most of the SNRs that have been discovered in galaxies beyond the Local Group appear, with the limited information available, to be older, larger diameter SNRs, since one typically looks for objects with some indication of spatial extent, and small diameter SNRs radiating primarily in the oxygen lines are hard to separate from planetary nebulae. There have been nine SNe in NGC6946 and six in M83 in the last 100 years, which implies there are about 90 and 60 SNRs, respectively, in these galaxies with ages less than 1,000 years. But this also means there are many more SNRs with ages of up to 20,000 years, so young SNRs are going to be rare.
A few may have been found, depending on one's decision about when to declare an object a SNR as opposed to late-time emission from a SN, since some SNe have been observed essentially continuously from the time they exploded \cite{danny12}. The bright optical, radio, and X-ray SNR in NGC4449 was probably due to a SN that was missed in the last 50-100 years \cite{danny08}. One pathway forward to identifying younger SNRs is through higher angular resolution observations at optical or radio wavelengths. There are now some searches that are being carried out with the WFC3 on HST, which has the requisite filters, that might address this problem. For example, in M83, Blair $et\: al.$\ \cite{blair14} found 26 new small diameter SNRs (by carefully inspecting HST images near X-ray point sources) and measured small ($\leq$ 0.5 arcsec or 10 pc) diameters for 37 others. However, so far only one of these objects is known to have very broad optical emission lines \cite{blair15}; this particular object, which was also detected as an X-ray source and a radio source, was most likely another example of a SN in the last 100 years that was missed. The rest of the small diameter objects look to be SNRs that are evolving in a denser ISM than the typical SNR in other galaxies. \section{What the samples tell us about SNRs as a class} The past 50 years have seen a lot of progress in terms of identifying samples of SNRs both in the Galaxy and nearby galaxies. However, we now need to ask what we can learn from these samples. \subsection{Luminosity Function \index{SNR luminosity function}} \begin{figure}[b] \includegraphics[scale=.38]{fig6.eps} \caption{The radio, {H$\alpha$}, and X-ray luminosity functions of SNRs in several nearby galaxies. The luminosity functions for M31, M33, and the Large and Small Magellanic Clouds are shown in blue, black, green and red, respectively.
Data for M33 were taken from Gordon $et\: al.$\ \cite{gordon99} and Long $et\: al.$\ \cite{long10}; for M31 from Lee $et\: al.$\ \cite{lee_m31}; and for the Large and Small Magellanic Clouds from Badenes $et\: al.$\ \cite{badenes10} and Maggi $et\: al.$\ \cite{maggi15}. The radio luminosity shown is the specific luminosity at 20 cm.} \label{fig_lumfunc} \end{figure} The luminosity function of SNRs\index{SNR luminosity function}, expressed as the number of SNRs with a luminosity less than a specific value, at radio, optical ({H$\alpha$}) and X-ray wavelengths for several galaxies is shown in Fig.\ \ref{fig_lumfunc}. In all cases, the shapes of the luminosity functions are affected by sample completeness at low luminosity. To the extent that conditions are similar in these galaxies, one might expect that the normalization of the luminosity functions would reflect the number of SNe in each galaxy, and, since most SNe arise from relatively young stellar populations, the overall SFR. Leonidaki $et\: al.$\ \cite{leonidaki10}, for example, assert that the number of X-ray SNRs brighter than \POW{36}{\:{\rm ergs\:s^{-1}}} in a galaxy is proportional to the SFR. They also suggest that SNRs in irregular galaxies tend to be more luminous than in spiral galaxies, possibly because the lower metal abundances observed in irregular galaxies result in more massive SN progenitors. Maggi $et\: al.$\ \cite{maggi15} find that the X-ray luminosity functions of both M31 and M33 can be fitted as a power law with a slope of about 0.8, but that that of the SMC is significantly flatter (0.5). They find that the luminosity function of the LMC is complex, with 13 SNRs brighter than \POW{36}{\:{\rm ergs\:s^{-1}}}. Like Leonidaki $et\: al.$, they attribute this to low metallicity, but suggest that the winds of more massive stars in the LMC create low density cavities, which result in luminous SNRs when the SN shock reaches the cavity walls at ages of a few thousand years \cite{dwarkadas05}.
\index{M31} \index{M33} \index{Magellanic Clouds} Similarly, Chomiuk \& Wilcots \cite{chomiuk09}, using a sample of radio SNR candidates in 18 galaxies, ranging from the Magellanic Clouds to galaxies like M51 and M82, argue that SNRs generally have a power-law distribution of luminosities with a scaling that is proportional to the SFR. Specifically, they find overall \begin{equation} \frac{dN}{d L_{\nu} }= 92 ~ SFR~ L_{\nu} ^{-2.02} \end{equation} where $L_{\nu} $ is the specific luminosity at 1.4 GHz (20 cm) in mJy and SFR is in units of $\MSOL~yr^{-1}$. Unlike Thompson $et\: al.$\ \cite{thompson09}, they find no indication that the peak (or average) luminosity is related to the gas density of the galaxy. At optical and X-ray wavelengths, luminosity functions are difficult to interpret in terms of physical models because the amount of emission is very dependent on the local ISM density. However, this may not be the case at radio wavelengths. Following early theoretical work by Reynolds \& Chevalier \cite{reynolds81} and {Berezhko} \& {V{\"o}lk} \cite{berezhko04}, Chomiuk \& Wilcots argue that the slope of the radio luminosity function can be understood in terms of a model in which (a) most of the SNRs are in the Sedov phase, (b) the cosmic ray energy is a fixed fraction of the SN explosion energy throughout the Sedov phase, and (c) the magnetic field energy density behind the shock is amplified to $\sim 0.01 \rho_o v_s^2$. Whether this particular interpretation of the radio luminosity function is actually physically correct is difficult to determine, as the specific luminosities of SNRs with the same diameter vary substantially (see below); a complete interpretation of the radio emission from SNRs has to account for these differences. Nevertheless, luminosity functions at all wavelengths are clearly useful for estimating the completeness of samples and the total amount of radiation arising from SNRs at the various wavelengths.
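Because the Chomiuk \& Wilcots exponent is steeper than 1, their differential luminosity function can be integrated in closed form to give the expected number of radio SNRs brighter than a given specific luminosity. A minimal sketch, using the units of the equation above ($L_{\nu}$ in mJy-based units, SFR in $\MSOL~yr^{-1}$), for illustration only:

```python
def n_brighter(l_min, sfr, norm=92.0, alpha=2.02):
    """Integrate dN/dL = norm * SFR * L**-alpha from l_min to infinity.

    The closed form is norm * SFR * l_min**(1 - alpha) / (alpha - 1),
    valid only for alpha > 1 (otherwise the integral diverges).
    """
    if alpha <= 1.0:
        raise ValueError("integral diverges for alpha <= 1")
    return norm * sfr * l_min ** (1.0 - alpha) / (alpha - 1.0)

# For SFR = 1 Msun/yr, roughly 90 SNRs (= 92/1.02) are expected
# above L = 1 in these units.
n1 = n_brighter(1.0, 1.0)
```

Doubling the luminosity threshold roughly halves the expected count, which is why flux-limited radio samples in distant galaxies recover only the bright tail of the population.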
\subsection{The Diameter Distribution of SNRs \index{SNR diameter distribution}} A natural question arising in any attempt to explain the properties of any SNR sample is how to explain the distribution of diameters in the sample. If most SNRs are in the Sedov phase, then one would naively expect the number of SNRs in a sample with diameters less than D (N$<D$) to increase as D$^{5/2}$. However, early versions of the N$<$D - D relationship for the Large Magellanic Cloud \cite{mathewson73,clarke76} and for M33 \cite{blair85} showed that the numbers increased roughly with D, as if SNRs expanded without significant deceleration to fairly large diameter. The current N$<$D - D relationships for the Magellanic Clouds, M31 and M33 are shown in Fig.\ \ref{fig_number_diam}. The naive expectation that N should be proportional to D$^{5/2}$ depends on whether a SNR sample is complete over a significant range of diameters. Hughes, Helfand \& Kahn \cite{hughes84} showed that for reasonable variations in SN energy and ISM density, one could produce a flat N$<$D - D relation if the sample was X-ray flux limited, even if the majority of the SNRs were in the Sedov phase. The reason for this is illustrated in the right hand panel of Fig.\ \ref{fig_xray_sensitivity}; a SNR expanding into a higher density ISM has a higher peak luminosity but fades away at a smaller diameter than a SNR from an identical SN expanding into a lower density medium. As a result, in an X-ray flux limited sample, SNRs expanding into a lower density medium are visible at larger diameters. Although the number of SNRs known in the LMC has grown since the work of Hughes, Helfand \& Kahn, the N$<$D - D relationship for the Large Magellanic Cloud remains relatively flat. A recent analysis of the N$<$D - D relationship in the Magellanic Clouds was carried out by Badenes, Maoz \& Draine \cite{badenes10}.
They argue that the current sample is mostly complete, and that most of these SNRs are in the Sedov phase, but that the relation is largely governed by variations in the density of the ISM. \begin{figure}[b] \includegraphics[scale=.38]{fig7.eps} \caption{The number of identified SNRs and SNR candidates in the Magellanic Clouds, M31 and M33 as a function of their diameter. The data for the SMC and LMC are from Badenes, Maoz \& Draine \cite{badenes10}. For M31, diameters were taken from the catalog of Lee \& Lee \cite{lee_m31}. For M33, the catalogs of both Long $et\: al.$\ \cite{long10} and Lee \& Lee \cite{lee_m33} are shown. Lines indicating the expected slopes of the curves for free expansion and Sedov evolution are also plotted. Note that although the cumulative N$<$D - D relation is shown here, the analysis of the distribution is more properly done on the differential number density, since the data points in the cumulative distribution are not statistically independent.} \label{fig_number_diam} \end{figure} The observational situation in other galaxies is less clear. Badenes, Maoz \& Draine \cite{badenes10} also analyzed the M33 SNR sample of Long $et\: al.$\ \cite{long10} and found it to resemble that of the Magellanic Clouds in the range of 10-30 pc, while Lee \& Lee \cite{lee_m33}, using their sample of 199 SNRs and SNR candidates, find a power-law slope of 2.4 in the range 17 to 50 pc, identical to that expected for a pure Sedov expansion. Lee \& Lee attribute this discrepancy to differences in the sample and also to differences in the diameter measurements. Interestingly, Lee \& Lee \cite{lee_m31} also find that the size distribution of all of the SNRs they identified in M31 is consistent with a Sedov expansion law. A thorough analysis, with a detailed assessment of completeness and contamination, of both the M33 and M31 samples would be desirable.
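The D$^{5/2}$ expectation invoked above follows from assuming a constant SN rate, so that SNR ages are uniformly distributed, together with Sedov expansion, D $\propto$ t$^{2/5}$, which gives N$<$D $\propto$ t $\propto$ D$^{5/2}$; free expansion, D $\propto$ t, gives N$<$D $\propto$ D instead. A minimal Monte Carlo illustration of the Sedov case (arbitrary normalizations, not fitted to any of the samples discussed):

```python
import numpy as np

# Constant SN rate -> SNR ages uniformly distributed (here in years).
rng = np.random.default_rng(42)
ages = rng.uniform(0.0, 2.0e4, 200_000)
d_sedov = ages ** 0.4          # Sedov phase: D ~ t**(2/5), arbitrary units

# Build the cumulative N(<D) distribution from the sampled diameters.
d_sorted = np.sort(d_sedov)
n_cum = np.arange(1, d_sorted.size + 1)

# Fit log N(<D) vs log D over the interior of the range (avoiding the
# noisy extremes); the slope should be close to the analytic 5/2.
interior = slice(2000, -2000)
slope = np.polyfit(np.log(d_sorted[interior]), np.log(n_cum[interior]), 1)[0]
```

Replacing the exponent 0.4 with 1.0 (free expansion) drives the fitted slope toward 1, which is the flat behavior the early LMC and M33 studies reported.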
\subsection{Radio Surface Brightness - Diameter Relationship $\Sigma$-D \index{SNR Radio Surface Brightness - Diameter Relation}} \begin{figure}[b] \includegraphics[scale=.38]{fig8a.eps}\includegraphics[scale=.38]{fig8b.eps} \caption{Left: The radio surface brightness of Galactic and extragalactic SNRs as a function of SNR diameter. The Galactic sample comprises 65 SNRs with distance estimates as compiled by Pavlovi{\'c} $et\: al.$\ \cite{pavlovic14}. The extragalactic sample, divided into objects from the Magellanic Clouds, the Local Group galaxies M31 and M33, and M83, is taken from the compilation of {Uro{\v s}evi{\'c}} $et\: al.$\ \cite{urosevic05}. The data for M82 are from Huang $et\: al.$\ \cite{huang94}. The solid curves indicate power law slopes of -2 and -4. Right: The radio luminosity of Galactic and extragalactic SNRs as a function of SNR diameter. } \label{fig_sigma_d} \end{figure} As noted earlier, the study of SNRs as a class of objects began with the identification of SNRs in the Galaxy and the Magellanic Clouds as extended radio sources. The relationship between radio surface brightness and SNR diameter, the so-called $\Sigma$ - D relationship, was one of the first correlations to be identified in early SNR samples \cite{clark76}, and remains one whose importance continues to be discussed. Small diameter SNRs typically have higher surface brightnesses than larger ones. Shklovskii \cite{sklovskii60} pointed out that this fact, expected on theoretical grounds, could be used to estimate distances to SNRs if it could be calibrated with SNRs of known distances. For Galactic SNRs, some of which have only been observed at radio wavelengths, this method continues to be advocated by some, if there is no alternative \cite{case98,pavlovic13,pavlovic14}.
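The distance method sketched by Shklovskii works because surface brightness is distance-independent: a calibrated relation $\Sigma = A D^{\beta}$ yields a linear diameter from the measured $\Sigma$, and the measured angular size then converts that diameter into a distance. A hedged sketch of the arithmetic (the calibration constants and the round-trip numbers below are made-up placeholders, not a recommended calibration):

```python
def sigma_d_distance(sigma, theta, a_cal, beta):
    """Distance from the Sigma-D relation.

    sigma : measured radio surface brightness (distance-independent)
    theta : measured angular diameter in radians
    a_cal : calibration constant A in Sigma = A * D**beta
    beta  : calibrated (negative) power-law exponent

    Returns the distance in the same length unit used for D in the
    calibration.
    """
    d_linear = (sigma / a_cal) ** (1.0 / beta)   # invert Sigma = A * D**beta
    return d_linear / theta

# Round-trip check with made-up numbers: a 20 pc remnant at 1000 pc.
a_cal, beta = 1.0, -3.0
sigma = a_cal * 20.0 ** beta          # the Sigma we would measure
theta = 20.0 / 1000.0                 # its angular size in radians
dist = sigma_d_distance(sigma, theta, a_cal, beta)
```

Note how sensitive the result is to the calibration: because D scales as $\Sigma^{1/\beta}$, the order-of-magnitude scatter in $\Sigma$ at fixed D seen in Fig.\ \ref{fig_sigma_d} translates directly into a large distance uncertainty.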
The radio surface brightness as a function of diameter for a collection of Galactic SNRs (with distance estimates) and of objects in the Magellanic Clouds, M31, M33, and M82 is shown in the left panel of Fig.\ \ref{fig_sigma_d}. It illustrates both the fact that there is an apparent relationship between size and radio surface brightness and the problem that the radio surface brightness at a particular diameter varies by large factors. Clearly, as pointed out by many, any distance estimate based on a $\Sigma$ - D relationship will be crude at best. The $\Sigma$ - D relation is usually expressed in terms of a power law of the form $\Sigma = A D^{\beta}$. If all SNRs had the same radio luminosity, then $\Sigma \propto D^{-2}$. Errors arise not only from the fact that SNRs from identical SNe evolving in different ISM may have different radio luminosities at the same diameter, but also from sample completeness biases, and in some cases, the way in which the power law index is derived from the observational data \cite{green14_distribution}. The advantage of the Galactic sample is that one has more detailed information about the SNRs, and can weed out pulsar dominated SNRs, e. g. the Crab Nebula, which are not expected to follow the same relationship as the shell-like SNRs; the disadvantage is that the uncertainties in the distance measurements of the SNRs used in determining the $\Sigma$ - D relation are quite difficult to quantify. Most estimates of the power law exponent in the Galactic sample fall in the range of -4$\pm$1. Clark \& Caswell \cite{clark76} in an early analysis of the Galactic sample derived a value of about -3, similar to the value Mathewson \& Clark \cite{mathewson73} had found for LMC SNRs. Case \& Bhattacharya \cite{case98} found a value of -2.64 for a sample of 37 SNRs, which included Cas A, although their analysis approach has been criticized by Green \cite{green14_distribution}.
{Pavlovi{\'c} $et\: al.$\ \cite{pavlovic13,pavlovic14}, who attempt to account for the distance uncertainties for individual objects, find steeper exponents of order -5. Extragalactic samples of SNRs are all at known relative distances, but have the disadvantage that completeness and sample purity (especially beyond the Local Group) are difficult to estimate. One of the earliest attempts to study the $\Sigma$ - D relationship in galaxies beyond the Local Group was made by Berkhuijsen \cite{berkuijsen86}, who used a sample of 86 SNRs from the Galaxy, the LMC, and M33 to derive a $\Sigma$ - D relationship. She found that her sample could be described by a power law index of -3.5, but that the dispersion in surface brightness at any diameter was of order a factor of 10, something that is clear from the data plotted in Fig.\ \ref{fig_sigma_d}. Huang $et\: al.$\ \cite{huang94} obtained measurements of a group of small diameter SNRs in the starburst galaxy M82, and also found a $\Sigma$ - D power law index of -3.5$\pm$0.1. However, {Uro{\v s}evi{\'c}} $et\: al.$\ \cite{urosevic05} recently analyzed the data for 11 galaxies, and conclude that an exponent of -2 is consistent with the data for 10 of these galaxies. If this is correct, then it simply says that the luminosity of a SNR does not vary strongly with size, and that factors other than size are more important in determining the radio luminosity. To illustrate this, the radio luminosity of the same sample of objects is shown in the right panel of Fig.\ \ref{fig_sigma_d}. The M82 SNRs are clearly brighter than those in the rest of the sample, but the dispersion in luminosity of the other samples at any diameter is so large that there are no obvious trends with diameter. \begin{figure}[b] \includegraphics[scale=.38]{fig9a.eps} \includegraphics[scale=.38]{fig9b.eps} \caption{Left: The {H$\alpha$}\ luminosities of SNRs in M31 \cite{lee_m31} and M33 \cite{long10} as a function of diameter.
Right: The X-ray luminosities of SNRs in the LMC \cite{maggi15} and M33 \cite{long10}. Note that this is a semilog plot, unlike Fig. \ref{fig_sigma_d}. } \label{fig_xlum_diam} \end{figure} \subsection{X-ray - Optical Comparisons \index{ SNRs: X-ray/Optical Comparisons} } \begin{figure}[b] \includegraphics[scale=.38]{fig10a.eps}\includegraphics[scale=.38]{fig10b.eps} \caption{Left: The ratio of {H$\alpha$}\ to X-ray luminosities of SNRs and SNR candidates in M33 as a function of SNR diameter, as described by Long $et\: al.$\ \cite{long10}. Right: The {H$\alpha$}\ luminosities as a function of X-ray luminosity, also from Long $et\: al.$\ \cite{long10}. The blue data points are for SNRs that were detected in X-rays; the red data points represent upper limits for the X-ray detection.} \label{fig_ha} \end{figure} As we have discussed, optical emission from SNRs arises, in most cases, from radiative shocks driven into denser clouds in the ISM by the primary shock of a SN explosion, while X-ray emission arises from material heated by the primary shock. As a result, one might guess that there would not be much correlation of {H$\alpha$}\ luminosity with diameter or with X-ray luminosity. This is indeed the case. The left panel of Fig.\ \ref{fig_xlum_diam} shows {H$\alpha$}\ luminosities of SNRs and SNR candidates in M31 and M33 as a function of SNR diameter from the studies of Lee \& Lee \cite{lee_m31} and Long $et\: al.$\ \cite{long10}, respectively. The right panel of the same figure shows X-ray luminosities of SNRs in the LMC and M33 from the work of Maggi $et\: al.$\ \cite{maggi15} and Long $et\: al.$\ \cite{long10}. In both cases, there is a lot of dispersion in luminosity at smaller diameters, but the amount of dispersion decreases towards larger diameters. For the {H$\alpha$}\ images, the decrease in dispersion arises directly from the fact that optical searches for SNRs are surface brightness limited. The X-ray surveys, by contrast, tend to be flux limited.
The upper envelopes of the two plots are not subject to selection effects. Qualitatively at least, one can interpret the flatness of the upper envelope of the {H$\alpha$}\ luminosity distribution as being due to the fact that the emission arises from slower secondary shocks; the decline in the X-ray luminosities is due to the fact that at X-ray wavelengths, a SNR expanding into a denser medium cools to the point that the SNR is no longer detected even in soft X-rays. The {H$\alpha$}\ luminosities of SNRs are generally larger than their soft X-ray luminosities, as is indicated in Fig.\ \ref{fig_ha}, at least if simple count rate conversions are used to estimate X-ray luminosities. This is not a selection effect; if SNRs existed that populated the high X-ray luminosity - low {H$\alpha$}\ luminosity section of the right panel of Fig.\ \ref{fig_ha}, they would have been detected as X-ray SNRs, even if they had not been identified as SNRs from their optical spectra. There is clearly no correlation between X-ray and optical luminosity. \subsection{The progenitors of SNRs and the Type of the SN explosion \index{SNR progenitors}} The most basic fact one would like to know about a SNR is whether the SNR is the product of a Type Ia or a core-collapse SN. The best way to do this is to witness the SN and then to watch the development of the SNR, as we are currently able to do with SN1987a and a group of other less than 100 year old SNRs (see e. g. Milisavljevic $et\: al.$\ \cite{danny12}). Second best, perhaps, is to obtain spectra of the light echo of the SN explosion, as has been done for one SN in the LMC \cite{rest08}, confirming it to be the result of a type Ia explosion. The others are also thought to be of type Ia on the basis of their Balmer-dominated optical and Fe-rich X-ray spectra. The mere detection of a light echo tends to favor a Ia explosion because core collapse SNe are generally fainter than Ia SNe.
} However, it is much more common to attempt this determination from the properties of the SNR itself or by studies of the local stellar population. Core-collapse SN ejecta are rich in CNO-processed material, while ejecta from Ia SNe are rich in Fe, Si and other elements produced by explosive nucleosynthesis. This material is heated as the reverse shock traverses the ejecta, and slowly mixes with shocked interstellar gas as the SNR expands. It is more easily detectable in X-rays than at optical wavelengths since the material remains hot after the passage of the reverse shock and most optical emission is produced at the edge of the SNR in secondary shocks passing through material with interstellar abundances. What is required are X-ray spectra with sufficiently high signal to noise obtained with high enough spectral resolution to carry out a detailed abundance analysis, and in some cases, enough spatial resolution to isolate ejecta material in the SNR interior from material emitting at the shock front. With current instrumentation, this implies that X-ray typing of SNRs is really only possible for SNRs in the Galaxy, in the Magellanic Clouds, and for the brightest SNRs in M31 and M33. The first X-ray determinations of progenitor type in SNRs outside the Galaxy were made by Hughes $et\: al.$\ \cite{hughes95}, who used {\em ASCA}\ to show that two Balmer-dominated SNRs in the LMC were, as had been previously suggested by Tuohy $et\: al.$\ \cite{tuohy82}, the remnants of SNe of type Ia. As better X-ray spectra have been obtained, many more SNRs have been typed. Maggi $et\: al.$\ recently concluded from an analysis of the XMM X-ray spectra that about 20 SNRs in the LMC can be classified as core-collapse or Ia SNRs on the basis of their X-ray spectra, and that six others can be classified (as core-collapse objects) on the basis of other properties, mainly related to the existence of a pulsar or pulsar wind nebula.
They use these results to argue that the ratio of CC:Ia SNRs is between 1.2 and 1.8, which is smaller than the ratio of about 3:1 that Li $et\: al.$\ \cite{li11} measured from a volume-limited sample of SNe or from abundances derived from studies of gas in galaxy clusters. Although Maggi $et\: al.$\ consider the possibility that the unusual ratio of CC to Ia SNRs could be due to multiple core-collapse SNe exploding in the same star-formation region, they favor an explanation associated with the star formation history of the LMC. The progenitors of a small number of SNRs can also be typed on the basis of their optical spectra. These include Cas A \cite{kirshner77}, G292.0+1.8 \cite{goss79} and Puppis A \cite{winkler85} in the Galaxy, N132D \cite{morse95} in the LMC and E0102-72.3 \cite{blair00} in the SMC, all of which show spectra dominated by very strong oxygen lines, and many of which have other lines in their spectra, e.g. [Ne III], [Ne IV] and [Ar IV], not normally seen in SNRs. All of the young SNRs whose spectra are dominated by Balmer lines are most likely due to Ia explosions; a prerequisite for a Balmer-dominated shock is a partially neutral ISM, and the UV emission from type II explosions is so strong that it should completely ionize the surrounding ISM \cite{tuohy82}. (Portions of the Cygnus Loop, which is thought to be the result of a core-collapse explosion, are Balmer-dominated, but the majority of the filaments are radiative and this is a middle-aged SNR.) But SNRs which can be typed optically are rare. X-ray techniques are generally more powerful, and are likely to become even more powerful with the next generation of experiments, e. g. Athena \cite{athena}, which will include non-dispersive, high resolution microcalorimeters that will make elemental abundance determinations far more straightforward. An alternative way to learn about the nature of the progenitor of a SNR is to study the underlying stellar population.
In the Magellanic Clouds, for example, Badenes $et\: al.$\ \cite{badenes09} have shown that all of the core-collapse SNRs are associated with regions of active star formation, while three out of the four Ia SNRs are associated with old, metal-poor populations. Jennings $et\: al.$\ \cite{jennings12,jennings14} used HST data to construct color-magnitude diagrams of 33 M33 SNRs and 83 M31 SNRs and to estimate the age and mass of the progenitor in each case. They conclude that the distribution of masses is inconsistent with that expected from a Salpeter mass function; there are too few systems associated with very massive stars, suggesting an upper limit to the mass of stars that explode as SNe. \section{Conclusion} A great deal of progress in understanding SNRs and how they affect the Galaxy and the ISM has been made since it was recognized about 50 years ago that SNRs constitute a large fraction of the brighter radio sources in the Galaxy. Observationally, SNRs are very diverse, partly because the SNe that produce SNRs are diverse, but perhaps more importantly because the circumstellar and interstellar environments into which SNe explode are diverse. Large (but incomplete) samples of SNRs exist not only for the Galaxy but also for many nearby ($<$10 Mpc) galaxies. Though most of the extragalactic SNRs have been discovered optically (through interference filter imaging, which detects SNRs as emission nebulae with elevated [S II]:{H$\alpha$}\ ratios), it is important to continue to exploit techniques in other wavelength ranges -- radio, IR, X-ray, and $\gamma$-rays -- in the future. Searches in multiple wavelength bands help to assure that we have more complete SNR samples, and detection in multiple bands is the best way to be sure a SNR candidate is actually a SNR. Studies in multiple bands are the best way to understand samples as a whole.
Galactic samples are important, despite uncertainties in distance to individual objects and the effects of absorption in the plane; this is the only sample in which we can study individual objects in detail and probe the oldest, faintest objects. The extragalactic samples are important, even though we can learn less about individual objects, because the samples in each galaxy are all at the same distance and can be observed and compared much more uniformly than the Galactic sample. There is still much to do in terms of interpretation of the existing samples. Because of the diversity of SNe and the environments in which they evolve, it is not straightforward to interpret the existing data. Although one can certainly expand the numbers of galaxies in which SNRs are identified in the future, the path to better understanding of SNRs as a class of objects is a more complete multiwavelength study of SNRs in the nearby galaxies with known SNR samples, especially at X-ray and radio wavelengths. \begin{acknowledgement} Partial support for this work was provided by NASA through grants HST-GO-12462 and HST-GO-12762 issued by the Space Telescope Science Institute and through grant {\em Chandra}\ G0-13060 issued by the {\em Chandra}\ X-ray Center. My understanding of SNRs as a class is largely due to conversations with my many collaborators, most notably William P. Blair and P. Frank Winkler. \end{acknowledgement}
\section{Introduction} In \cite{FLZalternating} Friedl, Livingston and Zentner studied the knot concordance group $\mathcal{C}$ modulo the subgroup $\mathcal{C}_\text{alt} \subset \mathcal{C}$ spanned by alternating knots. In particular they ask the following question. \begin{Question} Which sums of torus knots are concordant to alternating knots? \end{Question} According to Murasugi \cite{MurasugiAlternatingKnots} the Alexander polynomial of an alternating knot is alternating, meaning that its coefficients are non-zero and alternate in sign. Using this criterion one can see that a $(p,q)$ torus knot is alternating if and only if $(p,q)=(2n+1,2)$ for some $n\geq 0$. Murasugi's theorem was re-proved by Ozsv\' ath and Szab\' o in \cite{OS8}, where they show that the knot Floer homology $\widehat{HFK}_{i,j}(K)$ of an alternating knot $K$ is supported on the diagonal $i-j= \sigma/2$, where $\sigma$ denotes the knot signature. As a consequence of this theorem one can further prove that a linear combination of torus knots is alternating if and only if it is a sum of $(2n+1,2)$ torus knots. Based on these facts we na\" ively formulate the following conjecture. \begin{conj}\label{mainconjecture} A sum of torus knots is concordant to an alternating knot if and only if it is a sum of $(2n+1,2)$ torus knots. \end{conj} Since both the Alexander polynomial and the rank of the knot Floer homology groups are \textit{not} concordance invariants, arguments based on these basic tools cannot be used to prove Conjecture \ref{mainconjecture}. It is therefore necessary to look for concordance invariants which admit relatively simple descriptions for alternating knots. In \cite{OSS4} Ozsv\' ath, Stipsicz and Szab\' o associate to a knot $K \subset S^3$ a continuous piecewise linear function $\Upsilon_K : [0, 2] \to \mathbb{R}$ only depending on the concordance type of $K$.
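Murasugi's criterion is straightforward to test in examples: the Alexander polynomial of $T_{p,q}$ is given by the standard formula $\Delta_{p,q}(t)=(t^{pq}-1)(t-1)/((t^p-1)(t^q-1))$, and the alternation condition can be checked directly. A minimal computational sketch (the helper names are ours, not from the cited sources):

```python
def x_pow_minus_1(n):
    """Coefficients (descending degree) of t^n - 1."""
    return [1] + [0] * (n - 1) + [-1]

def poly_mul(a, b):
    """Product of two integer polynomials in descending coefficients."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_div_exact(num, den):
    """Exact long division of integer polynomials (descending coefficients)."""
    num, quot = num[:], []
    while len(num) >= len(den):
        c = num[0] // den[0]
        quot.append(c)
        for i, d in enumerate(den):
            num[i] -= c * d
        num.pop(0)  # the leading coefficient is now zero
    assert all(c == 0 for c in num), "division was not exact"
    return quot

def alexander_torus(p, q):
    """Delta_{p,q}(t) = (t^{pq}-1)(t-1) / ((t^p-1)(t^q-1))."""
    num = poly_mul(x_pow_minus_1(p * q), x_pow_minus_1(1))
    den = poly_mul(x_pow_minus_1(p), x_pow_minus_1(q))
    return poly_div_exact(num, den)

def is_alternating_poly(coeffs):
    """Murasugi's condition: all coefficients non-zero, alternating in sign."""
    return all(c != 0 for c in coeffs) and \
        all(coeffs[i] * coeffs[i + 1] < 0 for i in range(len(coeffs) - 1))
```

For instance, $\Delta_{3,2}(t)=t^2-t+1$ passes the test, while $\Delta_{4,3}(t)=t^6-t^5+t^3-t+1$ fails it because of its vanishing coefficients, consistent with $T_{p,q}$ being alternating only for $(p,q)=(2n+1,2)$.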
Making use of the computation in \cite{OS8} one can prove \cite[Theorem 1.14]{OSS4} that for an alternating knot $K$ \begin{equation}\label{mainobstruction} \Upsilon_K(t)= \frac{\sigma(K)}{2} \cdot (1- |t-1|) \ . \end{equation} Based on the restriction imposed by Equation \eqref{mainobstruction}, Friedl, Livingston and Zentner \cite{FLZalternating} proved that torus knots of the form $(n+1,n)$ with $n \geq 3$ are linearly independent in $\mathcal{C}/\mathcal{C}_\text{alt}$. Their method can be adapted to prove the following. \begin{prop}\label{luckycases}Let $K= m_1 T_{p_1,q_1}\# \dots \# m_k T_{p_k,q_k}$ be a sum of torus knots. Suppose that $p_i >q_i$ for each $i$, and that no coefficient appears more than once in the list $\{q_1, \dots , q_k \}$. Then $K$ is concordant to an alternating knot if and only if it is a multiple of a $(2n+1,2)$ torus knot. \end{prop} Using the obstruction of Equation \eqref{mainobstruction} one can further see that the following holds. \begin{prop} \label{positive} A sum of positive torus knots is concordant to an alternating knot if and only if it is a sum of $(2n+1,2)$ torus knots. \end{prop} Note that Propositions \ref{luckycases} and \ref{positive} hold more generally for algebraic knots. It follows from Proposition \ref{positive} that in order to prove Conjecture \ref{mainconjecture} one has to deal with connected sums where both positive and negative torus knots occur. Going in this direction, as a by-product of his connected sum formula for knots in involutive knot Floer homology, Zemke \cite{zemke} proved that the knot $T_{ 6,5} \# -T_{ 4, 3} \# -T_{ 5,4}$ is not concordant to a thin knot, and hence not to an alternating knot. In \cite{allen}, using the Kim-Livingston secondary upsilon invariant \cite{KimLiv}, Allen proved that a knot of the form $K=T_{p+2,p}\# - T_{p+1,p}$, $p \geq 5$ odd, is not concordant to a thin knot.
In order to find further evidence for Conjecture \ref{mainconjecture} we study sums of two torus knots. Here is our main result. \begin{thm}\label{mainresult} If a non-alternating sum of two torus knots is concordant to an alternating knot then it is of the form $T_{6n+2,3}\#-T_{6n+1,3}$ with $n \geq 1$. \end{thm} We do not know if the knots in the family $T_{6n+2,3}\# -T_{6n+1,3}$ are concordant to alternating knots. However, even though their topological significance is unclear, these knots may serve as useful test cases for the implementation of further obstructions. We now summarize the crucial steps leading to Theorem \ref{mainresult}. A knot $K \subset S^3$ whose upsilon function satisfies Equation \eqref{mainobstruction} is said to be \textit{upsilon-alternating}. In Section \ref{sectionone} we give a complete characterization of upsilon-alternating sums of two torus knots. \begin{lem}\label{sums} Let $K$ be a non-slice, non-alternating, connected sum of two torus knots. If $K$ is upsilon-alternating then one of the following holds: \begin{enumerate} \item $K=T_{2mr+2,r} \# -T_{2mr+1,r}$ with $m\geq 1$, and $r\geq 5$ odd, \item $K=T_{6c-1,3} \# -T_{6c-2,3}$ with $c\geq 1$, \item $K=T_{6c+2,3} \# -T_{6c+1,3}$ with $c\geq 1$. \end{enumerate} \end{lem} Recall that, since torus knots are linearly independent in the concordance group, the non-sliceness assumption simply means that the connected sum is algebraically nontrivial, i.e.\ it is not of the form $T_{p,q}\# -T_{p,q}$. In Section \ref{sectiontwo}, using a connected sum formula for the Kim-Livingston secondary invariant \cite{alfieri1}, we prove the following theorem, which deals with the first family in Lemma \ref{sums}. \begin{thm}\label{uno} For $q \geq 1$, and $r\geq 5$ odd, $K=T_{qr+2,r} \# -T_{qr+1,r}$ is not concordant to a Floer thin knot. In particular, a knot of this form is not concordant to an alternating knot.
\end{thm} The methods of Theorem \ref{uno} cannot be used to prove that the knots in the other families of Lemma \ref{sums} are not concordant to alternating knots. In fact, we have the following proposition. \begin{prop}\label{failure}For $q \geq 1$ one can find an acyclic, $\mathbb{Z}$-graded, $(\mathbb{Z} \oplus \mathbb{Z})$-filtered chain complex $A_*$ and a filtered chain homotopy equivalence \[CFK^\infty(T_{3q+2,3}) \oplus A_* \simeq CFK^\infty(T_{3q+1,3}) \otimes CFK^\infty(T_{3,2}) \ . \] The same holds if we substitute $CFK^\infty$ and the tensor product with their involutive counterparts $CFKI^\infty$ \cite{involutive1,zemke}. \end{prop} Consequently, the knots belonging to families (2) and (3) of Lemma \ref{sums} (corresponding respectively to the cases when $q$ is odd and even) cannot be distinguished from a Floer thin knot by means of any upsilon type invariant \cite{alfieri1}. Studying the Heegaard Floer correction terms \cite{OS24} of double branched covers and using a result by Owens and Strle \cite{OwensStrle}, we prove the following theorem, which deals with the second family in Lemma \ref{sums}. \begin{thm}\label{firstfamily} Knots of the form $K=T_{6c-1,3} \# -T_{6c-2,3}$ are not concordant to alternating knots. \end{thm} Combining Lemma \ref{sums} with the results of Theorems \ref{uno} and \ref{firstfamily} we obtain a proof of Theorem \ref{mainresult}. {\small \subsection*{Acknowledgements} The authors would like to thank Andr\' as I. Stipsicz, Francesco Lin, Brendan Owens, Jennifer Hom, Irving Dai, and Ian Zemke for many useful conversations. Special thanks are also due to Andr\' as N\' emethi for generously sharing his expertise. The first author was partially supported by ERC grant LTDBud and by MPIM. The second author was partially supported by the NKFIH grant K112735.
\par} \section{Upsilon-alternating knots}\label{sectionone} \subsection{Preliminaries on the upsilon invariant} The upsilon invariant, introduced by Ozsv\' ath, Stipsicz and Szab\' o \cite{OSS4}, associates to a knot $K \subset S^3$ a continuous piecewise linear function $\Upsilon_K : [0, 2] \to \mathbb{R}$ with the following properties: \begin{itemize} \item (Invariance) $\Upsilon_K(t)$ is a knot concordance invariant, \item(Relation with $\tau$) $\Upsilon_K(t)=- t \cdot \tau(K) $ for $t$ near zero, where $\tau$ denotes the concordance invariant introduced by Ozsv\' ath and Szab\' o in \cite{tau}, \item (Symmetry) $\Upsilon_K(t)=\Upsilon_K(2-t)$ for all $t \in [0,2]$, \item (Additivity) if $K= K_1 \# K_2$ is a connected sum then \[\Upsilon_K(t)= \Upsilon_{K_1}(t) + \Upsilon_{K_2}(t) \ , \] \item(Mirror) $\Upsilon_{-K}(t)= - \Upsilon_{K}(t)$ where $-K$ denotes the mirror of $K$, \item (Slice Genus) if $g_s$ denotes the smooth slice genus then \[ \left| \Upsilon_K(t) \right| \leq t g_s(K) \ .\] \end{itemize} Thus $K \mapsto\Upsilon_K(t)$ descends to a group homomorphism from the concordance group to $C^0_{PL}[0,2]$, the vector space of continuous piecewise linear functions $[0,2] \to \mathbb{R}$. We will review the definition of $\Upsilon_K(t)$ in Section \ref{sectiontwo} following \cite{Livingston1}. In this section we only need Proposition \ref{orsobalosso} below which provides an algorithm for computing the upsilon function of torus knots. \begin{prop}[Bodn\' ar \& N\' emethi, Feller \& Krcatovich \cite{feller, Bodnar1}]\label{orsobalosso} Denote by $\Upsilon_{a,b}(t)$ the upsilon function of the torus knot $T_{a,b}$ with $a>b$, then \[ \Upsilon_{a,b}(t)= \Upsilon_{a-b,b}(t) +\Upsilon_{b+1,b}(t) \ .
\] Consequently, if $q_i$ and $r_i$ denote respectively the quotients and the remainders occurring in the Euclidean algorithm for $a$ and $b$ (so that $r_{-1}=a$, $r_{0}=b$, and $r_{i-1}= q_i r_i + r_{i+1}$), we have that \[ \Upsilon_{a,b}(t)= \sum_{i=0}^n q_i \cdot \Upsilon_{r_{i}+1, r_i}(t) \ .\] \end{prop} The functions $\Upsilon_{i+1,i}(t)$ can be explicitly computed: for $t \in [2n/i, (2n+2)/i]$ we have that $\Upsilon_{i+1,i}(t)=-n(n+1)-i(i-1-2n)t/2$. Notice that $\Upsilon_{i+1,i}(t)$ has its first singularity at $t=2/i$. It follows that: \begin{itemize} \item the functions $\left\{ \Upsilon_{i+1,i}(t) \right\}_{i=2}^\infty$ are linearly independent in $C^0_{PL}[0,2]$, \item if $K$ is a $(p,q)$ torus knot then $\Upsilon_K(t)$ has its first singularity at $t=2/\min(p,q)$. \end{itemize} \begin{proof}[Proof of Proposition \ref{luckycases}] The upsilon function of a linear combination \[K= m_1 T_{p_1,q_1}\# \dots \# m_k T_{p_k,q_k}\] ($p_i>q_i$) has its first singularity at $t=2/q$, where $q= \max \big\{q_i \ : \ m_i\not=0 \big\}$. Since the upsilon function of an alternating knot has at most one singularity, at $t=1$, $\Upsilon_K(t)$ has the form of the upsilon function of an alternating knot only if $q=q_1=2$, meaning that $K=m_1 \cdot T_{p_1,2}$. \end{proof} \begin{proof}[Proof of Proposition \ref{positive}] As a consequence of Proposition \ref{orsobalosso}, the upsilon function of a sum of positive torus knots $K$ can be uniquely written as a sum \[ \Upsilon_K(t)= \sum_{i=2}^\infty m_i \cdot \Upsilon_{i+1,i}(t) \] with finitely many non-zero $m_i$'s, $m_i \geq 0$, and $m_i=0$ if and only if $\Upsilon_{i+1,i}$ does not appear in any of the expressions of the upsilon functions of the summands of $K$ in terms of the basis $\{\Upsilon_{i+1,i} \}_{i=2}^{\infty}$ of Proposition \ref{orsobalosso}.
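As an aside, Proposition \ref{orsobalosso} together with the closed formula for $\Upsilon_{i+1,i}$ gives an effective procedure for evaluating the upsilon function of any positive torus knot. A minimal computational sketch, using exact rational arithmetic (function names are ours):

```python
from fractions import Fraction

def upsilon_consecutive(i, t):
    """Upsilon of T_{i+1,i} at t in [0,2]: on [2n/i, (2n+2)/i] it equals
    -n(n+1) - i(i-1-2n)t/2 (the two pieces agree at the breakpoints)."""
    t = Fraction(t)
    n = min(int(t * i / 2), i - 1)  # which linearity interval t lies in
    return -n * (n + 1) - Fraction(i * (i - 1 - 2 * n), 2) * t

def upsilon_torus(a, b, t):
    """Upsilon of the positive torus knot T_{a,b} (a > b coprime), via the
    Euclidean expansion Upsilon_{a,b} = sum_i q_i * Upsilon_{r_i+1, r_i}."""
    total = Fraction(0)
    while b > 1:
        q, r = divmod(a, b)      # one step of the Euclidean algorithm
        total += q * upsilon_consecutive(b, t)
        a, b = b, r
    return total
```

For example, `upsilon_torus(3, 2, 1)` returns $-1$ and `upsilon_torus(5, 3, 1)` returns $-3$; the expansion also makes the additivity $\Upsilon_{5,3}=\Upsilon_{4,3}+\Upsilon_{3,2}$ visible numerically.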
Since the $\Upsilon_{i+1,i}$'s are linearly independent and have their first singularity at $t=2/i$, we have that $\Upsilon_K(t)$ is a multiple of $f(t)= (1-|1-t|)$ if and only if $m_i=0$ for $i >2$. This forces $K$ to be a sum of $(n,2)$ torus knots, proving the claim. \end{proof} \subsection{Upsilon-alternating linear combinations of two torus knots} Linear combinations of two torus knots with the upsilon function of an alternating knot can be characterized as follows. \begin{prop}\label{upsilon} Let $K$ be a linear combination of two torus knots. Then $K$ is upsilon-alternating if and only if (up to mirroring) one of the following holds \begin{enumerate} \item $K$ is slice (since torus knots are linearly independent in the concordance group this can only happen if $K$ is zero as a linear combination), \item $K$ is alternating, more specifically it is of the form $K=aT_{n,2}\#bT_{m,2}$ for some $a, b \in \mathbb{Z}$ and $m,n >0$ odd, \item $K=aT_{cbr+1,r}\# -bT_{car+1,r}$ with $a,b,c>0$, $r \geq 3$, and either: \begin{itemize} \item $r$ is even, \item $r$ is odd, and $c$ is even, \item $r$ and $c$ are odd, $a$ and $b$ are even, \end{itemize} \item $K=aT_{cbr+2,r}\# -bT_{car+2,r}$ with $a,b,c>0$, $r \geq 3$ odd, and either \begin{itemize} \item $c$ is even, \item $c$ is odd, $a$ and $b$ are even, \item $c$ and $a$ are odd, and $r \equiv 1 \ (\text{mod 4})$, \end{itemize} \item $K=aT_{cbr+2, r}\# -bT_{car+1,r}$ with $a,b, c>0$, $r \geq 3$, and either: \begin{itemize} \item $r$ is odd, and either $c$ is even, or $a$ and $b$ are even, \item $b$ and $c$ are odd, $a$ is even, and $r \equiv 1 \ (\text{mod 4})$, \item $a,b$ and $c$ are odd, $r \equiv -1 \ (\text{mod 4})$, and $b(r-1)=2a$. \end{itemize} \end{enumerate} \end{prop} \begin{proof} Let $K$ be a non-zero linear combination of two torus knots.
We first characterize those $K$ with $\Upsilon_K(t)$ a multiple of $f(t)=1-|1-t|$, then we compute their signature to prove that in fact the relation $\Upsilon_K(t)= \sigma/2 \cdot (1-|1-t|)$ holds only in the cases displayed in the statement. Suppose that $\Upsilon_K(t)$ has at most one singularity, at $t=1$, meaning that $\Upsilon_K'(t)$ is discontinuous at most at $t=1$. Notice that this is the same as saying that $\Upsilon_K(t)$ is a multiple of $\Upsilon_{3,2}(t)=-(1-|1-t|)$. Note that as a consequence of Proposition \ref{positive} and Proposition \ref{luckycases} we can assume that $K$ is of the form $aT_{m,n}\# -bT_{m',n}$ for some positive integers $a, b$, and a pair of coprime positive integers $(m,n)$ and $(m',n)$. Assume $m>n$ and $m'>n$. Denote by $\textbf{r}=(m,n, r_1, \dots, r_s, 1)$, $\textbf{q}=(q_0, \dots, q_s)$ and $\textbf{r}'=(m',n, r_1', \dots, r_{s'}', 1)$, $\textbf{q}'=(q_0', \dots, q_{s'}')$ the vectors of residues and quotients of the Euclidean algorithm for the pairs $(m,n)$ and $(m',n)$ respectively. Set $r_{-1}=m$, $r_{-1}'=m'$, $r_{0}=r_{0}'=n$ and $r_{s+1}=r_{s'+1}'=1$ so that \[ r_{i-1}= q_i r_i +r_{i+1} \ \ \ r_{j-1}'= q_j' r_j' +r_{j+1}' \ , \] for $i=0, \dots , s$ and $j=0, \dots, s'$. According to Proposition \ref{orsobalosso} we have \[\Upsilon_{m,n}(t)= \sum_{i=0}^s q_i \cdot \Upsilon_{r_i+1, r_i}(t) \ \ \ \ ; \ \ \ \Upsilon_{m',n}(t)= \sum_{i=0}^{s'} q_i' \cdot \Upsilon_{r_i'+1, r_i'}(t) \] and consequently \begin{equation}\label{important} \Upsilon_K(t)= a \left(\sum_{i=0}^s q_i \cdot\Upsilon_{r_i+1, r_i}(t) \right) - b \left( \sum_{j=0}^{s'} q_j' \cdot \Upsilon_{r_j'+1, r_j'}(t) \right)\ . \end{equation} We now want to solve the functional equation $a\Upsilon_{m,n}(t)- b\Upsilon_{m',n}(t)= C \cdot \Upsilon_{3,2}(t)$ for $m$ and $m'$. We distinguish three cases. \textit{Case I.} $r_s=r_{s'}'>2$.
By linear independence of the $\Upsilon_{i+1,i}$'s, from Equation \eqref{important} we can conclude that $s=s'$, $r_i=r_i'$ for $i=0, \dots, s$ and $a \mathbf{q}= b \mathbf{q}'$. Assume that $s \geq 1$. By imposing the condition \[q_sr_s+1=r_{s-1}=r_{s-1}'=q_s'r_s'+1=q_s'r_s+1 \] one can conclude that $q_s=q_s'$, $a=b$ and $m=m'$, i.e.\ $K=aT_{m,n}\# -aT_{m,n}$ is slice. If $s=0$ we can conclude that $K$ is of the form $aT_{nbc+1,n}\# -bT_{nac+1,n}$. \textit{Case II.} $r_s=2$ and $r_{s'}'>2$. By inspecting Equation \eqref{important} we see that $s'=s-1$, $r_i=r_i'$ for $i=0, \dots, s-1$ and $b \mathbf{q}'= a \cdot (q_0, \dots, q_{s-1}) $. If $s\geq 2$ then we obtain \[q_{s-1}r_{s-1}+1=r_{s-2}=r_{s-2}'=q_{s-1}'r_{s-1}'+2=q_{s-1}'r_{s-1}+2\] which is a contradiction $(\text{mod } r_{s-1})$. Thus $s=1$ and $K$ is of the form $aT_{nbc+2,n}\# -bT_{nac+1,n}$ with $n$ odd. \textit{Case III.} $r_s=r_{s'}'=2$. Because of Equation \eqref{important} we have $s=s'$, $r_i=r_i'$ for $i=0, \dots, s$ and $a \cdot (q_0, \dots, q_{s-1})= b \cdot (q_0', \dots, q_{s-1}')$. Assume that $s \geq 2$. By imposing the condition \[q_{s-1}r_{s-1}+2=r_{s-2}=r_{s-2}'=q_{s-1}'r_{s-1}'+2=q_{s-1}'r_{s-1}+2\] one can conclude that $q_{s-1}=q_{s-1}'$, $a=b$ and $m=m'$, i.e.\ $K=aT_{m,n}\# -aT_{m,n}$, and hence that $K$ is slice. Thus either $s=0$ and $K=aT_{m,2}\# -bT_{m',2}$ ($m$ and $m'$ odd), or $s=1$ and $K=aT_{nbc+2,n}\# -bT_{nac+2,n}$. Summarising, we have shown that if $K$ is a non-slice linear combination of two torus knots with $\Upsilon_K(t)= C \cdot (1-|1-t|)$ then either: \begin{itemize} \item $K=aT_{n,2}\# bT_{m,2}$ with $a, b \in \mathbb{Z}$ and $m,n >0$ odd, \item $K=aT_{cbr+1,r}\# -bT_{car+1,r}$ with $a,b,c>0$, and $r \geq 3$, \item $K=aT_{cbr+2,r}\# -bT_{car+2,r}$ with $a,b,c>0$, and $r \geq 3$ odd, \item or $K=aT_{cbr+2, r}\# -bT_{car+1,r}$ with $a,b, c>0$, and $r \geq 3$ odd.
\end{itemize} In order to conclude that the arithmetic conditions in the statement are satisfied notice that if $\Upsilon_K(t)= C \cdot (1-|1-t|)$ then $C=-\tau(K)$. Consequently, such a $K$ is upsilon-alternating if and only if the relation $\tau(K)= -\sigma(K)/2$ holds. Therefore we need to evaluate the signature of the knots in the list above and compare it with the value of their $\tau$ invariant. The $\tau$ invariant of these knots is particularly easy to compute: $\tau$ is additive, and for the positive $(p,q)$ torus knot Ozsv\' ath and Szab\' o \cite{tau} proved that $\tau=(p-1)(q-1)/2$. The signature of torus knots on the other hand can be inductively computed as follows \cite{SignaturesTorusKnots}. Let $\sigma(q,r)$ denote the signature of the negative $(q,r)$ torus knot $-T_{q,r}$. Extend $\sigma(q,r)$ to the set of pairs $(p,r) \in \mathbb{Z}^2$ with $r<0$, $p>|r|$, and $\gcd(p,r)=1$ by setting $\sigma(p,r)=\sigma(p,p-r)$. Then \begin{equation} \sigma(q,r)=(-1)^m \sigma(r, (-1)^m k ) +f(m,r) \ , \label{torusknotssignature} \end{equation} where $m$ and $k$ respectively denote the quotient and the remainder of the Euclidean division of $q$ by $r$ ($q=m r + k$ with $r>k>0$), and $f(m,r)$ is the function defined by Table \ref{table1}. By means of Equation \eqref{torusknotssignature} the computation of the signature of the knots listed above reduces to that of $\sigma(r,r-1)$ and $\sigma(r,r-2)$. An inductive argument again based on Equation \eqref{torusknotssignature} shows that $\sigma(r,r-1)= (r-1)^2/2$ when $r\geq3$ is odd, and $\sigma(r,r-1)=(r^2-4)/2$ otherwise. Furthermore, $ \sigma(r,r-2)=(r-1)^2/2-2$ if $r\equiv -1 \ (\text{mod }4)$, and $\sigma(r,r-2)=(r-1)^2/2$ if $r\equiv 1 \ (\text{mod }4)$. With this said, the claimed arithmetic conditions follow immediately.
\end{proof} \begin{table}[t] \begin{tabular}{ | c | c | c |} \hline \phantom{$\Big[$} $f(m,r) \ $ & $r$ odd & $r$ even \\ \hline \phantom{$\Big[$} $m$ odd $ \ $ & $\frac{1}{2}\cdot (m+1)(r^2-1) \ $ & $\frac{1}{2} \cdot (mr^2+r^2-4) \ $ \\ \hline \phantom{$\Big[$} $m$ even $ \ $ & $\frac{1}{2} \cdot (mr^2-m)$ & $\frac{1}{2} \cdot mr^2$ \\ \hline \end{tabular} \vspace{0.3cm} \caption{\label{table1} The correction term $f(m,r)$ appearing in Equation \eqref{torusknotssignature}.} \vspace{-0.5cm} \end{table} As an immediate corollary of Proposition \ref{upsilon} one gets Lemma \ref{sums}. \section{Obstructions from the Kim-Livingston secondary invariant} \label{sectiontwo} In \cite{OS2} Ozsv\' ath and Szab\' o introduced a package of three-manifold invariants called Heegaard Floer homology. This circle of ideas was then used by the same authors \cite{OS7}, and independently by Rasmussen \cite{Ras1}, to introduce a knot invariant called knot Floer homology. For a concise introduction to these topics see \cite{HFKsurvey}. Recall that knot Floer homology associates to a knot $K \subset S^3$ a finitely generated, $\mathbb{Z}$-graded, $(\mathbb{Z} \oplus \mathbb{Z})$-filtered chain complex $CFK^\infty(K)= (\bigoplus_{\mathbf{x} \in B} \mathbb{Z}_2[U, U^{-1}] \cdot \mathbf{x}, \partial )$ with the following properties \begin{itemize} \item $\partial$ is $\mathbb{Z}_2[U, U^{-1}]$-linear and given a basis element $\mathbf{x} \in B$, $\partial \mathbf{x} = \sum_\mathbf{y} n_{\mathbf{x}, \mathbf{y}}U^{m_{\mathbf{x},\mathbf{y}}} \cdot \mathbf{y}$ for suitable coefficients $ n_{\mathbf{x}, \mathbf{y}} \in \mathbb{Z}_2$, and non-negative exponents $m_{\mathbf{x}, \mathbf{y}} \geq 0$, \item the multiplication by $U$ drops the homological (Maslov) grading $M$ by two, and the filtration levels (denoted by $A$ and $j$) by one, \item $H_*(CFK^\infty(K))= \mathbb{Z}_2[U, U^{-1}]$ graded so that $\text{deg}U=-2$. \end{itemize} In \cite{OS7} Ozsv\' ath and Szab\' o show that the filtered chain homotopy type of $CFK^\infty(K)$ only depends on the isotopy class of $K$.
The knot Floer complex $CFK^\infty(K)$ of a knot $K \subset S^3$ can be pictorially described as follows: \begin{enumerate} \item picture each $\mathbb{Z}_2$-generator $U^m \cdot \mathbf{x}$ of $CFK^\infty(K)$ on the planar lattice $\mathbb{Z} \times \mathbb{Z} \subset \mathbb{R}^2$ in position $\left(A(\mathbf{x})-m, -m \right) \in \mathbb{Z} \times \mathbb{Z}$, \item label each $\mathbb{Z}_2$-generator $U^m \cdot \mathbf{x}$ of $CFK^\infty(K)$ with its Maslov grading $M(\mathbf{x})-2m\in \mathbb{Z}$, \item connect two $\mathbb{Z}_2$-generators $U^n \cdot \mathbf{x}$ and $U^m \cdot \mathbf{y} $ with a directed arrow if in the differential of $U^n \cdot \mathbf{x}$ the coefficient of $U^m \cdot \mathbf{y}$ is non-zero. \end{enumerate} The Ozsv\' ath-Stipsicz-Szab\' o upsilon invariant is defined starting from this picture as follows. For $t \in [0,2]$ and $r \in \mathbb{R}$ let $\mathcal{F}_{t,r}$ be the sub-complex of $CFK^\infty(K)$ spanned by the generators contained in the half-plane defined by the inequality $t/2 \cdot A+ (1-t/2)j\leq r$. Then $\Upsilon_K(t)=-2 \cdot \gamma_K(t)$ where $\gamma_K(t)$ is the minimum $r$ for which the inclusion $\mathcal{F}_{t,r} \hookrightarrow CFK^\infty(K)$ induces a surjective map on $H_0$. As shown by Kim and Livingston in \cite{KimLiv}, other concordance invariants can be obtained by looking at which filtration levels certain expected homologies are realised. This leads to a two variable concordance invariant $\Upsilon^{(2)}_{K,t}(s)$. Given $t \in [0,2]$ and a sufficiently small $\delta>0$, let $\mathcal{Z}^+$ and $\mathcal{Z}^-$ denote the sets of cycles with Maslov grading zero generating $H_0(CFK^\infty(K))$ and contained in $\mathcal{F}_{t+\delta, \gamma_K (t+\delta) }$ and $\mathcal{F}_{t-\delta, \gamma_K (t-\delta) }$ respectively.
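For staircase complexes (recalled in the proof of Theorem \ref{uno} below) every Maslov grading zero generator is a cycle generating $H_0$, so $\gamma_K(t)$ is simply the minimum of the linear functional $\frac{t}{2}A+(1-\frac{t}{2})j$ over those generators. A sketch of this computation, with the $(A,j)$-coordinates of the grading-zero generators supplied by hand (the coordinate lists below, read off from the staircases of $T_{3,2}$ and $T_{5,3}$, are ours):

```python
from fractions import Fraction

def upsilon_from_staircase(vertices, t):
    """Upsilon_K(t) = -2*gamma_K(t): for a staircase complex, gamma_K(t) is
    the minimum of (t/2)*A + (1 - t/2)*j over the generating 0-cycles,
    given here as a list of (A, j) lattice coordinates."""
    t = Fraction(t)
    return -2 * min(t / 2 * A + (1 - t / 2) * j for A, j in vertices)

# (A, j)-coordinates of the Maslov grading zero generators:
TREFOIL = [(1, 0), (0, 1)]                # staircase of T_{3,2}
T_5_3 = [(4, 0), (2, 1), (1, 2), (0, 4)]  # staircase of T_{5,3}
```

One recovers $\Upsilon_{T_{3,2}}(t)=-t$ for $t\in[0,1]$, and $\Upsilon_{T_{5,3}}(1)=-3$.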
Since $H_0(CFK^\infty(K)) \simeq \mathbb{Z}_2$ has only one non-zero element, for given $\xi^+ \in \mathcal{Z}^+$ and $\xi^- \in \mathcal{Z}^-$ there exists a chain $\beta \in CFK^\infty (K)$ with Maslov grading one such that $\partial \beta = \xi^+ - \xi^-$. We denote by $\gamma_{K,t}(s)$ the minimum $r$ for which $\mathcal{F}_{s,r}$ contains a $1$-chain realising a homology between a cycle in $\mathcal{Z}^+$ and one in $\mathcal{Z}^-$. Set $\Upsilon_{K,t}^{(2)}(s)= -2 \cdot (\gamma_{K,t}(s)-\gamma_K(t))$. Notice that $\Upsilon_{K,t}^{(2)}(s)$ is not defined if $\mathcal{Z}^+ \cap \mathcal{Z}^- \not= \emptyset $; in such a case we set $\Upsilon_{K,t}^{(2)}(s)=-\infty$. \begin{proof}[Proof of Theorem \ref{uno}] This is an argument along the lines of \cite[Proposition 1.2]{alfieri1}. More precisely, suppose by contradiction that for some $q \geq 1$, $r \geq 5$ odd, there exists an alternating knot $J$ for which the torus knot $T_{qr+2,r}$ is concordant to $T_{qr+1,r}\# J$. Then, \[ \Upsilon_{T_{qr+2,r}, 4/r}^{(2)}\left(\frac{4}{r}\right)= \Upsilon_{T_{qr+1,r}\# J, 4/r}^{(2)}\left(\frac{4}{r}\right)= \Upsilon_{T_{qr+1,r}, 4/r}^{(2)}\left(\frac{4}{r}\right) \ ,\] where the first equality holds because $\Upsilon^{(2)}$ is a concordance invariant, and the second one is a consequence of \cite[Theorem 6.2]{alfieri1}. Notice that \cite[Theorem 6.2]{alfieri1} can be applied at $t=4/r<1$ (here $r\geq 5$ by assumption) since the upsilon function of an alternating knot can have a singularity only at $t=1$. We now show that \[\Upsilon_{T_{qr+2,r}, 4/r}^{(2)}\left(\frac{4}{r}\right) \not= \Upsilon_{T_{qr+1,r}, 4/r}^{(2)}\left(\frac{4}{r}\right) \ . \] Given positive integers $a_1, \dots, a_{2k}$ we construct a finitely generated, $\mathbb{Z}$-graded, $(\mathbb{Z}\oplus \mathbb{Z})$-filtered chain complex $C_*(a_1, \dots, a_{2k} )$ as follows.
Set \[ C_*(a_1, \dots, a_{2k} )= \mathbb{Z}_2\{x_0, \dots , x_k, y_0, \dots , y_{k-1} \} \otimes \mathbb{Z}_2[U,U^{-1}] \ ,\] and consider the differential \[ \begin{cases} \ \partial x_i= 0 \ \ \ i=0, \dots , k \\ \ \partial y_i=x_i+ x_{i+1} \ \ \ i=0, \dots , k-1 \end{cases} \ . \] Define \[ \ \begin{cases} \ A(x_i)=n_i \\ \ j(x_i)= m_i\\ \ M(x_i)=0 \end{cases} \ \text{ and } \ \ \ \ \ \begin{cases} \ A(y_i)=n_i \\ \ j(y_i)=m_{i+1} \\ \ M(y_i)=1 \end{cases} \] where \[ \ \begin{cases} \ n_i=g-\sum_{j=1}^{i}a_{2j} \\ \ n_0=g \end{cases} \ \ \ \ \ \ \begin{cases} \ m_i=\sum_{j=1}^i a_{2j-1} \\ \ m_0=0 \end{cases} \ , \] with $g=\sum_{j=1}^{k}a_{2j}$, and coherently extend these gradings to $\mathbb{Z}_2\{x_0, \dots , x_k, y_0, \dots , y_{k-1} \} \otimes \mathbb{Z}_2[U,U^{-1}]$ so that the multiplication by $U$ drops the Maslov grading $M$ by two, and the Alexander filtration $A$ as well as the algebraic filtration $j$ by one. The resulting complex is the staircase complex of parameters $a_1, \dots , a_{2k}$ denoted by $C_*(a_1, \dots , a_{2k})$. The knot Floer complex of a $(p,q)$ torus knot has a representative of the form $ C_*(a_1, \dots, a_{2k} )$. Let $g=(p-1)(q-1)/2$ denote the four-dimensional genus of the $(p,q)$ torus knot. The semigroup generated by $p$ and $q$, $S_{p,q}=\{np+mq \ | \ n, m \in \mathbb{Z}_{\geq 0} \}$, determines a coloring of $\{0, \dots , 2g-1\}$: color by red the numbers in $S_{p,q} \cap \{0, \dots , 2g-1\} $ and by blue the ones in its complement $(\mathbb{Z} \setminus S_{p,q}) \cap \{0, \dots , 2g-1 \}$. By counting the gaps between blue and red numbers as suggested by Figure \ref{semigroup} we get two sequences of numbers $r_1, \dots, r_k$ and $b_1, \dots , b_k$. In \cite{Peters} Peters shows that $CFK^\infty(T_{p,q})\simeq C_*(r_1,b_1, \dots , r_k, b_k)$.
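The gap-counting recipe just described amounts to run-length encoding the coloring of $\{0,\dots,2g-1\}$; a minimal computational sketch (the function name is ours):

```python
def staircase_params(p, q):
    """Staircase parameters r_1, b_1, ..., r_k, b_k of CFK^infty(T_{p,q}):
    color k in {0, ..., 2g-1} 'red' if k lies in the numerical semigroup
    generated by p and q, 'blue' otherwise, and record the run lengths."""
    g = (p - 1) * (q - 1) // 2
    # brute-force the semigroup elements below 2g (enough for the coloring)
    semigroup = {n * p + m * q
                 for n in range(2 * g) for m in range(2 * g)
                 if n * p + m * q < 2 * g}
    runs, prev = [], None
    for k in range(2 * g):
        color = k in semigroup
        if color == prev:
            runs[-1] += 1
        else:
            runs.append(1)
            prev = color
    return runs
```

For the $T_{5,3}$ example of Figure \ref{semigroup} this returns $[1,2,1,1,2,1]$, and for the trefoil it returns $[1,1]$, matching $CFK^\infty(T_{3,2})=C_*(1,1)$.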
\begin{figure} \hspace{0.1cm} \xygraph{ !{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::} !~-{@{-}@[|(2.5)]} !{(-2,1.5) }*+{\dots}="x0" !{(-2.7,1.5) }*+{\bullet}="x1" !{(-4.2,1.5) }*+{\circ}="x2" !{(-5.7,1.5) }*+{\circ}="x3" !{(-7.2,1.5) }*+{\bullet}="x4" !{(-8.7,1.5) }*+{\circ}="x5" !{(-10.2,1.5) }*+{\bullet}="x6" !{(-11.7,1.5) }*+{\bullet}="x7" !{(-13.2,1.5) }*+{\circ}="x8" !{(-2.7,1.9) }*+{\textcolor{blue}{7}} !{(-4.2,1.9) }*+{\textcolor{red}{6}} !{(-5.7,1.9) }*+{\textcolor{red}{5}} !{(-7.2,1.9) }*+{\textcolor{blue}{4}} !{(-8.7,1.9) }*+{\textcolor{red}{3}} !{(-10.2,1.9) }*+{\textcolor{blue}{2}} !{(-11.7,1.9) }*+{\textcolor{blue}{1}} !{(-13.2,1.9) }*+{\textcolor{red}{0}} "x2"-"x1" "x3"-"x2" "x4"-"x3" "x5"-"x4" "x6"-"x5" "x7"-"x6" "x7"-"x8" } \caption{\label{semigroup} The elements of the semigroup generated by $5$ and $3$ correspond to the white dots. The staircase of the torus knot $T_{5,3}$ can be computed from the coloring above by counting the gaps between blue (black-dotted) and red (white-dotted) numbers. In this case $r_1=1, r_2=1, r_3=2$, $b_1=2, b_2=1, b_3=1$, and $CFK^\infty(T_{5,3})= C_*(\textcolor{red}{1},\textcolor{blue}{2},\textcolor{red}{1}, \textcolor{blue}{1},\textcolor{red}{2}, \textcolor{blue}{1})$.} \end{figure} The semigroup of the torus knot $T_{qr+1,r}$ is given by \[ S_{qr+1,r}= \bigcup_{j=0}^{r-2} \big\{n \equiv j \ (\text{mod } r), \ j(qr+1)\leq n \leq (r-1)qr \big\} \ \cup \ \mathbb{Z}_{\geq(r-1)qr} \ .\] Thus $CFK^\infty(T_{qr+1,r})=C_*(1, r-1,\dots, 1, r-1, 2 , r-2, \dots , 2, r-2, \dots)$ where the pair $(i, r-i)$ appears $q$ times. Notice that at $t=4/r$ we have $\mathcal{Z}^+=\{x_q\}$ and $\mathcal{Z}^-=\{x_{2q}\}$. A chain with Maslov grading one realising a homology between $x_q$ and $x_{2q}$ is given by $\beta =\sum_{i=q}^{2q-1} y_i$.
Thus \begin{align*} \gamma_{T_{qr+1,r},\frac{4}{r}} \left(\frac{4}{r}\right) &= \min_\xi \left\{\frac{2}{r}\cdot A(\beta + \partial \xi)+\frac{r-2}{r} \cdot j(\beta + \partial \xi) \right\} \\ &= \frac{2}{r}\cdot A(\beta)+\frac{r-2}{r} \cdot j(\beta)\\ &= q(r-2)+2 \left( \frac{r-2}{r} \right) \ ,\end{align*} where the minimum in the first line is taken over all $\xi \in CFK^\infty(T_{qr+1,r})$ with Maslov grading two. Here the second equality is due to the fact that the differential of $CFK^\infty(T_{qr+1,r})$ vanishes on chains with even Maslov grading. The third one is a direct computation. Summing up we obtain \begin{align*} \Upsilon^{(2)}_{T_{qr+1,r},\frac{4}{r}} \left(\frac{4}{r}\right) &= -2 \left( \gamma_{T_{qr+1,r},\frac{4}{r}} \left(\frac{4}{r}\right)- \gamma_{T_{qr+1,r}}\left(\frac{4}{r}\right)\right) \\ &= -2 \left( \gamma_{T_{qr+1,r},\frac{4}{r}} \left(\frac{4}{r}\right) + \frac{1}{2} \Upsilon_{qr+1,r}\left(\frac{4}{r}\right)\right)\\ &= -2 \left( q(r-2)+2 \left( \frac{r-2}{r} \right) -q(r-2)\right) \ , \end{align*} which leads to \[\Upsilon^{(2)}_{T_{qr+1,r},\frac{4}{r}} \left(\frac{4}{r}\right)= -4 \cdot \frac{r-2}{r} \ . \] For the torus knot $T_{qr+2,r}$ we have $S_{qr+2,r}=S_1 \cup S_2 \cup \mathbb{Z}_{\geq (r-1)(qr+1)}$, where \[ S_1= \bigcup_{j=0}^{(r-1)/2} \big\{n \equiv 2j \ (\text{mod } r), \ j(qr+2)\leq n \leq (r-1)(qr+1) \big\} \] and \[ S_2= \bigcup_{j=1}^{(r-1)/2} \Big\{n \equiv 2j-1 \ (\text{mod } r), \ \tfrac{(r+2j-1)(qr+2)}{2}\leq n \leq (r-1)(qr+1) \Big\} \ .\] Thus $CFK^\infty(T_{qr+2,r})=C_*(1, r-1,\dots, 1, r-1, 1, 1, 1, r-3, \dots , 1, 1, 1 , r-3, \dots)$ where the pattern $(1, \dots , 1, r-(2j+1))$ with $2j+1$ many $1$'s appears $q$ times. In this case at $t=4/r$ we have $\mathcal{Z}^+=\{x_q\}$ and $\mathcal{Z}^-=\{x_{3q}\}$. A chain with Maslov grading one realising a homology between $x_q$ and $x_{3q}$ is given by $\beta =\sum_{i=q}^{3q-1} y_i$.
Following the same argument we did for $CFK^\infty(T_{qr+1,r})$, we conclude that \[ \gamma_{T_{qr+2,r},\frac{4}{r}} \left(\frac{4}{r}\right)= \frac{2}{r} A(\beta)+\frac{r-2}{r}j(\beta) = q(r-2)+ \frac{r-1}{r}+ \left( 2- \frac{6}{r} \right) \ . \] Thus \begin{align*} \Upsilon^{(2)}_{T_{qr+2,r},\frac{4}{r}} \left(\frac{4}{r}\right) &= -2 \left( \gamma_{T_{qr+2,r},\frac{4}{r}} \left(\frac{4}{r}\right)- \gamma_{T_{qr+2,r}}\left(\frac{4}{r}\right)\right) \\ &= -2 \left( \gamma_{T_{qr+2,r},\frac{4}{r}} \left(\frac{4}{r}\right) + \frac{1}{2} \Upsilon_{qr+2,r}\left(\frac{4}{r}\right)\right)\\ &= -2 \left( 2- \frac{6}{r} \right)\ , \end{align*} which leads to \[\Upsilon^{(2)}_{T_{qr+2,r},\frac{4}{r}} \left(\frac{4}{r}\right)= -4 \cdot \frac{r-3}{r} \ . \] Hence, $\Upsilon_{T_{qr+2,r}, 4/r}^{(2)}\left(\frac{4}{r}\right) > \Upsilon_{T_{qr+1,r}, 4/r}^{(2)}\left(\frac{4}{r}\right)$, proving the claim. \end{proof} The strategy used in the proof of Theorem \ref{uno} cannot be adapted to deal with the other families of Lemma \ref{sums}. To see why this is the case recall the following result. \begin{thm}[Hom \cite{HFKsurvey}]\label{jen} If two knots $K_1$ and $K_2$ are concordant then there exist $\mathbb{Z}$-graded, $(\mathbb{Z} \oplus \mathbb{Z})$-filtered, acyclic chain complexes $A_1$ and $A_2$ such that \begin{equation} \label{HomDependence} CFK^\infty (K_1) \oplus A_1 \simeq CFK^\infty(K_2) \oplus A_2 \ , \end{equation} where $\simeq$ denotes filtered chain homotopy equivalence. \QEDB \end{thm} In \cite{involutive1} Hendricks and Manolescu show that the knot Floer complex $CFK^\infty(K)$ naturally comes with an order four automorphism $\iota_K$ squaring to the Sarkar map \cite{SarkarBasepoint}. In the same paper they prove that the filtered chain homotopy type of the pair $CFKI^\infty(K)=(CFK^\infty(K), \iota_K)$ is an invariant of $K$. In fact, in \cite{involutive2} Hendricks and Hom prove that for $CFKI^\infty$ an analogue of Theorem \ref{jen} holds.
\begin{thm}[Hendricks \& Hom \cite{involutive2}]\label{involutive2} If two knots $K_1$ and $K_2$ are concordant then there exist $\mathbb{Z}$-graded, $(\mathbb{Z} \oplus \mathbb{Z})$-filtered, acyclic chain complexes $A_1$ and $A_2$ together with involutions $\iota_{A_1}$ and $\iota_{A_2}$ such that $(CFK^\infty (K_1), \iota_{K_1}) \oplus (A_1,\iota_{A_1}) \simeq (CFK^\infty(K_2), \iota_{K_2}) \oplus (A_2,\iota_{A_2} )$. \QEDB \end{thm} We now prove that Equation \eqref{HomDependence} holds in the remaining cases of Lemma \ref{sums}. \begin{proof}[Proof of Proposition \ref{failure}]With the same notation as in the proof of Theorem \ref{uno} we have that $CFK^\infty (T_{3q+1,3})=C_*(1,2, \dots ,1 ,2, 2, 1 , \dots , 2, 1),$ $CFK^\infty (T_{3,2})=C_*(1,1),$ $CFK^\infty (T_{3q+2,3})=C_*(1,2, \dots ,1, 2,1,1, 2, 1 , \dots , 2, 1)$. Denote by $z_0, \dots , z_{2q}$ the generators of the staircase chain complex of $CFK^\infty (T_{3q+1,3})$ so that $M(z_{2i})=0$, $M(z_{2i+1})=1$, $\partial z_{2i}=0$, and $\partial z_{2i+1}=z_{2i}+z_{2i+2}$. Similarly, denote by $a, b$ and $c$ the generators of $CFK^\infty(T_{3,2})$ so that $M(a)=M(b)=0$, $M(c)=1$, $\partial a = \partial b =0$, and $\partial c = a +b$. Set \[z'_i= \begin{cases} \ a \otimes z_i \ \ \ \ \ \text{ if } i=0, \dots , q\\ \ c \otimes z_q \ \ \ \ \ \text{ if } i=q+1 \\ \ b \otimes z_{i-2} \ \ \ \text{ if } i=q+2, \dots , 2q+2 \end{cases} \ \ \ \begin{cases} \alpha_i= c \otimes z_{2i+1} \\ \beta_i= a \otimes z_{2i+1} + c \otimes z_{2i} \\ \gamma_i= b \otimes z_{2i+1} + c \otimes z_{2i+2} \\ \epsilon_i= b \otimes z_{2i} + a \otimes z_{2i+2} \\ \end{cases} \ .
\] and notice that \begin{itemize} \item $CFK^\infty(T_{3q+1, 3})\otimes CFK^\infty(T_{3,2})= \text{Span}_{\mathbb{Z}_2[U, U^{-1}]}\langle z'_0, \dots , z'_{2q+2} \rangle \oplus A_*$ where \[A_*= \bigoplus_{i=0}^{q-1} \text{Span}_{\mathbb{Z}_2[U, U^{-1}]}\langle \alpha_i, \beta_i, \gamma_i, \epsilon_i \rangle \ ,\] \item $\partial \alpha_i= \beta_i + \gamma_i$, $\partial \beta_i= \partial \gamma_i= \epsilon_i$, $\partial \epsilon_i=0$, and consequently $A_*$ is acyclic (being a sum of acyclic complexes), \item $M(z'_{2i})=0$, $M(z'_{2i+1})=1$, $\partial z'_{2i}=0$, and $\partial z'_{2i+1}=z'_{2i}+z'_{2i+2}$, which means that $\text{Span}_{\mathbb{Z}_2[U, U^{-1}]}\langle z'_0, \dots , z'_{2q+2} \rangle $ is a staircase complex. In fact, a careful check of the Alexander and the algebraic filtrations shows that \[CFK^\infty(T_{3q+2,3}) = \text{Span}_{\mathbb{Z}_2[U, U^{-1}]}\langle z'_0, \dots , z'_{2q+2} \rangle \ .\] \end{itemize} Summing up we get that $CFK^\infty(T_{3q+1, 3})\otimes CFK^\infty(T_{3,2})= CFK^\infty(T_{3q+2,3}) \oplus A_*$ with $A_*$ acyclic. Notice that this can also be seen as a consequence of \cite[Lemma 3.18]{StaircaseDependence}. To prove the corresponding statement for $CFKI^\infty$ we need to check that \begin{enumerate} \item the involution of $CFKI^\infty(T_{3q+1, 3})\widetilde{\otimes} CFKI^\infty(T_{3,2})$ restricts to $\iota_{T_{3q+2,3}}$ on the subcomplex spanned by $z'_0, \dots , z'_{2q+2}$, \item $\iota_{T_{3q+1, 3}} \times \iota_{T_{3, 2}}$ leaves $A_*$ invariant. \end{enumerate} Here $CFKI^\infty(T_{3q+1, 3})\widetilde{\otimes} CFKI^\infty(T_{3,2}) = (CFK^\infty(T_{3q+1, 3})\otimes CFK^\infty(T_{3,2}) , \iota_{T_{3q+1, 3}} \times \iota_{T_{3, 2}} )$ denotes the product introduced by Zemke in \cite{zemke}. We will adopt Zemke's notation for the rest of this proof. According to \cite[Section 7]{involutive1} the knot involution of a $(p,q)$ torus knot acts on the associated staircase complex as a reflection about the line $x=y$.
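The differential bookkeeping behind these relations can be verified mechanically over $\mathbb{Z}_2$. The following sketch does so for the sample value $q=2$, indexing the quadruples $(\alpha_i,\beta_i,\gamma_i,\epsilon_i)$ by $i=0,\dots,q-1$ so that every subscript stays within $z_0,\dots,z_{2q}$ (the encoding of generators as pairs is our own; the $U$-action plays no role in these identities).

```python
# Verification over F_2, for q = 2, of the differential relations behind the
# splitting CFK(T_{3q+1,3}) (x) CFK(T_{3,2}) = staircase <z'_i> (+) A_*.
q = 2

def d_z(j):                     # staircase: dz_{2i} = 0, dz_{2i+1} = z_{2i} + z_{2i+2}
    return [] if j % 2 == 0 else [j - 1, j + 1]

def d_letter(x):                # T(3,2): da = db = 0, dc = a + b
    return ["a", "b"] if x == "c" else []

def d_chain(chain):             # Leibniz rule mod 2; chains are sets of pairs (x, j)
    out = set()
    for x, j in chain:
        for y in d_letter(x):
            out ^= {(y, j)}
        for k in d_z(j):
            out ^= {(x, k)}
    return out

zp = [{("a", i)} for i in range(q + 1)] + [{("c", q)}] \
   + [{("b", i)} for i in range(q, 2 * q + 1)]
alpha = [{("c", 2 * i + 1)} for i in range(q)]
beta  = [{("a", 2 * i + 1), ("c", 2 * i)} for i in range(q)]
gamma = [{("b", 2 * i + 1), ("c", 2 * i + 2)} for i in range(q)]
eps   = [{("b", 2 * i), ("a", 2 * i + 2)} for i in range(q)]

for i in range(q):              # the relations listed in the bullet points
    assert d_chain(alpha[i]) == beta[i] ^ gamma[i]
    assert d_chain(beta[i]) == eps[i] and d_chain(gamma[i]) == eps[i]
    assert d_chain(eps[i]) == set()
for i in range(q + 1):          # the z'_i span a staircase complex
    assert d_chain(zp[2 * i + 1]) == zp[2 * i] ^ zp[2 * i + 2]
    assert d_chain(zp[2 * i]) == set()
assert d_chain(zp[2 * q + 2]) == set()
```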
Thus, \begin{align*} \iota_{T_{3q+1, 3}} \times \iota_{T_{3, 2}}(z'_i)&= \iota_{T_{3q+1, 3}} \otimes \iota_{T_{3, 2}} z'_i + U^{-1} (\phi_{T_{3q+1, 3}} \otimes \psi_{T_{3, 2}}) \circ (\iota_{T_{3q+1, 3}} \otimes \iota_{T_{3, 2}}) z'_i\\ &= b \otimes z_{2q-i}+ U^{-1} (\phi_{T_{3q+1, 3}} \otimes \psi_{T_{3, 2}}) b \otimes z_{2q-i} \\ &=z'_{2q+2-i} \end{align*} where the third identity is due to the fact that $\phi_{T_{3, 2}}$ vanishes on $b$. Similarly (using the fact that $\phi_{T_{3, 2}}$ vanishes on $a$ and $\psi_{T_{3q+1, 3}}$ does so on $z_q$) one proves that $\iota_{T_{3q+1, 3}} \times \iota_{T_{3, 2}}(z'_{q+1})= z'_{q+1}$, and $\iota_{T_{3q+1, 3}} \times \iota_{T_{3, 2}}(z'_{i})= z'_{2q+2-i}$ for $i=q+2, \dots , 2q+2$, leading to a proof of $(1)$. The proof of $(2)$ is an analogous computation. \end{proof} \section{Obstructions from the Owens-Strle theorem} \label{sectionthree} In this section we deal with the second family of Lemma \ref{sums}. We start by discussing an example in full detail, then we proceed with the necessary computations for the whole family. The main goal of this section is to prove Theorem \ref{firstfamily}. \subsection{A first example} It follows from Proposition \ref{upsilon} that the knot $K=T_{5,3} \# -T_{4,3}$ is upsilon-alternating. In order to prove that $T_{5,3} \# -T_{4,3}$ is not concordant to an alternating knot we will make use of the following result by Owens and Strle \cite{OwensStrle}. \begin{thm}[Owens \& Strle]\label{OwSt} Let $Y$ be a rational homology sphere with $|H_1(Y;\mathbb{Z})|=\delta$. If $Y$ bounds a negative-definite four-manifold $X$ and either $\delta$ is square-free or there is no torsion in $H_1(X; \mathbb{Z})$ then \[\max _{\mathfrak{s} \in \text{Spin}^c(Y)} 4d(Y,\mathfrak{s})\geq \begin{cases} \ 1-1/\delta \text{ if } \delta \text{ is odd,}\\ \ 1 \text{ if } \delta \text{ is even} \end{cases}\ .
\] The inequality is strict unless the intersection form of $X$ is $(n-1)\langle-1\rangle\oplus \langle -\delta\rangle$. Moreover, the two sides of the inequality are congruent modulo $4/\delta$. \QEDB \end{thm} More precisely, we will need the following lemma. \begin{lem} \label{criterion} Let $K \subset S^3$ be a knot with $\delta= |\det (K)|$ square-free. Set \[ d_{\max} (K)= \max _{\mathfrak{s} \in \text{Spin}^c(\Sigma_2(K))} 4d(\Sigma_2(K),\mathfrak{s}) \ , \ \ d_{\min} (K)= \min _{\mathfrak{s} \in \text{Spin}^c(\Sigma_2(K))} 4d(\Sigma_2(K),\mathfrak{s}) \ .\] If $K$ is concordant to an alternating knot then \[ d_{\max} (K) \geq 1- 1/\delta \ \ \text{ and } \ \ 1/\delta -1 \geq d_{\min}(K) \ .\] \end{lem} \begin{proof} Suppose that $K$ is concordant to an alternating knot $J$. Let $W$ be the double cover of $S^3\times I$ branched along a concordance between $K$ and $J$. It is well known that $W$ is a rational homology cobordism between $\Sigma_2(K)$ and $\Sigma_2(J)$. By taking the double cover of the four-ball branched along pushed-in copies of the black and the white surfaces of an alternating diagram of $J$ we obtain simply connected definite four-manifolds bounding $\Sigma_2(J)$. By gluing these simply connected definite pieces to $W$ along $\Sigma_2(J)$ we obtain a positive-definite filling $X^+$ and a negative-definite filling $X^-$ of $\Sigma_2(K)$. Since $\delta =|\det(K)|=|H_1(\Sigma_2(K);\mathbb{Z})|$ is a square-free odd number, we can apply Theorem \ref{OwSt} to the pairs $(X,Y)=(X^-,\Sigma_2(K))$ and $(X,Y)=(-X^+,-\Sigma_2(K))$, and obtain the claimed inequalities. \end{proof} \begin{prop}\label{casofacile} The knot $T_{5,3}\# -T_{4,3}$ is not concordant to an alternating knot. \end{prop} \begin{proof} Set $K=T_{5,3}\# -T_{4,3}$.
Since $3=\det(K)=|H_1(\Sigma_2(K);\mathbb{Z})|$ is square-free, as a consequence of Lemma \ref{criterion} we have that \begin{equation}\label{OwStOb} d_{\max} (K) \geq 2/3 \ \ \ \text{ and } \ \ \ -2/3 \geq d_{\min}(K) \ . \end{equation} We conclude by showing that one of these inequalities does not hold. Notice that $\Sigma_2(K)=\Sigma(2,3,5)\sharp -\Sigma(2,3,4)$ has three $\text{Spin}^c$ structures. These are obtained by taking the sum of the spin structure of the Poincar\'e sphere $\Sigma(2,3,5)$ with the three $\text{Spin}^c$ structures $\{\mathfrak{s},\mathfrak{t},\overline{\mathfrak{t}}\}$ of $-\Sigma(2,3,4)$. The Brieskorn spheres $\Sigma(2,3,5)$ and $\Sigma(2,3,4)$ are respectively the boundaries of the negative-definite $E_8$ and $E_6$ plumbings. The $d$-invariants of these graph manifolds were computed in \cite{OSGraphManifolds}; we have \[d(\Sigma(2,3,5))=2, \ d(\Sigma(2,3,4),\mathfrak{s})=\frac{3}{2}, \text{ and } \ d(\Sigma(2,3,4),\mathfrak{t})=d(\Sigma(2,3,4),\overline{\mathfrak{t}})=\frac{1}{6} \ . \] Thus, the $d$-invariants of $\Sigma_2(K)$ are $\{1/2,11/6,11/6\}$ and we can conclude that $d_{\max} (K) = 22/3$ and $d_{\min}(K) = 2$. This contradicts the second inequality in \eqref{OwStOb} and proves the claim. \end{proof} \subsection{The family $T_{6c-1,3}\# -T_{6c-2,3}$} In this subsection we prove Proposition \ref{firstfamily}. We do this by generalising the argument used for $T_{5,3}\# -T_{4,3}$ to the knots $K_c=T_{6c-1,3} \# -T_{6c-2,3}$. The double branched cover of each $K_c$ is a difference of two Brieskorn spheres, namely $\Sigma_2(K_c)=\Sigma(2,3,6c-1)\sharp -\Sigma(2,3,6c-2)$. Since $|\det(K_c)|=3$ the branched double covers $\Sigma_2(K_c)$ have three $\text{Spin}^c$ structures (one $\text{Spin}$ and two conjugate $\text{Spin}^c$ structures). We will prove that the associated correction terms are $\{1/2,11/6,11/6\}$ independently of $c$.
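The $d$-invariant bookkeeping in the proof of Proposition \ref{casofacile} can be replayed with exact fractions. The sketch below assumes only the displayed $d$-invariants of the two Brieskorn pieces, together with the standard facts that $d$ adds under connected sums and changes sign under orientation reversal, and checks that the bound of Lemma \ref{criterion} on $d_{\min}$ fails.

```python
# d-invariants of Sigma(2,3,5) # -Sigma(2,3,4) from the displayed values of
# the two pieces; d adds under # and changes sign under orientation reversal.
from fractions import Fraction as F

d_235 = [F(2)]                          # d(Sigma(2,3,5))
d_234 = [F(3, 2), F(1, 6), F(1, 6)]     # d(Sigma(2,3,4)) on s, t, conj(t)

four_d = sorted(4 * (x - y) for x in d_235 for y in d_234)
delta = 3
# Lemma `criterion` would force min 4d <= 1/delta - 1 = -2/3, which fails:
assert min(four_d) > F(1, delta) - 1
```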
Then the same argument given in the previous section will lead to a contradiction with the inequalities of Lemma \ref{criterion}. \begin{lem} For $c\geq 1$ we have $d(\Sigma(2,3,6c-1))=2$. \end{lem} \begin{proof}This is well known. One way to carry out the computation is via the so-called knot surgery formula. Note that $\Sigma(2,3,6c-1)$ can be obtained as $1/c$-surgery along the trefoil knot. As explained in \cite{NiWu}, these correction terms can be computed via the formula $d(S^3_{1/c}(T_{3,2}))=2V_0(T_{3,2})$, where $V_0$ is the first of a sequence of concordance invariants $\{V_i\}_{i\geq 0}$ introduced by Rasmussen in \cite{Ras1}. For torus knots (and more generally for algebraic knots) these invariants can be computed combinatorially from the gap function of the semigroup \cite{LivingstonCuspidalCurves}. For the trefoil knot one has $V_0=1$ and $V_i=0$ for $i\geq 1$. \end{proof} In order to compute the correction terms of the Brieskorn spheres $\Sigma(2,3,6c-2)$ we will make use of the algorithm introduced by Ozsv\'ath and Szab\'o in \cite{OSGraphManifolds}, which we briefly recall. The manifolds $\Sigma(2,3,6c-2)$ can be described as the boundary of a negative-definite plumbing of spheres $X_\Gamma$ with associated star-shaped, three-legged graph $\Gamma$. These particular graphs have at most one bad vertex in the sense of \cite[Definition 1.1]{OSGraphManifolds}. The correction terms of such a plumbed three-manifold $Y_{\Gamma}= \partial X_\Gamma$ can be computed according to the following formula \cite[Corollary 1.5]{OSGraphManifolds} \begin{equation}\label{correctionterms} d(Y_{\Gamma},\mathfrak{s})=\max\frac{K^2+|\Gamma|}{4}\ , \end{equation} where the maximum is taken over all characteristic vectors $K \in H^2(X_\Gamma,\mathbb{Z})$ representing a $\text{Spin}^c$ structure restricting to $\mathfrak{s}$.
The algorithm given in \cite{OSGraphManifolds} describes how to find a characteristic vector $K$ which maximises the right-hand side of Equation \eqref{correctionterms}. Let $\Gamma$ be a negative-definite plumbing graph with at most one bad vertex. Recall that $K \in H^2(X_\Gamma, \mathbb{Z})= \text{Hom} (H_2(X_\Gamma, \mathbb{Z}), \mathbb{Z})$ is characteristic for the intersection pairing $Q_\Gamma$ of $X_\Gamma$ if \[\langle K , \alpha \rangle \equiv \alpha^2\text{ mod } 2\] for every $\alpha \in H_2(X_\Gamma ; \mathbb{Z})= \bigoplus_{v \in \Gamma} \mathbb{Z} \cdot v $. We denote by $\text{Char}(Q_\Gamma)$ the set of characteristic vectors of $Q_{\Gamma}$. We say that $K_0 \in \text{Char}(Q_\Gamma)$ is \emph{admissible} if \[ m(v)+2 \leq \langle K_0, v \rangle \leq -m(v)\] for every $v \in \Gamma$, where $m(v)$ denotes the weight of the vertex $v$. Given an admissible vector $K_0$ one can inductively construct a sequence $(K_0, K_1, \dots , K_n)$ in which a term $K_i$ is obtained from its predecessor $K_{i-1}$ by summing twice the Poincar\'e dual $PD(v)$ of a vertex $v$ of $\Gamma$ such that $\langle K_{i-1} , v \rangle = -m(v)$. We will refer to the operation $K \mapsto K+ 2PD(v)$ as a \emph{flip move} at the vertex $v$. A sequence $(K_0, \dots , K_n)$ is said to terminate in a \emph{full-path} if one of the following holds \begin{enumerate} \item $m(v) \leq \langle K_n, v \rangle \leq -m(v)-2$ for every vertex $v$, \item $ \langle K_n, v \rangle > -m(v)$ for some vertex $v$. \end{enumerate} If the former holds we say that the full-path is \emph{good}, otherwise we say that it is \emph{bad}. It follows from \cite[Proposition 3.2]{OSGraphManifolds} that the maximum in Equation \eqref{correctionterms} is achieved by an admissible characteristic vector representing the $\text{Spin}^c$ structure $\mathfrak{s}$ and initiating a good full-path. In the following lemma we collect some useful remarks that will be extensively used in the proof of Lemma \ref{famiglia1}.
These remarks already appeared in the literature, see for example \cite{SomeBreiskorn}. \begin{lem}\label{remarks} Let $\Gamma$ be a negative-definite plumbing graph. \begin{enumerate} \item If $K_0$ initiates a bad full-path then any other full-path starting with $K_0$ is bad. \item If $\Gamma' \subset \Gamma$ is a connected subgraph and $K_0' \in \text{Char}(X_{\Gamma'})$ is a characteristic vector starting a bad full-path in $\Gamma'$ then any characteristic vector $K_0 \in \text{Char}(X_{\Gamma})$ restricting to $K_0'$ on $H_2(X_{\Gamma'}; \mathbb{Z})$ starts a bad full-path on $\Gamma$. \item Suppose that $\Gamma' \subset \Gamma$ is a connected subgraph whose vertices are all $-2$-weighted. Then for any good full-path $(K_0, \dots , K_n)$ we have that $\langle K_0, v \rangle = 2$ for at most one vertex $v \in \Gamma'$. \end{enumerate} \end{lem} \begin{lem}\label{famiglia1} For $c\geq 1$ the Brieskorn sphere $\Sigma(2,3,6c-2)$ has one spin structure $\mathfrak{s}$ and two conjugate $\text{Spin}^c$ structures $\mathfrak{t}$ and $\overline{\mathfrak{t}}$. We have \[ d(\Sigma(2,3,6c-2),\mathfrak{s})=\frac{3}{2} \ , \text{ and } \ d(\Sigma(2,3,6c-2),\mathfrak{t})=d(\Sigma(2,3,6c-2),\overline{\mathfrak{t}})=\frac{1}{6} \ . \] \end{lem} \begin{proof} Let us assume that $c\geq 2$. We start by describing in full detail the case $c=2$.
The Brieskorn sphere $\Sigma(2,3,10)$ can be described via the following negative-definite plumbing graph \[ \xygraph{ !{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::} !~-{@{-}@[|(2.5)]} !{(-1.2,1.5) }*+{\bullet}="x" !{(-2.7,1.5) }*+{\bullet}="x1" !{(-4.2,1.5) }*+{\bullet}="x2" !{(1.5,3) }*+{\bullet}="a1" !{(3,3) }*+{\bullet}="am" !{(1.5,0) }*+{\bullet}="c1" !{(3,0) }*+{\bullet}="cm" !{(-1.2,1.9) }*+{-2} !{(-2.7,1.9) }*+{-2} !{(-4.2,1.9) }*+{-3} !{(1.5,3.4) }*+{-2} !{(1.5,0.4) }*+{-2} !{(3,3.4) }*+{-2} !{(3,0.4) }*+{-2} "x"-"c1" "x"-"a1" "a1"-"am" "c1"-"cm" "x1"-"x" "x2"-"x1" } \] We represent characteristic vectors by recording their value on the vertices of the plumbing graph. For example the expression $$ \left[ \begin{array}{lllll} &&&0&0\\ 1&0&0&&\\ &&&0&0\\ \end{array} \right] $$ denotes the characteristic vector which assumes the value 1 at the $-3$-weighted vertex and vanishes on all the other vertices. According to Lemma \ref{remarks}, if a characteristic vector starts a good full-path then its value on the $-3$-weighted vertex is in $\{\pm 1,3\}$ and it is non-vanishing on at most one more vertex (in this case the corresponding value is necessarily equal to 2). Therefore, we may list these vectors as follows \begin{align*} & \left[ \begin{array}{lllll} &&&0&0\\ \alpha&0&0&&\\ &&&0&0\\ \end{array} \right] , \left[ \begin{array}{lllll} &&&0&0\\ \alpha&2&0&&\\ &&&0&0\\ \end{array} \right] , \left[ \begin{array}{lllll} &&&0&0\\ \alpha&0&2&&\\ &&&0&0\\ \end{array} \right], \left[ \begin{array}{lllll} &&&2&0\\ \alpha&0&0&&\\ &&&0&0\\ \end{array} \right],\\ & \left[ \begin{array}{lllll} &&&0&2\\ \alpha&0&0&&\\ &&&0&0\\ \end{array} \right] , \left[ \begin{array}{lllll} &&&0&0\\ \alpha&0&0&&\\ &&&2&0\\ \end{array} \right], \left[ \begin{array}{lllll} &&&0&0\\ \alpha&0&0&&\\ &&&0&2\\ \end{array} \right], \end{align*} where $\alpha\in\{\pm 1,3\}$.
The only characteristic vectors which start good full-paths are $$ \left[ \begin{array}{lllll} &&&0&0\\ \pm1&0&0&&\\ &&&0&0\\ \end{array} \right], \left[ \begin{array}{lllll} &&&0&2\\ -1&0&0&&\\ &&&0&0\\ \end{array} \right], \left[ \begin{array}{lllll} &&&0&0\\ -1&0&0&&\\ &&&0&2\\ \end{array} \right]. $$ The first two vectors belong to a good full-path of length one. The third vector belongs to the following good full-path: \begin{align*} & \left[ \begin{array}{lllll} &&&0&\textbf{2}\\ -1&0&0&&\\ &&&0&0\\ \end{array} \right] \sim \left[ \begin{array}{rrrrr} &&&\textbf{2}&-2\\ -1&0&0&&\\ &&&0&0\\ \end{array} \right] \sim \left[ \begin{array}{rrrrr} &&&-2&0\\ -1&0&\textbf{2}&&\\ &&&0&0\\ \end{array} \right] \sim \\ &\left[ \begin{array}{rrrrr} &&&0&0\\ -1&\textbf{2}&-2&&\\ &&&2&0\\ \end{array} \right] \sim \left[ \begin{array}{rrrrr} &&&0&0\\ 1&-2&0&&\\ &&&\textbf{2}&0\\ \end{array} \right] \sim \left[ \begin{array}{rrrrr} &&&0&0\\ 1&-2&\textbf{2}&&\\ &&&-2&2\\ \end{array} \right] \sim \\ &\left[ \begin{array}{rrrrr} &&&2&0\\ 1&0&-2&&\\ &&&0&\textbf{2}\\ \end{array} \right] \sim \left[ \begin{array}{rrrrr} &&&\textbf{2}&0\\ 1&0&-2&&\\ &&&2&-2\\ \end{array} \right] \sim \left[ \begin{array}{rrrrr} &&&-2&2\\ 1&0&0&&\\ &&&\textbf{2}&-2\\ \end{array} \right] \sim \\ &\left[ \begin{array}{rrrrr} &&&-2&2\\ 1&0&\textbf{2}&&\\ &&&-2&0\\ \end{array} \right] \sim \left[ \begin{array}{rrrrr} &&&0&2\\ 1&\textbf{2}& -2&&\\ &&&0&0\\ \end{array} \right] \sim \left[ \begin{array}{rrrrr} &&&0& \textbf{2}\\ 3&-2& 0&&\\ &&&0&0\\ \end{array} \right] \sim \\ &\left[ \begin{array}{rrrrr} &&&\textbf{2}& -2\\ 3&-2& 0&&\\ &&&0&0\\ \end{array} \right] \sim \left[ \begin{array}{rrrrr} &&&-2& 0\\ 3&-2& \textbf{2}&&\\ &&&0&0\\ \end{array} \right] \sim \left[ \begin{array}{rrrrr} &&&0& 0\\ 3&0& -2&&\\ &&&\textbf{2}&0\\ \end{array} \right] \sim \\ &\left[ \begin{array}{rrrrr} &&&0& 0\\ 3&0& 0&&\\ &&&-2&\textbf{2}\\ \end{array} \right] \sim \left[ \begin{array}{rrrrr} &&&0& 0\\ \textbf{3}&0& 0&&\\ 
&&&0&-2\\ \end{array} \right] \sim \left[ \begin{array}{rrrrr} &&&0& 0\\ -3&\textbf{2}& 0&&\\ &&&0&-2\\ \end{array} \right] \sim \\ &\left[ \begin{array}{rrrrr} &&&0& 0\\ -1&-2& \textbf{2}&&\\ &&&0&-2\\ \end{array} \right] \sim \left[ \begin{array}{rrrrr} &&&\textbf{2}& 0\\ -1&0& -2&&\\ &&&2&-2\\ \end{array} \right] \sim \left[ \begin{array}{rrrrr} &&&-2& 2\\ -1&0& 0&&\\ &&&\textbf{2}&-2\\ \end{array} \right] \sim \\ &\left[ \begin{array}{rrrrr} &&&-2& 2\\ -1&0& \textbf{2}&&\\ &&&-2&0\\ \end{array} \right] \sim \left[ \begin{array}{rrrrr} &&&0& 2\\ -1&\textbf{2}&-2&&\\ &&&0&0\\ \end{array} \right] \sim \left[ \begin{array}{rrrrr} &&&0&\textbf{2}\\ 1&-2&0&&\\ &&&0&0\\ \end{array} \right] \sim \\ &\left[ \begin{array}{rrrrr} &&&\textbf{2}&-2\\ 1&-2&0&&\\ &&&0&0\\ \end{array} \right] \sim \left[ \begin{array}{rrrrr} &&&-2&0\\ 1&-2&\textbf{2}&&\\ &&&0&0\\ \end{array} \right] \sim \left[ \begin{array}{rrrrr} &&&0&0\\ 1&0&-2&&\\ &&&\textbf{2}&0\\ \end{array} \right] \sim \\ &\left[ \begin{array}{rrrrr} &&&0&0\\ 1&0&0&&\\ &&&-2&\textbf{2}\\ \end{array} \right] \sim \left[ \begin{array}{rrrrr} &&&0&0\\ 1&0&0&&\\ &&&0&-2\\ \end{array} \right] . \end{align*} In the full-path above the boldfaced coefficients are the ones associated with the vertices on which the flip move is performed. Because of the obvious symmetry of the plumbing graph we also have a good full-path $$\left[ \begin{array}{lllll} &&&0&0\\ -1&0&0&&\\ &&&0&2\\ \end{array} \right] \sim \dots \sim \left[ \begin{array}{rrrrr} &&&0&-2\\ 1&0&0&&\\ &&&0&0\\ \end{array} \right] . $$ Via a straightforward but tedious computation one can find bad full-paths for all the other possible characteristic vectors. Here we illustrate a specific example which will be useful later on in this proof.
\begin{align*} & \left[ \begin{array}{rrrrr} &&&0&0\\ \textbf{3}&0&0&&\\ &&&0&0\\ \end{array} \right] \sim \left[ \begin{array}{rrrrr} &&&0&0\\ -3&\textbf{2}&0&&\\ &&&0&0\\ \end{array} \right] \sim \left[ \begin{array}{rrrrr} &&&0&0\\ -1&-2&\textbf{2}&&\\ &&&0&0\\ \end{array} \right] \sim \\ & \left[ \begin{array}{rrrrr} &&&\textbf{2}&0\\ -1&0&-2&&\\ &&&\textbf{2}&0\\ \end{array} \right] \sim \left[ \begin{array}{rrrrr} &&&-2&2\\ -1&0&\textbf{2}&&\\ &&&-2&2\\ \end{array} \right] \sim \left[ \begin{array}{rrrrr} &&&0&\textbf{2}\\ -1&2&-2&&\\ &&&0&\textbf{2}\\ \end{array} \right] \sim \\ & \left[ \begin{array}{rrrrr} &&&\textbf{2}&-2\\ -1&2&-2&&\\ &&&\textbf{2}&-2\\ \end{array} \right] \sim \left[ \begin{array}{rrrrr} &&&-2&0\\ -1&2&\textbf{2}&&\\ &&&-2&0\\ \end{array} \right] \sim \left[ \begin{array}{rrrrr} &&&0&0\\ -1&4&-2&&\\ &&&0&0\\ \end{array} \right] . \end{align*} When $c\geq 3$ the Brieskorn sphere $\Sigma(2,3,6c-2)$ can be described via the following negative-definite plumbing graph \[ \xygraph{ !{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::} !~-{@{-}@[|(2.5)]} !{(-1.2,1.5) }*+{\bullet}="x" !{(-2.7,1.5) }*+{\bullet}="x1" !{(-4.2,1.5) }*+{\bullet}="x2" !{(-5.7,1.5) }*+{\bullet}="x3" !{(-7.2,1.5) }*+{\dots}="x4" !{(-8.7,1.5) }*+{\bullet}="x5" !{(1.5,3) }*+{\bullet}="a1" !{(3,3) }*+{\bullet}="am" !{(1.5,0) }*+{\bullet}="c1" !{(3,0) }*+{\bullet}="cm" !{(-1.2,1.9) }*+{-2} !{(-2.7,1.9) }*+{-2} !{(-4.2,1.9) }*+{-3} !{(-5.7,1.9) }*+{-2} !{(-8.7,1.9) }*+{-2} !{(1.5,3.4) }*+{-2} !{(1.5,0.4) }*+{-2} !{(3,3.4) }*+{-2} !{(3,0.4) }*+{-2} "x"-"c1" "x"-"a1" "a1"-"am" "c1"-"cm" "x1"-"x" "x2"-"x1" "x3"-"x2" "x4"-"x3" "x5"-"x4" } \] where the length of the leftmost $-2$-chain is $c-2$. 
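The flip-move search described above is mechanical, so it can be scripted as a sanity check. The sketch below (with our own vertex ordering, and a greedy choice of flips, which is harmless by Lemma \ref{remarks}(1)) runs it on the $c=2$ graph for $\Sigma(2,3,10)$:

```python
# The flip-move algorithm on the c = 2 plumbing graph for Sigma(2,3,10).
# Vertex order (our own choice): x2(-3), x1(-2), x(-2), a1, am, c1, cm (all -2);
# a characteristic vector is stored as its values on the vertices.
WEIGHTS = [-3, -2, -2, -2, -2, -2, -2]
EDGES = [(0, 1), (1, 2), (2, 3), (3, 4), (2, 5), (5, 6)]
ADJ = {v: [] for v in range(len(WEIGHTS))}
for u, v in EDGES:
    ADJ[u].append(v)
    ADJ[v].append(u)

def full_path(K, max_steps=1000):
    """Follow flip moves K -> K + 2 PD(v) until the path terminates.

    By Lemma `remarks`(1) the good/bad outcome does not depend on the order
    in which the flips are performed, so a greedy choice suffices."""
    K = list(K)
    for _ in range(max_steps):
        if any(K[v] > -WEIGHTS[v] for v in ADJ):
            return "bad"
        hot = [v for v in ADJ if K[v] == -WEIGHTS[v]]
        if not hot:                      # m(v) <= <K,v> <= -m(v)-2 everywhere
            return "good"
        v = hot[0]
        K[v] += 2 * WEIGHTS[v]           # <K + 2PD(v), v> = <K,v> + 2 m(v)
        for w in ADJ[v]:
            K[w] += 2                    # adjacent vertices gain 2
    raise RuntimeError("no terminal vector reached")

assert full_path([1, 0, 0, 0, 0, 0, 0]) == "good"    # length-one good path
assert full_path([-1, 0, 0, 0, 0, 0, 0]) == "good"
assert full_path([3, 0, 0, 0, 0, 0, 0]) == "bad"     # the bad path displayed above
```

The first two vectors realise the good full-paths of length one noted earlier, and the third reproduces the displayed bad full-path starting from the value $3$ at the $-3$-weighted vertex.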
It follows from our previous argument (when $c=2$) and from the second statement in Lemma \ref{remarks} that if a characteristic vector starts a good full-path then it belongs to the following list $$ \left[ \begin{array}{rrrrrrrr} &&&&&&0&0\\ x_1&\dots&x_{c-2}&\pm 1&0&0&&\\ &&&&&&0&0\\ \end{array} \right], \left[ \begin{array}{rrrrrrrr} &&&&&&0&2\\ x_1&\dots&x_{c-2}& -1&0&0&&\\ &&&&&&0&0\\ \end{array} \right], $$ $$ \left[ \begin{array}{rrrrrrrr} &&&&&&0&0\\ x_1&\dots&x_{c-2}&-1&0&0&&\\ &&&&&&0&2\\ \end{array} \right], $$ where at most one of the $x_i$'s is non-zero (and if so it is necessarily equal to $2$). A characteristic vector of the form $$K_0= \left[ \begin{array}{rrrrrrrr} &&&&&&0&0\\ x_1&\dots&x_{c-2}&1&0&0&&\\ &&&&&&0&0\\ \end{array} \right] $$ does not start a good full-path if one of the $x_i$'s is non-zero. To see this suppose that we have $x_i=2$ for some $i$. Then, we can write down the following bad full-path \begin{small} \begin{align*} K_0&\sim\dots\sim \left[ \begin{array}{rrrrrrrr} &&&&&&0&0\\ \dots&-2&\textbf{2}&1&0&0&&\\ &&&&&&0&0\\ \end{array} \right]\sim \left[ \begin{array}{rrrrrrrr} &&&&&&0&0\\ \dots&0&-2&\textbf{3}&0&0&&\\ &&&&&&0&0\\ \end{array} \right]\sim \\ & \hspace{0.957cm} \sim \left[ \begin{array}{rrrrrrrr} &&&&&&0&0\\ \dots &0&0&-3&\textbf{2}&0&&\\ &&&&&&0&0\\ \end{array} \right]\sim\dots \sim\left[ \begin{array}{rrrrrrrr} &&&&&&0&0\\ \dots&0&0&-1&4&-2&&\\ &&&&&&0&0\\ \end{array} \right] \ . \end{align*} \end{small} \hspace{-0.3cm} where the first omitted sequence of moves is the obvious sequence of flips which starts with a flip move at $x_i=2$, while the second one is the one suggested by the full-path $$\left[ \begin{array}{rrrrr} &&&0&0\\ 3&0&0&&\\ &&&0&0\\ \end{array} \right] \sim \dots \sim \left[ \begin{array}{rrrrr} &&&0&0\\ -1&4&-2&&\\ &&&0&0\\ \end{array} \right]$$ described above. 
Similarly, the characteristic vector $$\left[ \begin{array}{rrrrrrrr} &&&&&&0&2\\ x_1&\dots&x_{c-2}& -1&0&0&&\\ &&&&&&0&0\\ \end{array} \right]$$ does not start a good full-path if one of the $x_i$'s is non-zero. In fact, if $x_i=2$ for some $i$ then the (good) full-path $$\left[ \begin{array}{rrrrr} &&&0&2\\ -1&0&0&&\\ &&&0&0\\ \end{array} \right] \sim \dots \sim \left[ \begin{array}{rrrrr} &&&0&0\\ 1&0&0&&\\ &&&0&-2\\ \end{array} \right]$$ induces a sequence of flip moves \begin{small} \begin{align*} \left[ \begin{array}{rrrrrrrr} &&&&&&0&2\\ x_1&\dots&x_{c-2}& -1&0&0&&\\ &&&&&&0&0\\ \end{array} \right] \sim \dots \sim \left[ \begin{array}{rrrrrrrr} &&&&&&0&0\\ x_1&\dots&x_{c-2}+2& 1&0&0&&\\ &&&&&&0&-2\\ \end{array} \right] \end{align*} \end{small} leading to a bad full-path \begin{small} \begin{align*} \left[ \begin{array}{rrrrrrrr} &&&&&&0&2\\ x_1&\dots&x_{c-2}& -1&0&0&&\\ &&&&&&0&0\\ \end{array} \right] \sim \dots \sim \left[ \begin{array}{rrrrrrrr} &&&&&&0&0\\ \dots&4&\dots& 3&0&0&&\\ &&&&&&0&-2\\ \end{array} \right] . \end{align*} \end{small} Symmetrically one may find a bad full-path \begin{small} \begin{align*} \left[ \begin{array}{rrrrrrrr} &&&&&&0&0\\ x_1&\dots&x_{c-2}& -1&0&0&&\\ &&&&&&0&2\\ \end{array} \right] \sim \dots \sim \left[ \begin{array}{rrrrrrrr} &&&&&&0&-2\\ \dots&4&\dots& 3&0&0&&\\ &&&&&&0&0\\ \end{array} \right] . \end{align*} \end{small} when at least one of the $x_i$'s is equal to $2$. 
Summarising, we found that only a characteristic vector of the form $$ \left[ \begin{array}{rrrrrrrr} &&&&&&0&0\\ 0&\dots&0&-1&0&0&&\\ &&&&&&0&2\\ \end{array} \right], \left[ \begin{array}{rrrrrrrr} &&&&&&0&2\\ 0&\dots&0& -1&0&0&&\\ &&&&&&0&0\\ \end{array} \right], $$ $$ \left[ \begin{array}{rrrrrrrr} &&&&&&0&0\\ 0&\dots&0&1&0&0&&\\ &&&&&&0&0\\ \end{array} \right], \left[ \begin{array}{rrrrrrrr} &&&&&&0&0\\ x_1&\dots&x_{c-2}&-1&0&0&&\\ &&&&&&0&0\\ \end{array} \right], $$ where at most one of the $x_i$'s is non-zero (and then necessarily equal to $2$), can start a good full-path. In fact one can easily check that all of them do. By means of Equation \eqref{correctionterms} one can now conclude the argument (we omit the easy but tedious computations). \end{proof}
\section{Introduction} This work is devoted to the analysis of nonlinear damped wave equations for positive operators acting in Hilbert spaces. More precisely, for a densely defined positive operator ${\mathcal L}$ in a separable Hilbert space ${\mathcal H}$ we consider the Cauchy problem \begin{equation}\label{EQ: NoL-01} \left\{ \begin{split} \partial_{t}^{2}u(t)+{\mathcal L} u(t)+b\partial_{t}u(t)+m u(t)&=F(u, \partial_{t}u, {\mathcal L}^{1/2}u), \quad t>0,\\ u(0)&=u_{0}\in{\mathcal H}, \\ \partial_{t}u(0)&=u_{1}\in{\mathcal H}, \end{split} \right. \end{equation} with the damping term determined by $b>0$ and mass $m\in\mathbb R$. The main assumption in this paper is that the operator ${\mathcal L}$ has a discrete spectrum and that the corresponding eigenvectors form an orthonormal basis in ${\mathcal H}$. The main examples of interest for us are the harmonic oscillator on ${\mathcal H}=L^2({\mathbb R}^n)$: \begin{equation}\label{EQ:ho} {\mathcal L}:=-\Delta+|x|^{2}, \,\,\, x\in\mathbb R^{n}, \end{equation} and the Laplacians, or more general positive elliptic pseudo-differential operators, on ${\mathcal H}=L^2(M)$ for compact manifolds $M$, with or without boundary. Of course there are numerous other examples that are covered by this setting, for example the twisted Laplacian (Landau Hamiltonian) on $\mathbb C^{n}$ given by $$ {\mathcal L}=\sum_{j=1}^{n}(Z_{j}\bar{Z}_{j}+\bar{Z}_{j}Z_{j}), $$ with $Z_{j}=\frac{\partial}{\partial z_{j}}+\frac{1}{2}\bar{z}_{j}$ and $\bar{Z}_{j}=-\frac{\partial}{\partial \bar{z}_{j}}+\frac{1}{2}z_{j}$, see Example \ref{EX:LH}, where we also derive the Gagliardo-Nirenberg inequality for it. The other important situation occurs when the spectrum of ${\mathcal L}$ is continuous. In that case the analysis relies on rather different methods and this problem will be addressed in a subsequent paper. The analysis of linear and nonlinear damped wave equations has a long history.
In papers \cite{M76, Wahl70} the authors first considered these kinds of problems for the Laplacian on ${\mathbb R}^n$. We refer to papers \cite{HKN04, HKN06, HL92, HO04, I04, IMN04, IT05, KU13, Kh13, N04, N03, SW07, O03, O06, R90} in ${\mathbb R}^n$ dealing with damped wave equations under different assumptions, and references therein, where the authors study the global solvability of the Cauchy problems for nonlinear wave equations for the Laplace operator with the dissipative term. Also, see \cite{K00, RTY11, W14} for some more abstract settings. For even more references, we refer to a recent survey \cite{IIW17}. Time-dependent dissipation has also been considered, see e.g. \cite{W06} for regular and \cite{GR15, RT16a} for irregular dissipation in linear problems, respectively. The global framework for the Fourier analysis generated by a densely defined operator ${\mathcal L}$ on $L^2(M)$ for manifolds $M$ with or without boundary was developed in \cite{RT16, RT16b}. In Section \ref{SEC:linear} we consider the linear equations and derive the exponential time decay for their solutions. This is done by using the Fourier analysis adapted to the operator ${\mathcal L}$, elements of which we review in the process of the proof. The exponential decay plays a crucial role in the further analysis, in particular allowing the handling of the nonlinear equations to rely mostly on the analysis in Sobolev spaces over ${\mathcal L}$. Such decay is achieved by the fact that the operator ${\mathcal L}$ has a discrete positive spectrum. In the case of continuous spectrum, more delicate $L^p$-methods are needed, and these will appear elsewhere in the subsequent analysis for that setting. Partial differential equations in general Hilbert (and also Banach) spaces have been considered in many papers as well, both linear and nonlinear. For example, see \cite{EFNT94} for an extensive analysis in terms of the dynamical systems behaviour, and \cite{Zua90} for related analysis.
Linear wave equations in Hilbert spaces with irregular coefficients have been recently considered by the authors in \cite{RT17}. In Section \ref{SEC:semlinear} we consider the case of semilinear damped wave equations of the form \begin{equation}\label{EQ: NoL-01i} \left\{ \begin{split} \partial_{t}^{2}u(t)+{\mathcal L} u(t)+b\partial_{t}u(t)+m u(t)&=f(u), \quad t>0,\\ u(0)&=u_{0}\in{\mathcal H}, \\ \partial_{t}u(0)&=u_{1}\in{\mathcal H}, \end{split} \right. \end{equation} under the assumption that $f$ satisfies the properties \begin{equation} \label{PR: f-007i} \left\{ \begin{split} f(0) & =0, \\ |f(u)-f(v)| & \leq C (|u|^{p-1}+|v|^{p-1})|u-v|, \end{split} \right. \end{equation} for $u, v\in\mathbb R$. If ${\mathcal H}=L^2$, then an example of $f$ satisfying \eqref{PR: f-007i} is given by $$f(u)=\mu |u|^{p-1} u,$$ for $p>1$ and $\mu\in\mathbb R$ or, more generally, by differentiable functions $f$ such that $$|f'(u)|\leq C|u|^{p-1}.$$ In Section \ref{S: n+2} we consider a general case, namely, we deal with the nonlinear equation \eqref{EQ: NoL-01} for general nonlinearity $$F=F(u, \partial_{t}u, {\mathcal L}^{1/2} u)$$ satisfying an analogue of the Gagliardo-Nirenberg inequalities in the Sobolev space associated to ${\mathcal L}$. This condition is formulated in \eqref{PR: f-02} and some examples for it are given in \eqref{EQ:exnon}. In Section \ref{SEC:higher} we consider more general nonlinearities of the form \begin{equation}\label{EQ:F-gen-i} F_l=F_l(u,\{\partial^j u\}_{j=1}^l, \{{\mathcal L}^{j/2} u\}_{j=1}^{l}). \end{equation} for $F_l:{\mathbb C}^{2l+1}\to {\mathbb C}$, for any $l\in\mathbb N$. The proof of the global in time well-posedness in this case is an extension of the proof in Section \ref{S: n+2}, so we only very briefly indicate the differences there. A different feature here is that the smallness is required in higher regularity Sobolev spaces, but only to make sense of the higher order derivatives entering the nonlinearity $F_l$. 
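For real arguments, the model nonlinearity $f(u)=\mu|u|^{p-1}u$ from Section \ref{SEC:semlinear} indeed satisfies \eqref{PR: f-007i}; a quick numerical sketch of this (with the sample choices $\mu=1$, $p=3$, and the constant $C=p$ suggested by the mean value theorem):

```python
# Numerical check, on a grid of real points, of the Lipschitz-type bound
# |f(u)-f(v)| <= C (|u|^{p-1} + |v|^{p-1}) |u-v| for f(u) = |u|^{p-1} u,
# with the (assumed) constant C = p.
import itertools

def f(u, p):
    return abs(u) ** (p - 1) * u

p, C = 3, 3
pts = [x / 10 for x in range(-30, 31)]
for u, v in itertools.product(pts, pts):
    lhs = abs(f(u, p) - f(v, p))
    rhs = C * (abs(u) ** (p - 1) + abs(v) ** (p - 1)) * abs(u - v)
    assert lhs <= rhs + 1e-12
```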
From the physical point of view it is natural to assume that $b>0$ and $m\geq 0$. However, from the point of view of the well-posedness we may allow $m$ to be negative. In this case, there appears an interplay between $b, m$ and the bottom $\lambda_0$ of the spectrum of ${\mathcal L}$. The global in time decay properties of solutions to the wave equation with negative mass in ${\mathbb R}^n$ were derived in \cite{RSm}. The inclusion of the mass term does allow us to derive certain results even in the case when the bottom of the spectrum of ${\mathcal L}$ is zero. For example, when ${\mathcal H}=L^2(M)$ for a compact manifold $M$ without boundary, and ${\mathcal L}$ being the positive Laplacian on $M$, the operator ${\mathcal L}$ has a zero eigenvalue. In this case, if the Cauchy data are constant and $m=0$, the solution to the linear problem may itself be constant, so that there is no decay in time and no dispersion even for $b>0$. To avoid this kind of (trivial) problem, it will be convenient to assume that $\lambda_0+m>0$, with $\lambda_0\geq 0$ and $m\in\mathbb R$. Otherwise, to summarise and collect our assumptions for this paper, we will be assuming throughout that \begin{center} \em The positive operator ${\mathcal L}$ has a discrete spectrum $\{\lambda_j\}_{j\in\mathbb N}$ with $\lambda_{0}:=\inf\limits_{j\in\mathbb N}\lambda_j\geq 0$, and the corresponding eigenvectors form an orthonormal basis in ${\mathcal H}$. Moreover, we assume that $b>0$, $m\in\mathbb R$, and $\lambda_0+m>0$.
\end{center} For example, in our setting, assuming that $\lambda_0+m>0$, for appropriate indices $\alpha, \beta$, we then have the estimates \begin{equation*}\label{Est-0-01i} \|\partial_{t}^{\alpha}{\mathcal L}^{\beta}u(t)\|_{{\mathcal H}}\lesssim\e[-\frac{b}{2}t] \,\, (\|u_{0}\|_{H_{{\mathcal L}}^{\alpha+2\beta}}+\|u_{1}\|_{H_{{\mathcal L}}^{\alpha-1+2\beta}}), \; \textrm{ for } 0<b< 2\sqrt{\lambda_{0}+m}, \end{equation*} \begin{equation*}\label{Est-0-01bi} \|\partial_{t}^{\alpha}{\mathcal L}^{\beta}u(t)\|_{{\mathcal H}}\lesssim (1+t)\e[-\frac{b}{2}t] (\|u_{0}\|_{H_{{\mathcal L}}^{\alpha+2\beta}}+\|u_{1}\|_{H_{{\mathcal L}}^{\alpha-1+2\beta}}), \; \textrm{ for } b=2\sqrt{\lambda_{0}+m}, \end{equation*} and \begin{equation*}\label{Est-0-02i} \|\partial_{t}^{\alpha}{\mathcal L}^{\beta}u(t)\|_{{\mathcal H}}\lesssim\e[-(\frac{b}{2}-\sqrt{\frac{b^{2}}{4}-\lambda_{0}-m})t] (\|u_{0}\|_{H_{{\mathcal L}}^{\alpha+2\beta}}+\|u_{1}\|_{H_{{\mathcal L}}^{\alpha-1+2\beta}}), \; \textrm{ for } 2\sqrt{\lambda_{0}+m}<b, \end{equation*} for solutions of linear and nonlinear equations, modulo small modifications -- see the exact statements later on, e.g. in Proposition \ref{LEM: Est-01}. \smallskip Throughout this paper we write $\lesssim$ for inequalities that hold up to a multiplicative constant independent of the main parameters. We also set $\mathbb N=\{1,2,\ldots\}$ and $\mathbb N_0=\mathbb N\cup\{0\}.$ \smallskip The authors would like to thank both referees for their useful and constructive comments. \section{Dissipative wave equation} \label{SEC:linear} In this section we derive energy estimates for the linear damped wave equation \begin{equation}\label{CPa-01} \left\{ \begin{split} \partial_{t}^{2}u(t)+{\mathcal L} u(t)+b\partial_{t}u(t)+m u(t)&=0, \quad t>0, \\ u(0)&=u_{0}\in{\mathcal H}, \\ \partial_{t}u(0)&=u_{1}\in{\mathcal H}, \end{split} \right. \end{equation} for some dissipation constant $b>0$ and some mass constant $m$.
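The three decay regimes quoted above can be illustrated numerically on a single spectral mode, i.e. on the scalar equation $\widehat{u}''+b\widehat{u}'+(\lambda_0+m)\widehat{u}=0$. The following sketch (with illustrative parameter values; not part of the proofs) integrates one mode by the classical RK4 scheme and checks that $|\widehat{u}(t)|$ stays below the predicted exponential rate, up to a factor $(1+t)$ covering the critical case $b=2\sqrt{\lambda_0+m}$.

```python
import math

def mode_solution(b, lam, m, u0=1.0, u1=0.0, T=20.0, n=20000):
    """RK4 integration of one spectral mode u'' + b u' + (lam + m) u = 0."""
    h = T / n
    y = [u0, u1]
    rhs = lambda y: [y[1], -b * y[1] - (lam + m) * y[0]]
    out = [(0.0, u0)]
    t = 0.0
    for _ in range(n):
        k1 = rhs(y)
        k2 = rhs([y[i] + 0.5 * h * k1[i] for i in range(2)])
        k3 = rhs([y[i] + 0.5 * h * k2[i] for i in range(2)])
        k4 = rhs([y[i] + h * k3[i] for i in range(2)])
        y = [y[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(2)]
        t += h
        out.append((t, y[0]))
    return out

def decay_rate(b, lam, m):
    # exponential rate in the three regimes: b/2 if b <= 2 sqrt(lam+m),
    # and b/2 - sqrt(b^2/4 - lam - m) otherwise
    d = b * b / 4.0 - lam - m
    return b / 2.0 if d <= 0 else b / 2.0 - math.sqrt(d)

def weighted_sup(b, lam, m):
    # sup_t |u(t)| e^{rate * t} / (1 + t); finite in all three regimes,
    # the factor (1+t) covering the critical case b = 2 sqrt(lam+m)
    r = decay_rate(b, lam, m)
    return max(abs(u) * math.exp(r * t) / (1 + t) for t, u in mode_solution(b, lam, m))
```

For instance, with $\lambda_0+m=4$ one can check that `weighted_sup` stays bounded for $b=1$ (oscillatory), $b=4$ (critical) and $b=5$ (over-damped).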
The time decay rates will depend on the following parameter associated with ${\mathcal L}$. Let $\{\lambda_j\}_{j=1}^{\infty}$ be the set of eigenvalues of ${\mathcal L}$. Since ${\mathcal L}$ is a positive operator, all its eigenvalues are non-negative. Then we call \begin{equation}\label{EQ: Eigen-Parameter} \lambda_{0}:=\inf\limits_{j\in\mathbb N}\lambda_j \end{equation} the bottom of the spectrum of ${\mathcal L}$. In this paper our only assumption is that the operator is positive, i.e. that \begin{equation}\label{EQ:lam0} \lambda_{0}\geq 0. \end{equation} To obtain the time decay rate for solutions $u(t)$ of \eqref{CPa-01} we first derive a representation of solutions to \eqref{CPa-01} based on suitable Fourier analysis adapted to the operator ${\mathcal L}$. For this, we first recall the necessary elements of the global Fourier analysis that has been developed in \cite{RT16} (see also \cite{RT16b}, and its applications to the spectral properties of operators in \cite{DRT16}). Since the operator ${\mathcal L}$ is self-adjoint, the construction of \cite{RT16} is considerably simplified. We now give a brief review of it, adapted to the present setting. Let ${\mathrm H}_{{\mathcal L}}^{\infty}:={\rm Dom}({{\mathcal L}}^{\infty})$ be the space of test functions for ${{\mathcal L}}$, which we define as $$ {\rm Dom}({{\mathcal L}}^{\infty}):=\bigcap_{k=1}^{\infty}{\rm Dom}({{\mathcal L}}^{k}), $$ where ${\rm Dom}({{\mathcal L}}^{k})$ is the domain of the operator ${{\mathcal L}}^{k}$, in turn defined as $$ {\rm Dom}({{\mathcal L}}^{k}):=\{f\in{\mathcal H}: \,\,\, {{\mathcal L}}^{j}f\in {\rm Dom}({{\mathcal L}}), \,\,\, j=0, \,1, \, 2, \ldots, k-1\}.
$$ The Fr\'echet topology of ${\mathrm H}_{{{\mathcal L}}}^{\infty}$ is given by the family of semi-norms \begin{equation}\label{EQ:L-top} \|\varphi\|_{{\mathrm H}^{k}_{{{\mathcal L}}}}:=\max_{j\leq k} \|{{\mathcal L}}^{j}\varphi\|_{{\mathcal H}}, \quad k\in\mathbb N_0, \; \varphi\in{\mathrm H}_{{{\mathcal L}}}^{\infty}. \end{equation} The space $${\mathrm H}^{-\infty}_{{{\mathcal L}}}:={\mathscr L}({\mathrm H}_{{\mathcal L}}^{\infty}, \mathbb C)$$ of linear continuous functionals on ${\mathrm H}_{{\mathcal L}}^{\infty}$ is called the space of ${{\mathcal L}}$-distributions. For $w\in{\mathrm H}^{-\infty}_{{{\mathcal L}}}$ and $\varphi\in{\mathrm H}_{{\mathcal L}}^{\infty}$, we shall write $$ w(\varphi)=\langle w, \varphi\rangle. $$ For any $\psi\in{\mathrm H}_{{\mathcal L}}^{\infty}$, the functional $$ {\mathrm H}_{{\mathcal L}}^{\infty}\ni \varphi\mapsto (\varphi,\psi) $$ is an ${{\mathcal L}}$-distribution, which gives an embedding ${\mathrm H}_{{{\mathcal L}}}^{\infty}\hookrightarrow{\mathrm H}^{-\infty}_{{\mathcal L}}$, $\psi\mapsto(\cdot\,,\psi)$. Let $\mathcal S(\mathbb N^{n})$ denote the space of rapidly decaying functions $\varphi:\mathbb N^{n}\rightarrow\mathbb C$. That is, $\varphi\in\mathcal S(\mathbb N^{n})$ if for any $N<\infty$ there exists a constant $C_{\varphi, N}$ such that $$ |\varphi(\xi)|\leq C_{\varphi, N}\langle\xi\rangle^{-N} $$ holds for all $\xi\in\mathbb N^{n}$, where we denote $$\langle\xi\rangle:=(1+|\lambda_{\xi}|)^{1/2},$$ where $\lambda_\xi$ are the eigenvalues of ${\mathcal L}$ labelled according to multiplicities. We denote by $e_\xi$ the corresponding eigenvectors of ${\mathcal L}$ in ${\mathcal H}$. The topology on $\mathcal S(\mathbb N^{n})$ is given by the seminorms $p_{k}$, where $k\in\mathbb N_{0}$ and $$ p_{k}(\varphi):=\sup_{\xi\in\mathbb N^{n}}\langle\xi\rangle^{k}|\varphi(\xi)|.
$$ We now define the ${\mathcal L}$-Fourier transform on ${\mathrm H}_{{\mathcal L}}^{\infty}$ as the mapping $$ \mathcal F_{{\mathcal L}}=(f\mapsto\widehat{f}): {\mathrm H}_{{\mathcal L}}^{\infty}\rightarrow\mathcal S(\mathbb N^{n}) $$ given by the formula \begin{equation} \label{FourierTr} \widehat{f}(\xi):=(\mathcal F_{{\mathcal L}}f)(\xi)=(f, e_{\xi}). \end{equation} The ${\mathcal L}$-Fourier transform $\mathcal F_{{\mathcal L}}$ is a bijective homeomorphism from ${\mathrm H}_{{{\mathcal L}}}^{\infty}$ to $\mathcal S(\mathbb N^{n})$. Its inverse $$\mathcal F_{{\mathcal L}}^{-1}: \mathcal S(\mathbb N^{n}) \rightarrow {\mathrm H}_{{\mathcal L}}^{\infty}$$ is given by \begin{equation} \label{InvFourierTr} \mathcal F^{-1}_{{{\mathcal L}}}h=\sum_{\xi\in\mathbb N^{n}} h(\xi) e_{\xi},\quad h\in\mathcal S(\mathbb N^{n}), \end{equation} so that the Fourier inversion formula becomes \begin{equation} \label{InvFourierTr0} f=\sum_{\xi\in\mathbb N^{n}} \widehat{f}(\xi)e_{\xi} \quad \textrm{ for all } f\in{\mathrm H}_{{{\mathcal L}}}^{\infty}. \end{equation} Plancherel's identity takes the form \begin{equation}\label{EQ:Plancherel} \|f\|_{{\mathcal H}}=\p{\sum_{\xi\in\mathbb N^{n}} |\widehat{f}(\xi)|^2}^{1/2}. \end{equation} \smallskip Consequently, we can also define Sobolev spaces $H^s_{\mathcal L}$ associated to ${\mathcal L}$. Thus, for any $s\in\mathbb R$, we set \begin{equation}\label{EQ:HsL} H^s_{\mathcal L}:=\left\{ f\in{\mathrm H}^{-\infty}_{{\mathcal L}}: {\mathcal L}^{s/2}f\in {\mathcal H}\right\}, \end{equation} with the norm $\|f\|_{H^s_{\mathcal L}}:=\|{\mathcal L}^{s/2}f\|_{{\mathcal H}}$, which we understand as \begin{equation*}\label{EQ:Hsub-norm} \|f\|_{H^s_{\mathcal L}}:=\|{\mathcal L}^{s/2}f\|_{{\mathcal H}}:= \p{\sum_{\xi\in\mathbb N^{n}} \lambda_\xi^{s} |\widehat{f}(\xi)|^{2}}^{1/2}.
\end{equation*} In particular, for $s=0$, we have $H^0_{\mathcal L}={\mathcal H}.$ For $f,g\in{\mathcal H}$ the convolution $(f\ast_{{\mathcal L}} g)$ was defined in \cite{RT16} by the formula $$ f\ast_{{\mathcal L}} g : = \sum\limits_{\xi\in\mathbb N^{n}}\widehat{f}(\xi) \, \widehat{g}(\xi)\,e_{\xi}. $$ This convolution and its properties in general Hilbert spaces have been analysed in \cite{KRT17, RT16c}. In terms of this convolution, the solution of \eqref{CPa-01} is given as $$ u(t)=K_{0}(t)\ast_{{\mathcal L}} u_{0}+K_{1}(t)\ast_{{\mathcal L}} u_{1}, $$ where the ${\mathcal L}$-Fourier transforms $R_{i}(t, \xi)$ of $K_{i}(t)$ $(i=0,1)$ are determined from the ordinary differential equations \begin{equation}\label{CPa-02} \left\{ \begin{split} \partial_{t}^{2}\widehat{u}(t, \xi)+b\partial_{t}\widehat{u}(t, \xi)+(\sigma_{{\mathcal L}}(\xi)+m) \widehat{u}(t, \xi)&=0, \quad t>0,\\ \widehat{u}(0, \xi)&=\widehat{u}_{0}(\xi), \\ \partial_{t}\widehat{u}(0, \xi)&=\widehat{u}_{1}(\xi), \end{split} \right. \end{equation} where $\sigma_{{\mathcal L}}(\xi)=\lambda_\xi$ is the symbol of the operator ${\mathcal L}$. In the case $\sigma_{{\mathcal L}}(\xi)+m\neq b^{2}/4$ the equations \eqref{CPa-02} can be solved explicitly with their solutions given by $$ \widehat{u}(t, \xi)=C_{0}\e[(-b/2+i\sqrt{\sigma_{{\mathcal L}}(\xi)+m-b^{2}/4})t]+C_{1}\e[(-b/2-i\sqrt{\sigma_{{\mathcal L}}(\xi)+m-b^{2}/4})t], $$ where $$ C_{0}=\left(\frac{b}{4i\sqrt{\sigma_{{\mathcal L}}(\xi)+m-b^{2}/4}}+\frac{1}{2}\right)\widehat{u}_{0}(\xi) +\frac{1}{2i\sqrt{\sigma_{{\mathcal L}}(\xi)+m-b^{2}/4}}\widehat{u}_{1}(\xi), $$ and $$ C_{1}=\left(\frac{i b}{4\sqrt{\sigma_{{\mathcal L}}(\xi)+m-b^{2}/4}}+\frac{1}{2}\right)\widehat{u}_{0}(\xi)+ \frac{i}{2\sqrt{\sigma_{{\mathcal L}}(\xi)+m-b^{2}/4}}\widehat{u}_{1}(\xi). 
$$ And, for the case $\sigma_{{\mathcal L}}(\xi)+m=b^{2}/4$ the equations \eqref{CPa-02} can be solved with their solutions given by $$ \widehat{u}(t, \xi)=C_{0}\e[(-b/2)t]+C_{1} t \, \e[(-b/2)t], $$ where $$ C_{0}=\widehat{u}_{0}(\xi), \,\,\, C_{1}=\frac{b}{2}\widehat{u}_{0}(\xi)+\widehat{u}_{1}(\xi). $$ Thus, for $\sigma_{{\mathcal L}}(\xi)+m\neq b^{2}/4$, we obtain \begin{align*} R_{0}(t, \xi)=\left(\frac{b}{4i\sqrt{\sigma_{{\mathcal L}}(\xi)+m-b^{2}/4}}+\frac{1}{2}\right)\e[(-b/2+i\sqrt{\sigma_{{\mathcal L}}(\xi)+m-b^{2}/4})t]\\ +\left(\frac{i b}{4\sqrt{\sigma_{{\mathcal L}}(\xi)+m-b^{2}/4}}+\frac{1}{2}\right)\e[(-b/2-i\sqrt{\sigma_{{\mathcal L}}(\xi)+m-b^{2}/4})t], \end{align*} and \begin{align*} R_{1}(t, \xi)=\frac{1}{2i\sqrt{\sigma_{{\mathcal L}}(\xi)+m-b^{2}/4}}\e[(-b/2+i\sqrt{\sigma_{{\mathcal L}}(\xi)+m-b^{2}/4})t]\\ +\frac{i}{2\sqrt{\sigma_{{\mathcal L}}(\xi)+m-b^{2}/4}}\e[(-b/2-i\sqrt{\sigma_{{\mathcal L}}(\xi)+m-b^{2}/4})t], \end{align*} and, for $\sigma_{{\mathcal L}}(\xi)+m=b^{2}/4$, we have \begin{align*} R_{0}(t, \xi)=\left(1+\frac{b}{2}t \right) \, \e[(-b/2)t], \,\,\, R_{1}(t, \xi)=t \, \e[(-b/2)t]. \end{align*} Thus, using these formulae, we get \begin{prop}\label{LEM: Est-01} Let $\lambda_{0}\geq 0$ be the bottom of the spectrum of ${\mathcal L}$ defined by \eqref{EQ: Eigen-Parameter}. Assume that $\lambda_{0}+m>0$. 
Then the solution $u$ of \eqref{CPa-01} satisfies the estimates \begin{equation}\label{Est-0-01} \|\partial_{t}^{\alpha}{\mathcal L}^{\beta}u(t)\|_{{\mathcal H}}\lesssim\e[-\frac{b}{2}t] \,\, (\|u_{0}\|_{H_{{\mathcal L}}^{\alpha+2\beta}}+\|u_{1}\|_{H_{{\mathcal L}}^{\alpha-1+2\beta}}), \end{equation} for $0<b< 2\sqrt{\lambda_{0}+m}$, and \begin{equation}\label{Est-0-01b} \|\partial_{t}^{\alpha}{\mathcal L}^{\beta}u(t)\|_{{\mathcal H}}\lesssim (1+t)\e[-\frac{b}{2}t] (\|u_{0}\|_{H_{{\mathcal L}}^{\alpha+2\beta}}+\|u_{1}\|_{H_{{\mathcal L}}^{\alpha-1+2\beta}}), \end{equation} for $b=2\sqrt{\lambda_{0}+m}$, and \begin{equation}\label{Est-0-02} \|\partial_{t}^{\alpha}{\mathcal L}^{\beta}u(t)\|_{{\mathcal H}}\lesssim\e[-(\frac{b}{2}-\sqrt{\frac{b^{2}}{4}-\lambda_{0}-m})t] (\|u_{0}\|_{H_{{\mathcal L}}^{\alpha+2\beta}}+\|u_{1}\|_{H_{{\mathcal L}}^{\alpha-1+2\beta}}), \end{equation} for $2\sqrt{\lambda_{0}+m}<b$, for all $\alpha\in\mathbb N_{0}$ and $\beta\geq 0.$ \end{prop} \begin{proof} By taking into account the equalities \begin{equation*} \|\partial_{t}^{\alpha}{\mathcal L}^{\beta}u\|_{{\mathcal H}}=\|\mathcal F_{{\mathcal L}}(\partial_{t}^{\alpha}{\mathcal L}^{\beta}u)\|_{l^{2}}, \end{equation*} \begin{equation*} \mathcal F_{{\mathcal L}}(\partial_{t}^{\alpha}{\mathcal L}^{\beta}u)=\sigma_{{\mathcal L}}^{\beta}(\xi)\mathcal F_{{\mathcal L}}(\partial_{t}^{\alpha}u), \end{equation*} and the representations of $R_{0}(t, \xi)$ and $R_{1}(t, \xi)$, we obtain the statement of Proposition \ref{LEM: Est-01}. \end{proof} \begin{rem}\label{REM:positivity} We note that we could combine the operator ${\mathcal L}$ and the mass term $m$ into a new operator ${\mathcal L}+m$. Then, the statement of Proposition \ref{LEM: Est-01} would hold under the assumption that the bottom of the spectrum of ${\mathcal L}+m$ is positive, without assuming that the operator ${\mathcal L}$ is positive.
However, we prefer to formulate it in this form since the operator ${\mathcal L}^{1/2}$ will appear later on in the Gagliardo-Nirenberg inequality in \eqref{Inty-G-N-01}. \end{rem} \section{Semilinear damped wave equation} \label{SEC:semlinear} In this section we consider the semilinear damped wave equation for the operator ${\mathcal L}$, taking the form: \begin{equation}\label{EQ: NoL-01s} \left\{ \begin{split} \partial_{t}^{2}u(t)+{\mathcal L} u(t)+b\partial_{t}u(t)+m u(t)&=f(u), \quad t>0,\\ u(0)&=u_{0}\in{\mathcal H}, \\ \partial_{t}u(0)&=u_{1}\in{\mathcal H}. \end{split} \right. \end{equation} A typical example that we are interested in is ${\mathcal H}=L^2({\mathbb R}^n)$ or ${\mathcal H}=L^2(M)$ for a compact manifold $M$, and \begin{equation}\label{EQ:nonlinex} f(u)=\mu |u|^{p-1} u, \end{equation} for $p>1$ and $\mu\in\mathbb R$. However, we will be able to prove the global in time well-posedness for a more general class of nonlinearities $f(u)$ in abstract Hilbert spaces, satisfying the conditions \eqref{PR: f-007} in Theorem \ref{TH: 01}. We now introduce the following notion of the Gagliardo--Nirenberg index that will be important for our global in time well-posedness result for \eqref{EQ: NoL-01s}. We may identify our Hilbert space ${\mathcal H}$ as ${\mathcal H}=L^2(\Omega)$ for a measure space $\Omega$, so that we can also use the scale $L^p(\Omega)$ of spaces on $\Omega$. We can write $\|\cdot\|_{{\mathcal H}}=\|\cdot\|_2$ in this notation. \begin{defi}[Gagliardo--Nirenberg index] \label{DEF:GN} We say that $p\geq 1$ is Gag\-li\-ardo--Nirenberg admissible for the operator ${\mathcal L}$ if the Gagliardo--Nirenberg type inequality \begin{equation}\label{Inty-G-N-01} \|u\|_{2p}\leq C \|{\mathcal L}^{1/2} u\|_{2}^{\theta} \, \|u\|_{2}^{1-\theta} \end{equation} holds for some $\theta=\theta(p)\in[0, 1]$. 
\end{defi} \begin{ex}[Harmonic oscillator] Note that for the harmonic oscillator ${\mathcal L}=-\Delta+|x|^{2}$ in $\mathbb R^{n}$, the following indices are Gagliardo--Nirenberg admissible, i.e. we have that \eqref{Inty-G-N-01} holds for \begin{equation}\label{EQ:GN} \left\{ \begin{split} n=1 \,\, \hbox{and} \,\, n=2: & \,\,\, 1\leq p< \infty; \\ n\geq3: & \,\,\, 1\leq p\leq \frac{n}{n-2}. \end{split} \right. \end{equation} These properties follow from the corresponding properties of the Laplacian, see the results of Nirenberg's paper \cite{N59}. In this case we also have the bottom of the spectrum of ${\mathcal L}$ given by $\lambda_0=n$, see Appendix \ref{APP}. \end{ex} \begin{ex}[Laplacian on spheres] For ${\mathcal L}$ being the Laplacian on the sphere $\mathbb S^n$, with ${\mathcal H}=L^2(\mathbb S^n)$, the Gagliardo-Nirenberg admissible indices are also given by \eqref{EQ:GN}, see e.g. \cite{Dol}. \end{ex} \begin{ex}[Laplacian on compact Riemannian manifolds] More generally, for ${\mathcal L}$ being the Laplacian on the compact Riemannian manifold $\mathcal M$, with ${\mathcal H}=L^2(\mathcal M)$, the Gagliardo-Nirenberg admissible indices are given by \begin{equation}\label{EQ:GN-03} \left\{ \begin{split} n=2: & \,\,\, 1\leq p< \infty; \\ n\geq3: & \,\,\, 1\leq p\leq \frac{n}{n-2}. \end{split} \right. \end{equation} For this, we refer to the papers of Ceccon and Montenegro \cite{CM08, CM13}. For more references, see e.g. \cite{B03, ACM15, CD16} and references therein. 
\end{ex} \begin{ex}[Landau Hamiltonian]\label{EX:LH} If we take the twisted Laplacian on $\mathbb C^{n}$ $$ {\mathcal L}=\sum_{j=1}^{n}(Z_{j}\bar{Z}_{j}+\bar{Z}_{j}Z_{j}), $$ with $Z_{j}=\frac{\partial}{\partial z_{j}}+\frac{1}{2}\bar{z}_{j}$, and $\bar{Z}_{j}=-\frac{\partial}{\partial \bar{z}_{j}}+\frac{1}{2}z_{j}$, then the Gagliardo-Nirenberg admissible indices for the twisted Laplacian (Landau Hamiltonian) ${\mathcal L}$ are given by \begin{equation}\label{EQ:GN-000} \left\{ \begin{split} n=1: & \,\,\, 1\leq p< \infty; \\ n\geq2: & \,\,\, 1\leq p\leq \frac{n}{n-1}. \end{split} \right. \end{equation} Moreover, we have $\lambda_0=n$. The spectrum of ${\mathcal L}$ is discrete but the eigenvalues have infinite multiplicities, see e.g. \cite {RS15} or \cite{RT16a}. However, as we do not make any assumption on multiplicities, this situation is covered by our setting. \begin{proof}[Proof of \eqref{EQ:GN-000}] It follows from the H\"{o}lder inequality that $$ \int_{\mathbb C^{n}}|u|^{2p}dz=\int_{\mathbb C^{n}}|u|^{2ps}|u|^{2p(1-s)}dz\leq\left(\int_{\mathbb C^{n}}|u|^{\frac{2n}{n-1}}dz\right)^{ps \frac{n-1}{n}} \left(\int_{\mathbb C^{n}}|u|^{2}dz\right)^{p(1-s)}, $$ for any $s\in [0,1]$ such that \begin{equation}\label{EQ: ind-p} ps\frac{n-1}{n}+p(1-s)=1. \end{equation} Then by using the Sobolev embedding from \cite[Lemma 2.3]{RS15}, we obtain $$ \|u\|_{L^{2p}(\mathbb C^{n})}\lesssim \|u\|_{\dot{L}_{1}^{2}(\mathbb C^{n})}^{s} \|u\|_{L^{2}(\mathbb C^{n})}^{1-s}. $$ Finally, \eqref{EQ: ind-p} yields \eqref{EQ:GN-000}. 
\end{proof} \end{ex} For the convenience of the reader we recall the definition of the Sobolev spaces $H^s_{\mathcal L}$, $s\in\mathbb R$, associated to ${\mathcal L}$: \begin{equation*}\label{EQ:HsL-00} H^s_{\mathcal L}:=\left\{ f\in H^{-\infty}_{\mathcal L}: {\mathcal L}^{s/2}f\in L^2\right\}, \end{equation*} with the norm $\|f\|_{H^s_{\mathcal L}}:=\|{\mathcal L}^{s/2}f\|_{L^2}.$ We also recall that $\lambda_0=\lambda_0({\mathcal L})$ denotes the bottom of the spectrum of ${\mathcal L}$ defined by \eqref{EQ: Eigen-Parameter}. \begin{thm} \label{TH: 01} Let $p>1$ be Gagliardo-Nirenberg admissible for ${\mathcal L}$, i.e. assume that \eqref{Inty-G-N-01} holds. Suppose that $\lambda_0\geq 0$ and $\lambda_0+m>0$. Assume that $f$ satisfies the properties \begin{equation} \label{PR: f-007} \left\{ \begin{split} f(0) & =0, \\ |f(u)-f(v)| & \leq C (|u|^{p-1}+|v|^{p-1})|u-v|, \end{split} \right. \end{equation} for $u, v\in\mathbb R$. Assume that the Cauchy data $u_{0}\in H_{{\mathcal L}}^{1}$ and $u_{1}\in {\mathcal H}$ satisfy \begin{equation} \label{EQ: Th-cond-01} \|u_{0}\|_{H_{{\mathcal L}}^{1}}+\|u_{1}\|_{{\mathcal H}}\leq\varepsilon. \end{equation} Then, there exists a small positive constant $\varepsilon_{0}>0$ such that the Cauchy problem \begin{equation*}\label{EQ: NoL-02} \left\{ \begin{split} \partial_{t}^{2}u(t)+{\mathcal L} u(t)+b\partial_{t}u(t) + m u(t)&=f(u), \quad t>0,\\ u(0)&=u_{0}\in H_{{\mathcal L}}^{1}, \\ \partial_{t}u(0)&=u_{1}\in{\mathcal H}, \end{split} \right. \end{equation*} has a unique global solution $u\in C(\mathbb R_{+}; H_{{\mathcal L}}^{1})\bigcap C^{1}(\mathbb R_{+}; {\mathcal H})$ for all $0<\varepsilon\leq\varepsilon_{0}$. 
Moreover, when $0 < b < 2\sqrt{\lambda_{0}+m}$ we have \begin{equation}\label{TH-NoL-1-01} \|\partial_{t}^{\alpha}{\mathcal L}^{\beta}u(t)\|_{{\mathcal H}}\lesssim (1+t)^{1/2} \e[-\frac{b}{2}t], \end{equation} and when $b = 2\sqrt{\lambda_{0}+m}$ we have \begin{equation}\label{TH-NoL-1-02} \|\partial_{t}^{\alpha}{\mathcal L}^{\beta}u(t)\|_{{\mathcal H}}\lesssim (1+t)^{3/2} \e[-\frac{b}{2}t], \end{equation} and when $2\sqrt{\lambda_{0}+m}<b$ we have \begin{equation}\label{TH-NoL-1-03} \|\partial_{t}^{\alpha}{\mathcal L}^{\beta}u(t)\|_{{\mathcal H}}\lesssim (1+t)^{1/2}\e[-(\frac{b}{2}-\sqrt{\frac{b^{2}}{4}-\lambda_{0}-m})t], \end{equation} for $(\alpha, \beta)=(0, 0)$, $(0, 1/2)$, and $(1, 0)$. \end{thm} As noted in the introduction, if ${\mathcal H}=L^2$, then an example of $f$ satisfying \eqref{PR: f-007} is given by \eqref{EQ:nonlinex}, i.e. by $$f(u)=\mu |u|^{p-1} u, \quad p>1,\; \mu\in\mathbb R,$$ or by a differentiable function $f$ such that $$|f'(u)|\leq C|u|^{p-1}.$$ \begin{proof}[Proof of Theorem \ref{TH: 01}] Let us consider the closed subsets $Z_j$ of the space $C^{1}(\mathbb R_{+}; \,\, H^{1}_{{\mathcal L}})$ defined as $$ Z_{j} :=\{u\in C^{1}(\mathbb R_{+}; \,\, H^{1}_{{\mathcal L}}); \,\, \|u\|_{Z_{j}}\leq L_{j}\}, \,\,\, j=1,2,3, $$ with \begin{align*} \|u\|_{Z_{1}}:=\sup_{t\geq0}\{(1+t)^{-1/2}\e[\frac{b}{2}t](\|u(t, \cdot)\|_{2}+\|\partial_{t}u(t, \cdot)\|_{2}+\|{\mathcal L}^{1/2}u(t, \cdot)\|_{2})\}, \end{align*} if $0<b< 2\sqrt{\lambda_{0}+m}$, and \begin{align*} \|u\|_{Z_{2}}:=\sup_{t\geq0}\{(1+t)^{-3/2}\e[\frac{b}{2}t](\|u(t, \cdot)\|_{2}+\|\partial_{t}u(t, \cdot)\|_{2}+\|{\mathcal L}^{1/2}u(t, \cdot)\|_{2})\}, \end{align*} if $b = 2\sqrt{\lambda_{0}+m}$, and \begin{align*} \|u\|_{Z_{3}}:=\sup_{t\geq0}\{(1+t)^{-1/2}\e[(\frac{b}{2}-\sqrt{\frac{b^{2}}{4}-\lambda_{0}-m})t](\|u(t, \cdot)\|_{2}+\|\partial_{t}u(t, \cdot)\|_{2}+\|{\mathcal L}^{1/2}u(t, \cdot)\|_{2})\}, \end{align*} if $2\sqrt{\lambda_{0}+m}<b$, where
$L_{j}>0$ ($j=1, 2, 3$) will be specified later. Now we define the mapping $\Gamma$ on $Z_j$ by \begin{equation} \label{MAP-01} \begin{split} \Gamma[u](t):= K_{0}(t)\ast_{{\mathcal L}}u_{0} &+ K_{1}(t)\ast_{{\mathcal L}}u_{1}+\int_{0}^{t}K_{1}(t-\tau)\ast_{{\mathcal L}}f(u(\tau))d\tau, \end{split} \end{equation} where $K_0$, $K_1$, and the convolution $\ast_{{\mathcal L}}$ are as defined in Section \ref{SEC:linear}. We claim that \begin{equation} \label{MAP-02} \|\Gamma[u]\|_{Z_{j}}\leq L_{j} \end{equation} for all $u\in Z_{j}$ and \begin{equation} \label{MAP-03} \|\Gamma[u]-\Gamma[v]\|_{Z_{j}}\leq \frac{1}{r_{j}} \|u-v\|_{Z_{j}} \end{equation} for all $u, v\in Z_{j}$ with $r_{j}>1$ and $j=1,2,3$. Once we have proved \eqref{MAP-02} and \eqref{MAP-03}, it follows that $\Gamma$ is a contraction mapping on $Z_{j}$. The Banach fixed point theorem then implies that $\Gamma$ has a unique fixed point on $Z_{j}$ with $j=1,2,3$. This means that there exists a unique global solution $u$ of the equation $$ u=\Gamma[u] \,\,\, \hbox{in} \,\,\, Z_{j}, $$ which also gives the solution to \eqref{EQ: NoL-01s}. So, we now concentrate on proving \eqref{MAP-02} and \eqref{MAP-03}. As we noted before, we may identify our Hilbert space ${\mathcal H}$ with $L^2(\Omega)$ for a measure space $\Omega$, so that we can also use the scale of $L^p$ spaces on $\Omega$. We can write $\|\cdot\|_{{\mathcal H}}=\|\cdot\|_2$ in this notation. Recalling the second assumption in \eqref{PR: f-007} on $f$, namely, $$ |f(u)-f(v)|\leq C (|u|^{p-1}+|v|^{p-1})|u-v|, $$ applying it to functions $u=u(t)$ and $v=v(t)$ we get $$ \|(f(u)-f(v))(t, \cdot)\|_{2}^{2}\leq C \int_\Omega (|u(t)|^{p-1}+|v(t)|^{p-1})^{2}|u(t)-v(t)|^{2}. $$ Consequently, by the H\"{o}lder inequality, we get $$ \|(f(u)-f(v))(t, \cdot)\|_{2}^{2}\leq C (\|u(t, \cdot)\|^{p-1}_{2p}+\|v(t, \cdot)\|^{p-1}_{2p})^{2} \|(u-v)(t, \cdot)\|^{2}_{2p} $$ since $$ \frac{1}{\frac{p}{p-1}}+\frac{1}{p}=1.
$$ By the Gagliardo--Nirenberg-type inequality \eqref{Inty-G-N-01} which holds for $p$ by the assumption, and by Young's inequality $$ a^{\theta} b^{1-\theta}\leq \theta a + (1-\theta) b $$ for $0\leq\theta\leq1$, $a,b\geq0$, we obtain \begin{equation} \label{EQ: Gagliardo-Nirenberg-01} \begin{split} \|(f(u)& -f(v))(t, \cdot)\|_{2} \leq C \Big[\left(\|{\mathcal L}^{1/2} u(t, \cdot)\|_{2}+\|u(t, \cdot)\|_{2}\right)^{p-1}\\ & +\left(\|{\mathcal L}^{1/2} v(t, \cdot)\|_{2}+\|v(t, \cdot)\|_{2}\right)^{p-1}\Big] \\ & \times \left(\|{\mathcal L}^{1/2} (u-v)(t, \cdot)\|_{2}+\|(u-v)(t, \cdot)\|_{2}\right). \end{split} \end{equation} Recalling that $\|u\|_{Z_{j}}\leq L_{j}$ and $\|v\|_{Z_{j}}\leq L_{j}$ for $j=1,2,3$, from \eqref{EQ: Gagliardo-Nirenberg-01} we get \begin{equation} \label{EQ: Gagliardo-Nirenberg-02} \|(f(u)-f(v))(t, \cdot)\|_{2} \leq C (1+t)^{p/2}\e[-\frac{b}{2} p t] L_{1}^{p-1} \|u-v\|_{Z_{1}}, \end{equation} for $0<b< 2\sqrt{\lambda_{0}+m}$, and \begin{equation} \label{EQ: Gagliardo-Nirenberg-02a} \|(f(u)-f(v))(t, \cdot)\|_{2} \leq C (1+t)^{3p/2}\e[-\frac{b}{2} p t] L_{2}^{p-1} \|u-v\|_{Z_{2}}, \end{equation} for $b = 2\sqrt{\lambda_{0}+m}$, and \begin{equation} \label{EQ: Gagliardo-Nirenberg-02b} \|(f(u)-f(v))(t, \cdot)\|_{2} \leq C (1+t)^{p/2} \e[-(\frac{b}{2}-\sqrt{\frac{b^{2}}{4}-\lambda_{0}-m}) p t] L_{3}^{p-1} \|u-v\|_{Z_{3}}, \end{equation} for $2\sqrt{\lambda_{0}+m}<b$. 
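The only elementary ingredient used in passing from \eqref{Inty-G-N-01} to \eqref{EQ: Gagliardo-Nirenberg-01} is Young's inequality $a^{\theta}b^{1-\theta}\leq\theta a+(1-\theta)b$, which in particular gives $\|{\mathcal L}^{1/2}u\|_{2}^{\theta}\|u\|_{2}^{1-\theta}\leq \|{\mathcal L}^{1/2}u\|_{2}+\|u\|_{2}$. A quick numerical check of this inequality on random samples (a sanity check only, not part of the proof):

```python
import random

def young(a, b, theta):
    # weighted AM-GM (Young's inequality): a^theta * b^(1-theta) <= theta*a + (1-theta)*b
    return a ** theta * b ** (1 - theta) <= theta * a + (1 - theta) * b + 1e-12

random.seed(1)
trials = [(random.uniform(0.0, 10.0), random.uniform(0.0, 10.0), random.random())
          for _ in range(1000)]
assert all(young(a, b, th) for a, b, th in trials)
```
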
By putting $v=0$ in \eqref{EQ: Gagliardo-Nirenberg-02}--\eqref{EQ: Gagliardo-Nirenberg-02b}, and using that $f(0)=0$, we also have \begin{equation} \label{EQ: Gagliardo-Nirenberg-03} \begin{split} \|f(u)(t, \cdot)\|_{2} & \leq C (1+t)^{p/2}\e[-\frac{b}{2} p t] L_{1}^{p},\quad \textrm{ for } \; 0 < b < 2\sqrt{\lambda_{0}+m}, \end{split} \end{equation} and \begin{equation} \label{EQ: Gagliardo-Nirenberg-03a} \begin{split} \|f(u)(t, \cdot)\|_{2} & \leq C (1+t)^{3p/2}\e[-\frac{b}{2} p t] L_{2}^{p},\quad \textrm{ for } \; b = 2\sqrt{\lambda_{0}+m}, \end{split} \end{equation} and \begin{equation} \label{EQ: Gagliardo-Nirenberg-03b} \begin{split} \|f(u)(t, \cdot)\|_{2} & \leq C (1+t)^{p/2}\e[-(\frac{b}{2}-\sqrt{\frac{b^{2}}{4}-\lambda_{0}-m}) p t] L_{3}^{p}, \quad \textrm{ for } \; 2\sqrt{\lambda_{0}+m}<b. \end{split} \end{equation} Now, let us estimate the integral operator \begin{equation} \label{OP: Int-NoL-01} \begin{split} J[u](t,x):=\int_{0}^{t}K_{1}(t-\tau)\ast_{{\mathcal L}}f(u(\tau,x))d\tau. \end{split} \end{equation} More precisely, for $\alpha=0, 1$ and for all $\beta\geq 0$ we have \begin{equation*} \label{OP: Int-NoL-02} \begin{split} |\partial^{\alpha}_{t} & {\mathcal L}^{\beta}J[u](t, x)|^{2}\leq \Big| \int_{0}^{t}\partial^{\alpha}_{t}{\mathcal L}^{\beta}K_{1}(t-\tau)\ast_{{\mathcal L}}f(u(\tau, x)) d \tau \Big|^{2} \\ &\leq \left(\int_{0}^{t} \Big| \partial^{\alpha}_{t}{\mathcal L}^{\beta}K_{1}(t-\tau)\ast_{{\mathcal L}}f(u(\tau, x)) \Big| d \tau \right)^{2} \\ &\leq t \int_{0}^{t} \Big| \partial^{\alpha}_{t}{\mathcal L}^{\beta}K_{1}(t-\tau)\ast_{{\mathcal L}}f(u(\tau, x)) \Big|^{2} d \tau. 
\end{split} \end{equation*} Then for $0<b< 2\sqrt{\lambda_{0}+m}$, by using Proposition \ref{LEM: Est-01}, we get \begin{equation} \label{OP: Int-NoL-03} \begin{split} &\|\partial^{\alpha}_{t}{\mathcal L}^{\beta} J[u](t, \cdot)\|_{2}^{2} \leq t \int_{0}^{t}\| \partial^{\alpha}_{t}{\mathcal L}^{\beta} K_{1}(t-\tau)\ast_{{\mathcal L}}f(u(\tau, \cdot)) \|_{2}^{2} d \tau \\ &\leq C t \int_{0}^{t} \e[-2\frac{b}{2}(t-\tau)] \|f(u(\tau, \cdot)) \|_{H^{\alpha-1+2\beta}_{{\mathcal L}}}^{2} d \tau \\ & = C t \e[ - b t ] \int_{0}^{t} \e[ b \tau ] \|f(u(\tau, \cdot)) \|_{H^{\alpha-1+2\beta}_{{\mathcal L}}}^{2} d \tau. \end{split} \end{equation} Similarly, for $b = 2\sqrt{\lambda_{0}+m}$ we obtain \begin{equation} \label{OP: Int-NoL-03a} \begin{split} &\|\partial^{\alpha}_{t}{\mathcal L}^{\beta} J[u](t, \cdot)\|_{2}^{2} \leq C t \e[ - b t ] \int_{0}^{t} (1+t-\tau)^{2} \e[ b \tau ] \|f(u(\tau, \cdot)) \|_{H^{\alpha-1+2\beta}_{{\mathcal L}}}^{2} d \tau \\ &\leq C t(1+t)^{2} \e[ - b t ] \int_{0}^{t} \e[ b \tau ] \|f(u(\tau, \cdot)) \|_{H^{\alpha-1+2\beta}_{{\mathcal L}}}^{2} d \tau. \end{split} \end{equation} Also, for $2\sqrt{\lambda_{0}+m}<b$ we have \begin{equation} \label{OP: Int-NoL-03b} \begin{split} \|\partial^{\alpha}_{t}{\mathcal L}^{\beta} J[u](t, \cdot)&\|_{2}^{2} \leq C t \e[- 2(\frac{b}{2}-\sqrt{\frac{b^{2}}{4}-\lambda_{0}-m}) t] \\ &\times\int_{0}^{t} \e[ 2(\frac{b}{2}-\sqrt{\frac{b^{2}}{4}-\lambda_{0}-m}) \tau] \|f(u(\tau, \cdot)) \|_{H^{\alpha-1+2\beta}_{{\mathcal L}}}^{2} d \tau. \end{split} \end{equation} Now we have to control the norm $ \|f(u(\tau, \cdot)) \|_{H^{\alpha-1+2\beta}_{{\mathcal L}}}^{2}. $ We notice that for $(\alpha, \beta)=(0, 1/2)$ and $(\alpha, \beta)=(1, 0)$ we have $\alpha-1+2\beta\leq0$. 
Thus, using \eqref{EQ: Gagliardo-Nirenberg-02} and \eqref{EQ: Gagliardo-Nirenberg-03}, we obtain from \eqref{OP: Int-NoL-03} that \begin{equation} \label{OP: Int-NoL-04} \|\partial^{\alpha}_{t} {\mathcal L}^{\beta} ( J[u] - J[v] )(t, \cdot)\|_{2} \leq C t^{1/2} \e[- \frac{b}{2} t] \, L_{1}^{p-1}\|u-v\|_{Z_{1}}, \end{equation} and \begin{equation} \label{OP: Int-NoL-05} \|\partial^{\alpha}_{t} {\mathcal L}^{\beta} J[u](t, \cdot)\|_{2} \leq C t^{1/2} \e[- \frac{b}{2} t] \, L^{p}_{1}, \end{equation} with the estimates \eqref{OP: Int-NoL-04}--\eqref{OP: Int-NoL-05} holding for $(\alpha, \beta)=(0, 1/2)$ and $(\alpha, \beta)=(1, 0)$. Similarly, using \eqref{EQ: Gagliardo-Nirenberg-02a} and \eqref{EQ: Gagliardo-Nirenberg-03a}, we get from \eqref{OP: Int-NoL-03a}: \begin{equation} \label{OP: Int-NoL-04a} \|\partial^{\alpha}_{t} {\mathcal L}^{\beta} ( J[u] - J[v] )(t, \cdot)\|_{2} \leq C t^{1/2}(1+t) \e[- \frac{b}{2} t] \, L_{2}^{p-1}\|u-v\|_{Z_{2}}, \end{equation} and \begin{equation} \label{OP: Int-NoL-05a} \|\partial^{\alpha}_{t} {\mathcal L}^{\beta} J[u](t, \cdot)\|_{2} \leq C t^{1/2}(1+t) \e[- \frac{b}{2} t] \, L^{p}_{2}, \end{equation} with the estimates \eqref{OP: Int-NoL-04a}--\eqref{OP: Int-NoL-05a} holding for $(\alpha, \beta)=(0, 1/2)$ and $(\alpha, \beta)=(1, 0)$. Also, from \eqref{OP: Int-NoL-03b} by using \eqref{EQ: Gagliardo-Nirenberg-02b} and \eqref{EQ: Gagliardo-Nirenberg-03b}, we get \begin{equation} \label{OP: Int-NoL-04b} \|\partial^{\alpha}_{t} {\mathcal L}^{\beta} ( J[u] - J[v] )(t, \cdot)\|_{2} \leq C t^{1/2} \e[- (\frac{b}{2}-\sqrt{\frac{b^{2}}{4}-\lambda_{0}-m}) t] \, L_{3}^{p-1}\|u-v\|_{Z_{3}}, \end{equation} \begin{equation} \label{OP: Int-NoL-05b} \|\partial^{\alpha}_{t} {\mathcal L}^{\beta} J[u](t, \cdot)\|_{2} \leq C t^{1/2} \e[- (\frac{b}{2}-\sqrt{\frac{b^{2}}{4}-\lambda_{0}-m}) t] \, L_{3}^{p}, \end{equation} for $(\alpha, \beta)=(0, 1/2)$ and $(\alpha, \beta)=(1, 0)$.
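The uniform-in-$t$ bound behind \eqref{OP: Int-NoL-04}--\eqref{OP: Int-NoL-05} rests on the fact that, after inserting \eqref{EQ: Gagliardo-Nirenberg-03} into \eqref{OP: Int-NoL-03}, the remaining integral reduces to $\int_{0}^{t}(1+\tau)^{p}e^{-b(p-1)\tau}\,d\tau$, which is bounded uniformly in $t$ precisely because $p>1$. A quadrature sketch with the illustrative values $b=1$, $p=2$ (where the limit is exactly $\int_0^\infty(1+\tau)^2 e^{-\tau}\,d\tau=5$):

```python
import math

def duhamel_constant(b, p, t, n=100000):
    # trapezoidal approximation of \int_0^t (1+tau)^p e^{-b(p-1) tau} d tau,
    # the quantity that must stay bounded uniformly in t (requires p > 1)
    h = t / n
    g = lambda tau: (1.0 + tau) ** p * math.exp(-b * (p - 1.0) * tau)
    s = 0.5 * (g(0.0) + g(t))
    for k in range(1, n):
        s += g(k * h)
    return s * h
```

Doubling the horizon leaves the value essentially unchanged, reflecting the convergence of the integral.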
Consequently, by the definition of $\Gamma[u]$ in \eqref{MAP-01}, using Proposition \ref{LEM: Est-01} for the first two terms and the above estimates for $J[u]$ for the last term, we obtain \begin{equation} \label{Gamma: Contraction mapping-01} \begin{split} \|\Gamma[u]\|_{Z_{j}} & \leq \|K_{0}(t)\ast_{{\mathcal L}}u_{0} + K_{1}(t)\ast_{{\mathcal L}}u_{1}\|_{Z_{j}} + \|J[u]\|_{Z_{j}} \\ & \leq C_{1j}(\|u_{0}\|_{H^{1}_{{\mathcal L}}}+\|u_{1}\|_{L^{2}}) + C_{2j}L_{j}^{p}, \end{split} \end{equation} for some $C_{1j}>0$ and $C_{2j}>0$, $j=1,2,3$. Moreover, in a similar way, we can estimate \begin{equation} \label{Gamma: Contraction mapping-02} \|\Gamma[u]-\Gamma[v]\|_{Z_{j}} \leq \|J[u] - J[v]\|_{Z_{j}} \leq C_{3j}L_j^{p-1} \|u-v\|_{Z_{j}}, \end{equation} for some $C_{3j}>0$, $j=1,2,3$. Taking some $r_{j}>1$, we choose $$L_{j}:=r_{j} C_{1j}(\|u_{0}\|_{H^{1}_{{\mathcal L}}}+\|u_{1}\|_{L^{2}})$$ with sufficiently small $\|u_{0}\|_{H^{1}_{{\mathcal L}}}+\|u_{1}\|_{L^{2}}<\varepsilon$ so that \begin{equation} \label{Gamma: Contraction mapping-03} C_{2j}L_{j}^{p}\leq \frac{1}{r_{j}} L_{j}, \,\,\,\, C_{3j}L_{j}^{p-1}\leq \frac{1}{r_{j}}. \end{equation} Then estimates \eqref{Gamma: Contraction mapping-01}--\eqref{Gamma: Contraction mapping-03} imply the desired estimates \eqref{MAP-02} and \eqref{MAP-03}. This means that we can apply the fixed point theorem to obtain the existence of solutions. The estimates \eqref{TH-NoL-1-01}--\eqref{TH-NoL-1-03} follow from \eqref{OP: Int-NoL-03}--\eqref{OP: Int-NoL-03b}. Theorem \ref{TH: 01} is now proved. \end{proof} \section{Nonlinear damped wave equation} \label{S: n+2} In this section we deal with a general nonlinearity by considering the nonlinear term of the form $F(u, u_t, {\mathcal L}^{1/2} u)$, for some function $F:\mathbb C^3\to\mathbb C$. Now, let us suppose that the following property holds.
Denoting $$U:=(u, u_t, {\mathcal L}^{1/2} u)$$ for $u\in C^{1}(\mathbb R_{+}; H_{{\mathcal L}}^{1}),$ we assume that $F(U)\in C(\mathbb R_{+}; H_{{\mathcal L}}^{1})$ and we say that the index $p>1$ is $(F,{\mathcal L})$-admissible if we have the estimate \begin{equation} \label{PR: f-02} \begin{split} \|F(U)-F(V)\|_{H_{{\mathcal L}}^{1}}\lesssim (\|U\|_{H_{{\mathcal L}}^{1}}^{p-1}+ \|V\|_{H_{{\mathcal L}}^{1}}^{p-1})\|U-V\|_{H_{{\mathcal L}}^{1}}. \end{split} \end{equation} We note that using the definition of Sobolev spaces in \eqref{EQ:HsL}, in this notation we have \begin{equation}\label{EQ:Uu} \|U\|_{H_{{\mathcal L}}^{1}}\simeq \|{\mathcal L}^{1/2}u\|_{{\mathcal H}}+\|{\mathcal L}^{1/2}\partial_t u\|_{{\mathcal H}}+\|{\mathcal L} u\|_{{\mathcal H}}. \end{equation} An example of $F$ satisfying \eqref{PR: f-02} may be given by nonlinearities of the form \begin{equation}\label{EQ:exnon} F(U)=\varphi \|U\|_{{\mathcal H}}^p \quad \textrm{ or }\quad F(U)=\varphi \|U\|_{H_{{\mathcal L}}^{1}}^p, \end{equation} for some $\varphi\in {\rm Dom}\,({\mathcal L})$. We now give the global in time well-posedness statement. \begin{thm} \label{TH: 02} Let $p>1$ be $(F,{\mathcal L})$-admissible, i.e. assume that $F=F(u, \partial_{t}u, {\mathcal L}^{1/2} u)$ satisfies the condition \eqref{PR: f-02}. Suppose that $F(0)=0$, and that $u_{0}\in H_{{\mathcal L}}^{2}$ and $u_{1}\in H_{{\mathcal L}}^{1}$ are such that \begin{equation*} \|u_{0}\|_{H_{{\mathcal L}}^{2}}+\|u_{1}\|_{H_{{\mathcal L}}^{1}}\leq\varepsilon. \end{equation*} Assume that $\lambda_0\geq 0$ and $\lambda_0+m>0$. Then, there exists a small positive constant $\varepsilon_{0}>0$ such that the Cauchy problem \begin{equation*} \left\{ \begin{split} \partial_{t}^{2}u(t)+{\mathcal L} u(t)+b\partial_{t}u(t)+m u(t)&=F(u, \partial_{t}u, {\mathcal L}^{1/2} u), \quad t>0,\\ u(0)&=u_{0}\in H_{{\mathcal L}}^{2}, \\ \partial_{t}u(0)&=u_{1}\in H_{{\mathcal L}}^{1}, \end{split} \right.
\end{equation*} has a unique global solution $u\in C(\mathbb R_{+}; H_{{\mathcal L}}^{2})\cap C^{1}(\mathbb R_{+}; H_{{\mathcal L}}^{1})$ for all $0<\varepsilon\leq\varepsilon_{0}$. Moreover, for $0 < b < 2\sqrt{\lambda_{0}+m}$ we have \begin{equation}\label{TH-NoL-1-01b} \|\partial_{t}^{\alpha}{\mathcal L}^{\beta}u(t)\|_{{\mathcal H}}\lesssim (1+t)^{1/2}\e[-\frac{b}{2}t], \end{equation} and for $b = 2\sqrt{\lambda_{0}+m}$ we have \begin{equation}\label{TH-NoL-1-02b} \|\partial_{t}^{\alpha}{\mathcal L}^{\beta}u(t)\|_{{\mathcal H}}\lesssim (1+t)^{3/2}\e[-\frac{b}{2}t], \end{equation} and for $2\sqrt{\lambda_{0}+m}<b$ we have \begin{equation}\label{TH-NoL-1-03b} \|\partial_{t}^{\alpha}{\mathcal L}^{\beta}u(t)\|_{{\mathcal H}}\lesssim (1+t)^{1/2} \e[-(\frac{b}{2}-\sqrt{\frac{b^{2}}{4}-\lambda_{0}-m})t], \end{equation} for any $(\alpha, \beta)\in\{(0, 0), (0, 1/2), (1, 0), (0, 1), (1, 1/2), (2, 0)\}$. \end{thm} \begin{proof} The proof of Theorem \ref{TH: 02} is similar to that of Theorem \ref{TH: 01} except that we aim at using the assumption \eqref{PR: f-02} instead of the Gagliardo-Nirenberg inequality. 
First, we define the closed subsets $Z_j$ of the space $C^{2}(\mathbb R_{+}; \,\, H^{2}_{{\mathcal L}})$ by $$ Z_{j} :=\{u\in C^{2}(\mathbb R_{+}; \,\, H^{2}_{{\mathcal L}}); \,\, \|u\|_{Z_{j}}\leq L_{j}\}, \,\,\, j=4,5,6, $$ with \begin{equation*} \begin{split} \|u\|_{Z_{4}}:=\sup_{t\geq0}\{&(1+t)^{-1/2}\e[\frac{b}{2}t](\|u(t, \cdot)\|_{2}+\|\partial_{t}u(t, \cdot)\|_{2}+\|{\mathcal L}^{1/2}u(t, \cdot)\|_{2}\\ &+\|\partial_{t}{\mathcal L}^{1/2}u(t, \cdot)\|_{2}+\|{\mathcal L} u(t, \cdot)\|_{2}+\|\partial_{t}^{2}u(t, \cdot)\|_{2})\}, \,\,\, \hbox{if} \,\,\, 0 < b < 2\sqrt{\lambda_{0}+m}, \end{split} \end{equation*} and \begin{equation*} \begin{split} \|u\|_{Z_{5}}:=\sup_{t\geq0}\{&(1+t)^{-3/2}\e[\frac{b}{2}t](\|u(t, \cdot)\|_{2}+\|\partial_{t}u(t, \cdot)\|_{2}+\|{\mathcal L}^{1/2}u(t, \cdot)\|_{2}\\ &+\|\partial_{t}{\mathcal L}^{1/2}u(t, \cdot)\|_{2}+\|{\mathcal L} u(t, \cdot)\|_{2}+\|\partial_{t}^{2}u(t, \cdot)\|_{2})\}, \,\,\, \hbox{if} \,\,\, b = 2\sqrt{\lambda_{0}+m}, \end{split} \end{equation*} and \begin{equation*} \begin{split} \|u\|_{Z_{6}}:=\sup_{t\geq0}\{&(1+t)^{-1/2}\e[(\frac{b}{2}-\sqrt{\frac{b^{2}}{4}-\lambda_{0}-m})t](\|u(t, \cdot)\|_{2}+\|\partial_{t}u(t, \cdot)\|_{2}+\|{\mathcal L}^{1/2}u(t, \cdot)\|_{2}\\ &+\|\partial_{t}{\mathcal L}^{1/2}u(t, \cdot)\|_{2}+\|{\mathcal L} u(t, \cdot)\|_{2}+\|\partial_{t}^{2}u(t, \cdot)\|_{2})\}, \,\,\, \hbox{if} \,\,\, 2\sqrt{\lambda_{0}+m}<b, \end{split} \end{equation*} where $L_{j}>0$ ($j=4,5,6$) are to be specified later. Now, we begin by repeating several steps from the proof of Theorem \ref{TH: 01}, namely, we define the mapping $\Gamma$ on $Z_4$, $Z_5$ and $Z_6$ by \begin{equation} \label{MAP-01b} \begin{split} \Gamma[u](t):= K_{0}(t)\ast_{{\mathcal L}}u_{0} &+ K_{1}(t)\ast_{{\mathcal L}}u_{1}\\ &+\int_{0}^{t}K_{1}(t-\tau)\ast_{{\mathcal L}}F(u, u_{t}, {\mathcal L}^{1/2}u)(\tau)d\tau, \end{split} \end{equation} and we show that $\Gamma$ is a contraction mapping on $Z_4$, $Z_5$ and $Z_6$. 
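The contraction scheme can be illustrated on a scalar toy model: the map $\Gamma$ in \eqref{MAP-01b} has the structure "linear data plus nonlinear term", which we mimic by $x\mapsto \varepsilon + C_{2}x^{p}$. This is only a heuristic sketch (all names and constants are illustrative, assuming Python; it is not the actual operator $\Gamma$):

```python
def fixed_point(gamma, x0, tol=1e-12, max_iter=1000):
    """Picard iteration for a contraction mapping gamma."""
    x = x0
    for _ in range(max_iter):
        x_new = gamma(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence: gamma may not be a contraction")

# Scalar analogue of Gamma[u] = (linear data) + J[u]:
# gamma(x) = eps + C2 * x**p with small data eps and p > 1.
eps, C2, p = 0.01, 1.0, 2.0
x_star = fixed_point(lambda x: eps + C2 * x**p, 0.0)
# The fixed point remains comparable to the data size, as in the proof:
print(x_star)
```

For small data the iteration stays in a ball of radius comparable to $\varepsilon$ and converges, which is precisely the role of the smallness conditions on $L_{j}$.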
By \eqref{PR: f-02} we have \begin{equation} \label{PR: f-02b} \begin{split} \|F(U)-F(V)\|_{H^{1}_{{\mathcal L}}}\lesssim (\|U\|_{H^{1}_{{\mathcal L}}}^{p-1}+\|V\|_{H^{1}_{{\mathcal L}}}^{p-1})\|U-V\|_{H^{1}_{{\mathcal L}}}. \end{split} \end{equation} We take $u$ and $v$ satisfying $\|u\|_{Z_{j}}\leq L_{j}$ and $\|v\|_{Z_{j}}\leq L_{j}$ for $j=4,5,6$. Recalling \eqref{EQ:Uu} for $U=(u,\partial_t u,{\mathcal L}^{1/2}u)$, from \eqref{PR: f-02b} we get \begin{equation} \label{EQ: Gagliardo-Nirenberg-02-cc} \|(F(U)-F(V))(t, \cdot)\|_{H^{1}_{{\mathcal L}}} \leq C (1+t)^{p/2} \e[-\frac{b}{2} p t] L_{4}^{p-1} \|u-v\|_{Z_{4}}, \end{equation} and \begin{equation} \label{EQ: Gagliardo-Nirenberg-02a-cc} \|(F(U)-F(V))(t, \cdot)\|_{H^{1}_{{\mathcal L}}} \leq C (1+t)^{3p/2} \e[-\frac{b}{2} p t] L_{5}^{p-1} \|u-v\|_{Z_{5}}, \end{equation} and \begin{equation} \label{EQ: Gagliardo-Nirenberg-02b-cc} \|(F(U)-F(V))(t, \cdot)\|_{H^{1}_{{\mathcal L}}} \leq C (1+t)^{p/2} \e[-(\frac{b}{2}-\sqrt{\frac{b^{2}}{4}-\lambda_{0}-m}) p t] L_{6}^{p-1} \|u-v\|_{Z_{6}}, \end{equation} respectively. Since $F(0)=0$, by putting $v=0$ and $V=0$ in \eqref{EQ: Gagliardo-Nirenberg-02-cc}--\eqref{EQ: Gagliardo-Nirenberg-02b-cc}, we obtain \begin{equation} \label{EQ: Gagliardo-Nirenberg-03-cc} \|F(U)(t, \cdot)\|_{H^{1}_{{\mathcal L}}} \leq C (1+t)^{p/2} \e[-\frac{b}{2} p t] L_{4}^{p}, \end{equation} and \begin{equation} \label{EQ: Gagliardo-Nirenberg-03a-cc} \|F(U)(t, \cdot)\|_{H^{1}_{{\mathcal L}}} \leq C (1+t)^{3p/2} \e[-\frac{b}{2} p t] L_{5}^{p}, \end{equation} and \begin{equation} \label{EQ: Gagliardo-Nirenberg-03b-cc} \|F(U)(t, \cdot)\|_{H^{1}_{{\mathcal L}}} \leq C (1+t)^{p/2} \e[-(\frac{b}{2}-\sqrt{\frac{b^{2}}{4}-\lambda_{0}-m}) p t] L_{6}^{p}, \end{equation} respectively. 
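For instance, \eqref{EQ: Gagliardo-Nirenberg-02-cc} can be seen as follows (a schematic derivation): the definition of the $Z_{4}$ norm controls every term in \eqref{EQ:Uu}, so that

```latex
\|U(t, \cdot)\|_{H^{1}_{{\mathcal L}}}
\lesssim (1+t)^{1/2}\e[-\frac{b}{2}t]\,\|u\|_{Z_{4}}
\leq (1+t)^{1/2}\e[-\frac{b}{2}t]\,L_{4},
```

and similarly for $V$, and for $U-V$ with $L_{4}$ replaced by $\|u-v\|_{Z_{4}}$; substituting these bounds into \eqref{PR: f-02b} gives

```latex
\|(F(U)-F(V))(t, \cdot)\|_{H^{1}_{{\mathcal L}}}
\lesssim \Bigl((1+t)^{1/2}\e[-\frac{b}{2}t]\Bigr)^{p-1} L_{4}^{p-1}\,
(1+t)^{1/2}\e[-\frac{b}{2}t]\,\|u-v\|_{Z_{4}}
= (1+t)^{p/2}\e[-\frac{b}{2}pt]\, L_{4}^{p-1}\,\|u-v\|_{Z_{4}}.
```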
As in the proof of Theorem \ref{TH: 01}, in view of Proposition \ref{LEM: Est-01}, for the integral operator \begin{equation} \label{OP: Int-NoL-01-cc} \begin{split} J[u](t, x):=\int_{0}^{t}K_{1}(t-\tau)\ast_{{\mathcal L}}F(u(\tau, x), u_t(\tau, x), {\mathcal L}^{1/2} u(\tau, x))d\tau, \end{split} \end{equation} for $0<b< 2\sqrt{\lambda_{0}+m}$ we have \begin{equation} \label{OP: Int-NoL-03-cc} \begin{split} &\|\partial^{\alpha}_{t}{\mathcal L}^{\beta} J[u](t, \cdot)\|_{2}^{2} \leq t \int_{0}^{t}\| \partial^{\alpha}_{t}{\mathcal L}^{\beta} K_{1}(t-\tau)\ast_{{\mathcal L}}F(u, u_t, {\mathcal L}^{1/2} u)(\tau, \cdot) \|_{2}^{2} d \tau \\ &\leq C t \e[- b t] \int_{0}^{t} \e[ b \tau] \|F(u, u_t, {\mathcal L}^{1/2} u)(\tau, \cdot) \|_{H^{\alpha-1+2\beta}_{{\mathcal L}}}^{2} d \tau. \end{split} \end{equation} Similarly, for $b=2\sqrt{\lambda_{0}+m}$ we get \begin{equation} \label{OP: Int-NoL-03a-cc} \begin{split} \|\partial^{\alpha}_{t}{\mathcal L}^{\beta} J[u](t, \cdot)\|_{2}^{2} \leq C t (1+t)^{2} \e[- b t] \int_{0}^{t} \e[ b \tau] \|F(u, u_t, {\mathcal L}^{1/2} u)(\tau, \cdot) \|_{H^{\alpha-1+2\beta}_{{\mathcal L}}}^{2} d \tau. \end{split} \end{equation} Also, for $2\sqrt{\lambda_{0}+m}<b$ we obtain \begin{equation} \label{OP: Int-NoL-03b-cc} \begin{split} \|\partial^{\alpha}_{t}{\mathcal L}^{\beta} J[u](t, \cdot)&\|_{2}^{2} \leq C t \e[- 2(\frac{b}{2}-\sqrt{\frac{b^{2}}{4}-\lambda_{0}-m}) t] \\ &\times\int_{0}^{t} \e[ 2(\frac{b}{2}-\sqrt{\frac{b^{2}}{4}-\lambda_{0}-m}) \tau] \|F(u, u_t, {\mathcal L}^{1/2} u)(\tau, \cdot) \|_{H^{\alpha-1+2\beta}_{{\mathcal L}}}^{2} d \tau \end{split} \end{equation} for $(\alpha, \beta)\in\{(0, 0), (0, 1/2), (1, 0), (0, 1), (1, 1/2), (2, 0)\}$. Now, combining the previous discussions with the estimates \eqref{EQ: Gagliardo-Nirenberg-02-cc}--\eqref{EQ: Gagliardo-Nirenberg-03b-cc}, we complete the proof of Theorem \ref{TH: 02}. 
\end{proof} \section{Higher order nonlinearities} \label{SEC:higher} In this section we briefly indicate how the obtained results can be extended to higher order nonlinearities $F_l:{\mathbb C}^{2l+1}\to {\mathbb C}$, $l\in\mathbb N$, in the form \begin{equation}\label{EQ:F-gen} F_l=F_l(u,\{\partial^j_t u\}_{j=1}^l, \{{\mathcal L}^{j/2} u\}_{j=1}^{l}). \end{equation} Denoting $$U:=(u,\{\partial^j_t u\}_{j=1}^l, \{{\mathcal L}^{j/2} u\}_{j=1}^{l})$$ for $u\in C^{l}(\mathbb R_{+}; H_{{\mathcal L}}^{l})$, we assume that $F_l(U)\in C(\mathbb R_{+}; H_{{\mathcal L}}^{l})$ and we say that the index $p>1$ is $(F_l,{\mathcal L})$-admissible if we have the inequality \begin{equation} \label{PR: f-02m} \begin{split} \|F_l(U)-F_l(V)\|_{H_{{\mathcal L}}^{l}}\lesssim (\|U\|_{H_{{\mathcal L}}^{1}}^{p-1}+ \|V\|_{H_{{\mathcal L}}^{1}}^{p-1})\|U-V\|_{H_{{\mathcal L}}^{1}}. \end{split} \end{equation} An example of $F=F_l$ satisfying \eqref{PR: f-02m} may be given by nonlinearities of the form \begin{equation}\label{EQ:exnonh} F(U)=\varphi \|U\|_{{\mathcal H}}^p \quad \textrm{ or }\quad F(U)=\varphi \|U\|_{H_{{\mathcal L}}^{1}}^p, \end{equation} for some $\varphi\in H_{\mathcal L}^{1}$. \begin{thm} \label{TH: 0g} Let $l\in\mathbb N$. Assume that $\lambda_0\geq0$ and $\lambda_0+m>0$, where $\lambda_0=\lambda_0({\mathcal L})$ is the bottom of the spectrum of ${\mathcal L}$. Let $p>1$ be such that $F_l$ as in \eqref{EQ:F-gen} satisfies the condition \eqref{PR: f-02m}. Suppose that $F(0)=0$, and that $u_{0}\in H_{{\mathcal L}}^{l+1}$ and $u_{1}\in H_{{\mathcal L}}^{l}$ are such that \begin{equation*} \label{EQ: Th-cond-01m} \|u_{0}\|_{H_{{\mathcal L}}^{l+1}}+\|u_{1}\|_{H_{{\mathcal L}}^{l}}\leq\varepsilon. 
\end{equation*} Then, there exists a small positive constant $\varepsilon_{0}>0$ such that the Cauchy problem \begin{equation*}\label{EQ: NoL-02m} \left\{ \begin{split} \partial_{t}^{2}u(t)+{\mathcal L} u(t)+b\partial_{t}u(t)+m u(t)&=F_l(u,\{\partial^j_t u\}_{j=1}^l, \{{\mathcal L}^{j/2} u\}_{j=1}^{l}), \quad t>0,\\ u(0)&=u_{0}\in H_{{\mathcal L}}^{l+1}, \\ \partial_{t}u(0)&=u_{1}\in H_{{\mathcal L}}^{l}, \end{split} \right. \end{equation*} has a unique global solution $u\in C(\mathbb R_{+}; H_{{\mathcal L}}^{l+1})\cap C^{1}(\mathbb R_{+}; H_{{\mathcal L}}^{l})$ for all $0<\varepsilon\leq\varepsilon_{0}$. Moreover, for $0<b< 2\sqrt{\lambda_{0}+m}$ we have \begin{equation}\label{TH-NoL-1-01bm} \|\partial_{t}^{\alpha}{\mathcal L}^{\beta}u(t)\|_{{\mathcal H}}\lesssim (1+t)^{1/2}\e[-\frac{b}{2}t], \end{equation} for $b = 2\sqrt{\lambda_{0}+m}$ we have \begin{equation}\label{TH-NoL-1-02bm} \|\partial_{t}^{\alpha}{\mathcal L}^{\beta}u(t)\|_{{\mathcal H}}\lesssim (1+t)^{3/2}\e[-\frac{b}{2}t], \end{equation} and for $2\sqrt{\lambda_{0}+m}<b$ we have \begin{equation}\label{TH-NoL-1-03bm} \|\partial_{t}^{\alpha}{\mathcal L}^{\beta}u(t)\|_{{\mathcal H}}\lesssim (1+t)^{1/2} \e[-(\frac{b}{2}-\sqrt{\frac{b^{2}}{4}-\lambda_{0}-m})t], \end{equation} for any $(\alpha, \beta)\in \mathbb N_0\times\frac12\mathbb N_0$ with $\alpha+2\beta\leq l+1$. \end{thm} \begin{proof} The proof of Theorem \ref{TH: 0g} is similar to that of Theorem \ref{TH: 02} except that we use the more general assumption \eqref{PR: f-02m} instead of \eqref{PR: f-02}.
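The index set $\{(\alpha,\beta)\in\mathbb N_0\times\frac12\mathbb N_0 : \alpha+2\beta\leq l+1\}$ appearing in the theorem can be enumerated mechanically; for $l=1$ it reproduces exactly the six pairs listed in Theorem \ref{TH: 02}. A short check (Python as an illustration language; the function name is ours):

```python
from fractions import Fraction

def admissible_pairs(l):
    """All (alpha, beta) in N0 x (1/2)N0 with alpha + 2*beta <= l + 1,
    i.e. the derivative orders controlled by the theorem."""
    pairs = set()
    for alpha in range(l + 2):
        k = 0                       # k = 2*beta runs over 0, 1, 2, ...
        while alpha + k <= l + 1:
            pairs.add((alpha, Fraction(k, 2)))
            k += 1
    return pairs

# For l = 1 this gives (0,0), (0,1/2), (1,0), (0,1), (1,1/2), (2,0):
print(sorted(admissible_pairs(1)))
```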
Analogously, we define the closed subsets $Z_j$ of the space $C^{l+1}(\mathbb R_{+}; \,\, H^{l+1}_{{\mathcal L}})$ by $$ Z_{j} :=\{u\in C^{l+1}(\mathbb R_{+}; \,\, H^{l+1}_{{\mathcal L}}); \,\, \|u\|_{Z_{j}}\leq L_{j}\}, \,\,\, j=3l+1,3l+2,3l+3 $$ with \begin{equation*} \begin{split} \|u\|_{Z_{3l+1}}:=\sup_{t\geq0}\{(1+t)^{-1/2}\e[\frac{b}{2}t]\left(\sum_{\substack{(\alpha, \beta)\in \mathbb N_0\times\frac12\mathbb N_0 \\ \alpha+2\beta\leq l+1}}\|\partial_{t}^{\alpha}{\mathcal L}^{\beta}u(t, \cdot)\|_{2}\right)\}, \end{split} \end{equation*} if $0<b< 2\sqrt{\lambda_{0}+m}$, and \begin{equation*} \begin{split} \|u\|_{Z_{3l+2}}:=\sup_{t\geq0}\{(1+t)^{-3/2}\e[\frac{b}{2}t]\left(\sum_{\substack{(\alpha, \beta)\in \mathbb N_0\times\frac12\mathbb N_0 \\ \alpha+2\beta\leq l+1}}\|\partial_{t}^{\alpha}{\mathcal L}^{\beta}u(t, \cdot)\|_{2}\right)\}, \end{split} \end{equation*} if $b = 2\sqrt{\lambda_{0}+m}$, and \begin{equation*} \begin{split} \|u\|_{Z_{3l+3}}:=\sup_{t\geq0}\{(1+t)^{-1/2}\e[(\frac{b}{2}-\sqrt{\frac{b^{2}}{4}-\lambda_{0}-m})t]\left(\sum_{\substack{(\alpha, \beta)\in \mathbb N_0\times\frac12\mathbb N_0 \\ \alpha+2\beta\leq l+1}}\|\partial_{t}^{\alpha}{\mathcal L}^{\beta}u(t, \cdot)\|_{2}\right)\}, \end{split} \end{equation*} if $2\sqrt{\lambda_{0}+m}<b$, where $L_{j}>0$ ($j=3l+1, 3l+2, 3l+3$) are to be specified later. As in the proof of Theorem \ref{TH: 02}, we define the mapping $\Gamma_l$ on $Z_{3l+1}$, $Z_{3l+2}$ and $Z_{3l+3}$ by \begin{equation} \label{MAP-01b-g} \begin{split} \Gamma_l[u](t):= K_{0}(t)\ast_{{\mathcal L}}u_{0} &+ K_{1}(t)\ast_{{\mathcal L}}u_{1}\\ &+\int_{0}^{t}K_{1}(t-\tau)\ast_{{\mathcal L}}F_l(u,\{\partial^j_t u\}_{j=1}^l, \{{\mathcal L}^{j/2} u\}_{j=1}^{l})(\tau)d\tau. \end{split} \end{equation} Then we show that $\Gamma_l$ is a contraction mapping on $Z_{3l+1}$, $Z_{3l+2}$ and $Z_{3l+3}$.
By \eqref{PR: f-02m} we have \begin{equation} \label{PR: f-02bg} \begin{split} \|F_l(U)-F_l(V)\|_{H_{{\mathcal L}}^{l}}\lesssim (\|U\|_{H_{{\mathcal L}}^{1}}^{p-1}+ \|V\|_{H_{{\mathcal L}}^{1}}^{p-1})\|U-V\|_{H_{{\mathcal L}}^{1}}. \end{split} \end{equation} Take $u$ and $v$ such that $\|u\|_{Z_{j}}\leq L_{j}$ and $\|v\|_{Z_{j}}\leq L_{j}$ for $j=3l+1, 3l+2, 3l+3$. Then for $U=(u,\{\partial^j_t u\}_{j=1}^l, \{{\mathcal L}^{j/2} u\}_{j=1}^{l})$, from \eqref{PR: f-02bg} we have \begin{equation} \label{EQ: Gagliardo-Nirenberg-02-ccg} \|(F(U)-F(V))(t, \cdot)\|_{H^{l}_{{\mathcal L}}} \leq C (1+t)^{p/2} \e[-\frac{b}{2} p t] L_{3l+1}^{p-1} \|u-v\|_{Z_{3l+1}}, \end{equation} and \begin{equation} \label{EQ: Gagliardo-Nirenberg-02a-ccg} \|(F(U)-F(V))(t, \cdot)\|_{H^{l}_{{\mathcal L}}} \leq C (1+t)^{3p/2} \e[-\frac{b}{2} p t] L_{3l+2}^{p-1} \|u-v\|_{Z_{3l+2}}, \end{equation} and \begin{equation} \label{EQ: Gagliardo-Nirenberg-02b-ccg} \|(F(U)-F(V))(t, \cdot)\|_{H^{l}_{{\mathcal L}}} \leq C (1+t)^{p/2} \e[-(\frac{b}{2}-\sqrt{\frac{b^{2}}{4}-\lambda_{0}-m}) p t] L_{3l+3}^{p-1} \|u-v\|_{Z_{3l+3}}, \end{equation} respectively. Since $F(0)=0$, substituting $V=0$ in \eqref{EQ: Gagliardo-Nirenberg-02-ccg}--\eqref{EQ: Gagliardo-Nirenberg-02b-ccg}, we get \begin{equation} \label{EQ: Gagliardo-Nirenberg-03-ccg} \|F(U)(t, \cdot)\|_{H^{l}_{{\mathcal L}}} \leq C (1+t)^{p/2} \e[-\frac{b}{2} p t] L_{3l+1}^{p}, \end{equation} and \begin{equation} \label{EQ: Gagliardo-Nirenberg-03a-ccg} \|F(U)(t, \cdot)\|_{H^{l}_{{\mathcal L}}} \leq C (1+t)^{3p/2} \e[-\frac{b}{2} p t] L_{3l+2}^{p}, \end{equation} and \begin{equation} \label{EQ: Gagliardo-Nirenberg-03b-ccg} \|F(U)(t, \cdot)\|_{H^{l}_{{\mathcal L}}} \leq C (1+t)^{p/2} \e[-(\frac{b}{2}-\sqrt{\frac{b^{2}}{4}-\lambda_{0}-m}) p t] L_{3l+3}^{p}, \end{equation} respectively.
As in the proof of Theorem \ref{TH: 02}, in view of Proposition \ref{LEM: Est-01}, for the integral operator \begin{equation} \label{OP: Int-NoL-01-ccg} \begin{split} J_l[u](t, x):=\int_{0}^{t}K_{1}(t-\tau)\ast_{{\mathcal L}}F_l(u,\{\partial^j_t u\}_{j=1}^l, \{{\mathcal L}^{j/2} u\}_{j=1}^{l})(\tau, x)d\tau, \end{split} \end{equation} for $0<b< 2\sqrt{\lambda_{0}+m}$ we obtain \begin{equation} \label{OP: Int-NoL-03-ccg} \begin{split} \|\partial^{\alpha}_{t}&{\mathcal L}^{\beta} J_l[u](t, \cdot)\|_{{\mathcal H}}^{2} \\ &\leq t \int_{0}^{t}\| \partial^{\alpha}_{t}{\mathcal L}^{\beta} K_{1}(t-\tau)\ast_{{\mathcal L}}F_l(u,\{\partial^j_t u\}_{j=1}^l, \{{\mathcal L}^{j/2} u\}_{j=1}^{l})(\tau, \cdot) \|_{{\mathcal H}}^{2} d \tau \\ &\leq C t \e[- b t] \int_{0}^{t} \e[ b \tau] \|F_l(u,\{\partial^j_t u\}_{j=1}^l, \{{\mathcal L}^{j/2} u\}_{j=1}^{l})(\tau, \cdot) \|_{H^{\alpha-1+2\beta}_{{\mathcal L}}}^{2} d \tau. \end{split} \end{equation} Similarly, for $b=2\sqrt{\lambda_{0}+m}$ we have \begin{equation} \label{OP: Int-NoL-03a-ccg} \begin{split} \|\partial^{\alpha}_{t}&{\mathcal L}^{\beta} J_l[u](t, \cdot)\|_{2}^{2} \\ &\leq C t (1+t)^{2} \e[- b t] \int_{0}^{t} \e[ b \tau] \|F_l(u,\{\partial^j_t u\}_{j=1}^l, \{{\mathcal L}^{j/2} u\}_{j=1}^{l})(\tau, \cdot) \|_{H^{\alpha-1+2\beta}_{{\mathcal L}}}^{2} d \tau. \end{split} \end{equation} Also, for $2\sqrt{\lambda_{0}+m}<b$ we get \begin{equation} \label{OP: Int-NoL-03b-ccg} \begin{split} \|\partial^{\alpha}_{t}&{\mathcal L}^{\beta} J_l[u](t, \cdot)\|_{2}^{2} \leq C t \e[- 2(\frac{b}{2}-\sqrt{\frac{b^{2}}{4}-\lambda_{0}-m}) t] \\ &\times\int_{0}^{t} \e[ 2(\frac{b}{2}-\sqrt{\frac{b^{2}}{4}-\lambda_{0}-m}) \tau] \|F_l(u,\{\partial^j_t u\}_{j=1}^l, \{{\mathcal L}^{j/2} u\}_{j=1}^{l})(\tau, \cdot) \|_{H^{\alpha-1+2\beta}_{{\mathcal L}}}^{2} d \tau \end{split} \end{equation} for $(\alpha, \beta)\in \mathbb N_0\times\frac12\mathbb N_0$ with $\alpha+2\beta\leq l+1$.
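The factor $t$ in front of the time integrals above comes from Minkowski's integral inequality combined with the Cauchy--Schwarz inequality; schematically, with $g$ standing for the integrand,

```latex
\left\| \int_{0}^{t} g(\tau)\, d\tau \right\|_{{\mathcal H}}^{2}
\leq \left( \int_{0}^{t} \| g(\tau) \|_{{\mathcal H}}\, d\tau \right)^{2}
\leq \left( \int_{0}^{t} 1^{2}\, d\tau \right) \int_{0}^{t} \| g(\tau) \|_{{\mathcal H}}^{2}\, d\tau
= t \int_{0}^{t} \| g(\tau) \|_{{\mathcal H}}^{2}\, d\tau,
```

applied with $g(\tau) = \partial^{\alpha}_{t}{\mathcal L}^{\beta} K_{1}(t-\tau)\ast_{{\mathcal L}}F_l(\cdots)(\tau)$, after which the kernel estimates of Proposition \ref{LEM: Est-01} are used.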
Now, repeating the proof of Theorem \ref{TH: 01} step by step and taking into account the assumption \eqref{PR: f-02m}, we can complete the proof of Theorem \ref{TH: 0g}. \end{proof}
\section{Introduction} Higher-order derivative interactions naturally appear in effective field theories. In particular, in systems with gravity, we need to take into account such terms since various higher-order corrections can be relevant to the dynamics. However, higher-derivative interactions often lead to the so-called Ostrogradski instability \cite{Ostrogradski,Woodard:2006nt}: higher-derivative interactions give additional degrees of freedom which make the Hamiltonian unbounded from below, and hence the system exhibits an instability. If such a ghost mode appears, one should regard the system as an effective theory which is valid only below the energy scale of the mass of the ghost mode, otherwise the system loses unitarity. For a class of ghost-free higher-derivative interactions, one does not encounter such an instability problem. In the case of a system with a single scalar and a tensor, the Horndeski class~\cite{Horndeski:1974wa,Kobayashi:2011nu} of interactions is free from ghosts. In this class of interactions, the equations of motion (E.O.M.) are at most second-order differential equations, and no additional degree of freedom shows up. In general, one may ask the following question: among many possible higher-order derivative terms, what kind of structure gives us ghost-free interactions? For example, in the so-called Galileon models~\cite{Nicolis:2008in}, Galileon scalar fields can be understood as the Goldstone mode of translation symmetry in extra dimensions, and the action is made out of ghost-free derivative terms. Therefore, one can say that the hidden translation symmetry controls the higher-derivative interactions so that there appear no new degrees of freedom. The absence of ghosts in the supersymmetric Galileon model~\cite{Farakos:2013zya} can also be achieved by a spontaneously broken hidden SUSY~\cite{Roest:2017uga}. Higher-derivative interactions are also studied in gravity theories.
Despite the existence of fourth-order derivative interactions, the so-called Starobinsky model~\cite{Starobinsky:1980te}, which has a term quadratic in the Ricci scalar, does not have any ghost, just like the Horndeski class. This is because such a system is equivalent to a scalar-tensor system without higher derivatives. As a cosmological application, the Starobinsky model predicts a spectral tilt of the scalar curvature perturbation compatible with the latest CMB observation~\cite{Ade:2015lrj}. One can extend this model to a system with an arbitrary function of the Ricci scalar, called the $f(R)$-gravity model~\cite{Buchdahl:1983zz} (see also Refs.~\cite{DeFelice:2010aj,Nojiri:2017ncd} for reviews), which is also dual to a scalar-tensor system, and therefore free from the ghost instability. Higher-derivative interactions were also studied in supersymmetric (SUSY) theories, both for global SUSY and supergravity (SUGRA). In SUSY cases, there is another problem called the auxiliary field problem: space-time derivatives may in general act on SUSY auxiliary fields ($F$ and $D$ for chiral and vector multiplets, respectively) in the off-shell superfield formulation. Then, they become dynamical and so one cannot eliminate them by their E.O.M.~\cite{Gates:1995fx-0,Gates:1995fx-1}. The auxiliary field problem and the higher-derivative ghosts usually come up together~\cite{Antoniadis:2007xc,Dudas:2015vka}. In four-dimensional (4D) ${\cal N}=1$ SUSY theories, a classification of higher-derivative terms free from ghosts and the auxiliary field problem was given for chiral superfields \cite{Khoury:2010gb,Khoury:2011da,Koehn:2012ar,Koehn:2012te} as well as for vector superfields \cite{Fujimori:2017kyi}.
Such higher-derivative interactions of chiral superfields were applied to low-energy effective theories~\cite{Buchbinder:1994iw-0,Buchbinder:1994iw-1,Buchbinder:1994iw-2,Buchbinder:1994iw-3} (see also \cite{Nitta:2014fca}), coupling to SUGRA \cite{Koehn:2012ar,Farakos:2012qu}, Galileons~\cite{Khoury:2011da}, ghost condensation \cite{Koehn:2012te}, a Dirac-Born-Infeld (DBI) inflation~\cite{Sasaki:2012ka}, flattening of the inflaton potential~\cite{Aoki:2014pna-0,Aoki:2014pna-1}, a (baby) Skyrme model \cite{Adam:2013awa-0,Adam:2013awa-1,Nitta:2014pwa-0,Nitta:2014pwa-1,Bolognesi:2014ova-0,Bolognesi:2014ova-1, Gudnason:2015ryh-0,Gudnason:2015ryh-1,Queiruga:2015xka}, other BPS solitons \cite{Nitta:2014pwa-0,Nitta:2014pwa-1,Queiruga:2017blc,Eto:2012qda}, and modulated vacua \cite{Nitta:2017mgk,Nitta:2017yuf-1}, while higher-derivative interactions of vector superfields were applied to the DBI action \cite{Cecotti:1986gb,Bagger:1996wp,Rocek:1997hi}, SUGRA coupling \cite{Cecotti:1986gb,Kuzenko:2002vk-0,Kuzenko:2002vk-1,Abe:2015nxa}, the SUSY Euler-Heisenberg action \cite{Farakos:2012qu,Cecotti:1986jy,Farakos:2013zsa,Dudas:2015vka}, and non-linear self-dual actions \cite{Kuzenko:2002vk-0,Kuzenko:2002vk-1,Kuzenko:2000tg-0,Kuzenko:2000tg-1,Kuzenko:2000tg-2}. On the other hand, higher-derivative interactions of gravity multiplets were studied in 4D ${\cal N}=1$ SUGRA. In Ref.~\cite{Cecotti:1987sa}, Cecotti constructed the higher-order terms of the Ricci scalar in the old minimal supergravity formulation and showed that at least one ghost superfield appears if we have $R^n$ $(n\geq3)$ terms in the system. It is possible to avoid the ghost by some modifications of the system. In \cite{Ferrara:2014fqa}, the so-called nilpotent constraint on the Ricci scalar multiplet, which removes a scalar field in the multiplet, is considered. Due to the absence of this scalar, the bosonic ghost is absent in the spectrum of the system.
This mechanism has been applied to various higher-curvature models in SUGRA~\cite{Farakos:2017mwd}. The nilpotent constraint ${\mathcal{R}}^2=0$, however, is an effective description of a broken-SUSY system. If the linearly realized SUSY is restored in a higher energy regime, the ghost mode would show up.\footnote{ The nilpotent condition on a chiral superfield $\Phi$ has two solutions. A nontrivial solution is $\phi=\frac{\psi\psi}{F^\phi}$ where $\phi$, $\psi$ and $F^\phi$ are scalar, Weyl spinor, and auxiliary scalar components of $\Phi$. Obviously, this solution is well-defined for $F^\phi\neq0$, that is, SUSY should be spontaneously broken. } As another approach, in~\cite{Diamandis:2017ems} the authors considered a deformation of the ghost kinetic term by introducing an additional K\"ahler potential term. It is shown that the resultant ghost-free system is equivalent to matter-coupled $f(R)$ SUGRA. Meanwhile, in our previous work~\cite{Fujimori:2016udq}, we proposed a simple method to remove a ghost mode in 4D ${\cal N}=1$ SUSY chiral multiplets \cite{Antoniadis:2007xc,Dudas:2015vka}, which we dubbed the ``ghostbuster mechanism.'' We gauge a U(1) symmetry by introducing a non-dynamical gauge superfield without a kinetic term into the higher-derivative system, assigning charges to the chiral superfields properly in order for the gauge field to absorb the ghost. Namely, due to the gauge degree of freedom, the ghost in the system is removed by the U(1) gauge fixing. In this class of models, a hidden local symmetry plays a key role in the ghostbuster mechanism. Actually, before this work, essentially the same technique was used for superconformal symmetry in the conformal SUGRA formalism: the conformal SUGRA has one ghost-like degree of freedom called a compensator. Such a degree of freedom is removed by the superconformal gauge fixing, whereas in the ghostbuster mechanism, the hidden local U(1) gauge fixing removes the ghost associated with higher derivatives.
Therefore, in SUGRA models, one may understand the higher-derivative ghost as a second compensator for the system with the superconformal symmetry $\times$ hidden local U(1) symmetry. In this paper, we apply the ghostbuster mechanism to remove the ghost in the $f(R)$ SUGRA system. Interestingly, the hidden U(1) symmetry required for the mechanism can be understood as the gauged R-symmetry, since the gravitational superfield should be gauged under the U(1) symmetry. The U(1) charge assignment is uniquely determined, and therefore, naively one cannot expect a ghost mode cancelation a priori. As we will show, a would-be ghost superfield has a gauge charge and can be nicely removed by the gauge fixing of the U(1) symmetry. As the price of this achievement, however, the resultant system generically has an unstable scalar potential in the pure SUGRA case. Such an unstable scalar potential can be cured by various modifications. As an example we propose a model with a matter chiral superfield. We will find that such a deformation leads to a healthy model of SUGRA without either ghosts or instabilities of the scalar potential. One will easily find how the ghost supermultiplet is eliminated from the dual matter-coupled SUGRA viewpoint. We also address the same question in the higher-curvature SUGRA system. We find that, after integrating out the auxiliary vector superfield for the mechanism, the scalar curvature terms including $R^n$ with $n\geq3$ disappear, and the resultant system has linear and quadratic terms in $R$. However, this $R+R^2$ SUGRA system has couplings completely different from those proposed in \cite{Cecotti:1987sa}. This observation means that, despite the disappearance of higher scalar curvatures in the final form, the higher-curvature deformation in the original action has physical consequences even after applying the ghostbuster mechanism. This paper is organized as follows.
In Sec.\,\ref{review}, we briefly review the higher-curvature SUGRA models and their dual description. In particular, one finds that once the SUSY version of the higher-order Ricci scalar term $R^n$ ($n\geq3$) is included in the old minimal SUGRA formulation, there appears at least one ghost chiral superfield. We apply the ghostbuster mechanism to the higher-curvature SUGRA in Sec.\,\ref{GB}. We will see that although the ghost superfield can be removed by the mechanism, the resultant system has a scalar potential with an instability in the direction of a scalar field. Then, in Sec.\,\ref{sec:unstablemodel}, we discuss a simple modification of the model by introducing an extra matter chiral superfield. We show an example which is stable and free from ghosts as well. Finally, we conclude in Sec.\,\ref{conclusion}. Throughout this paper, we will use the notation of~\cite{Freedman:2012zz}. \section{Higher-curvature terms in supergravity} \label{review} In this section, we review the construction of higher-order terms of the Ricci scalar in 4D ${\cal N}=1$ SUGRA~\cite{Cecotti:1987sa}.\footnote{Cosmological applications of the SUSY Starobinsky model are discussed e.g.\ in~\cite{Kallosh:2013xya,Farakos:2013cqa,Ketov:2013dfa}.} In this paper, we use the conformal SUGRA formalism, in which there are conformal symmetry and its SUSY counterparts in addition to super-Poincar\'e symmetry \cite{Kaku:1978nz,Kaku:1978ea,Townsend:1979ki,Kugo:1982cu}. In order to fix the extra gauge degrees of freedom, we need to introduce an unphysical degree of freedom called the conformal compensator, which should be in a superconformal multiplet. In this paper, we adopt a chiral superfield as a compensator superfield, which leads to the so-called old minimal SUGRA after superconformal gauge fixing. We show the components of supermultiplets, the density formulas, and identities in Appendix~\ref{Components}.
First, let us show the pure conformal SUGRA action, \begin{align} S = \left[ - \frac{3}{2} S_0 \bar{S}_0 \right]_D, \label{pure} \end{align} where $S_0$ is the chiral compensator with the charges $(w,n)=(1,1)$ in conformal SUGRA (see Appendix \ref{Components} for the definition of the charges), and $[\cdots]_D$ denotes the D-term density formula. Taking the pure SUGRA gauge, $S_0=\bar{S}_0=1$, $b_\mu=0$, we obtain an action whose bosonic part takes the form \begin{align} S=\int d^4x\sqrt{-g}\left(\frac{1}{2}R-3|F^{S_0}|^2+3A_aA^a\right), \end{align} where $R$ is the Ricci scalar, $F^{S_0}$ is the F-term of $S_0$, and $A_a$ is the gauge field of the chiral U(1)$_A$ symmetry, which is a part of the superconformal symmetry. The E.O.M. for the auxiliary fields $F^{S_0}$ and $A_a$ can be solved by setting $F^{S_0} = A_a = 0$, and then we find the pure SUGRA action. The action~(\ref{pure}) can also be written as \begin{align} S = \left[ \frac{3}{2} S_0^2 {\mathcal{R}} \right]_F, \end{align} where $[\cdots]_F$ is the F-term density formula. Here we have used the identity given in~\eqref{ID}. The chiral superfield ${\mathcal{R}}$ is the so-called scalar curvature superfield, defined by \begin{align} {\mathcal{R}} \equiv \frac{\Sigma(\bar{S}_0)}{S_0}, \end{align} where $\Sigma$ is the chiral projection operator. Its components in the pure SUGRA gauge are given by \begin{align} {\mathcal{R}} = [ \Phi \,,\, P_L \chi \,,\, F ] = \left[ -\bar{F}^{S_0} \,,\, \cdots \,,\, |F^{S_0}|^2 + \frac{1}{6} R + A_a A^a - i \partial_a A^a + \cdots \right], \end{align} where the ellipses denote fermionic parts. From this expression, we find that the F-component of ${\mathcal{R}}$ contains the Ricci scalar. It is known that there is no ghost in the system involving $R^2$, which is realized as \begin{align} S = \left[ - \frac{3}{2} S_0 \bar{S}_0 + \frac{\alpha}{2} {\mathcal{R}} \bar{{\mathcal{R}}} \right]_D, \label{Rsquare} \end{align} where $\alpha$ is a real constant.
The bosonic part of this action after the superconformal gauge fixing is \begin{align} S|_B = \int d^4x \sqrt{-g} \Biggl[& \frac{R}{2} + \frac{\alpha}{36} R^2 - 3 |F^{S_0}|^2 - \alpha D_a F^{S_0} D^a \bar{F}^{S_0} + 3 A_a A^a + \alpha (\partial_a A^a)^2 \nonumber \\ & + \frac{\alpha R}{6} \left( |F^{S_0}|^2 + 2 A_a A^a \right) + \alpha \left( |F^{S_0}|^2 + A_a A^a \right)^2 \Biggr], \end{align} where $D_a$ represents the covariant derivative, $D_a S_0 = ( \partial_a - iA_a ) S_0 = - i A_a, D_a F^{S_0} = ( \partial_a + 2i A_a ) F^{S_0}$. The Lagrangian has the quadratic Ricci scalar term $\frac{\alpha}{36} R^2$ and also the non-minimal couplings between $F^{S_0}, A_a$ and $R$. In this system, there exist four real massive modes $\varphi_i$ with the common mass $m^2=3/\alpha$ in the fluctuations around the vacuum $g_{\mu\nu}=\eta_{\mu\nu} $ and $F^{S_0}=A_a=0$: \begin{eqnarray} g_{\mu\nu}=\eta_{\mu\nu}+\left(\eta_{\mu\nu} -\frac{\partial_\mu\partial_\nu}{\Box}\right) \varphi_1, \quad A_\mu =\partial_\mu \varphi_2, \quad F^{S_0}=\varphi_3+i \varphi_4. \end{eqnarray} We stress that, as is often the case with SUSY higher derivative models, the auxiliary fields have their kinetic terms and hence they are dynamical degrees of freedom in the presence of the higher-derivative term. Next, let us consider a SUGRA system with $R^n$, $n\geq3$ along the line of Refs.~\cite{Cecotti:1987sa,Ozkan:2014cua,Ferrara:2014fqa}. As we discussed in the previous section, ${\mathcal{R}}$ superfield has the Ricci scalar in its F-component. Using the chiral projection operator $\Sigma$, one can obtain the superfield $\Sigma(\bar{{\mathcal{R}}})$ which has $R$ in the lowest component: \begin{align} \Sigma(\bar{{\mathcal{R}}}) =& \left[ - \frac{1}{6} R - |F^{S_0}|^2 - A_a A^a - i\partial_a A^a + \cdots \,,\, \cdots \,,\, \right. \nonumber \\ & \hspace{1cm} \left. 
\frac{1}{6} R F^{S_0} + \left( \partial_a^2 + i \partial_a A^a - A_a A^a \right) F^{S_0} + \cdots \right], \end{align} where we have shown only the relevant part. With this superfield $\Sigma(\bar{{\mathcal{R}}})$, one can construct an action involving arbitrary functions of $R$, i.e.~$f(R)$ gravity models in SUGRA. Here we consider the action of the form \begin{align} S=\left[-\frac{3}{2}S_0\bar{S}_0\Omega\left(\frac{{\mathcal{R}}}{S_0},\frac{\bar{{\mathcal{R}}}}{\bar{S}_0},\frac{\Sigma(\bar{{\mathcal{R}}})}{S_0^2}, \frac{\bar{\Sigma}({\mathcal{R}})}{\bar{S}_0^2}\right)\right]_D+\left[S_0^3{\cal F}\left(\frac{{\mathcal{R}}}{S_0},\frac{\Sigma(\bar{{\mathcal{R}}})}{S_0^2}\right)\right]_F,\label{fR1} \end{align} where $\Omega$ is an arbitrary real function and ${\cal F}$ is an arbitrary holomorphic function. If we choose $\Omega = 0, {\cal F}(S,X) = S ( 3 - \alpha X)/2$, then this action reduces to (\ref{Rsquare}) since \begin{eqnarray} \left[S_0^3{\cal F}\left(\frac{{\mathcal{R}}}{S_0},\frac{\Sigma(\bar{{\mathcal{R}}})}{S_0^2}\right)\right]_F= \left[\frac32 S_0^2 {\mathcal{R}}-\frac \alpha 2 {\mathcal{R}} \Sigma(\bar {\cal R})\right]_F=\left[-\frac32 S_0\bar S_0+\frac\alpha 2 {\mathcal{R}} \bar {\mathcal{R}} \right]_D. \end{eqnarray} The bosonic part of the action contains the following terms including higher-order terms of the Ricci scalar $R$: \begin{eqnarray} \int d^4x \sqrt{-g} \left\{ - \frac{R^2}{12} \Omega_{S\bar S}(S,\bar S,X,\bar X) + \frac{R}{6} {\cal F}_S(S, X) + {\rm h.c.} \right\}_{S = -\bar F^{S_0} ,\, X= -R/6} \, , \label{curvature} \end{eqnarray} where the subscripts on the functions denote differentiation with respect to the corresponding scalar fields.
Such SUSY higher-derivative terms have derivative interactions of auxiliary fields, and these interactions make the auxiliary fields dynamical, as \begin{eqnarray} \int d^4x \sqrt{-g}\left\{ \frac{1}{12} g^{\mu\nu} \partial_\mu R \, \partial_\nu R \, \Omega_{X\bar X} + \left( \partial_\mu^2 F^{S_0} {\cal F}_{X} + {\rm h.c.} \right) + \cdots \right\}_{S = -\bar F^{S_0} ,\, X=-R/6} \, . \end{eqnarray} In this system, in addition to the scalar degree of freedom from the derivative terms of the Ricci curvature, the higher-derivative terms of the ``dynamical'' auxiliary field $F^{S_0}$ give rise to multiple scalar degrees of freedom, some of which are ghost-like. If we choose $\Omega(S,\bar S,X,\bar X) = S \bar S \tilde \Omega(X,\bar X)$, ${\cal F}(S,X) = S \tilde {\cal F}(X)$, and set $F^{S_0}=0$ identically, as is done by imposing the nilpotent condition ${\mathcal{R}}^2=0$ in Ref.~\cite{Ferrara:2014fqa}, the above terms vanish and no ghost seems to appear. Without such a condition, however, the appearance of a ghost is unavoidable, as is clearly shown in the following. The present system is also equivalent to a standard SUGRA model coupled to matter superfields. As in the previous section, we use Lagrange multiplier superfields and rewrite the action~(\ref{fR1}) as \begin{align} S' = & \Bigg[ - \frac{3}{2} S_0 \bar{S}_0 \, \Omega(S,\bar{S},X,\bar{X}) \Bigg]_D + \Bigg[ S_0^3 \, {\cal F}(S,X) \Bigg]_F \nonumber\\ & \hspace{1cm} + \Bigg[ 3 S_0^3 \, T \left( \frac{{\mathcal{R}}}{S_0} - S \right) \Bigg]_F +\Bigg[ 3 S_0^3 \,Y \left( \frac{\Sigma( \bar{S}_0 \bar{S} )}{S_0^2} - X \right) \Bigg]_F, \end{align} where $T$ and $Y$ are Lagrange multiplier superfields with $(w,n)=(0,0)$. The E.O.M.s of $T$ and $Y$ give constraints which reproduce the original action~(\ref{fR1}). 
Instead, using the identity~(\ref{ID}), we can also obtain the dual action \begin{align} S' = \left[ - \frac{3}{2} S_0 \bar{S}_0 \left( T + \bar{T} + Y \bar{S} + \bar{Y} S + \Omega(S,\bar{S},X,\bar{X}) \right) \right]_D + \bigg[S_0^3 \left( {\cal F}(S,X) - 3TS - 3XY \right) \bigg]_F.\label{fR2} \end{align} This is a standard SUGRA system with the following K\"ahler and super-potentials, \begin{align} K &= -3 \log \left( T + \bar{T} + Y \bar{S} + \bar{Y} S + \Omega(S,\bar{S},X,\bar{X}) \right), \label{eq:GhostKahler}\\ W &= {\cal F}(S,X) - 3TS - 3XY. \end{align} Let us show the existence of a ghost mode. The K\"ahler metric of the $\{S,Y\}$ sector takes the form, \begin{align} K_{I\bar{J}}=\left( \begin{array}{cc} K_{S\bar{S}}&-\frac{1}{A}\\ -\frac{1}{A}&0 \end{array} \right), \end{align} where $A=T+\bar{T}+Y\bar{S}+\bar{Y}S+\Omega(S,\bar{S},X,\bar{X})$. This submatrix has negative determinant, so the K\"ahler metric has one negative eigenvalue corresponding to a ghost. Thus, the $f(R)$ SUGRA model has one ghost mode in general. Note that $X$ becomes an auxiliary superfield if $\Omega=\Omega(S,\bar{S})$ is independent of $X$. Even in such a case, the system has higher-curvature terms through the ${\cal F}(S,X)$ term in \eqref{curvature}. The reduced dual system is described by \begin{align} K &= -3 \log \left( T+\bar{T}+Y\bar{S}+\bar{Y}S+\Omega(S,\bar{S}) \right), \nonumber \\ W &= g(S,Y) - 3TS, \label{redfr} \end{align} where $g(S,Y) = [{\cal F} - X {\cal F}_X]_{X=X(S,Y)}$ and $X(S,Y)$ is a solution of ${\cal F}_X - 3Y = 0$.\footnote{ Here we assume that the equation ${\cal F}_X = 3Y$ can be solved for $X$ (e.g. ${\cal F} \propto S X^{n-1}$ with $n\ge 3$). Constant and linear terms in ${\cal F}$ merely rescale the $R$ and $R^2$ terms, respectively.} This reduction does not change the above discussion, and hence a ghost mode appears in this system as well. 
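The sign argument above can also be seen numerically; the following sketch (the sample values are arbitrary placeholders) confirms that the $2\times2$ block has determinant $-1/A^2<0$ and hence exactly one negative eigenvalue, whatever the value of $K_{S\bar S}$:

```python
import numpy as np

# Hypothetical sample values for K_{S Sbar} and A (any real K_SS, any A > 0):
for K_SS, A in [(0.7, 2.0), (-1.3, 0.5), (0.0, 3.1)]:
    g = np.array([[K_SS, -1.0/A],
                  [-1.0/A, 0.0]])
    assert abs(np.linalg.det(g) + 1.0/A**2) < 1e-12   # det = -1/A^2 < 0
    evals = np.linalg.eigvalsh(g)
    assert (evals < 0).sum() == 1   # exactly one ghost direction
```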
\section{Ghostbuster in $f(R)$ supergravity} \label{GB} In this section, we consider the elimination of the ghost superfield along the line of Ref.~\cite{Fujimori:2016udq}. To eliminate the ghost superfield, one needs to introduce a gauge redundancy, by which one of the degrees of freedom is removed. In the $f(R)$ SUGRA discussed above, all the superfields ${\mathcal{R}}$, $\Sigma(\bar{{\mathcal{R}}})$ are expressed in terms of $S_0$ with the SUSY derivative operators. Hence, once we introduce a vector superfield $V_R$ for a U(1) gauge symmetry and assign a charge to $S_0$ so that it transforms as \begin{eqnarray} S_0 \to e^{\Lambda} S_0, ~~~~~ V_R \to V_R - \Lambda - \bar\Lambda, \end{eqnarray} the transformation laws of ${\mathcal{R}}$ and $\Sigma(\bar{{\mathcal{R}}})$ are automatically determined as \begin{eqnarray} {\mathcal{R}}_g \equiv \frac{\Sigma(\bar S_0 e^{V_R})}{S_0} \to e^{-2\Lambda} {\mathcal{R}}_g, \quad \Sigma_g(\bar {\mathcal{R}}) \equiv \Sigma(\bar {\mathcal{R}}_g e^{-2V_R}) \to e^{2\Lambda} \Sigma_g(\bar {\mathcal{R}}), \end{eqnarray} where the chiral projection $\Sigma$ needs to be modified so that the operation is covariant under the gauge symmetry. In the rest of this section, we omit the suffix $g$ attached to ${\mathcal{R}}_g, \Sigma_g$. Interestingly, the U(1) gauge symmetry under which the compensator is charged becomes a gauged R-symmetry~\cite{Ferrara:1983dh}. We call it a U(1)$_R$ symmetry in the following discussion. Here, however, we do not introduce a kinetic term for $V_R$, and thus the vector superfield $V_R$ is an auxiliary superfield, which should be written as a composite field consisting of the curvature superfields ${\mathcal{R}}$ and $\Sigma(\bar{{\mathcal{R}}})$. \subsection{Ghostbuster in pure $f(R)$ supergravity model} Let us introduce a U(1)$_R$ gauge symmetry under which $S_0$ has charge $c_{S_0}=1$. 
Since the chiral superfield ${\mathcal{R}} = \Sigma(\bar{S}_0)/S_0$, the charge of ${\mathcal{R}}$ is determined as $c_{{\mathcal{R}}}=-2$. Analogously, we find that $c_{\Sigma(\bar{{\mathcal{R}}})}=2$. Then the gauged extension of the system~(\ref{fR1}) with $\Omega=\Omega(S,\bar S)$ is described by the action \begin{align} S = \left[ - \frac{3}{2} \, S_0 \, e^{V_R} \bar{S}_0 \, \Omega \left( \frac{{\mathcal{R}}}{S_0} \,, \frac{\bar{{\mathcal{R}}}}{\bar{S}_0} e^{-3V_R}\right) \right]_D + \left[ S_0^3 \, {\cal F} \left( \frac{{\mathcal{R}}}{S_0} \,, \frac{\Sigma(\bar{{\mathcal{R}}})}{S_0^2} \right) \right]_F,\label{originalgb} \end{align} where $\Omega$ should be gauge invariant and ${\cal F}$ should have total gauge charge $c_{\cal F}=-3$. Hence ${\cal F}$ should take the form \begin{align} {\cal F} \left( \frac{{\mathcal{R}}}{S_0} \,, \frac{\Sigma(\bar{{\mathcal{R}}})}{S_0^2} \right) \, =~ 3 \tilde{{\cal F}} \left( \frac{\Sigma(\bar{{\mathcal{R}}})}{S_0^2} \right) \frac{{\mathcal{R}}}{S_0}. \end{align} To discuss the ghost elimination, it is useful to consider the dual system as in the non-gauged case~(\ref{fR2}). The dual system of the gauged model is described by \begin{align} S' = & \Bigg[ -\frac{3}{2} \, S_0 \, e^{V_R} \bar{S}_0 \, \Omega(S \,, \bar{S}e^{-3V_R}) \Bigg]_D + \Bigg[ 3 S_0^3 \, \tilde{{\cal F}}(X )S \Bigg]_F \nonumber \\ & \hspace{5mm} + \Bigg[ 3 S_0^3 \, T \left( \frac{{\mathcal{R}}}{S_0} - S \right) \Bigg]_F + \Bigg[ 3 S_0^3 \,Y \left( \frac{\Sigma( \bar{S}_0 \bar{S} )}{S_0^2} - X \right) \Bigg]_F, \label{eq:PureGravityGeneral} \end{align} where the gauge charges of $T,S,X,Y$ are $(c_T,c_S,c_X,c_Y)=(0 \,, -3 \,, 0 \,, -3)$. 
Similarly to the non-gauged case, we can rewrite this action as \begin{align} S' \ =& ~ \Bigg[ - \frac{3}{2} \, S_0 \, e^{V_R} \bar{S}_0 \left \{ T + \bar{T} + Y \bar{S}e^{-3V_R} +\bar{Y} e^{-3V_R} S + \Omega(S, \bar{S} e^{-3V_R}) \right\} \Bigg]_D \nonumber \\ +& ~ \Bigg[ 3 S_0^3 \left( \tilde{{\cal F}}(X) S - TS - XY \right) \Bigg]_F. \end{align} For simplicity, in the following discussion, we choose the function $\Omega=\gamma-h S\bar{S} e^{-3V_R}$, where $\gamma$ is a real constant. Note that one can perform the following procedure with a more general form of $\Omega$ in a similar way. Then we obtain \begin{align} S \, =& ~ \Bigg[ -\frac{3}{2} S_0 \, e^{V_R} \bar{S}_0 \left(\gamma+T+\bar{T} + \left(Y\bar{S} + \bar{Y}S - h S \bar S \right) e^{-3V_R} \right) \Bigg]_D \nonumber \\ -& ~ \Bigg[ 3 S_0^3 \left( \tilde{{\cal F}}(X) S + TS + XY \right) \Bigg]_F. \end{align} We stress that the U(1)$_R$ charges of $(S,Y)$ are automatically determined to be non-zero. This is a nontrivial and important feature of the $f(R)$ SUGRA model, since the ghostbuster mechanism would not work if $S$ and $Y$, either of which can correspond to the ghost mode, did not carry U(1)$_R$ charges. Variation with respect to $V_R$ gives the following E.O.M: \begin{align} (\gamma + T + \bar{T} ) e^{V_R} - 2 \left( Y \bar S + S \bar Y - h S\bar S \right) e^{-2V_R} = 0. \end{align} This equation can be solved algebraically for $V_R$: \begin{align} e^{-3V_R} = \frac{\gamma + T + \bar{T}}{2 \left( Y \bar S + S \bar Y - h S \bar S \right)}. \end{align} Substituting this solution into the action, one finds \begin{align} S \, =& ~ \left[ - \frac{3}{2} S_0 \bar{S}_0 \, ( \gamma + T + \bar{T} )^{\frac{2}{3}} \left( Y \bar S + S \bar Y - h S \bar S \right)^{\frac{1}{3}}\right]_D \nonumber \\ -& ~ \left[ \frac{2}{\sqrt{3}} S_0^3 \left(\tilde{{\cal F}}(X) S + T S + X Y \right) \right]_F,\label{dualmatter} \end{align} where we have rescaled $S_0$ as $S_0 \to 2^{1/3}/\sqrt{3} \, S_0$. 
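The algebraic solution for $V_R$ can be checked directly. The following sketch (sympy; the abbreviations $A=\gamma+T+\bar T$ and $B=Y\bar S+S\bar Y-hS\bar S$ are ours, introduced purely for the check) substitutes the claimed solution back into the E.O.M and verifies that the residual vanishes at sample points:

```python
import sympy as sp

# A = gamma + T + Tbar,  B = Y*Sbar + S*Ybar - h*S*Sbar (both taken positive)
A, B, V = sp.symbols('A B V', positive=True)
eom = A*sp.exp(V) - 2*B*sp.exp(-2*V)

# claimed solution: exp(-3 V_R) = A/(2B), i.e. V_R = -(1/3) log(A/(2B))
sol = -sp.Rational(1, 3)*sp.log(A/(2*B))

for a, b in [(sp.Rational(23, 10), sp.Rational(7, 10)), (5, sp.Rational(1, 3))]:
    assert abs(float(eom.subs(V, sol).subs({A: a, B: b}))) < 1e-12
```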
Thus, starting from the modified higher-curvature action~\eqref{originalgb}, we find the dual matter-coupled system~\eqref{dualmatter}. After partial gauge fixing of the superconformal symmetry\footnote{More specifically, we fix dilatation, chiral U(1) symmetry, S-SUSY, and conformal boost, so that Poincar\'e SUSY remains in the resultant system. The detailed procedure of superconformal gauge fixing is discussed e.g. in \cite{Freedman:2012zz}.}, this system becomes Poincar\'e SUGRA with the following K\"ahler and superpotentials, \begin{align} K =& -2\log \left( \gamma+T+\bar{T} \right) - \log \left( Y \bar S + S \bar Y - h S \bar S \right), \\ W =& -\frac{2}{\sqrt{3}} \left( \tilde{{\cal F}}(X) S + T S + X Y \right). \end{align} This system is invariant under the U(1)$_R$ gauge transformation $\{ S,Y \} \to \{ e^{\Lambda}S , e^{\Lambda} Y \}$. Therefore, if the lowest component of $Y$ takes a non-zero value, we can fix the U(1)$_R$ gauge by setting $Y=1$. Then, after a redefinition $S\to S+\frac1h$, we obtain \begin{align} K = & -2 \log (\gamma + T + \bar{T}) - \log(1 - h^2 S \bar S), \\ W = & - \frac{2}{\sqrt{3}} \left( \tilde{{\cal F}}(X) (S + 1/h) + T (S+1/h) + X \right). \end{align} If $S \not= 0$, we can also fix the gauge by setting $S=1$. Then we find \begin{align} K =& -2\log (\gamma + T + \bar{T}) - \log(Y + \bar Y - h), \label{eq:SfixK}\\ W =& -\frac{2}{\sqrt{3}} \left(\tilde{{\cal F}}(X) + T + X Y \right). \end{align} Except for the two points $S=0\ (Y=\infty)$ and $Y=0\ (S=\infty)$, the above two descriptions are equivalent and related by a coordinate transformation between $S$ and $Y$. In both cases, all the eigenvalues of the K\"ahler metric are obviously positive. Therefore, we have shown that the ghost mode is eliminated by our ghostbuster mechanism. Note that $X$ is an auxiliary field in this setup, and we need to solve the E.O.M for $X$ to obtain the physical superpotential. 
We stress that the elimination of the ghost mode by the ghostbuster mechanism in this higher-curvature system is nontrivial, since we do not have any choice in the charge assignment of the superfields. As we have seen above, the would-be ghost modes have charges under U(1)$_R$, which enables us to remove the ghost mode by the gauge degree of freedom. \subsection{Instability of scalar potential} In this subsection, we analyze the scalar potential of the ghost-free system derived above. The F-term scalar potential in Poincar\'e SUGRA is given by \begin{eqnarray} V = e^K \left[ K^{A\bar B} ( W_A + K_A W ) ( \bar W_{\bar B} + K_{\bar B} \bar W ) - 3|W|^2 \right]. \end{eqnarray} If we choose the gauge fixing condition $S=1$, ${\rm Im} \, T$ appears only in $W$ due to the shift symmetry of ${\rm Im} \, T$ in the K\"ahler potential, and hence the mass of ${\rm Im} \, T$ is given by \begin{eqnarray} m_{{\rm Im} \, T}^2 \propto e^{K} ( K^{A \bar B} K_A K_{\bar B } -3 ). \end{eqnarray} The K\"ahler potential in Eq.~(\ref{eq:SfixK}) satisfies the so-called no-scale relation \begin{eqnarray} K^{A \bar B}K_A K_{\bar B}=3. \end{eqnarray} Since $W \propto T + XY$, the potential has the following term linear in ${\rm Im} \, T$: \begin{eqnarray} {\rm Im}( K_{\bar B} K^{\bar B A} W_A) \, {\rm Im} \, T \label{ImT}. \end{eqnarray} To realize a stable vacuum at ${\rm Im} \, T = 0$, this quantity must vanish identically. By using the K\"ahler potential in Eq.~(\ref{eq:SfixK}), we find that the coefficient of the linear term is given by \begin{eqnarray} {\rm Im}( K_{\bar B} K^{\bar B A} W_A ) \ = \ \frac{4}{\sqrt{3}} ( Y + \bar Y - h ) \, {\rm Im} X. \end{eqnarray} Note that the non-dynamical field $X$ becomes a function of $Y$ after solving its E.O.M. Then ${\rm Im} \, T$ has only an off-diagonal mass term $\sim \langle Y + \bar Y - h \rangle ( X_Y Y + \overline{X}_Y \bar Y ){\rm Im} \, T$, where $X_Y\equiv \langle\partial_Y( {\rm Im}X)\rangle$. 
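The no-scale relation quoted above can be verified symbolically. The sketch below (sympy, treating barred variables as independent symbols in the Wirtinger sense, for the dynamical fields $T,Y$ of the K\"ahler potential~(\ref{eq:SfixK})) is illustrative only:

```python
import sympy as sp

# Formal conjugates Tb, Yb are independent symbols (Wirtinger calculus).
T, Tb, Y, Yb, gamma, h = sp.symbols('T Tb Y Yb gamma h')
K = -2*sp.log(gamma + T + Tb) - sp.log(Y + Yb - h)

holo, anti = [T, Y], [Tb, Yb]
g = sp.Matrix(2, 2, lambda i, j: sp.diff(K, holo[i], anti[j]))  # K_{A Bbar}
KA = sp.Matrix([sp.diff(K, x) for x in holo])
KB = sp.Matrix([sp.diff(K, x) for x in anti])

# K^{A Bbar} K_A K_Bbar = 2 (from T) + 1 (from Y) = 3
noscale = sp.simplify((KA.T * g.inv() * KB)[0, 0])
assert noscale == 3
```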
Unfortunately, this ``off-diagonal'' contribution in the mass matrix leads to a tachyonic mode.\footnote{ In general, $\langle Y + \bar Y - h \rangle$ should be nonzero since the K\"ahler potential has $-\log( Y + \bar Y - h )$ and diverges for $\langle Y+\bar Y - h \rangle=0$.} This instability cannot be cured by any higher-order terms since ${\rm Im} \, T$ appears only in the term~\eqref{ImT}. Therefore, ${\rm Im} X \not = 0$ makes ${\rm Im} \,T$ unstable, and even if there is a stationary point at ${\rm Im}X={\rm Im} \,T=0$, that point cannot be a local minimum but must be a saddle point. We conclude that although the instability caused by the ghost mode is absent thanks to the ghostbuster mechanism, the pure higher-curvature action has an unstable scalar potential, which does not have any stable SUSY minimum. In the next section, we consider an extension of our model to improve this point. \section{Stable ghostbuster model with extra matter}\label{sec:unstablemodel} \subsection{Preliminary} As we discussed in the previous section, the scalar potential of our minimal model has no stable SUSY minimum. One may improve this situation by various types of modifications. Here we take a relatively simple route: we introduce an additional matter field $Z$ so that the coupling between the gravitational sector and the additional sector stabilizes the potential.\footnote{Even in the $R^2$ model, the deformation of the scalar potential of $T$, corresponding to the scalaron superfield, requires an additional degree of freedom in the dual higher-curvature SUGRA action~\cite{Cecotti:2014ipa}.} Let us assume that $Z$ carries no U(1)$_R$ charge so that the superpotential $W$ contains a $TZ$ term in the $S=1$ gauge. Then it is possible to introduce $Z$ in the superpotential in such a way that the constraint for $S$ is modified as \begin{eqnarray} S = \frac{{\mathcal{R}}}{S_0} \quad \to \quad S Z = \frac{{\mathcal{R}}}{S_0}. 
\label{ModifiedS} \end{eqnarray} We can also change the definition of $X$ as \begin{eqnarray} X=\frac{\Sigma(\bar S_0 \bar S)}{S_0^2} \quad \to \quad X=\frac{\Sigma\left(\bar S_0 \bar S \, \bar k(\bar Z,Z) \right)}{S_0^2}, \end{eqnarray} with an arbitrary function $k(Z,\bar Z)$. Note that if we choose $k(Z,\bar Z) = Z$, then we obtain the same unstable model as in Sec.\,\ref{GB} with the redefinition $S \rightarrow S'=SZ$. Therefore, $k(Z,\bar{Z})$ should have a nonvanishing constant term at the minimum of $Z$, i.e. $k(\langle Z\rangle,\langle\bar Z\rangle) \equiv c \not =0$. Under this modification, the dual system is given by \begin{align} S' \, =& ~ \Bigg[ -\frac{3}{2} \, S_0 \, e^{V_R} \bar{S}_0 \, \Omega( S, \bar{S} e^{-3V_R} , Z, \bar Z) \Bigg]_D + \Bigg[ 3 \, S_0^3 \, T \left(\frac{{\mathcal{R}}}{S_0} - SZ \right) \Bigg]_F \nonumber \\ +& ~\Bigg[ S_0^3 S \tilde{\cal F}(X) \Bigg]_F + \Bigg[3 S_0^3 \,Y \left(\frac{\Sigma(\bar{S}_0 \bar{S} \, \bar k(\bar Z))}{S_0^2} - X \right) \Bigg]_F, \label{eq:MatterGravityGeneral} \end{align} which can be rewritten as \begin{align} S' =& ~ \Bigg[ - \frac{3}{2} \, S_0 \, e^{V_R} \bar{S}_0 \left\{ T + \bar{T} + Y\bar{S} e^{-3V_R} \, \bar k(\bar Z) + \bar{Y} e^{-3V_R} S \, k(Z) +\Omega \right\} \Bigg]_D \nonumber \\ +& ~ \Bigg[ S_0^3 \left(\tilde {\cal F} (X)S-3TSZ-3XY\right) \Bigg]_F. \end{align} For simplicity, let us choose the function as \begin{eqnarray} \Omega = \gamma-g(Z,\bar Z)- h( Z,\bar Z) \, S\bar{S} \, e^{-3V_R}. \end{eqnarray} After solving the E.O.M for $V_R$, we find the following K\"ahler potential and superpotential \begin{align} K =& -2 \log \Big[ \gamma+T+\bar{T}-g(Z,\bar Z) \Big] - \log \Big[ Y \bar k(\bar Z) +\bar Y k(Z) - h(Z,\bar Z) \Big], \\ W =& \, \frac{2}{\sqrt{3}} \left[ \frac{1}{3} \tilde {\cal F}(X) - \left(T Z +X Y \right) \right], \end{align} in the $S=1$ gauge. 
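Anticipating the simple choice $k(Z)=c+Z$, $g=0$, $h=-\beta+bZ\bar Z$ analyzed in the next subsection, the invariants of the resulting K\"ahler metric can be checked symbolically. The sketch below (sympy, with barred variables treated as independent symbols) is an illustrative verification, not part of the construction:

```python
import sympy as sp

T, Tb, Y, Yb, Z, Zb = sp.symbols('T Tb Y Yb Z Zb')
gamma, beta, b, c, cb = sp.symbols('gamma beta b c cb')

w1 = gamma + T + Tb
w2 = beta + Yb*(c + Z) + Y*(cb + Zb) - b*Z*Zb
K = -2*sp.log(w1) - sp.log(w2)

holo, anti = [T, Y, Z], [Tb, Yb, Zb]
g = sp.Matrix(3, 3, lambda i, j: sp.diff(K, holo[i], anti[j]))

# T decouples, with eigenvalue 2/w1^2:
assert sp.simplify(g[0, 0] - 2/w1**2) == 0
assert g[0, 1] == 0 and g[0, 2] == 0

# {Y, Z} block: trace and determinant match the quoted invariants
blk = g[1:, 1:]
dY, dZ = sp.diff(w2, Y), sp.diff(w2, Z)
dYb, dZb = sp.diff(w2, Yb), sp.diff(w2, Zb)
assert sp.simplify(blk.trace()*w2**2 - (dY*dYb + dZ*dZb + b*w2)) == 0
assert sp.simplify(blk.det()*w2**3 - (b*c*cb - beta)) == 0   # b|c|^2 - beta
```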
\subsection{Example of matter coupled $f(R)$ supergravity}\label{exfr} Let us discuss a simple example by setting the functions as \begin{eqnarray} k(Z)=c +Z, \quad \quad \quad \Omega =\gamma + (\beta - b Z\bar Z) \, S \bar S \, e^{-3V_R}. \label{SimpleModel} \end{eqnarray} The corresponding K\"ahler potential is given by \begin{align} K&=-2\log \omega_1 \label{SMK} -\log \omega_2, \\ \omega_1&\equiv \gamma+T+\bar{T},\\ \omega_2&\equiv \beta + \left( \bar Y(c+Z)+{\rm c.c.} \right) - b Z \bar Z, \end{align} where both $\omega_1$ and $\omega_2$ are required to be positive so that there exists a solution of the E.O.M. for $V_R$ and the condition $e^K > 0$ holds. The eigenvalues $\{ \lambda_i \, | \, i=1,2,3 \}$ of the K\"ahler metric $K_{A\bar B}$ satisfy \begin{eqnarray} \lambda_1 = \frac{2}{\omega_1^2}, \quad \lambda_2 + \lambda_3 = \frac{|\partial_Y \omega_2|^2 + |\partial_Z\omega_2|^2 + b \, \omega_2}{\omega_2^2}, \quad \lambda_2\lambda_3 = \frac{b |c|^2-\beta}{\omega_2^3}. \end{eqnarray} Furthermore, by choosing the function $\tilde {\cal F}$ so that $\tilde {\cal F}(0)=0 ,\, \tilde {\cal F}'(0) =0$, we find a SUSY vacuum satisfying $W_A=W=0$ at $X=Y=T=S=0$, which is guaranteed to be stable. Therefore, there exists a SUSY vacuum with a positive-definite metric if and only if \begin{eqnarray} \gamma = \omega_1 |_{\rm vac} > 0, \quad \beta = \omega_2 |_{\rm vac} > 0, \quad b > \frac{\beta}{|c|^2}, \quad c \not= 0. \end{eqnarray} When these conditions are satisfied, there exists no ghost anywhere in the region ${\cal M} = \{T,Y,Z \,|\, \omega_1>0 \,,\, \omega_2>0 \}$, and the boundary $\partial {\cal M}$ is infinitely far away from the SUSY vacuum in geodesic distance. \section{Ghostbuster mechanism from higher-curvature SUGRA viewpoint} \label{GBC} In this section, we discuss how the ghostbuster mechanism works in the higher-curvature frame. 
As we have seen in the previous two sections, the ghost supermultiplet is eliminated in both pure and matter-coupled higher-curvature systems. Let us consider the original action for $f(R)$ gravity before taking the dual transformation. For concreteness of the discussion, we take the simplest model with an additional matter superfield in Eq.~(\ref{SimpleModel}). The same conclusion follows even in the absence of additional matter. The higher-curvature action can be obtained by solving the E.O.M.s for $T$ and $Y$ and imposing the constraints for $S$ and $X$. Here we introduce $S_1 \equiv c S_0 S+{\mathcal{R}}_g $ as an extra matter superfield and solve the modified constraint (\ref{ModifiedS}) for $Z$. After introducing a quadratic term in $X$, the original action takes the form \begin{align} S' =& ~ \bigg[-\frac{3}{2}\gamma |S_0|^2e^{V_R} -\frac{3\beta}{2|c|^2} |S_1-{\mathcal{R}}_g |^2 e^{-2V_R}+\frac32 a |S_0 X|^2e^{V_R} +\frac32 b |{\mathcal{R}}_g|^2 e^{-2V_R} \bigg]_D\nonumber \\ +& ~ \bigg[ S_0^2 (S_1-{\mathcal{R}}_g) X {\cal G}(X) \bigg]_F \end{align} with $\tilde {\cal F}(X) \equiv c X {\cal G}(X)$ and \begin{eqnarray} {\mathcal{R}}_g = \frac{\Sigma(\bar S_0 e^{V_R})}{S_0}, \quad \quad X= \frac{\Sigma(\bar S_1 e^{-2V_R} )}{S_0^2}, \end{eqnarray} where $a$ and $b$ are real (positive) parameters. Note that $X$ now does not have the Ricci scalar in its lowest component but a higher-derivative superfield made out of $S_1$. This means that the higher-derivative term of ${\mathcal{R}}_g$ is now replaced by that of $S_1$, and hence the higher-curvature term does not show up. By expanding the action explicitly, one can check that this action has Ricci scalar terms only up to quadratic order. We note, however, that this does not lead to the conclusion that the ghost is removed by the additional matter: since there still exist higher-derivative terms of $S_1$, the ghost mode can arise from such terms. 
One may also confirm that the absence of the higher-curvature terms $R^n \ (n\geq3)$ is not an artifact of field redefinition. We can show that in this specific matter-coupled model, the higher-curvature terms exist only in the off-shell action, before substituting the solution of the E.O.M for the auxiliary field in $V_R$. We stress that this conclusion does not mean that the higher-curvature modification is removed by the ghostbuster mechanism. As we claimed above, the resultant system has scalar curvature terms only up to the quadratic order, as the simplest Cecotti model does~\cite{Cecotti:1987sa}. However, the coupling of the resultant system is completely different from the Cecotti model. In our dual matter-coupled system in Sec.~\ref{exfr}, the K\"ahler potential~\eqref{SMK} takes the form \begin{equation} K\sim-2\log(T+\bar{T})-\log(Y+\bar Y+\cdots), \end{equation} whereas, in the Cecotti model, it can be written as \begin{equation} K=-3\log(T_{c}+\bar{T}_c+\cdots), \end{equation} where $T,Y$ and $T_c$ are chiral superfields. The difference of the K\"ahler potentials leads to a different moduli space geometry. Interestingly, $T$, $Y$ and $T_c$ all have the hyperbolic geometry structure, which is applicable to the so-called inflationary $\alpha$-attractors~\cite{Kallosh:2013yoa,Carrasco:2015uma}. In the $\alpha$-attractor inflation, we take the moduli space $K=-3\alpha\log (\Phi+\bar{\Phi})$ for an inflaton superfield $\Phi$, and the value of the parameter $\alpha$ is related to the tensor-to-scalar ratio $r$ as $r=\frac{12\alpha}{N^2}$, where $N$ is the number of e-foldings at horizon exit. In our model, we have $\alpha=\frac13$ and $\frac23$, whereas the Cecotti model has $\alpha=1$. If we apply our model to inflation, we would find a value of the tensor-to-scalar ratio $r$ different from that of the Cecotti model. 
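As a rough numerical illustration of this difference (the e-folding number $N$ below is a hypothetical choice, not taken from the text):

```python
# alpha-attractor prediction: r = 12*alpha / N^2
def tensor_to_scalar(alpha, N):
    return 12.0 * alpha / N**2

N = 55  # hypothetical number of e-foldings at horizon exit
r_13 = tensor_to_scalar(1.0/3.0, N)   # our model, alpha = 1/3
r_23 = tensor_to_scalar(2.0/3.0, N)   # our model, alpha = 2/3
r_c  = tensor_to_scalar(1.0, N)       # Cecotti model, alpha = 1

# smaller alpha gives a smaller tensor-to-scalar ratio
assert r_13 < r_23 < r_c
```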
Therefore, the higher-curvature modification has physical consequences even though the higher-order scalar curvature terms seem to disappear after the ghostbuster mechanism. Since the construction of an inflation model is beyond the scope of this paper, we leave it as future work. \section{Conclusion} \label{conclusion} We have applied the ghostbuster method to a higher-curvature system of SUGRA. It has been known that once we introduce a higher scalar curvature multiplet $\Sigma(\bar{\mathcal R} )$, a ghost mode generically shows up in the system, as we reviewed in Sec.\,\ref{review}. The ghostbuster method requires a nontrivial U(1) gauge symmetry with a non-propagating gauge superfield. It turned out that the required U(1) symmetry should be the gauged R-symmetry in the case of the higher-curvature system, since the ghost arises from the gravitational superfield. Due to the uniqueness of the gauge charge assignment, it is nontrivial whether the ghostbuster method is applicable to remove the ghost. As we have shown in Sec.\,\ref{GB}, thanks to the nonzero U(1) charge of the ``would-be'' ghost mode, we can eliminate the ghost mode and obtain a ghost-free action. However, the resultant ghost-free system turned out to be unstable because of the scalar potential instability. Such an instability is easily cured by introducing matter fields, which would be necessary for realistic models. Additional matter superfields can stabilize the scalar potential if we choose proper couplings between the gravity and matter multiplets. We have also discussed how the ghostbuster mechanism can be seen in the higher-curvature system in Sec.~\ref{GBC}. We have found that the higher-order scalar curvature terms $R^n$ with $n\geq3$ are eliminated by the mechanism, and the resultant system has scalar curvature terms only up to the quadratic order. However, the higher-curvature modification is not completely eliminated by the mechanism. 
We find a moduli space geometry different from that of the known $R+R^2$ supergravity~\cite{Cecotti:1987sa}. Therefore, despite the absence of $f(R)$-type interactions in the final form, the SUSY higher-order curvature corrections give physical differences. In particular, the difference of the moduli space structure might be useful for constructing inflationary models. In this work, we did not discuss the elimination of ghosts originating from higher-derivative terms of matter superfields. It is a straightforward extension of our previous work~\cite{Fujimori:2016udq} for global SUSY to SUGRA, and is much easier than the higher-curvature model discussed in this paper, since the U(1) charge assignment is not unique for matter higher-derivative models. Since higher derivatives of matter fields in SUGRA require the compensator $S_0$, it would be interesting to assign the U(1) charge to the compensator as well, i.e. we can use the U(1) R-symmetry for the ghostbuster mechanism as in the higher-curvature case, which is only possible in SUGRA. Let us mention the applicability of our mechanism to the other SUGRA formulations, where the auxiliary fields in the gravity multiplet are different. Our mechanism is not applicable to the so-called new minimal SUGRA formulation~\cite{Sohnius:1981tp}, since the compensator is a real linear superfield, which cannot have any U(1) charge. For the non-minimal SUGRA case, it would be possible to assign a nontrivial U(1) charge to the complex linear compensator. In addition, it is known that the $R^2$ model of non-minimal SUGRA has a ghost mode in the spectrum, so it is interesting to see if the ghost can be removed by our mechanism. \section*{Acknowledgement} This work is supported by the Ministry of Education, Culture, Sports, Science and Technology (MEXT)-Supported Program for the Strategic Research Foundation at Private Universities ``Topological Science'' (Grant No.~S1511006). 
The work of M.~N.~is also supported in part by a Grant-in-Aid for Scientific Research on Innovative Areas ``Topological Materials Science'' (KAKENHI Grant No.~15H05855) from the MEXT of Japan, and by the Japan Society for the Promotion of Science (JSPS) Grant-in-Aid for Scientific Research (KAKENHI Grant No.~16H03984). Y.~Y.~is supported by SITP and by the NSF Grant PHY-1720397.
\section{Introduction} Cosmic strings are one-dimensional topological defects formed by spontaneous symmetry breaking in the early universe. They were first proposed in the 1970s \cite{Kibble:1976sj} and have evoked ongoing cosmological interest (see Ref. \cite{Vilenkin:2000} for a review). The initially formed defects (of order one per horizon via the Kibble mechanism) quickly evolve into a scaling network of horizon-size long strings and loops of different sizes, with properties dictated largely by the string tension $\mu$. The hypothesis that a cosmic string network might actively source the density fluctuations for structure formation in our universe was extensively studied in the 1980s and 1990s and found to require tension $G\mu \simeq 10^{-6}$, where $G$ is the Newton constant (taking $c=1$). In that scenario the fraction of cosmic string energy content in the universe is roughly \begin{equation} \label{Ocs} \Omega_{string} \simeq \Gamma G \mu \end{equation} where, numerically, $\Gamma \simeq 50$. If cosmic strings were responsible for the fluctuations, their contribution to the energy content of the universe would be negligible. However, in models with actively generated fluctuations the power spectrum of the cosmic microwave background radiation (CMBR) has {\it no} acoustic peaks. The discovery of the acoustic peaks in the CMBR power spectrum in the late 1990s rules out cosmic strings as the primary source of fluctuations and strongly supports the inflationary universe scenario. Observational bounds on the cosmic string tension and energy content continue to improve. In the last year, pulsar timing limits on the stochastic background from strings improved from $G \mu < 10^{-9}$ \cite{Sousa:2016ggw} to $G \mu \lta 1.5 \times 10^{-11}$ \cite{Blanco-Pillado:2017oxo,Blanco-Pillado:2017rnf}. 
Very recently (after this paper was substantially finished) limits on cosmic strings for LIGO observing run O1 were reported \cite{Abbott:2017mem}: $G \mu < 10^{-10}$ (stochastic background) and $\lta \,3 \times 10^{-7}$ (bursts) for model choices closest to our own. We have added brief comments and will elaborate at a future time.\footnote{The quoted limits are for model $M=2$ with intercommutation probability $p=10^{-2}$ in Fig. 5 \cite{Abbott:2017mem}} A long standing goal of theoretical physics is finding a consistent framework for quantum gravity and nature's known force fields and matter content. String theory is the leading candidate. Because of its rich structure and dynamics, an explicit string theory realization of the standard model of strong and electroweak interactions remains elusive. As a result, the search for evidence of string theory in nature turns out to be very challenging. One promising avenue is hunting for the progeny of strings of string theory stretched to horizon sizes. The study of inflation in string theory (see Ref. \cite{Baumann:2014nda} for a review) has revealed routes to the formation of macroscopic strings at the conclusion of the inflationary epoch. Such strings behave very much like the original cosmic strings \cite{Jones:2002cv,Sarangi:2002yt,Jones:2003da,Copeland:2003bj} but with tensions that easily satisfy the present observational bounds. Although similar in many respects, they differ in a number of significant ways. To distinguish them from the traditional cosmic strings, we refer to them as cosmic superstrings. In this paper, we show superstrings can have properties consistent with today's observational bounds and still be detectable in the near future. Encouraged by the recent spectacular success of LIGO \cite{LIGO}, we shall present our estimate of the detectability of cosmic superstrings (for $G \mu > 10^{-15}$) via gravitational wave bursts from cusps and kinks \cite{Damour:2001bk,Damour:2004kw}. 
Gravitational wave searches combined with microlensing searches \cite{Chernoff:2007pd,Chernoff:2014cba} can teach us a lot about what types of strings might be present. If cosmic superstrings are discovered then measurements will provide valuable information about how our universe is realized within string theory and go a long way towards addressing the question ``is string theory the theory that describes nature?'' Any positive detections, of course, will provide the most direct possible insights. Since superstring theory has 9 spatial dimensions, common experience suggests 6 of them are compactified. Turning on quantized fluxes \cite{Giddings:2001yu,Kachru:2003aw} in the presence of $D$-branes \cite{Polchinski:1995mt} yields a warped geometry having throat regions connected to a bulk space. A typical flux compactification in Type IIB string theory can have dozens to hundreds of throats. In one possibility, the brane world scenario, visible matter is described by open strings living inside a stack of 3 spatial dimensional D3-branes sitting at the bottom of one of the throats. The D3-branes span the normal dimensions of our universe. The mass scale at the bottom of a throat is decreased (warped) by orders of magnitude compared to the bulk scale, which is simply the string scale $M_S$, taken here to be a few orders of magnitude below the Planck scale $M_P = G^{-1/2} \simeq 10^{19}$ GeV. Superstrings sitting at the bottom of a throat have tensions decreased by the same factor, so the tension $\mu$ is orders of magnitude below the value $M_S^2$ implied by the string scale. Reheating at the end of inflation excites the light string modes that constitute the standard model particles and marks the beginning of the hot big bang. 
The production of cosmic superstrings after inflation has been studied mostly in the simplest scenario in string theory, namely $D$3-${\bar D}$3-brane inflation in a warped geometry in flux compactification \cite{Dvali:1998pa,Dvali:2001fw,Burgess:2001fx,Kachru:2003sx}. The energy source for reheating is the brane-anti-brane annihilation at the end of inflation. In a flux compactification, this energy release happens in a warped throat, namely the inflationary throat. The energy released can also go to light string modes and strings with horizon-scale sizes. The annihilation of the $D$3-${\bar D}$3-brane pair easily produces both $F$-strings and $D$1-strings in that throat. In a simple brane inflationary scenario, using the PLANCK data \cite{Planck:2013}, one finds that $G \mu < 10^{-9}$ \cite{Firouzjahi:2005dh,Bean:2007hc}. There may be numerous throats, so the standard model throat may differ from the inflationary throat, and there may also be a host of other spectator throats. Energy released in the inflationary throat spreads to other, more warped throats. Reheating is expected to generate standard model particles in the standard model throat and cosmic superstrings in throats warped at least as much as the inflationary throat \cite{Kofman:2005yz,Chen:2006ni,Chialva:2005zy}. Brane-flux inflationary scenarios generally yield similar outcomes. For other inflationary scenarios in string theory, the picture is less clear, though even a very small production of $F$-strings and $D$1-strings will eventually evolve to the scaling solution, so it may not be unreasonable to assume that such strings are produced irrespective of the details of the particular inflationary realization. The cosmic superstrings and the particles tend to sit at the throat bottoms due to energetic considerations. The string network in each throat is expected to evolve independently of the other throats, though all cosmic strings are visible to us via their gravitational interactions.
Each network reaches a scaling solution that is insensitive to initial conditions and largely set by string tensions appropriate to the throat. We shall start with ordinary cosmic strings, which have been extensively studied \cite{Vilenkin:2000}, and list how properties of superstrings in string theory differ and how each difference enhances or suppresses the prospects for detectability. To describe order of magnitude changes to the probability of detection, we introduce a single parameter ${\cal G}$ to summarize why cosmic superstrings offer much better chances than ordinary cosmic strings. In this over-simplified picture, we compare the fraction of cosmic superstring energy content in the universe to that of the conventional $\Omega_{string}$ (\ref{Ocs}), $$\Omega_{superstring} \sim {\cal G}\, \Omega_{string}\simeq \left(\frac{N_sN_T}{p}\right) \Omega_{string}$$ where $N_s$ is the effective number of species of strings within a single warped throat (e.g., $N_s \sim 1$ to $4$). Here $p \le 1$ is the effective intercommutation probability. For ordinary cosmic strings $p \simeq 1$, while for superstrings $p < 1$ and can be as small as $p \sim 10^{-3}$ \cite{Jackson:2004zg}. It has been pointed out that the dependence in $\Omega_{superstring}$ may go like $p^{2/3}$ (instead of $p$) \cite{Avgoustidis:2005nv}. $N_T$ is the effective number of throats in the flux compactification with cosmic superstrings sitting at their bottoms; in fact, we should only count those with string tensions above the eventual observational limit. Here we have in mind $G \mu > 10^{-18}$. Overall, we expect $1 \ll {\cal G} < 10^4$. Combining this ${\cal G}$ factor enhancement with the enhancement coming from the clustering of low tension cosmic superstrings (following dark matter) in our galaxy (a density enhancement factor ${\cal F} \sim 10^5$ \cite{Chernoff:2007pd,Chernoff:2009tp}) gives hope for detecting microlensing of stars with optical surveys and gravitational wave bursts at advanced LIGO.
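As an illustration (with hypothetical parameter choices, not values fixed by the analysis above), the enhancement factor can be evaluated directly:

```latex
% illustrative sample values: N_s ~ 2, N_T ~ 50, p ~ 10^{-2}
{\cal G} \simeq \frac{N_s N_T}{p} \sim \frac{2 \times 50}{10^{-2}} = 10^{4},
```

which saturates the upper end of the quoted range $1 \ll {\cal G} < 10^4$; more conservative choices, e.g. $N_s = 1$, $N_T = 10$, $p = 10^{-1}$, give ${\cal G} \sim 10^{2}$.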
Cosmic superstrings differ from ordinary cosmic strings in a number of fundamental ways: (1) There are 2 types of strings, namely fundamental strings, or $F$-strings, and $D$1-branes, i.e., $D$-strings \cite{Copeland:2003bj}. The intercommutation (reconnection) probability $p$, which is $p \simeq 1$ for vortices, can be $p \ll 1$ for superstrings \cite{Jackson:2004zg}. This property has already been incorporated in a number of cosmic superstring network studies \cite{Sakellariadou:2004wq,Avgoustidis:2004zt}. (2) An $F$-string sitting at the bottom of throat $i$ has tension $G\mu_i \sim GM_S^2 h_i^2 \ll GM_S^2$, where $h_i \ll 1$ is the warp factor at the bottom. An empty throat without branes will have its own strings with a spectrum of tensions \cite{Copeland:2003bj,Firouzjahi:2006vp}. The string networks may contain junctions and beads \cite{Gubser:2004qj,Siemens:2000ty,Leblond:2009fq}. A throat with $D$3-branes (or $\bar D$3-branes) at its bottom will have only $D$-strings there \cite{Leblond:2004uc}. This is because branes allow open $F$-strings inside them, so the closed $F$-strings inside branes tend to break into tiny open strings. Interactions between strings from different throats are expected to be very weak. We introduce an effective number $N_s$ of types of strings to reflect the tension spectra present. (3) Depending on the Calabi-Yau manifold chosen by nature, we expect dozens or hundreds of throats in a typical flux compactification. Throats with different warped geometries result in different types of cosmic superstring tension spectra with the fundamental tensions substantially lower than the string scale, since strings tend to sit at the bottoms of the throats.
We introduce an effective number $N_T$ of throats with string tensions $G\mu > 10^{-18}$, the lower limit of detectability in the foreseeable future for both gravitational wave stochastic backgrounds \cite{Blanco-Pillado:2017rnf} and optical microlensing.\footnote{For microlensing the lower limit of detectability may be crudely estimated as follows. The angular size of a typical star at a typical distance in the galaxy is comparable to the deficit angle for a string with $G \mu \sim 10^{-13}$. The state of the art for measuring relative flux variations of bright nearby stars in exoplanet searches is about $10^{-5}$. A string with $G \mu \sim 10^{-18}$ would lens approximately $10^{-5}$ of the stellar disk and create relative flux variations of roughly this size.} (4) All strings in string theory should be ``charged'' under a two-form field, so cosmic superstrings will emit axions (i.e., two-form fields in 3+1 dimensions) in addition to gravitational waves. Because of this additional decay mode, the density of some types of cosmic string loops may be significantly decreased. Although the emission rate of axions has been studied in general terms in Refs.~\cite{Firouzjahi:2007dp,Gwyn:2011tf}, the emission rate of a particular axion by a specific string depends strongly on axionic properties such as mass, coupling (``charge'') to strings and decay rate to two photons. (5) Cosmic strings that move (oscillate) in a throat will have a tension varying in time and from point to point along their length \cite{Avgoustidis:2007ju,Avgoustidis:2012vc}. For low tension cosmic strings ($G \mu <10^{-9}$), clustering of string loops in our galaxy can enhance the cosmic string density by many orders of magnitude, similar to the clustering of dark matter. Five orders of magnitude are expected at the solar position and more at the center of the Galaxy. Clustering substantially increases the potential for detection \cite{Chernoff:2007pd,Chernoff:2009tp}.
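The footnote's order-of-magnitude estimate can be made explicit. A straight string produces a deficit angle $\delta = 8\pi G\mu$; comparing this with the angular size of a solar-radius star at an assumed distance of $\sim 10$ kpc (illustrative values, not fixed by the text):

```latex
\delta = 8\pi G\mu \simeq 2.5\times10^{-12}~{\rm rad}
  \quad (G\mu = 10^{-13}),
\qquad
\theta_* \sim \frac{R_\odot}{d} \simeq
  \frac{7\times10^{8}~{\rm m}}{3\times10^{20}~{\rm m}}
  \simeq 2\times10^{-12}~{\rm rad},
```

so $\delta \sim \theta_*$ near $G\mu \sim 10^{-13}$, and a string with $G\mu \sim 10^{-18}$ lenses a fraction $\delta/\theta_* \sim 10^{-5}$ of the stellar disk, matching the quoted photometric sensitivity.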
This property applies to ordinary low tension cosmic strings as well. Increasingly comprehensive studies of gravitational wave bursts from cosmic string networks have appeared \cite{DePies:2007bm,Leblond:2009fq,Kuroyanagi:2012wm,Kuroyanagi:2012jf,Blanco-Pillado:2013qja}. Some have already included (1), the low-$p$ effect \cite{Jackson:2004zg,Sakellariadou:2004wq,Avgoustidis:2004zt}. We will highlight the other effects, in particular (2) and (3), which can dramatically raise the prospects for detection. Following Ref.~\cite{Chernoff:2007pd}, Ref.~\cite{DePies:2009mf} has included the clustering effect. Here we provide a more detailed analysis following a better understanding of the clustering effect \cite{Chernoff:2009tp}. We shall describe a simple cosmic string model (with a string tension $\mu$ such that $G \mu < 10^{-7}$ and loop size relative to the horizon size $\alpha \sim 0.1$) and discuss how each of the above effects may modify the properties and detectability in microlensing and gravitational wave searches/observations. In general, (1)-(3) tend to enhance while (4) tends to decrease the detectability via gravitational waves. The main analysis in this paper focuses on the clustering of low tension strings, like dark matter, in galaxies and its effect on their detectability via microlensing and gravitational wave bursts. In microlensing, caustics are also possible if the string segment is not straight when compared to the star behind it \cite{Uzan:2000xv}. We do not know the precise compactification geometry, so there are quite a number of uncertainties in determining the intrinsic string properties and the string network evolution dynamics. Given the current state of understanding, this modeling, though precise in form, should be considered no better than an order-of-magnitude estimate. Nonetheless, we find with this analysis that a wide range of superstring tensions are potentially detectable, often by several different types of experiments.
We attempt to provide enough details to illustrate how the predictions/estimates may vary with respect to the input assumptions/physics as our understanding/knowledge continues to improve. Following the discussion of the properties of the cosmic superstrings in Sec.~2, we outline in Sec.~3 three separate methods by which loops may be detected: cusp emission of axions followed by conversion to photons, microlensing of stellar sources of photons, and emission of gravitational waves. We then provide a detailed astrophysical model that summarizes the properties (number density, lengths, velocities, etc.) of string loops relevant to forecasting experimental outcomes. Clustering of low tension loops is a significant effect that enhances the ability of experiments to detect loops. Here we concentrate on estimating the gravitational burst rate for LIGO/VIRGO and LISA, taking account of the enhancements from the local source population. For example, we find that cusp bursts from loops in the halo of our Galaxy dominate the contribution from the rest of the homogeneous universe for LIGO for $10^{-15}< G \mu < 10^{-13}$; likewise, cusp bursts for LISA for $G \mu < 10^{-11}$ are halo-dominated. Elsewhere, we will employ the model to forecast the detection rates for microlensing and axion-mediated photon bursts. \section{Properties of Cosmic Superstrings} The first suggestion that string theory's strings might manifest as cosmic superstrings was made in the context of heterotic string theory \cite{Witten:1985fp}. However, among other issues, the tension of superstrings in that description is far too high to be compatible with data. With the discovery of $D$-branes \cite{Polchinski:1995mt}, the introduction of warped geometries in flux compactification \cite{Giddings:2001yu,Kachru:2003aw} and the development of specific, string theory based inflationary scenarios, the prospect has improved dramatically.
In the brane world scenario, the cosmic superstrings are produced after the inflationary epoch and evolve to a scaling network. The network includes long, horizon-crossing strings and sub-horizon-sized loops. These are the objects of interest for experimental searches. Of the 9 spatial dimensions in Type IIB string theory, 6 dimensions (i.e., $y^m$) are compactified into a Calabi-Yau like manifold, \begin{equation} \label{bmetric} ds^2= h^2(y^m)dx^{\mu}dx_{\mu} + g_{mn}(y) dy^mdy^n \end{equation} where $x^{\mu}$ span the usual 4-dimensional Minkowski spacetime, so $$M_P^2 \simeq M_S^8 \int d^6y \sqrt{g_6(y)}\, h(y)^2$$ where $g_6$ is the determinant of $g_{mn}$. The manifold consists of the bulk, where $h(y^m) \simeq 1$, and smoothly connected throats. At the bottoms of the throats (i.e., tips of deformed cones), we expect $h(y^m) \ll 1$. A typical compactification can have dozens or hundreds of throats, each with its own warp factor $h_j$. In a simple brane world scenario, one throat, namely the standard (strong and electroweak) model (S) throat, has a stack of $D$3-branes sitting at the bottom, with warp factor $h_S \ll 1$. This stack spans our 3-dimensional observable universe. All standard model particles are open string modes inside the branes. The Higgs boson mass $m_H$ is considered natural if it satisfies $m_H \sim M_Sh_S$. Since Type IIB string theory has only odd-dimensional branes, i.e., $D(2n+1)$-branes, it does not have $D$2- or $D$0-branes but has $D$1-branes, so there are $D$1-strings but no membrane-like or point-like defects. Both $D$1-strings and fundamental $F$-strings can form cosmic superstrings. Closed strings may be born and move in space outside the $D$3-branes. The ends of an open $F$-string must end on a brane. Both closed $D$-strings and $F$-strings will be present in a throat if it has neither $D$3-branes nor $\bar D$3-branes.
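For orientation, the naturalness condition $m_H \sim M_S h_S$ fixes the warping of the S throat once $M_S$ is chosen; with an assumed $M_S \sim 10^{17}$ GeV (a few orders of magnitude below $M_P$, as above), one needs

```latex
h_S \sim \frac{m_H}{M_S} \sim \frac{10^{2}~{\rm GeV}}{10^{17}~{\rm GeV}} = 10^{-15},
```

i.e., a strongly warped standard model throat.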
If a closed $F$-string comes in contact with the brane, it will fragment into open $F$-strings with ends inside the brane. It will not survive as a cosmic superstring. However, a $D$-string may swell inside a $D$3-brane and persist, behaving like a vortex instead of a strictly one-dimensional object \cite{Leblond:2004uc}. Likewise, if we live inside $D$7-branes wrapping a 4-cycle, the same phenomenon happens: only $D$-strings survive as cosmic superstrings in the S throat and other throats with branes. \subsection{Tension Spectrum} Typically, strings of all sizes and types will be produced towards the end of inflation, e.g., during the collision and annihilation of the $D$3-$\bar D$3 brane pair as energy stored in the brane tensions is released \cite{Jones:2002cv,Sarangi:2002yt,Jones:2003da,Sarangi:2003sg}. The lower string modes are effectively particles, but some of the highly excited modes are macroscopic, extended objects. Large fundamental strings (or $F$-strings) and/or $D$1-branes (or $D$-strings) that survive the cosmological evolution become cosmic superstrings \cite{Copeland:2003bj}. In 10 flat dimensions, or in the bulk in a flux compactification, supersymmetry dictates that the tension of the bound state of $p$ $F$-strings and $q$ $D$-strings is \cite{Schwarz:1995dk} \begin{eqnarray} \label{flat} T_{{p,q}} = T_{F1} \sqrt{p^2 +\frac{q^2}{g_s^2}}\, . \label{pqtension10} \end{eqnarray} Coprime combinations of $(p,q)$ can form strings with junctions \cite{Copeland:2003bj}, so their zipping and unzipping will be part of the string evolution dynamics \cite{Avgoustidis:2014rqa}. For $(p,q)$ not coprime, simpler states of fewer $F$- and $D$-strings exist having equivalent energy per component.
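To illustrate the spectrum (\ref{pqtension10}), take an assumed string coupling $g_s = 0.1$:

```latex
T_{1,0} = T_{F1}, \qquad
T_{0,1} = \frac{T_{F1}}{g_s} = 10\,T_{F1}, \qquad
T_{1,1} = T_{F1}\sqrt{1+\frac{1}{g_s^2}} \simeq 10.05\,T_{F1},
```

so the $(1,1)$ bound state is lighter than a separated $(1,0)$ plus $(0,1)$ pair ($11\,T_{F1}$) by a binding energy $\simeq 0.95\,T_{F1}$ per unit length, which is what makes junction formation energetically favorable.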
Recent network studies of this idealized spectrum strongly suggest that cosmic superstrings evolve dynamically to a scaling solution with a stable relative distribution of strings with different quantum numbers \cite{Tye:2005fn}, very much like ordinary cosmic strings of either Abelian Higgs or Nambu-Goto type \cite{Vilenkin:2000}. The strings' scaling density decreases roughly $\propto T_{p,q}^{-N}$, where $N \sim 8$, a rapid falloff for higher $(p,q)$. We shall consider scenarios where at least some of the lower $(p,q)$ strings of more realistic spectra are stable enough to realize the scaling solution. Generally, if the $F$-strings are stable, we expect more $F$-strings than $D$-strings since $g_s<1$. In that case the total number density of all cosmic strings will be comparable to that of $F$-strings with $(p,q)=(1,0)$, enhanced by a factor $1/g_s^N$ relative to $D$-strings with $(p,q)=(0,1)$. In a more realistic scenario the compactified manifold is not flat but contains warped throats. Since reheating after inflation (e.g., the $D$3-$\bar D$3-brane annihilation) is expected to take place at the bottom of a throat, some of the cosmic superstrings will be produced in that part of the manifold. If $D$3-branes are left at the bottom of the throat, the $F$-strings will fragment while the $D$-strings will be metastable, presumably surviving as cosmic strings \cite{Leblond:2004uc}. In an empty throat new $F$- and $D$-strings will survive and form bound states, resulting in a spectrum of string tensions with junctions and probably beads. The particulars depend on the geometry of the throat, but it is illustrative to consider the tension spectrum in the well-studied Klebanov-Strassler (KS) throat \cite{Klebanov:2000hb}. This is a warped deformed conifold with an $S^3$ fibered over $S^2$. Let $r$ be the distance from the bottom of a throat on the manifold and $R$ be the characteristic length scale.
The bulk is connected to the edge of the throat at $r=R$, where \begin{equation} \label{KMR} R^4=\frac{27 \pi g_sN}{16 M_S^4}, \quad \quad N=KM \end{equation} where $N=KM$ is the $D$3-charge and the integers $K$ and $M$ are the NS-NS and RR fluxes, respectively. These integers are expected to be relatively large. The tip of a conifold sits at $r=0$. Here, the $S^3$ has a finite size if the conifold is deformed (without breaking supersymmetry), while the $S^2$ has a finite size if the conifold is resolved (breaking supersymmetry), so $r =r_i \gtrsim 0$ at the bottom of the throat. At the top ($r \simeq R$) the warp factor is $h(r=R) \simeq 1$ and at intermediate locations $h(r) \simeq r/R$. In terms of the fluxes, the warp factor at the bottom of the $i$th throat is \begin{equation} \label{warpi} h_i= h_i(r_i \simeq 0) = e^{-2\pi K_i/g_sM_i} \ll 1 \end{equation} and a $(p,q)$ bound string near that point has tension \cite{Firouzjahi:2006vp} \begin{eqnarray} \label{Tanswer} T_{p,q} \simeq \frac{M_S^2h_{i}^{2}}{2 \pi} \sqrt{ \frac{q^2}{g_s^2} + \left(\frac{b M_i}{\pi}\right)^2 \sin^2\left(\frac{\pi (p-qC_0)}{M_i}\right)}, \end{eqnarray} where $b=0.93$ is numerically close to one, $C_0$ is the expectation value of the RR zero-form scalar there, and the integer $M_i \gg 1$ is the number of fractional $D$3-branes, that is, the units of 3-form RR flux $F_3$ through the $S^{3}$ in the KS throat. For integer $K_i$, the infrared field theory at the bottom of the $i$th throat is a pure $N=1$ supersymmetric Yang-Mills theory, and the warp factor $h_i$ is expected to be small. The mass of the bead at a junction is \cite{Gubser:2004qj} \begin{equation} m_b= \frac{h_iM_S}{3}\sqrt{\frac{g_s}{4 \pi}} \left(\frac{b M_i}{\pi}\right)^{3/2} . \end{equation} Ref.~\cite{Leblond:2009fq} argues that the cosmic string network will evolve to a scaling limit for modest integers $M_i >10$. The string and bead properties of other throat geometries are an interesting open question.
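As a numerical illustration of Eq.~(\ref{warpi}), with assumed fluxes $K_i = 10$, $M_i = 30$ and string coupling $g_s = 0.1$ (sample values only):

```latex
h_i = e^{-2\pi K_i/g_s M_i}
    = e^{-2\pi\cdot 10/(0.1\cdot 30)}
    = e^{-20.9} \simeq 8\times10^{-10},
```

so a string at this throat's bottom has its tension suppressed by $h_i^2 \sim 6\times10^{-19}$ relative to the unwarped value, showing how modest integer fluxes generate an enormous hierarchy.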
\subsection{Production of Cosmic Superstrings in Brane Inflation} The production of cosmic superstrings in the early universe depends on the inflationary scenario in string theory. The simplest is probably brane inflation \cite{Dvali:1998pa,Dvali:2001fw,Burgess:2001fx,Kachru:2003sx}, in which brane-anti-brane annihilation releases energy towards the end of the inflationary epoch that generates closed strings. $D$1-strings can be viewed as topological defects in the $D$3-$\bar D$3-brane annihilation, so they are produced via the Kibble mechanism. $F$1-strings may be viewed as topological defects in an S-dual description produced in a similar way, since the Kibble mechanism depends only on causality, irrespective of the size of the coupling. As a result, horizon size strings are produced. In the simplest brane inflationary scenario, we focus on two of the many throats in the compactified manifold, namely the inflationary throat $A$ and the standard model throat $S$. Because of the warped geometry, a mass $M$ in the bulk becomes $h_{A}M$ at the bottom of throat $A$, where $h_{A} \ll 1$ (\ref{warpi}) is the warp factor there. Since $\bar D$3-branes are attracted towards the bottoms of throats, let us suppose there is a $\bar D$3-brane sitting at the bottom of the $A$ throat. A $D$3-brane in the bulk will be attracted towards the $\bar D$3-brane, and inflation (driven by the potential energy from the brane-anti-brane tensions) happens as it moves down the throat. The inflaton $\phi$ is proportional to the brane-anti-brane separation in the throat. The attractive potential is dominated by the lightest closed string modes, namely, the graviton and the RR field, yielding a Coulomb-like $r^{-4}$ potential where $r$ is the distance from the $\bar D$3-brane at the tip. The warped geometry dramatically flattens the inflaton potential $V(\phi)$, so the attraction is rendered exponentially weak in the throat. For a canonical kinetic term we have $\phi = \sqrt{T_3}\,r$.
The simplest inflaton potential takes the form \cite{Kachru:2003sx} \begin{equation} \label{infpot} \begin{split} V(\phi) = V_A + V_{D \bar D} &=2T_3h_A^4\left(1-\frac{1}{N_A}\frac{\phi_A^4}{ \phi^4}\right) \\ &= \frac{64\pi^2\phi_A^4}{27N_A} \left( 1 - \frac{\phi_A^4}{N_A \phi^4} \right) \end{split} \end{equation} where the $D$3-brane tension $T_3=M_S^4/(2\pi g_s)$ is warped to $T_{3}h_A^4$. Note that this inflaton potential has only a single parameter, namely $\phi_A^4/N_A$. Crudely, $h(\phi) \sim \phi/\phi_{edge}$, where $\phi=\phi_{edge}$ when the $D$3-brane is at the edge of the throat at $r=R$. Likewise, at the bottom $\phi=\phi_{A}$, the warp factor is $h_{A} = h(\phi_{A})= \phi_{A}/\phi_{edge}$. The inflaton $\phi$ is an open string mode, and the attractive tree-level gravitational plus RR potential can also be obtained via the one-loop open string contribution. The scale of the potential is reduced because $N_{A} \gg 1$ is the $D$3 charge of the throat. We consider an ordering $$ 0 \le \phi_A \lesssim \phi_f \le \phi \le \phi_i < \phi_{edge}$$ where inflation begins at $\phi=\phi_i$ and ends at $\phi_f$, when a tachyon appears signaling the annihilation of the brane-anti-brane pair. At least 55 e-folds of inflation must take place inside the throat to achieve consistency with observations. The combination $\phi^4_A/N_A$ in the inflaton potential $V(\phi)$ (\ref{infpot}) is constrained by the magnitude of the power spectrum in the cosmic microwave background radiation, and one finds \cite{Firouzjahi:2005dh,Shandera:2003gx,Shandera:2006ax} $$n_s=0.967, \quad \quad r \simeq 10^{-9}$$ and the tension of $D$1-strings \begin{equation} \label{Gtension} G \mu \simeq \frac{4 \times 10^{-10}}{\sqrt{g_s}} , \end{equation} with $h_{A} \sim 10^{-2}$, the precise value dependent on details of the throat. The $F$-string tension is smaller by a factor of the string coupling $g_s$: i.e., $\mu_F = g_s \mu$ where $g_s <1$.
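For an assumed $g_s = 0.1$, Eq.~(\ref{Gtension}) together with $\mu_F = g_s\mu$ gives

```latex
G\mu_{D1} \simeq \frac{4\times10^{-10}}{\sqrt{0.1}} \simeq 1.3\times10^{-9},
\qquad
G\mu_{F} = g_s\, G\mu_{D1} \simeq 1.3\times10^{-10},
```

placing both species within a few orders of magnitude of the observational bounds quoted earlier.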
This is consistent with the present observational bound \cite{Sousa:2016ggw}. Towards the end of inflation (near $\phi_f$), as the $D$3-$\bar D$3-brane separation $r$ decreases, an open string (complex) tachyonic mode appears at \begin{equation} \label{tachyon} \frac{m_{tachyon}^2}{M_S^2} = M_S^2r^2 - {\pi} \end{equation} which triggers an instability due to tachyon rolling. As $\phi$ decreases, the $\phi^{-4}$ Coulomb-like form of the potential is chopped off, leaving $V(\phi)$ with a relatively flat form and possessing an imaginary component \cite{Sarangi:2003sg}. In the closed string picture, this happens precisely when the weakening Yukawa suppression of the massive closed string modes' contribution to the potential is overtaken by the rapidly increasing degeneracy of excited closed string modes, $A(n) \rightarrow (2n)^{-11/4} \exp (\sqrt{8\pi^2n})$. Here, $n$ is the excitation level (a string of mass $m$ has level $n=m^2/(8\pi M_S^2)$). The contribution to $V(r)$, in the large-$n$ approximation, is $$V(r) \propto -r^{-4} \sum_n n^{-11/4} \exp \left(\sqrt{2 \pi n}\left[\sqrt{\pi} - M_Sr\right]\right)$$ where the $\sqrt{\pi}$ term comes from the degeneracy while the $-M_Sr$ term comes from the Yukawa suppression factor $\exp(-mr)$. Comparison to Eq.~(\ref{tachyon}) reveals that the exponential growth of degeneracy leads to a divergent $V(r)$ precisely at the point where the tachyon appears. Regularization introduces an imaginary part for $V(\phi)$, which may be interpreted, via the optical theorem, as the width per unit world volume for a $D$3-$\bar D$3-brane pair decaying to $F$-strings \cite{Sarangi:2003sg}, \begin{equation} \Gamma = {\rm Im} [V(\phi)] \simeq \frac{\pi}{2} h_A^4 \left(\frac{ |m_{tachyon}^2|}{4 \pi} \right)^2 .
\end{equation} The appearance of this imaginary part of $V(\phi)$ is due to the large Hagedorn degeneracy of the massive modes, and the implication is that $D$3-${\bar D}$3-brane annihilation leads to very massive closed string modes. The energy released first goes to on-shell closed strings. For large mass $m$, the transverse momenta of these strings are relatively small, $$\frac{\langle k_{\perp}^2\rangle}{m^2} \sim \frac{6}{\sqrt{\pi}} \frac{M_S}{m}, $$ so a substantial fraction of the annihilation energy goes to form massive non-relativistic closed strings. Although the above discussion is for the $D$3-${\bar D}$3-brane annihilation channel to $F$-strings, we expect production of $D$1-strings as well, since one may view a $D$3-brane as a dielectric collection of $D$1-strings \cite{Myers:1999ps}. The process of $D$3-${\bar D}$3-brane annihilation producing vortex-like $D$1-strings has been studied in the boundary string field theory framework \cite{Jones:2002sia}. The detailed, quantitative mass distribution of the strings is not critically important as long as evolution proceeds to a scaling cosmic superstring network independent of the initial distribution \cite{Tye:2005fn}. No monopole-like or domain-wall-like defects are produced, since there are no $D$0-branes or $D$2-branes in the Type IIB string theory framework adopted here. Some of the $D$3-${\bar D}$3-brane energy goes to closed $D$1-strings and $F$1-strings in the $A$ throat; the rest is dumped into other throats including the $S$ throat, which initiates the hot big bang. Energetics favor heat transfer to any throat more strongly warped than the $A$ throat, creating cosmic superstrings of lower tension than those in the $A$ throat. It is interesting to note that all the energy released by the $D$3-$\bar D$3-brane annihilation goes to closed strings first \cite{Chen:2006ni}.
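The location of the instability can be read off Eq.~(\ref{tachyon}) directly:

```latex
m_{tachyon}^2 < 0
\quad\Longleftrightarrow\quad
r < r_c = \frac{\sqrt{\pi}}{M_S},
```

a string-scale separation; the exponent $\sqrt{\pi}-M_S r$ in the large-$n$ sum for $V(r)$ changes sign at exactly the same $r_c$, which is the consistency noted above: the closed-string sum diverges precisely where the open string tachyon appears.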
In the absence of other branes, this is clear, since open strings end on branes and, after the brane annihilation, no branes exist to anchor endpoints. To understand the fate of an open string in the $D$3-brane, consider the $U(1)$ flux tube between its two ends. After annihilation, the flux tube together with the open string forms a closed string. If an open string stretches between a spectator brane and the $D$3-brane to be annihilated, there is a flux tube linking it to another end of a similar string (its conjugate). After annihilation, the flux tube plus the connected open strings form an open string attached to the spectator brane. Related brane (or brane-flux) inflationary scenarios share many relevant properties, leading to cosmic superstring production in the manner described. Other stringy inflationary scenarios may also generate cosmic superstrings and classical strings towards the end of the inflationary epoch. This is an important problem to investigate. Our general viewpoint is that since reheating must be present at the end of the inflationary epoch to start the hot big bang, and all particles produced are light string modes, some excited strings should be produced, and the Kibble mechanism should be applicable to these. Schematically, cosmic strings contribute to the Hubble parameter $H$, $$H^2 = \frac{8 \pi G}{3} \left( \Lambda + \frac{\rho_{strings,0}}{a^2}+ \frac{\rho_{matter,0}}{a^3} + \frac{\rho_{radiation,0}}{a^4} \right)$$ where $\rho_{strings,0}$ is the initial energy density of cosmic strings at the end of inflation. Even if $\rho_{strings,0}$ is exponentially small, its role (relative to the matter and radiation densities) will grow substantially because $a$ increases by many orders of magnitude; string inter-commutation and gravitational decay will jointly drive the system to its attractor solution, the scaling cosmic string network. A set of diverse inflationary scenarios in string theory may lead to the scaling networks of interest.
\subsection{Low Inter-commutation Probability} Cosmic superstrings have different properties from vortices in the Abelian Higgs model. The inter-commutation probability of vortices in three dimensions approaches $p \simeq 1$. The string density in the scaling solution is often estimated from numerical simulations with an assumed or effective value $p=1$. The situation is more complicated for superstrings in many respects. First, $p \simeq 0$ for a pair of interacting strings from different warped throats. A string network in each throat evolves and contributes separately to the total density. We will discuss the number of throats in the following section. Second, within a single throat $p<1$, because the physics of collisions is more complicated than it is for the Abelian case. It depends on the relative speed and angle of the 2 interacting string segments, among other things. From calculations \cite{Jackson:2004zg} we estimate $p \sim g_{s}^{2}$ and take a string coupling $g_{s} \sim 1/10$ as not unreasonable. When $p<1$, the chopping of long strings into loops is less efficient; this is the superstring case. The overall string density must increase to compensate in order to realize the scaling solution, but the precise variation is not well understood. The one-scale model suggests density $\rho \propto 1/p^2$, but small scale structure on the string raises the effective intercommutation probability when two long segments collide. Simulations \cite{Avgoustidis:2005nv} suggest that the density $\rho \propto {1}/{p^{2/3}}$. Third, cosmic superstrings in a single throat will be present with a variety of tensions and charges \cite{Tye:2005fn}. The effective number of independent types per throat $N_s$ is not well understood in this context. It is unclear how the presence of beads (i.e., baryons) in the tension spectrum will impact the evolution of the string network. The network may contain multiple beads, so-called necklaces \cite{Siemens:2000ty,Shlaer:2005ry}.
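The difference between the two proposed scalings is numerically significant. For the assumed $p \sim g_s^2 = 10^{-2}$:

```latex
\frac{1}{p^{2}} = 10^{4} \quad \text{(one-scale model)},
\qquad
\frac{1}{p^{2/3}} = 10^{4/3} \simeq 22 \quad \text{(simulations)},
```

a spread of more than two orders of magnitude in the predicted density enhancement, which is why the $p$-dependence is one of the dominant uncertainties in estimates of this kind.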
Let us write the scaling from the density of Nambu-Goto strings to superstrings in a single throat as $$\Omega_{string} \rightarrow \Omega_{superstring} \simeq \frac{N_s}{p} \Omega_{string} \sim \frac{N_s}{g_{s}^{2}} \Gamma G \mu$$ where $\mu$ is the $F$-string tension and $N_s$ is the effective number of non-interacting types of strings and bound states in a throat, e.g., $N_s \sim 1$ to $4$. There are significant uncertainties in evaluating the enhancement in terms of $p$ and $N_s$. \subsection{Multi-throats} As discussed earlier, a typical 6-dimensional manifold has multiple throats. Assuming there are 2 throats along each dimension, we have $2^6=64$ throats, while 3 along each dimension yields $3^6=729$ throats, so it is not hard to imagine that a typical manifold has many throats. For example, one of the best studied manifolds, ${\bf CP}^4_{11169}$, has, in the absence of any specific symmetry imposed, as many as 272 throats \cite{Candelas:1994hw}. Denote the number of throats by $N_T$. The annihilation in the inflationary throat heats the entire manifold. The heating may drive the birth of scaling string networks in the subset of throats which possess greater degrees of warping. (The last epoch of inflation will have diluted away all networks sourced by previous annihilation events.) In general, each throat has its own geometry, warp factor and set of string tensions. For example, since only $D$-strings survive in the S throat, and Eq.~(\ref{Tanswer}) shows there is no binding energy for multiple $D$-strings, we expect only one tension in the S throat, the minimal number. The tension spectra of other throats will be at least as complicated. The multiplicity of throats, the range of warping and the possible complexity of the spectrum in each throat are the source of the generic expectation that there exists a wide range of string tensions for future experiments to target.
If there are more $\bar D$3-branes than $D$3-branes in a throat, then some number of $\bar D$3-branes will be left behind there after all pairs have annihilated. Let us consider the dynamics of $p$ $\bar D$3-branes inside a KS geometry, the deformed conifold with $M$ units of RR 3-form flux around the 3-sphere. If the number $p$ of $\bar D$3-branes left is not too small compared to $M$, then the system will roll to a nearby supersymmetric vacuum with $M-p$ $D$3-branes sitting at the bottom of the throat. This happens via the nucleation of an NS 5-brane bubble wall \cite{Kachru:2002gs}. This decreases $K$ by one unit, so the warp factor goes from $h= e^{-2\pi K/g_sM}$ to $h= e^{-2\pi (K-1)/g_sM}$, that is, the throat becomes less warped. If $p \ll M$, then the system is classically stable, but it may decay later via quantum tunneling, again through brane-flux annihilation. If this has happened already, a new cosmic superstring network might have been produced relatively late. \subsection{Cosmic Strings in an Orientifold} $F$1-strings are charged under the Neveu-Schwarz (NS) $B_{\mu \nu}$ field ($B_2$, with the same strength as gravity) while the $D$1-strings are charged under the Ramond-Ramond (RR) field $C_{\mu \nu}$ ($C_2$) with a definite $D$1-charge. Since $C_{\mu \nu}$ (or $B_{\mu \nu}$) is a massless anti-symmetric tensor field, we can introduce an axion field $a$ related to it via the field strength $F_3$, $F_{\alpha \mu \nu} = \partial_{ [ \alpha} C_{\mu \nu ]} = \epsilon_{\alpha \mu \nu \beta} \partial^{\beta} a$. The massless tensor field has only one degree of freedom in 4-dimensional spacetime. Since $F_{\alpha \mu \nu}$ is invariant under the gauge transformation $C_{\mu \nu} \rightarrow C_{\mu \nu} + \partial_{[\mu} A_{\nu]}$, we infer that the massless $a$ has a shift symmetry, $a \rightarrow a+ {\rm constant}$. So cosmic superstring loops can emit axions as well as gravitons \cite{Firouzjahi:2007dp}.
However, in a more realistic orientifold construction, both $C_2$ and $B_2$ are projected out \cite{Copeland:2003bj,Gwyn:2011tf}. Pictorially, the orientifold projection reverses the orientation of a $D$1-string, i.e., turns it into a $\bar D$1-string, so the $D$1-string effectively becomes a $D$1-$\bar D$1 bound state, which is unstable. However, the $D$1-string inside a warped throat is far separated from the $\bar D$1-string in the image throat, so the decay time is expected to be much longer than the age of the universe. That is, they are expected to be cosmologically stable. Furthermore, in any flux compactification of orientifolds, there are multiple complex structure moduli as well as K\"ahler moduli. As a result, we expect multiple axions to be present. Since a 2-form field is dual to an axion, one expects there are strings charged under each axion. What are these strings? Are they additional strings beyond the $D$1- and $F$1-strings? Since at least one axion is associated with each throat, one is led to entertain the possibility that the $D$1- and $F$1-strings inside a throat are charged under the corresponding axions associated with that throat. So we expect the radiation of light axions as well as gravitons by any string in any throat. For a cosmic $D$1-string with an observable tension $\mu_j$ at the bottom of the $j$th throat, we expect the coupling to take the form $$ S \sim \int \left[ \frac{\mu_j}{g_s} g_{\mu \nu} + b_j\mu_j C_{\mu \nu} \right] d \sigma^{\mu \nu}$$ where $g_{\mu \nu}$ is the 4-dimensional metric, $b_j$ is an order unity parameter and $C_{\mu \nu}$ is now the dual of the relevant axion, while the string is described by $d\sigma^{\mu \nu} = ({\dot x}^{\mu} x'^{\nu}-{\dot x}^{\nu} x'^{\mu})d\tau d\sigma$, where the dot and the prime denote derivatives with respect to the world-sheet variables. We shall define $N_T$ to be the number of throats in which string decay via axions does not overwhelm the gravitational wave emission.
\subsection{Domain Walls Bounded by Closed Strings} In a more realistic scenario, an axion will have a mass. There are two ways it can pick up a mass: (1) If we identify the above $A_{\mu}$ in the gauge transformation of $C_{\mu \nu}$ as a massless gauge field, we see that $C_{\mu\nu}$ can become massive by absorbing $A_\mu$. This is like the standard Higgs mechanism, in which a gauge field $A_{\mu}$ becomes massive by absorbing the massless ``axion'' in spontaneous symmetry breaking. (2) Non-perturbative (instanton) effects typically generate a potential term of the form $V(a) \simeq -M_S^4e^{-S_{inst}} \cos a$, which breaks the shift symmetry of $a$ to a discrete symmetry. The effect is typically exponentially small. It is more convenient to rewrite it as $$V(\phi) \simeq m_a^2(f/M)^2 \left(1 - \cos \left(\frac{M \phi}{f} \right) \right) \rightarrow \frac{m_a^2}{2}\phi^2+ \cdots$$ where $\phi$ is the axion with a canonical kinetic term, $f$ is the axion decay constant or its coupling parameter and $M$ is the integer related to the $Z_M$ symmetry for the $F$-string (\ref{KMR}). For a potential of the above form with $M>1$, a closed string loop can become the boundary of a domain wall, or membrane. The tension of the membrane is of order $$ \sigma \sim m_a f^2 .$$ In general, an axion mass is hardly restricted; it can be as heavy as some standard model particles or as light as $10^{-33}$ eV. One intriguing possibility is that this axion can contribute substantially to the dark matter of the universe as fuzzy dark matter \cite{Hu:2000ke,Hui:2016ltb}. If so, its contribution to the energy density is roughly given by $\rho_{a} = m_a^2f^2$, while its mass is estimated to be $m_a\simeq 10^{-22}$ eV $\simeq 10^{-50} M_{Pl}$. Hence $\rho_{a}=m_a^2f^2 \simeq 10^{-118} M_{Pl}^4$ implies $f \simeq 10^{-10} M_{Pl}$ and $\sigma \simeq 10^{-69} M^3_{Pl} \simeq 10^{-14}$ GeV$^3$. On simple energetic grounds the membrane tension dominates the cosmic string tension for large loops, i.e.
when a loop of size $r$ satisfies $r>2 \mu/\sigma$. Write $\mu = (\Lambda/M_{Pl})^2$ for string energy scale $\Lambda$, adopt and fix the membrane parameters above, and take the loop size equal to the size of the universe today, $r \sim 4.2$ Gpc. The membrane energy dominates if the string tension is less than a critical size: $\Lambda < \Lambda_c$ with $\Lambda_c/M_{Pl} = 2.5 \times 10^{-5}$, or string energy scale $\Lambda_c<3.1 \times 10^{14}$ GeV. Observationally, however, the string loops of greatest interest are much smaller than the horizon scale today. Their size is set by the condition that they can just evaporate in the age of the universe. Assuming gravitational radiation determines the rate of evaporation, the loop size today is $\ell = \Gamma G \mu t_0$ and the condition $\ell > 2 \mu/\sigma$ is independent of $\mu$. For such loops the string tension dominates over the membrane tension at any epoch such that $t < 2/(G \Gamma \sigma) \sim 2 \times 10^8 t_0$. \subsection{Varying Tension} So far, we have been assuming that cosmic superstrings sit at the bottoms of the throats. In general, they can move around the bottoms. Because of the deformation of a throat (from a conifold), the bottom of a throat is at $r=r_i$, which is small but not zero. For a Klebanov-Strassler (deformed) throat \cite{Klebanov:2000hb}, the bottom is $S^3$, so a cosmic superstring at the bottom of a throat can move around. In fact, it may oscillate \cite{Avgoustidis:2007ju,Avgoustidis:2012vc} and at times move to $r > r_i$. Observationally, the tension of an upward displaced piece of the string would appear to be larger, since the local warp factor $h(r)=r/R$ is bigger (i.e., closer to the bulk). Tension varying along a string and/or in time is a direct consequence of the extra dimensions and warped geometry. Observation of such behavior can be very informative.
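The fuzzy-dark-matter membrane numbers quoted in the preceding subsection can be cross-checked with a short unit conversion. This sketch assumes the reduced Planck mass convention and ignores $O(1)$ factors:

```python
# Order-of-magnitude check of the fuzzy-dark-matter axion estimates.
# Using the reduced Planck mass (2.4e18 GeV) is an assumption about
# the convention; only orders of magnitude are meaningful here.
M_PL_EV = 2.4e27                 # reduced Planck mass in eV

m_a = 1e-22                      # axion mass in eV (fuzzy dark matter)
f = 1e-10 * M_PL_EV              # decay constant f ~ 1e-10 M_Pl, in eV

m_a_planck = m_a / M_PL_EV       # ~4e-50 in Planck units
rho_a = m_a_planck**2 * 1e-20    # m_a^2 f^2 ~ 1e-119 M_Pl^4
sigma = m_a * f**2               # membrane tension ~ m_a f^2, in eV^3
sigma_gev3 = sigma * 1e-27       # eV^3 -> GeV^3 (1 GeV^3 = 1e27 eV^3)
```

The results land within an order of magnitude of the quoted $\rho_a \simeq 10^{-118} M_{Pl}^4$ and $\sigma \simeq 10^{-14}$ GeV$^3$, as expected for these rough estimates.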
\vspace{3mm} \subsection{Comparing Cosmic Superstring Density to Cosmic String Density} Suppose the typical mass scale of our standard model throat (S throat) is of order the electroweak (or supersymmetry breaking) scale, i.e., the TeV scale. The CMBR observations (see Eq.(\ref{Gtension})) imply the inflation throat (A throat) has a much higher scale, $\sqrt{\mu} \simeq 10^{14}$ GeV. The energy released from the $D$3-${\bar D}$3-brane annihilation will be able to heat up our universe (i.e., our branes) \cite{Chen:2006ni,Chialva:2005zy}. In addition to the S and A throats, consider another throat C with a warp factor $h_{C}$. Let the reheating (RH) temperature at the beginning of the hot big bang be $T_{RH} < \sqrt{\mu}$. We have argued that strings in the $C$-throat will be produced if $T_{RH}>h_{C}M_{s}$. Hence, in addition to cosmic strings in the $A$-throat, we expect small tension cosmic strings to appear in throats with large warping. These light cosmic strings interact very weakly with cosmic strings in the $A$-throat. On the other hand, if $T_{RH} < h_{C}M_{s}$, string production will be suppressed by a Boltzmann factor. When the number of cosmic strings produced is less than one per horizon, it may still be possible to reach the scaling solution if the string loop decay rate is much smaller than the expansion rate and if there are sufficient long (superhorizon) strings present. The onset of the scaling of the cosmic string network is then delayed. Beads (or baryons) on cosmic superstrings typically move at speeds similar to the strings themselves, since they are dragged along by the motions of the strings. One expects that the beads may merge with or annihilate each other along the strings while junctions are being created and removed.
Numerical investigations of necklaces \cite{Siemens:2000ty} indicate that string loops with many beads tend to have periodic self-intersecting solutions, so string loops may quickly chop themselves up into smaller and smaller loops, some of which will be free of beads/baryons. As a result, the superstring network may end up with smaller loops and hence the pulsar timing bounds on string tension should be relaxed somewhat \cite{Hindmarsh:2016dha}. Since cosmic superstrings with junctions and baryons are more involved than simple necklaces, a detailed study is important to pin down the loop sizes and the effective bound from the pulsar timing data. Let us summarize here. The number density of cosmic strings in the universe, when compared to the Nambu-Goto model or the Abelian Higgs model, is enhanced by three factors: the decreased intercommutation probability $p$, the effective number $N_s$ of string species in each throat, and the number of throats $N_T$, each of which has an independent scaling string network with fundamental tension $\mu_j$ ($j=1,2,...,N_T$) and subdominant axion emission. The overall enhancement is \begin{equation} {\cal G}=\frac{N_s}{p \mu} \sum_j \mu_j \simeq N_sN_T/p \gg 1, \quad \quad \Omega_{superstring} \sim {\cal G}\Gamma G \mu \end{equation} where $\mu$ is some average tension. Based on the above contributions, ${\cal G}$ can easily be as big as ${\cal G} \sim 10^4$, with a distribution in tensions that is roughly bounded by $G\mu \lesssim 2 \times 10^{-10}$. The tension in our S throat might be as small as $G\mu \sim 10^{-30}$ (i.e., TeV scale). On theoretical grounds it might be as high as the GUT scale, but observationally the highest tension in any throat should not exceed $G \mu \simeq 2 \times 10^{-10}$. If strings are created at energy scales below $T_{RH}$, it is easy to imagine scenarios where there are dozens of throats with separate scaling superstring networks.
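For illustration, the enhancement ${\cal G} = N_s N_T/p$ can be evaluated for representative values; the particular choices of $N_s$, $N_T$ and $p$ below are ours, not fixed by the theory:

```python
# Illustrative evaluation of the enhancement factor G = N_s * N_T / p.
# All parameter values here are assumed sample choices.
g_s = 0.1            # string coupling
p = g_s**2           # intercommutation probability ~ g_s^2
N_s = 2              # effective string species per throat
N_T = 50             # throats hosting scaling networks

G_enh = N_s * N_T / p   # ~1e4
```

Even modest values of $N_s$ and $N_T$ combined with $p \sim g_s^2$ push ${\cal G}$ to $10^4$, consistent with the estimate in the text.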
In estimating the probability of detectability, and for the sake of simplicity, we gather all the differences of cosmic superstrings from ordinary cosmic strings into a single scaling parameter ${\cal G}$. At times, we take ${\cal G} = 10^2$ as the canonical value for a fixed $F$-string tension. Further studies of the properties of the cosmic string spectrum (including baryons), their production, stability and interactions, and the cosmic evolution of the network, as well as their possible detection, will clearly be most interesting. It is reasonable to be optimistic about the detectability of cosmic superstrings, but this is far from guaranteed. There are other inflationary scenarios in string theory, mostly with the inflaton as a closed string mode, in contrast to brane inflation, in which the inflaton is an open string mode. Although the reheating process has not yet been carefully studied, the energy released towards the end of inflation is expected to go to closed strings directly, so the production of some cosmic superstrings may be expected. If any throat still contains a few $\bar D$3-branes today, the system would have relaxed to a non-supersymmetric NS 5-brane ``giant graviton'' configuration; that is, these $\bar D$3-branes can provide the uplift of our universe from a supersymmetric anti-de Sitter space to a non-supersymmetric de Sitter space with a small positive cosmological constant. If so, our universe today is classically stable but not fully stable and will decay at some point in the future. \section{Possible Detections of Cosmic Superstrings} In the braneworld scenario, there are many warped throats in the Calabi-Yau manifold, one of which must contain the standard model branes but all of which may contain cosmic superstrings. Strings in the standard model throat are limited to D-strings thickened in a stack of D3 branes (or D7 branes wrapping 4-cycles) \cite{Leblond:2004uc}.
Generically the throats have different warp factors, so the string tensions span a range of values. It is noteworthy that the strings in these other throats may dominate the string content of the universe. Throats without D3 branes may harbor a spectrum of bound states of F- and D-strings. Each throat contains its own scaling string network. We have subsumed all these effects in the detectability parameter ${\cal G} \sim 10^2$. How can all these strings be detected? Let us mention three possibilities, starting with the least promising. {\bf Fast radio bursts:} Fast radio bursts have been observed at cosmological distances with some repetitions but no evidence for periodicity thus far \cite{Lorimer:2007qn,Thornton:2013iua}. Among other astrophysical possibilities, cosmic strings have been suggested as a possible source of such bursts. Strings may carry charges and interact with fields present within the throats they occupy. For example, superconducting cosmic strings can carry currents and interact with electromagnetic fields \cite{Vachaspati:2008su,Cai:2011bi,Zadorozhna:2009zza,Cai:2012zd,Yu:2014gea,Ye:2017lqn}. To be able to emit standard model photons, such strings must sit in the same throat as the standard model particles. If string segments annihilate in the standard model throat, they can generate particles/fields belonging to the standard model. Historically, there have been many proposals to explain cosmic rays, neutrinos and gamma ray bursts in this manner \cite{Berezinsky:2001cp,Gruzinov:2016hqs}. In general, cosmic superstrings are not superconducting, but these considerations are important. A string in string theory is charged under a specific 2-form field, i.e., an axion field in 4-dimensional spacetime. Strings are universally coupled to gravity and specifically to the axions under which they are charged. Cusps on strings have long been identified as sources of gravitational wave bursts.
A cusp is a bit of string that momentarily approaches the speed of light. In doing so, a small region of the string doubles back on itself for a short period of time. Since it is charged under an axionic field, it behaves like a string-anti-string pair, completely unstable to annihilation and decay via axionic and gravitational wave bursts. In essence, gravitational and axionic beams emerge from the tip simultaneously when it is moving close to the speed of light. The production of axions is similar to that of gravitational waves (in terms of beaming, periodicity, etc.). Both would appear as bursts with the same characteristic time-dependence. Assuming generic mixing, a light axion (a closed string mode) produced in any throat may decay in the standard model throat to give two photons. In fact, no other standard model particle products are possible. An observer in the standard model throat may hope to detect not only gravitational waves but also photons from the cusp. Although the gravitational wave bursts are expected to be beam-like, the photons that result from the decay of the axion bursts will have a larger angular spread, giving rise to diffuse radiation (the photons still suffer the relativistic headlight effect). However, when the axion beam passes through a magnetic field, the Primakoff effect can take place, due to the coupling $ \propto a{\bf E \cdot B}$, converting the axion $a$ to a photon. Since the inter-galactic magnetic field carries little momentum, the momentum of the axion is largely carried by the photon produced, so this stimulated axion decay yields a beam of photons in roughly the same direction as the axionic beam. This might be the origin of some of the fast radio bursts observed. To test the idea we suggest a study of the correlations of fast radio bursts with gravitational wave observations. Such a study is practical because the angular directions to certain fast radio burst sources are precisely known.
If strings are responsible, the radio bursts and gravitational wave emission will be correlated in both space and time. Ref. \cite{Brandenberger:2017uwo} considered a somewhat different physical picture in which the cosmic string cusps decay directly to produce radio signals. Such a string must sit in the same throat as the standard model, just like the superconducting cosmic strings that emit standard model photons. Both the axion-mediated and the direct, standard model bursts from cusps would give similar space and time correlations of radio and gravitational wave bursts. Strings that can decay directly in the standard model throat will have intrinsically shorter lifetimes and are likely to represent a small subset of all the superstrings in the Calabi-Yau manifold. {\bf Microlensing:} The usual pictorial description of cosmic string lensing in 3+1 dimensions begins with a deficit angle in the geometry of the disk perpendicular to the string. When source, string and observer are all nearly aligned, there exist two straight line paths from source to observer that circumnavigate the string in opposite senses, clockwise and counterclockwise. This leads to double images, i.e., cosmic string lensing. When the string tension is high enough ($G \mu \sim 10^{-7}$), lensing of galaxies is possible, i.e., double images of the galaxy. For low tensions, the deficit angle is small, so point-like lensing is possible only for objects of small angular size like stars. For a typical distant star, we cannot resolve the double images, but only observe a doubling of flux, i.e., microlensing. Microlensing of stars has been discussed in Refs. \cite{Chernoff:2007pd,Chernoff:2014cba}. If the string lies in another throat, no standard model photons can ``circumnavigate'' the string in the sense that the above simple picture implies.
Nonetheless we show in Appendix \ref{stringlensing} how the geometry bends the photon path from a source in the S throat to reach the observer in the S throat when the string lies in another throat. This issue is important because the most sensitive tool for direct detection of low tension strings may turn out to be microlensing, a variant of normal lensing in which the observer measures flux changes without resolving the lensed images. String microlensing has been studied in some detail \cite{Bloomfield:2013jka}. {\bf Gravitational wave bursts:} Gravitational interactions are the traditional means of detecting the presence of minimally coupled cosmic strings. String-sourced plasma perturbations may be imprinted on the CMB power spectrum. Strings may create resolved images of background galaxies by lensing. These are examples of the direct consequences of a gravitational interaction. A somewhat less direct method of probing the string content is to measure the light element yield of big bang nucleosynthesis, since extra mass energy alters the cosmological expansion rate. Finally, one can hope to measure the gravitational emission in the form of bursts and a stochastic background with pulsar timing arrays and LIGO. For a recent review see \cite{Chernoff:2014cba}. In this paper, we present a more detailed analysis of the rate of gravitational wave bursts expected from cosmic superstrings for LIGO/VIRGO and LISA. In summary, the hunt for superstrings will be based on both gravitational and axionic degrees of freedom because they are sensitive to the string content of all the throats in a Calabi-Yau manifold. \section{Models for cosmologically-generated loops} Calculating the expected rate of gravitational wave bursts, stellar microlensing occurrences or two-photon decays of emitted axions requires a model for the string loop sources. We begin by describing the number of strings in the universe as a whole, including the distribution of loop sizes.
We utilize results for the dynamical motion of strings in growing matter perturbations to estimate the concentration of strings within our own Galaxy and on larger scales. Building a model makes explicit the dependence of the demography of the string population on microscopic parameters like string tension, number of throats, number of effective string species and probability of intercommutation even if they are not precisely known. We include as appendices a detailed description of the model and describe the context and most important consequences below. \subsection{Two Loop Sizes} In Kibble's description of the network \cite{Kibble:1984hp} long strings are stretched by the universe's expansion and intercommutations chop out loops which ultimately evaporate. With stretching, chopping and evaporation the scaling solution is an attractor and all the macroscopic cosmic strings properties (length of string per volume, correlation length, etc.) appear to scale \cite{Kibble:1976sj}. Virtually all analytic descriptions of network evolution for traditional cosmic strings begin with these processes. Cross sections, rate coefficients and efficiencies have generally been derived from simulations in which a realization of the network is followed in a large enough spacetime volume to infer its statistical properties. Luckily, such simulations rapidly enter the scaling regime so that the macroscopic properties can be established. However, important differences amongst various simulations have been observed, especially regarding the small structures on the network's long strings and the size of the loops formed from such strings. Since the loops provide the main observational diagnostic for superstrings, this is an important point and a consensus has only recently emerged. 
Some early simulations generated only tiny loops at the smallest available grid scale \cite{Bennett:1987vf,Bennett:1989ak,Bennett:1990uza,Allen:1990tv} while others \cite{Albrecht:1984xv,Albrecht:1989mk} found the network created predominantly large loops with sizes within a few orders of magnitude of the horizon scale. Small sized features on long strings are expected to damp by gravitational radiation, so there is a natural physical cutoff, but all simulations omit the direct calculation of the gravitational backreaction. It may be reasonable to imagine the grid scale cutoff plays a similar, dissipative role. Why small loops should predominate in some simulations and not others was unclear. Grid-based and discrete numerical simulations of cosmological string dynamics are generally expected to treat large scales easily and accurately, so why only some simulations led to large loops was also perplexing. In fact, intercommutations generate substructure on the long strings, so the dynamics at the horizon scale cannot be separated from that on small scales \cite{Austin:1993rg}. Recent work finds the string substructure has a fractal character which influences dissipation and small loop formation \cite{Polchinski:2006ee,Dubath:2007mf,Polchinski:2007rg,Polchinski:2007qc}. The current understanding is that string loops of two characteristic scales are generated in a scaling cosmological network during epochs of powerlaw expansion. Roughly 5-20\% of the string invariant length (invariant length equals the total energy per spatial increment divided by tension) that is chopped out of the expanding, horizon-crossing strings finds its way into large loops, where ``large'' means the size at time $t$ is roughly $\ell/t \sim 10^{-4}-10^{-1}$ \cite{Martins:2005es,Ringeval:2005kr,Vanchurin:2005pa,Olum:2006ix,BlancoPillado:2010sy,BlancoPillado:2011dq,Blanco-Pillado:2013qja}. In other words, large loops are comparable in size to the horizon at formation.
The remaining part of the excised invariant string length yields very small loops which move relativistically and evaporate in less than a characteristic expansion time ($\ell \propto (G \mu)^{1.2}$ and $(G \mu)^{1.5}$ or $H \tau \sim (G \mu)^{0.3}$ and $(G \mu)^{0.13}$ for the radiation era and matter era, respectively) \cite{Polchinski:2007qc}. The mechanism for the production of the small loops today is intimately tied to the small scale structure introduced by intercommutation on horizon scales at earlier epochs. When a loop is chopped out of smooth string (with continuous paths on the tangent sphere for right and left moving modes) the loop configuration typically contains a transient cusp in which two nearly parallel string segments approach each other and then recede (this occurs each period of loop oscillation). If the loop forms from horizon crossing string that is not smooth, then the first approach to the cusp configuration results in intercommutation, explosive sub-loop formation and excision of the cusp from the loop. Ref. \cite{Polchinski:2006ee} showed on analytic grounds that after many e-folds of expansion the large horizon crossing loops are replete with small scale structure. This mechanism and ultimately gravitational backreaction dispose of the bulk of the string invariant length. The important question is how much of the network escapes this fate and what are the properties of those loops? Numerical simulations have not yet been able to follow the buildup and scaling of the small scale structure on long strings, because its generation takes much longer than the macroscopic measures usually used to judge whether a simulation has reached the scaling regime. Nonetheless, the properties of the larger loops have now converged, and the uncertainties that remain due to the small loop effects are subdominant \cite{Blanco-Pillado:2013qja}.
To summarize the essentials: the model, the one used in our previous analyses \cite{Chernoff:2009tp}, assumes that a fraction $f \sim 0.05 - 0.2$ of the invariant length goes into large loops of size $\ell = \alpha t$ at time $t$, with $\alpha \sim 0.1$. For comparison, simulations for the radiative era \cite{Blanco-Pillado:2013qja} imply $\alpha \sim 0.1$ (their $\alpha$ is written in terms of the horizon scale $2t$) and $f \sim 0.1$. The remaining fraction $1-f$ goes into small loops of invariant size $\ell \sim (G \mu)^{1.2}$ moving relativistically (radiation era). The results are similar to model $M=2$ used for constraining cosmic strings with the first Advanced LIGO/VIRGO observing run \cite{Abbott:2017mem}. The large loop distribution is effectively established on horizon scales, utilizing the small fraction $f$ of the excised network. In our model, after a large loop forms it evolves independently, shrinking in size; it is the primary object of interest. The rest of the excised network forms small, short-lived loops (suddenly, as soon as a cusp appears) at roughly the same epoch as the fraction $f$ makes large loops. More detailed descriptions of the evolution of small loops (e.g. $M=2$ based on \cite{Blanco-Pillado:2013qja} and $M=3$ based on \cite{Lorenz:2010sm,Ringeval:2017eww}) combine simulation results and theoretical models. These models give different small loop distributions; one extrapolates simulation results ($M=2$) and the other fits a theoretical model for ongoing (not explosive) loop formation ($M=3$). \subsection{Velocity One Scale Model and Loop Density} Let $V$ be the physical volume and $E$ the energy of a network of long (horizon-scale) strings of tension $\mu$. Let $L$ be the length such that there is one string of invariant length $L$ in the volume $V=L^3$ (loops are not included in $L$). The physical energy density is $\rho_\infty = E/V = \mu L/V = \mu/L^2$.
From the encounter rate of strings with other strings and the intercommutation probability $p$ we deduce the expected energy transformed from network to loops (loop formation) and vice versa (reconnection). The newly formed loop size distribution is assumed to scale with the horizon size. We account for how the energy in the network's long strings is increased by stretching, lost by formation of loops and gained by reconnection, and conversely how the energy in the loop population is altered (loops are assumed small compared to the horizon and have negligible stretching). This is just Kibble's original network model \cite{Kibble:1984hp,Bennett:1985qt,Bennett:1986zn} with the addition of $p \ne 1$ and omitting the reconnection terms, which may be shown to be small. The Velocity One Scale (VOS) model \cite{Martins:1995tg,Martins:2000cs,Battye:1997hu,Pogosian:1999np} is supplemented with simulation-determined fits to describe chopping and velocity. Write $L = \gamma t$. In exact scaling $\gamma$ is constant, but we regard it and other quantities like it as slowly varying. The summary of the model is \begin{eqnarray} \rho_\infty & = & \frac{\mu}{ \gamma^2 t^2} \\ \frac{t}{\gamma}\frac{d\gamma}{dt} & = & -1 + Ht \left( 1 + v^2 \right) + \frac{C(t) p v}{2 \gamma} \end{eqnarray} where $H$ is the Hubble parameter and $C(t)$ is a chopping efficiency parameter fitted to numerical simulations. The equation for the velocity $v$ \cite{Martins:1995tg} is \begin{eqnarray} \frac{dv}{dt} & = & \left( 1 - v^2 \right) H \left( \frac{k(v)}{Ht \gamma} - 2 v \right) \\ k(v) & = & \frac{2 \sqrt{2}}{\pi} \frac{1 - 8 v^6}{1 + 8 v^6} \end{eqnarray} where $k(v)$ is a fit. When $Ht$, $C(t)$ and $v(t)$ are constant in time this gives the exact scaling solution. We treat these as two coupled ODEs for $\gamma$ and $v$ to be solved numerically. We begin at large $z$ when $Ht=1/2$, setting the left hand sides to zero ($d/dt \to 0$), and find an equilibrium point for $\gamma$ and $v$.
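The equilibrium starting point can be sketched numerically. Setting $d\gamma/dt = dv/dt = 0$ with $Ht = 1/2$ gives $\gamma = C p v/(1-v^2)$ and $C p v^2 = k(v)(1-v^2)$; the chopping value $C \simeq 0.23$ used below is the standard simulation fit and is an assumption here:

```python
import math

def k(v):
    # momentum parameter fit used in the VOS model
    return (2.0 * math.sqrt(2.0) / math.pi) * (1.0 - 8.0 * v**6) / (1.0 + 8.0 * v**6)

def radiation_equilibrium(C=0.23, p=1.0):
    """Radiation-era scaling fixed point: solve C*p*v^2 = k(v)*(1 - v^2)."""
    F = lambda v: C * p * v**2 - k(v) * (1.0 - v**2)
    lo, hi = 0.3, 0.72          # F changes sign on this bracket
    for _ in range(60):         # bisection
        mid = 0.5 * (lo + hi)
        if F(lo) * F(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    v = 0.5 * (lo + hi)
    gamma = C * p * v / (1.0 - v**2)   # gamma = L/t in scaling
    return gamma, v

gamma, v = radiation_equilibrium()
# radiation era: v ~ 0.66, gamma ~ 0.27
```

These values reproduce the familiar radiation-era scaling solution and serve as the initial condition for the slow evolution at lower redshift described next.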
As $z$ decreases the system begins to evolve because $C$ and $H t$ vary. We infer $H t$ from the multicomponent $\Lambda$-CDM cosmology. The rate at which energy is transformed to loops, ${\dot E}_{\ell}$, is \begin{equation} \frac{\dot E_{\ell}}{a^3} = C(t) \rho_\infty \frac{pv}{L} = C(t) \mu \frac{p v}{\gamma^3 t^3} \end{equation} where $a$ is the scale factor. We integrate this using the solutions for $\gamma$ and $v$ from the VOS, expressed as a single modestly varying function ${\cal A}$, so that \begin{equation} \frac{\dot E_{\ell}}{a^3} = \frac{\mu}{p^2 t^3} {\cal A} . \end{equation} \subsection{Loop Size Distribution Born of Large Loops} We assume that the large loops formed at a given time $t$ have size $\alpha t$ and consume a fraction $f$ of the invariant length (energy) being chopped out of the network. Large loops are non-relativistic with comparable geometric and invariant lengths. The birth rate density for loops of size $\ell$ at time $t$ is \begin{equation} \label{dndtdl} \frac{d n_\ell}{dt d\ell} = \frac{f {\cal A}}{\alpha p^2 t^4} \delta(\ell - \alpha t) . \end{equation} The loops evaporate by gravitational and axionic emission at a constant rate. The length at $t$ for a loop born at $t_b$ with size $\ell_b$ is \begin{equation} \label{looplengthatt} \ell[\ell_b, t_b, t] = \ell_b - \Gamma G \mu (t - t_b) . \end{equation} Integrating over birth times and sizes gives the differential loop size density \begin{eqnarray} \frac{dn}{d\ell} & = & \frac{ {\cal A}_b f \alpha^2 }{p^2} \left( \frac{a_b}{a} \right)^3 \frac{ \Phi^3 }{(\ell + \Gamma G \mu t)^4} \\ t_b & = & \frac{ \ell + \Gamma G \mu t}{ \alpha \Phi } \\ \Phi & = & 1 + \frac{ \Gamma G \mu}{ \alpha } \end{eqnarray} for $t_b < t$ and $\ell < \alpha t$. The quantities ${\cal A}_b$ and $a_b$ are evaluated at $t=t_b$. The form for $dn/d\ell$ peaks at $\ell=0$, but the quantities of greatest observational interest are weighted by $\ell$ or higher powers.
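Eq.~(\ref{looplengthatt}) implies that a large loop born with $\ell_b = \alpha t_b$ survives until $t_{ev} = t_b\,(1 + \alpha/\Gamma G\mu)$; a minimal sketch with the fiducial parameters:

```python
def loop_length(ell_b, t_b, t, Gamma=50.0, Gmu=1e-13):
    # ell(t) = ell_b - Gamma*G*mu*(t - t_b): linear shrinkage from
    # constant-rate gravitational (and axionic) emission
    return ell_b - Gamma * Gmu * (t - t_b)

alpha, Gamma, Gmu = 0.1, 50.0, 1e-13
t_b = 1.0                                   # birth time, arbitrary units
ell_b = alpha * t_b                         # birth size
t_ev = t_b * (1.0 + alpha / (Gamma * Gmu))  # evaporation time ~ 2e10 t_b
```

With these fiducial values a large loop outlives its birth epoch by a factor $\alpha/\Gamma G\mu \sim 2\times 10^{10}$, which is why loops born deep in the radiation era still populate the size spectrum today.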
The characteristic dissipative scale for the large loops is $\ell_d = \Gamma G \mu t$. If $G \mu < 7 \times 10^{-9} (\alpha/0.1)(50/\Gamma)$ the loops near $\ell_d$ today were born before radiation-matter equality, $t_{eq}$. For a simple numerical estimate today near the dominant small end of the size spectrum in the radiative era $t<t_{eq}$ we write \begin{eqnarray} \frac{a_b}{a} & = & \left( \frac{a_b}{a_{eq}}\right) \left( \frac{a_{eq}}{a} \right) \\ & \simeq & \left( \frac{t_b}{t_{eq}}\right)^{1/2} \left( \frac{t_{eq}}{t} \right)^{2/3} \end{eqnarray} and using $x \equiv \ell/\ell_d$ we have \begin{eqnarray} \ell \frac{dn}{d\ell} & = & \frac{x}{(1+x)^{5/2}} \left( \frac{ {\cal A} {\it f} } {p^2} \right) \left( {\Gamma G \mu} \right)^{-3/2} \left( \frac{\alpha t_{eq}}{t_0} \right)^{1/2} \left( \frac{1}{t_0} \right)^3 \label{approxdndl} \end{eqnarray} In the radiative era ${\cal A}_b \sim 7.68$ is close to constant; write the other string-related parameters $p=1$, $f=0.2 f_{0.2}$, $\alpha=0.1 \alpha_{0.1}$ and $\Gamma=50 \Gamma_{50}$; and from $\Lambda$CDM $t_{eq}=4.7 \times 10^4$ yr and $t_0=4.25 \times 10^{17}$ s. 
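The prefactor of eq.~(\ref{approxdndl}) can be spot-checked numerically with the parameters just quoted; a minimal sketch (our choice of standard constants for $c$, the year and the kpc; $\Phi \simeq 1$ as in the text) that reproduces the baseline coefficient $\sim 1.15 \times 10^{-6}~{\rm kpc}^{-3}$:

```python
import math

# Parameters quoted in the text (c = 1 units; times converted to lengths with c).
A, f, p = 7.68, 0.2, 1.0
alpha, Gamma, Gmu = 0.1, 50.0, 1e-13
yr = 3.156e7                     # seconds per year (standard constant)
t_eq = 4.7e4 * yr                # radiation-matter transition
t0 = 4.25e17                     # age of the universe, seconds
c = 2.998e8                      # m/s
kpc = 3.0857e19                  # m

# Prefactor of  l dn/dl = [x/(1+x)^{5/2}] * (A f / p^2) * (Gamma G mu)^{-3/2}
#                         * (alpha t_eq / t0)^{1/2} * (1/t0)^3
prefactor = (A * f / p**2) * (Gamma * Gmu)**-1.5 * math.sqrt(alpha * t_eq / t0)
prefactor /= (c * t0 / kpc)**3   # express the 1/t0^3 volume factor in kpc^-3
```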
These give a baseline result that applies to cosmic strings \begin{eqnarray} \ell \frac{dn}{d\ell} & = &1.15 \times 10^{-6} \ \ \frac{x}{(1+x)^{5/2}} \left(\Gamma_{50} \mu_{-13}\right)^{-3/2} f_{0.2} \alpha_{0.1}^{1/2} \ \ {\rm kpc}^{-3}\\ x & = & \frac{\ell}{\ell_d} \\ \mu_{-13} & = & \left( \frac{ G \mu}{10^{-13}} \right) \end{eqnarray} The characteristic length and mass of the loops just evaporating are \begin{eqnarray} \ell_d & = & 0.0206 \ \Gamma_{50} \mu_{-13} \ \ {\rm pc} \\ M_{\ell_d} & = & 0.043 \ \Gamma_{50} \mu_{-13}^2 \ \ {\rm M}_\odot \end{eqnarray} Numerical results for mass densities based on the approximate forms are \begin{eqnarray} \frac{d \rho}{d\ell} & = & \mu \ell \frac{dn}{d\ell} \\ & = & 2.41 \times 10^{-6} \ \ \frac{x}{(1+x)^{5/2}} \left(\Gamma_{50} \right)^{-3/2} \mu_{-13}^{-1/2} f_{0.2} \alpha_{0.1}^{1/2} \ \ {\rm M}_\odot {\rm kpc}^{-3} {\rm pc}^{-1} \\ \frac{d \rho}{d\log M_\ell} & = & \ell \frac{d\rho }{d\ell} = \mu \ell^2 \frac{dn}{d\ell} \\ & = & 4.98 \times 10^{-8} \ \ \frac{x^2}{(1+x)^{5/2}} \left(\Gamma_{50} \right)^{-1/2} \mu_{-13}^{1/2} f_{0.2} \alpha_{0.1}^{1/2}\ \ {\rm M}_\odot {\rm kpc}^{-3} \\ \frac{d \Omega_{\ell,0}}{d \log M_\ell} & = & \frac{1}{\rho_{cr,0}} \frac{d \rho}{d\log M_\ell} \\ & = & 3.66 \times 10^{-10} \ \ \frac{x^2}{(1+x)^{5/2}} \left(\Gamma_{50} \right)^{-1/2} \mu_{-13}^{1/2} f_{0.2} \alpha_{0.1}^{1/2} \end{eqnarray} In Fig. \ref{dndlogl} the solid lines show $\ell dn/d\ell$ today ($t=t_0$) for $G \mu = 10^{-13}$ to $10^{-9}$ in powers of 10; each peak is near $\ell = (2/3) \ell_d$. \begin{figure}[ht] \centering \includegraphics{dndlogl.eps} \caption{\label{dndlogl} The size distribution of loops today for a range of string tensions $ G\mu = 10^{-13}$ to $10^{-9}$ at the current epoch $t=t_0$. Solid lines are exact; dotted lines are approximate; ``x'' marks the expected peak $x=2/3$. 
The density is the average, homogeneous density in the universe without clustering; intercommutation $p=1$, fraction of large loops formed $f=0.2$ and scale of large loop size $\alpha=0.1$.} \end{figure} The approximate results are denoted by the dots, which lie quite close to the exact evaluation given by the solid lines. Note that both sets of lines have peaks near $x=2/3$. We will denote all these results as ``baseline'' -- they apply to one species of normal cosmic strings with intercommutation probability $p=1$, spatially averaged throughout the universe. Modifications to the baseline that originate in the differences between superstrings and field theory strings are lumped into a common factor ${\cal G}$ including the reduced intercommutation probability of superstrings as follows \begin{equation} \left( \frac{dn}{d\ell} \right)_{homog} = {\cal G} \left( \frac{dn}{d\ell} \right)_{baseline} . \end{equation} The homogeneous, cosmologically averaged, superstrings have loop densities that exceed the baseline densities by the factor $1 < {\cal G} < 10^4$. \subsection{String loop clustering} If a loop is formed at time $t$ with length $\ell = \alpha t$ then its evaporation time is $\tau = \ell/(\Gamma G \mu)$. For Hubble constant $H$ at $t$ the dimensionless combination $H \tau = \alpha/(\Gamma G \mu)$ is a measure of lifetime in terms of the universe's age. Superstring loops with moderate $\alpha$ and very small $\Gamma G \mu$ live many characteristic Hubble times. New large loops are born with mildly relativistic velocity. The peculiar center of mass motion is damped by the universe's expansion. A detailed study \cite{Chernoff:2009tp} of the competing effects (formation time, velocity damping, evaporation, efficacy of anisotropic emission of gravitational radiation) in the context of a simple formation model for the Galaxy shows that loops accrete when $\mu$ is small. 
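A spot-check of the characteristic scales quoted above, using standard SI constants (our choice of constants, not values from the text):

```python
# Characteristic loop scales for Gamma = 50, G mu = 1e-13, alpha = 0.1.
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
pc = 3.0857e16       # m
Msun = 1.989e30      # kg
t0 = 4.25e17         # age of the universe, s

Gamma, Gmu, alpha = 50.0, 1e-13, 0.1

mu = Gmu * c**2 / G              # tension as mass per unit length, kg/m
ld = Gamma * Gmu * t0 * c        # dissipative scale l_d = Gamma G mu t0, metres
M_ld = mu * ld                   # mass of a loop of length l_d

ld_pc = ld / pc                  # ~0.02 pc
M_ld_Msun = M_ld / Msun          # ~0.04 Msun
H_tau = alpha / (Gamma * Gmu)    # lifetime of a new loop in Hubble times
```

This reproduces $\ell_d \simeq 0.021$ pc and $M_{\ell_d} \simeq 0.043 \, {\rm M}_\odot$, and shows that such a loop lives $\sim 2 \times 10^{10}$ Hubble times.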
The degree of loop clustering relative to dark matter clustering is a function of $\mu$ and approximately independent of $\ell$. Smaller $\mu$ means older, more slowly moving loops and more effective clustering. The spatially dependent dark matter enhancement in a collapsed object is \begin{equation} {\cal E} = \frac{\rho_{DM}}{\Omega_{DM} \rho_c} \end{equation} where $\rho_{DM}$ is the dark matter density and $\rho_c$ is the critical density. The dark matter enhancement is very substantial throughout the Galaxy. At the local position ${\cal E} \sim 10^{5.5}-10^6$. The formation of the Galaxy by cold dark matter infall inevitably is accompanied by loops with low center of mass motions. The tension dependent enhancement to the homogeneous distribution of loops is \begin{eqnarray} {\cal F} & = &{\cal E} \beta(\mu) \end{eqnarray} where $0 < \beta(\mu) \lta 0.4$. For a fixed tension there is only weak $\ell$ dependence of ${\cal F}$, i.e. the enhancement is roughly independent of the individual loop length. The specific form for $\beta$ derived for the Galaxy is given in the appendix. Lower tension strings behave more and more like cold dark matter, i.e. $\beta$ increases as $\mu$ decreases. In fact, $\beta$ does not reach $1$ partially because loops do not survive in the Galaxy forever, each is eventually accelerated by the rocket effect and ejected before complete evaporation occurs. The tension dependent enhancement saturates ($\beta \to 0.4$) near $\mu = 10^{-15}$. The {\it local} string loop population is enhanced by the factor ${\cal F}$ with respect to the homogeneous distribution. Since dark matter is strongly clustered it follows that string loops with small $\mu$ are strongly clustered. We summarize the enhancement of the local Galactic population by \begin{equation} \left( \frac{dn}{d\ell} \right)_{local} = {\cal F} \left( \frac{dn}{d\ell} \right)_{homog} = {\cal F} {\cal G} \left( \frac{dn}{d\ell} \right)_{baseline} . 
\end{equation} This is the basis for rate calculations of microlensing and of gravitational wave bursts. Large ${\cal F}$ and large ${\cal G}$ make microlensing and gravitational wave detections of nearby loops feasible. \section{Estimate of the Rate of Gravitational Wave Bursts} Loops and long horizon-crossing strings will generate gravitational radiation. Strong emission is expected when a string element accelerates rapidly, notably at kinks and cusps. Here, we concentrate on the bursts expected from large loops and on the determination of the confusion limit which delimits the stochastic gravitational wave background of unresolved bursts from the same sources. We do not address the emission from the small loops or from the long strings or from any other sources. There are many current and future experiments with the potential to make direct detections of gravitational wave emission from string loops and/or set upper limits on it. These include Earth-based laser interferometers (LIGO\cite{TheLIGOScientific:2014jea}, VIRGO\cite{Accadia:2011zzc}, KAGRA\cite{Aso:2013eba}; for overview \cite{Evans:2014cwa}), space-based interferometers (LISA\cite{Audley:2017drz}, DECIGO\cite{Kawamura:2011zz}) and pulsar timing arrays (NANOGRAV\cite{Arzoumanian:2015liz}, European timing array\cite{Lentati:2015qwp}, Parkes\cite{Hobbs:2013aka}). The calculation methodology was formulated in \cite{Damour:2004kw, Damour:2001bk,Damour:2000wa,Siemens:2006yp} and we schematically follow the treatments \cite{Kuroyanagi:2012wm,Kuroyanagi:2012jf} with several modifications that play an important role in Earth-based experiments \cite{Chernoff:2007pd,DePies:2007bm,Chernoff:2009tp,Chernoff:2014cba}. The major changes with respect to previous treatments are: \begin{itemize} \item Tension-dependent clustering of string loops in the Galaxy halo (${\cal F}$). 
\item Only a small portion of the long strings' invariant length is transformed into large loops (fraction $f=0.1$ to make large loops of parameterized size $\alpha=0.1$). The rest is lost for the purpose of direct detection of gravitational wave emission. \item Enhancements of the string density with respect to field theory strings on account of multiple species of strings in each throat, diminished intercommutation probability and multiple throats (${\cal G}$). \end{itemize} There is no clustering in \cite{Abbott:2017mem}; their model $M=2$ has a similar fraction of large loops and they explore a range of $1/p$ comparable to the range ${\cal G}$ we have discussed. We outline the methodology and provide examples of the results. \subsection{Homogeneous Methodology} We follow \cite{Kuroyanagi:2012wm,Kuroyanagi:2012jf} for calculating event rates in a homogeneous universe with ${\cal F}={\cal G}=1$. The birth rate density for loops (eq. \ref{dndtdl}) is expressed in terms of $\gamma=L/t$. The function $\gamma(t)$ is numerically derived for a $\Lambda$CDM cosmology (eqs. \ref{modelstart}-\ref{modelend}). The cosmological treatment is essentially exact although the description of the network presumes that it is close to scaling at all times; this will fail (1) when the network first forms, (2) at the radiation-matter transition and (3) on large scales at late times as $\Lambda$ comes to dominate expansion. For the loops of interest none of these are consequential. After a loop is formed its length shrinks at a constant rate until complete evaporation (eq. \ref{looplengthatt}). 
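As a quick consistency check (ours), the linear evaporation law with $\ell_b = \alpha t_b$ inverts to the birth time $t_b = (\ell + \Gamma G \mu t)/(\alpha + \Gamma G \mu)$ used in the rate calculation:

```python
def length_at(t, t_b, alpha=0.1, GammaGmu=5e-12):
    """Loop length at time t for a loop born at t_b with l_b = alpha * t_b.
    GammaGmu = Gamma * G mu = 5e-12 corresponds to Gamma = 50, G mu = 1e-13."""
    return alpha * t_b - GammaGmu * (t - t_b)

def birth_time(ell, t, alpha=0.1, GammaGmu=5e-12):
    """Invert the evaporation law: t_b = (l + Gamma G mu t)/(alpha + Gamma G mu)."""
    return (ell + GammaGmu * t) / (alpha + GammaGmu)

# Hypothetical example: a radiation-era birth observed today (times in seconds).
t_b, t = 1.0e12, 4.25e17
ell = length_at(t, t_b)          # still positive: the loop has not yet evaporated
recovered = birth_time(ell, t)
```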
The Fourier transform of the gravitational wave amplitude $h(f)$ observed at Earth with frequency $f$ from a single passage of a cusp or kink on a loop at red-shift $z$ and having loop length $\ell$ at the time of emission has asymptotic form\cite{Damour:2001bk} \begin{eqnarray} h_{cusp} f & = & \frac{A_{cusp} G \mu \ell^{2/3}}{f_{em}^{1/3} r(z)} \\ h_{kink} f & = & \frac{A_{kink} G \mu \ell^{1/3}}{f_{em}^{2/3} r(z)} \\ r(z) & = & \int_0^z dz' \frac{1}{H(z')} \\ f_{em} & = & f(1+z) \end{eqnarray} where $r(z)$ is comoving distance, $H(z)$ is the Hubble constant and $f_{em}$ is the emission frequency. The transform $h(f_{em})$ vanishes for frequencies (approximately) less than the loop fundamental $f_{em} < 2/\ell$. The numerical quantities $A_{cusp}$ and $A_{kink}$ are order unity coefficients \cite{Damour:2001bk}. We conservatively fix $A_{cusp} = A_{kink} = 1$ (cf. $A_{cusp} \sim 2.68$ in \cite{Damour:2001bk,Kuroyanagi:2012wm,Kuroyanagi:2012jf}). Each cusp or kink on a loop emits beamed radiation with angular scale $\Theta$ and solid angle $\Omega$ and is observed (at Earth) with repetition frequency $f_{rep}$: \begin{eqnarray} \Theta & = & \left( \frac{ f_{em} \ell }{2} \right)^{-1/3} \\ \Omega_{cusp} & = & \pi \Theta^2 \\ \Omega_{kink} & = & 2 \pi \Theta \\ f_{rep} & = & \frac{2}{\ell (1+z)} . \end{eqnarray} Assume that there are $n$ active cusps or kinks per loop per fundamental period (and note separate ``cusp'' and ``kink'' labels are omitted when the same form applies to both types). Typically, a loop has an even number of cusps and an integer number of kinks. For numerical examples we take $n_{cusp}=2$ and $n_{kink}=4$. The solid angle for the kink presumes the beam pattern traces a great circle on the sky. Let $R$ be the rate of reception of signals of frequency $f$ at Earth and let $dR/dzdt_b$ be the differential rate with respect to the emission redshift $z$ of loops born at time $t_b$. 
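The beaming formulas above are easy to evaluate; a minimal sketch with hypothetical loop parameters (a 2 light-second loop emitting at 100 Hz, our illustrative numbers):

```python
import math

def beaming(f_em, ell, z=0.0):
    """Beam opening angle, solid angles and repetition frequency for a loop.

    f_em : emission frequency (Hz); ell : loop length in light-seconds (c = 1).
    Valid only above the loop fundamental, f_em > 2/ell.
    """
    assert f_em > 2.0 / ell
    theta = (f_em * ell / 2.0) ** (-1.0 / 3.0)
    omega_cusp = math.pi * theta**2          # pencil beam
    omega_kink = 2.0 * math.pi * theta       # fan beam tracing a great circle
    f_rep = 2.0 / (ell * (1.0 + z))
    return theta, omega_cusp, omega_kink, f_rep

theta, om_c, om_k, f_rep = beaming(100.0, 2.0)
```

For this example the cusp beam covers only $\Omega_{cusp}/4\pi \sim 10^{-2}$ of the sky, which is the origin of the $\Omega \, n \, f_{rep}/4\pi$ factor in the rate below.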
Distinguishing the emission rate density and birth rate density for clarity gives \begin{eqnarray} \frac{dR}{dz dt_b} & = & \left(\frac{dn}{dt}\right)_{em,z} \frac{dV}{dz} \times \frac{\Omega \ n \ f_{rep}}{4 \pi} \\ \left(\frac{dn}{dt}\right)_{em,z} & = & \left(\frac{dn}{dt}\right)_{b,z_b} \left( \frac{ a(z_b) }{ a(z) } \right)^3 \end{eqnarray} because the loop density scales like cold matter. It is straightforward to transform from time of birth to Fourier amplitude $d/dt_b \to d/dh$ by first writing $\ell$ in terms of $h$, $f$, $z$ and $G \mu$ for a given cosmology \begin{eqnarray} \ell_{cusp} = \left( \frac{ hf r f_{em}^{1/3} }{A_{cusp} G \mu} \right)^{3/2} \\ \ell_{kink} = \left( \frac{ hf r f_{em}^{2/3} }{A_{kink} G \mu} \right)^3 \end{eqnarray} and then substituting these expressions into the birth time $t_b$ and differentiating to find $dt_b/dh$ \begin{eqnarray} t_b & = & \frac{ \ell + \Gamma G \mu t}{\alpha + \Gamma G \mu} \\ \left(\frac{dt_b}{dh}\right)_{cusp} & = & \frac{3 \ell_{cusp}}{2 h (\alpha + \Gamma G \mu)} \\ \left(\frac{dt_b}{dh}\right)_{kink} & = & \frac{3 \ell_{kink}}{h (\alpha + \Gamma G \mu)} \end{eqnarray} With the change of variables \begin{equation} \frac{dR}{dz dh} = \frac{dR}{dz dt_b} \frac{dt_b}{dh} \end{equation} the rate for cusps and kinks is evaluated as an integral over $z$ for any given $h$ \begin{equation} \frac{dR}{dh} = \int dz \frac{dR}{dz dh} . \end{equation} Single strong signals can be separated from the background of overlapping signals if they occur less frequently than $f$, the frequency at which observations are made. We will define the amplitude $h_*$ at the frequency of highest sensitivity $f$ by \begin{equation} \int_{h_*}^\infty \frac{dR}{dh} dh = f . \end{equation} The rate per log amplitude is $h dR/dh$. Generalizing to a frequency-dependent amplitude $h_*(f)$ we write the stochastic background as \begin{equation} \Omega_{GW}(f) = \frac{2 \pi^2 f^3}{3 H_0^2} \int_0^{h_*} dh h^2 \frac{dR}{dh} . 
\end{equation} In the homogeneous case the rates are rescaled as $\frac{dn}{dt_b} \to f {\cal G} \frac{dn}{dt_b}$, where $f$ is the fraction of the long string length that enters large loops and ${\cal G}$ accounts for the number of effectively independent species of strings. \subsection{Clustering} We treat clustering inhomogeneity in a spherically symmetric fashion as if the Earth were at the center of the Galaxy, ignoring the CDM density variation at radii less than $R_{gc}=8.5$ kpc, the Sun's distance from the center. We retain the density variation for $R_{gc}<r<R_{TA}$ where $R_{TA}$ is the turn around radius of the halo of the Galaxy in a spherically symmetric infall model. The measured rotation curve of the Galaxy and the age of the universe imply $R_{TA}=1.1$ Mpc \cite{Chernoff:2009tp}. This power law description of the CDM halo density is accurate beyond the central regions where baryons concentrate out to the scale where infall times become comparable to the age of the universe. Quantitatively, \begin{eqnarray} \frac{\rho_{DM}(r)}{\rho_c \Omega_{DM}} = 10^{3.2} \left( \frac{r}{100 \ {\rm kpc}} \right)^{-2.25} . \end{eqnarray} The density scaling of the infall model is close to that of an ideal, flat rotation curve for which $\rho \propto r^{-2}$. Although very simple, the spherically symmetric description is suitable starting a few kpc from the Galactic center and reaching half way to Andromeda, i.e. $\sim 400$ kpc. Beyond the midpoint, of course, a monolithic spherical collapse cannot be accurate. The bulk of the matter in the halo lies at large distance but the falloff of burst signals with distance emphasizes nearby sources. The detailed density profile at the Galactic center (core or cusp) is less important than the total mass within $r<R_{gc}$ which is well-fixed by the rotation curve. Likewise, the burst rate will not be overly sensitive to the cutoff at the turn around radius because nearby sources are easier to detect. 
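The truncated profile is simple to implement; a sketch (the cutoff choices follow the text, while the $\beta = 0.2$ value is purely illustrative):

```python
import math

R_gc, R_TA = 8.5, 1100.0    # kpc: solar galactocentric radius, turnaround radius

def enhancement(r_kpc):
    """CDM overdensity E(r): power-law infall profile, flattened inside R_gc,
    set to the homogeneous background (1) beyond the turnaround radius."""
    if r_kpc > R_TA:
        return 1.0
    r = max(r_kpc, R_gc)                     # truncate the inner region
    return max(1.0, 10.0**3.2 * (r / 100.0) ** (-2.25))

def loop_enhancement(r_kpc, beta=0.2):       # beta(mu) <~ 0.4, tension dependent
    """Loop overdensity F(r) = max(1, beta * E(r))."""
    return max(1.0, beta * enhancement(r_kpc))

E_local = enhancement(8.5)   # ~10^5.6, consistent with the local value quoted above
```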
Consider a sphere of radius $R_{gc}$ with inner flat overdensity which joins smoothly onto a density profile like that of the infall model for $r>R_{gc}$. The CDM overdensity and the tension-dependent enhancement factor give the loop overdensity: \begin{eqnarray} {\cal E}(r) & = & \max( 1, \frac{\rho_{DM}(\max(r,R_{gc}))}{\rho_c \Omega_{DM}} ) \\ {\cal F}(r) & = & \max(1, \beta(\mu) {\cal E}(r) ) . \end{eqnarray} A schematic picture shows the relationship between the power law density profile in CDM and the adopted ${\cal F}(r)$. \begin{figure}[ht] \centering \includegraphics{radialprofiles.eps} \caption{The black line is the power law CDM profile that forms in spherical infall. The red line is the adopted string enhancement for $G \mu = 10^{-13}$. It is based on the truncated CDM profile at $r < R_{gc}$, the position of the Sun. This string loop distribution has been used to estimate the local contribution to gravitational wave bursts that emanate from the Galaxy and from larger distances.} \end{figure} The outer edge is close to the point where $\rho = \rho_c \Omega_{DM}$. The enhancement factor is included in the calculation by the replacement \begin{equation} \frac{dn}{dt} \frac{dV}{dz} \to {\cal F}(r=z/H) \frac{dn}{dt} \frac{dV}{dz} \end{equation} which alters the integration at small $z$. This is a conservative means of estimating the effect of clustering on detection because it discounts the most concentrated inner regions. \subsection{Noise} For a given $h(f)$ the signal to noise over a band $[f_{lo},f_{hi}]$ is \begin{equation} \rho = 2 \sqrt{ \int_{f_{lo}}^{f_{hi}} \frac{ |h|^2 } {S(f)} df } \end{equation} where $S(f)$ is the one-sided power spectral density (or, equivalently $S(f)=A(f)^2$ where $A(f)$ is amplitude spectral density). The magnitude of the Fourier transform (in the continuum limit) of an observed cusp or kink signal follows an approximate power law form in $f$: $h(f) \propto f^{-n}$ with $n=4/3$ for a cusp and $n=5/3$ for a kink. 
The cusp's envelope is smooth and the kink's somewhat more variable. The non-zero range of $h(f)$ extends from the frequency of the loop fundamental to a value set by how the direction of observation relates to the beam's emission axis. If the observational band $[f_{lo},f_{hi}]$ falls within the intrinsic range of the power law form then it suffices to calculate the signal to noise using a single frequency $f_{char}$. Write $(h f)_{char} = h(f_{char}) f_{char}$. For a given instrument with fixed $f_{lo}$, $f_{hi}$ and $S(f)$ the quantity of interest is \begin{equation} \frac{1}{(hf)_{inst}} = 2 \sqrt{ \int_{f_{lo}}^{f_{hi}} \left( \frac{f_{char}}{f} \right)^{2n} \frac{ df } {f_{char}^2 S(f)} } \end{equation} so that the signal to noise is simply written \begin{equation} \rho = \frac{(h f)_{char}}{ (hf)_{inst} } . \end{equation} Calculating $h dR/dh$ at $f=f_{char}$ for independent $h$ is equivalent to calculating $dR/d \log \rho$ as function of signal to noise $\rho$. We will present results in terms of $dR/d \log \rho$ where $R$ is measured in events per year. The selection of the frequency band to describe a given instrument must balance several considerations: ideally the range of $[f_{lo},f_{hi}]$ should be small so that the power law approximation has maximal validity and large to encompass as much of the signal as possible. Specifically, $f_{lo}$ should be small because large loops have higher amplitudes at smaller frequencies and $f_{hi}$ should be large enough to reach the most sensitive part of the instrument's noise curve. Our choices for $[f_{lo},f_{hi}]$ are given in the Table \ref{tabnoise}. For $S(f)$ we use an analytic noise model for LISA and numerical values of a plotted noise curve for LIGO. \footnote{Taken from ``LISA Unveiling a hidden Universe'', an ESA study chaired by Danzmann and Prince, Feb. 
2011, section 3.4 available at \url{sci.esa.int/science-e/www/object/doc.cfm?fobjectid=48363} and AdvLIGO noise curve in document LIGO-T0900288-v3 available at \url{dcc.ligo.org/public/0002/T0900288/003/AdvLIGO\%20noise\%20curves.pdf} } The calculations take $f_{char}=f_{hi}$ and the sensitivities to cusp and kink bursts are given in Table \ref{tabnoise}. \begin{table} \begin{center} \begin{tabular}{ccccc} \hline Instrument & $f_{lo}$ & $f_{hi}$ & $(hf)_{inst}$(cusp) & $(hf)_{inst}$(kink) \\ LISA & $3 \times 10^{-5}$ & $5 \times 10^{-3}$ & $3.45 \times 10^{-22}$ & $2.70 \times 10^{-22}$ \\ LIGO & $10$ & $220$ & $8.97 \times 10^{-24}$ & $5.32 \times 10^{-24}$ \end{tabular} \caption{\label{tabnoise}All frequencies in Hz. $f_{char}=f_{hi}$. } \end{center} \end{table} \subsection{Results} Calculations with ${\cal G}=1$, $\alpha=0.1$, $f=0.1$ and $\Gamma=50$ describe loops formed by traditional field theory (FT) strings based on the current understanding of network evolution. Superstring (SS) calculations assume ${\cal G}=10^2$. We find that for both LISA and LIGO the locally clustered strings can dominate the statistics of detected bursts over specific ranges of string tension. This statement is true for cusps and kinks in both FT and SS calculations. Fig. \ref{LIGO_Cusp_-14} shows a very wide view of LIGO cusp detections for FT strings with $G \mu = 10^{-14}$. The abscissa is $\log_{10} \rho$ and the ordinate is the $\log_{10}$ of the rate per year (per log interval of $\rho$). Roughly speaking, in similar graphs in this section we are observationally most interested if/when the lines enter the upper right hand quadrant: here the typical rates exceed one per year for non-trivial signal to noise ratios. \begin{figure}[ht] \centering \includegraphics{LIGO_Cusp_-14._.eps} \caption{\label{LIGO_Cusp_-14} Advanced LIGO detects cusp bursts for field theory (FT) strings with string tension $G \mu = 10^{-14}$. 
Blue lines illustrate differential rates as a function of log signal to noise ratio. The solid blue line is the total rate from all sources, the dotted blue line is the rate from homogeneous cosmology (no local clustering) and the dashed blue line is the rate for sources with $z>0.68$.} \end{figure} The solid blue line is the total forecast of detections for FT strings. It includes strings clustered within the halo of the Galaxy plus those throughout a $\Lambda$CDM homogeneous universe. The dotted blue line excludes the Galactic halo's local contribution -- it shows the contribution of the homogeneous cosmological distribution. The dashed blue line displays separately high redshift contributions (defined by $z > 0.68$). For a wide range of $\rho$ the detections are dominated by strings in the local halo. The vertical green line is the confusion limit for the total (solid) and for the homogeneous components (dotted), which overlap in this case. The greatest impact of string loop clustering on LIGO detections of cusps for field theory strings occurs near $G \mu \sim 10^{-13.3}$. Over the range $10^{-14.8} < G \mu < 10^{-12.5}$ the solid blue curves cross into the upper right hand quadrant and the rate of burst detections is dominated by loops in the halo. A more useful way to display the same results is shown in Fig. \ref{LIGO_Cusp_-14.alt2}, which uses the split log-linear abscissa \begin{equation} Q(\rho) = \left\{ \begin{tabular}{cc} $\rho - 1$ & $\rho > 1$ \\ $\log_{10} \rho$ & $\rho \le 1$ \end{tabular} \right. \end{equation} This choice has the effect of spreading out the interesting signal to noise ratios on the right hand side and compressing the very small ones on the left. The right hand side of the plot shows that the situation with many bursts per year only occurs for $\rho < 3$ ($Q<2$), a weak signal to noise for an experiment like LIGO. Signals of this sort are likely to fall below the threshold for many LIGO based searches. 
The linear scale on the right hand side makes this important fact more obvious than the previous rendition. \begin{figure}[ht] \centering \includegraphics{LIGO_Cusp_-14._alt2.eps} \caption{\label{LIGO_Cusp_-14.alt2} Advanced LIGO detects cusp bursts for $G \mu = 10^{-14}$ for field theory (FT) cosmic strings using the same line types as Fig. \ref{LIGO_Cusp_-14} with a split log-linear abscissa. To the left of the y-axis $\log_{10}\rho$, to the right $\rho - 1$ where $\rho$ is signal to noise.} \end{figure} Now consider the impact of the move to superstrings shown in Fig. \ref{LIGO_Cusp_-14.alt3}. We have argued that several factors suggest ${\cal G}=10^2$ is a reasonable summary of the enhancement effects of superstrings over field theory strings. This choice shifts all rates upward by the same factor and yields the purple line for the total LIGO burst rate for superstrings of this tension. The high signal to noise ($\rho > 10$) and large total rate (many per year) imply such strings are detectable. \begin{figure}[ht] \centering \includegraphics{LIGO_Cusp_-14._alt3.eps} \caption{\label{LIGO_Cusp_-14.alt3} LIGO detects cusp bursts for $G \mu = 10^{-14}$ for both field theory (FT) strings and superstrings (SS). The blue and green lines are the same as Fig. \ref{LIGO_Cusp_-14.alt2}. The purple line shows the superstrings' total detection rate for Advanced LIGO; the pink line supplements that with a hypothetical factor of 3 decrease in the noise spectral density. Only totals are shown for the superstring cases (no dotted or dashed lines and no confusion limits). } \end{figure} As a final consideration we included a hypothetical improvement to Advanced LIGO that decreases the power spectral density (PSD) $S(f)$ by a factor of 3. Improved PSD might arise from sensitivity upgrades to the LIGO/VIRGO detectors and/or from extending the network of detectors to include KAGRA\cite{Aso:2013eba} and India-LIGO\cite{Unnikrishnan:2013qwa}. 
This change has the effect of shifting the purple curve to the right and yields the pink line. These superstring cusp burst rates are likely to be realistically detectable. We will refer to this scenario as SS$^*$. (Below we will also consider a hypothetical improvement by a factor of 3 for LISA.) For each source and experiment we will define the minimally-interesting minimum tension (MIMT). The MIMT is the minimum characteristic tension for the curve to enter the upper right quadrant of figures like these, specifically, $dR/d\log\rho>1$ yr$^{-1}$ at $Q>0$. Tensions greater than the MIMT {\it might} be seen but tensions less than the MIMT are {\it very unlikely} to be seen. We will also define the probably-detectable minimum tension (PDMT). The PDMT is the minimum characteristic tension for one significant (arbitrarily taken to be $Q>10$) event per year, or $dR/d\log\rho>1$ yr$^{-1}$ at $Q>10$. Tensions greater than the PDMT {\it can} realistically be expected to generate detectable events for the assumed source and experiment. Figure \ref{fig:yaxLIGOCusp} plots $Q$ as a function of string tension for $dR/d\log\rho = 1$ yr$^{-1}$ for LIGO cusp bursts. This information is extracted by creating figures like Fig. \ref{LIGO_Cusp_-14.alt3} for many different tensions and locating where lines intersect the abscissa. \begin{figure}[ht] \centering \includegraphics{yaxLIGOCusp.eps} \caption{\label{fig:yaxLIGOCusp} LIGO cusp detection: $Q$ as a function of string tension for $dR/d\log\rho = 1$ yr$^{-1}$. The blue line is for field theory (FT) strings, the purple line is for superstrings (SS), the pink line is for superstrings with a reduced noise detector (SS$^{*}$). (These colors are the same as Fig. \ref{LIGO_Cusp_-14.alt3}.) 
The minimally-interesting minimum tensions (MIMTs) correspond to the left-most intersection of the solid line with the $Q=0$ axis; the probably-detectable minimum tensions (PDMTs) correspond to the left-most intersection of the solid lines with the horizontal $Q=10$ lines. The dashed lines are the results without clustering. The separation between dashed and solid lines of the same color at low $\mu$ is the enhancement due to clustering. } \end{figure} The dashed lines are calculations without clustering while the solid lines include that effect. The most immediate implication is that clustering enhances $Q$ at $G \mu \lta 10^{-12}$. Large signal to noise detections are expected for a tension range about two orders of magnitude in width, a range where unclustered loops primarily give weak signals. The solid lines cross the horizontal $Q=0$ axis (leftmost) at the MIMT; they cross the $Q=10$ ordinate (leftmost) at the PDMT. We will always quote the MIMT and PDMT taking account of clustering. Analogous figures for LIGO kink detection and LISA cusp and kink detection are given by Figs. \ref{fig:yaxLIGOKink}, \ref{fig:yaxLISACusp} and \ref{fig:yaxLISAKink}. See Tables \ref{tab:LIGOCusp}, \ref{tab:LIGOKink}, \ref{tab:LISACusp} and \ref{tab:LISAKink} for details. Now consider LIGO detection rates for kink bursts, summarized in Fig. \ref{fig:yaxLIGOKink}. The FT strings (blue lines) never cross the $Q=0$ axis (no MIMT, PDMT). The SS (purple lines) and SS$^*$ (pink lines) do not ever reach $Q=10$ (no PDMT). We do not anticipate frequent, strong kink events in LIGO. Generally speaking, the fundamental frequency of these loops is small compared to the range of frequencies at which LIGO is sensitive. The power radiated in the higher harmonics falls off more rapidly for kinks than cusps and so kinks prove to be harder to detect at the characteristic frequency at which LIGO is sensitive. 
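The split coordinate $Q(\rho)$ used throughout these figures is trivial to implement; a sketch:

```python
import math

def Q(rho):
    """Split log-linear coordinate of the burst-rate figures:
    linear (rho - 1) above rho = 1, logarithmic below."""
    return rho - 1.0 if rho > 1.0 else math.log10(rho)
```

Note that the PDMT threshold $Q > 10$ corresponds to signal to noise $\rho > 11$.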
\begin{figure}[ht] \centering \includegraphics{yaxLIGOKink.eps} \caption{\label{fig:yaxLIGOKink}LIGO kink detection: Same as Fig. \ref{fig:yaxLIGOCusp} but with a reduced vertical scale. The solid lines never reach $Q=10$ even though there is an enhancement from clustering.} \end{figure} This fundamental frequency mismatch between source and detector strongly motivates consideration of space-based detectors like LISA that are designed for lower frequencies. For cusp bursts, LISA can detect FT strings, SS and SS$^*$ with MIMT $G \mu = 10^{-15.6}$, $10^{-16}$ and $10^{-16.3}$, respectively. The clustering dramatically enhances the sensitivity at $G \mu \lta 10^{-11}$, a number somewhat dependent upon FT, SS or $SS^*$. Conversely, the cusp burst rates for tensions in the range $G \mu \gta 10^{-11}$ do not bear a strong imprint (greater than factor 2 enhancement) from the local halo population. To illustrate, Fig. \ref{LISA_Cusp_-11._alt3} shows the situation for $G \mu = 10^{-11}$. Note that the FT clustered and unclustered results lie on top of each other. \begin{figure}[ht] \centering \includegraphics{LISA_Cusp_-11._alt3.eps} \caption{\label{LISA_Cusp_-11._alt3} LISA detects cusp bursts for $G \mu = 10^{-11}$ for field theory (FT) strings, superstrings (SS) and superstrings with less noise (SS$^*$). The line types are the same as Fig. \ref{LIGO_Cusp_-14.alt3}. Clustering is irrelevant -- the blue solid line for total and blue dotted line for homogeneous cosmology give essentially identical results. } \end{figure} The universe as a whole provides the dominant source of loops, in part because high tension strings are less clustered (they do not track the dark matter profile as closely as low tension ones) and in part because higher tension strings emit signals of larger intrinsic amplitude that are detectable at larger distances. 
Detected bursts from strings with $G \mu < 10^{-11}$ are largely sourced by the halo population and all rates are significantly enhanced by clustering. Fig. \ref{LISA_Cusp_-13._alt3} illustrates the case for $G \mu = 10^{-13}$. \begin{figure}[ht] \centering \includegraphics{LISA_Cusp_-13._alt3.eps} \caption{\label{LISA_Cusp_-13._alt3} LISA detects cusp bursts for $G \mu = 10^{-13}$ from field theory (FT) strings, superstrings (SS) and superstrings with less noise (SS$^*$). The line types are the same as Fig. \ref{LIGO_Cusp_-14.alt3}. The highly significant detections $\rho \sim 20$ are mostly sourced by the halo -- the blue solid line for total rate is much larger than the blue dotted line for homogeneous cosmology. } \end{figure} Note that the clustered and unclustered FT strings are now quite different. Of course, the rate of SS detection is enhanced with respect to FT strings by the factor ${\cal G}=10^2$. On the other hand, the detectable range in $G \mu$ (see Fig. \ref{fig:yaxLISACusp}) is not significantly widened because $dR/d\log \rho$ is a steep function of $G \mu$. In terms of the MIMT the changes are rather small: $G \mu = 10^{-15.6}$ for FT, $10^{-16}$ for SS and $10^{-16.3}$ for SS$^*$. \begin{figure}[ht] \centering \includegraphics{yaxLISACusp.eps} \caption{\label{fig:yaxLISACusp}LISA cusp detection: Same as Fig. \ref{fig:yaxLIGOCusp} with a larger vertical scale. The clustering greatly enhances the sensitivity at small tension.} \end{figure} The situation for LISA kinks is summarized in Fig. \ref{fig:yaxLISAKink}. FT strings with $G \mu \gta 10^{-11}$ traverse the upper right hand quadrant and those with $G \mu \gta 10^{-9}$ have sufficient numbers and rates to be seen. These results are not impacted by the clustering. Interestingly, for $G \mu < 10^{-11}$ the clustering turns on and allows FT strings to be detected in a lower range of tensions $10^{-13.6} < G \mu < 10^{-11.3}$. A more extreme version of this situation holds for SS and SS$^*$. 
Kink bursts for $G \mu \gta 10^{-11}$ are not sourced by the halo; those below are. \begin{figure}[ht] \centering \includegraphics{yaxLISAKink.eps} \caption{\label{fig:yaxLISAKink}LISA kink detection: Same as Fig. \ref{fig:yaxLIGOCusp} with a larger vertical scale. The clustering greatly enhances the sensitivity at small tension.} \end{figure} The tension range $G \mu \gta 10^{-14}$ should allow reliable detections. Fig. \ref{LISA_Kink_-13._alt3} illustrates the situation when clustering is important and strong signals are produced. \begin{figure}[ht] \centering \includegraphics{LISA_Kink_-13._alt3.eps} \caption{\label{LISA_Kink_-13._alt3} LISA detects kink bursts for $G \mu = 10^{-13}$ for field theory strings and superstrings. The line types are the same as Fig. \ref{LIGO_Cusp_-14.alt3}. Most detections are sourced by the halo -- the blue solid line for total is much larger than the blue dotted line for homogeneous cosmology. } \end{figure} The MIMT for LISA kink bursts is $G \mu \simeq$ a few times $10^{-16}$. The detectable rates for tensions $G \mu < 10^{-11}$ are significantly enhanced by clustering. We can condense and summarize the outcomes in terms of the MIMT and PDMT. Table \ref{tab:MIMTPDMT} lists LIGO and LISA experiments, for FT, SS and SS$^*$ (all with the effects of clustering included). \begin{table} \begin{center} \begin{tabular}{cccc}\\ \hline \multicolumn{4}{c}{LIGO Cusp}\\ $Q$ & FT & SS & SS$^*$ \\ 0 & -14.8 & -15.4 & -15.7 \\ 10 & -10.0 & -14.2 & -14.5 \\ \hline \multicolumn{4}{c}{LIGO Kink}\\ $Q$ & FT & SS & SS$^*$ \\ 0 & & -13.2 & -13.6 \\ 10 & & & \\ \hline \multicolumn{4}{c}{LISA Cusp}\\ $Q$ & FT & SS & SS$^*$ \\ 0 & -15.6 & -16.0 & -16.3 \\ 10 & -14.6 & -15.0 & -15.5 \\ \hline \multicolumn{4}{c}{LISA Kink}\\ $Q$ & FT & SS & SS$^*$ \\ 0 & -14.5 & -15.1 & -15.4 \\ 10 & -13.6 & -14.1 & -14.4 \end{tabular} \caption{\label{tab:MIMTPDMT} $\log_{10} G \mu$ for MIMT ($Q=0$) and PDMT ($Q=10$) for LIGO and LISA cusps and kinks.
Blank entries mean there is no value in the range $10^{-17} < G \mu < 10^{-7}$. These estimates include the effect of clustering. For a characteristic burst rate $dR/d\log\rho = 1$ yr$^{-1}$, the line labeled $Q=0$ is the tension below which detection is very unlikely. The line labeled $Q=10$ is the tension for a strong, probably-detectable signal (signal to noise $\rho=11$). Greater tensions generally yield stronger signals, but see Figs. \ref{fig:yaxLIGOCusp}, \ref{fig:yaxLIGOKink}, \ref{fig:yaxLISACusp} and \ref{fig:yaxLISAKink} for the non-monotonic impact of clustering on the rate forecasts. } \end{center} \end{table} Summarizing the situation for SS (superstring loops with ${\cal G}=10^2$, no PSD enhancement), clustering within the Galaxy has a favorable impact on the forecast for experimentally accessible gravitational wave bursts (frequency of occurrence $> 1$ yr$^{-1}$ and S/N $>10$) from cusps on strings with tensions $G \mu < 10^{-11.9}$ for LIGO/Virgo and $G \mu < 10^{-11.2}$ for LISA, and for bursts from kinks for tensions $G \mu < 10^{-10.6}$ for LISA. Frequent, high S/N detections of cusps are expected for $G \mu \gta 10^{-14.2}$ (LIGO) and $G \mu \gta 10^{-15}$ (LISA), and of kinks for $G \mu \gta 10^{-14.1}$ (LISA). The table provides similar information for FT (field theory strings) and SS$^*$ (superstrings with enhanced PSD). More detailed information is available by inspection of Figs. \ref{fig:yaxLIGOCusp}, \ref{fig:yaxLIGOKink}, \ref{fig:yaxLISACusp} and \ref{fig:yaxLISAKink} and of Tables \ref{tab:LIGOCusp}, \ref{tab:LIGOKink}, \ref{tab:LISACusp} and \ref{tab:LISAKink}. \newpage \section{Summary} We have reviewed the physical basis in string theory for the occurrence of cosmic superstrings produced during the epoch of inflation with particular attention paid to the braneworld scenario.
The Calabi-Yau manifold likely hosts many different varieties of cosmic superstrings (in terms of tension, charge, etc.), each with its own scaling network, uncoupled except for the mutual expansion of the large dimensions. The main signals of cosmic superstrings in other throats will be carried by gravitational and axionic degrees of freedom. We have reviewed the cosmology of superstrings, contrasting it with field theory strings. The warping of the throats of the Calabi-Yau manifold lowers the string tension. The loops formed by the scaling network dominate the total superstring contribution to the critical density. The expansion of the universe allows low tension strings to slow down enough to cluster. We have presented a simple model that quantitatively encapsulates these understandings and allows straightforward evaluation of microlensing rates for stars in the galaxy, cusp and kink gravitational radiation and two-photon decays from axions in the standard model throat. Here, we present forecasts for bursts for LIGO and LISA and note that clustering of loop sources within the Galaxy raises the rates of detection and signal strengths for low tension strings in these experiments. Conversely, these results imply that stricter upper limits are achievable for bursts from strings in tension ranges where local clustering dominates the signal. Elsewhere we will discuss the implications for the stochastic background, microlensing and two-photon production from axions. \section*{Acknowledgment} We thank Tom Broadhurst, Eanna Flanagan, Ariel Goobar, Craig Hogan, Liam McAllister, Xavier Siemens, Masahiro Takada and Barry Wardell for valuable discussions. DFC acknowledges that this material is based upon work supported by the National Science Foundation under Grant No. 1417132. SHHT is supported by the CRF Grant HKUST4/CRF/13G and the GRF 16305414 issued by the Research Grants Council (RGC) of the Government of the Hong Kong SAR.
\newpage \begin{longtable}[c]{ccccccc} \caption{LIGO Cusp} \label{tab:LIGOCusp}\\ \toprule $\log_{10} G \mu$ & \multicolumn{6}{c}{Q}\\ & \multicolumn{2}{c}{Field Thy} & \multicolumn{2}{c}{Superstrings} & \multicolumn{2}{c}{Enhanced S/N} \\ & Hmg & Cl & Hmg & Cl & Hmg & Cl \\ \midrule \endfirsthead \toprule $\log_{10} G \mu$ & \multicolumn{6}{c}{Q}\\ & \multicolumn{2}{c}{Field Thy} & \multicolumn{2}{c}{Superstrings} & \multicolumn{2}{c}{Enhanced S/N} \\ & Hmg & Cl & Hmg & Cl & Hmg & Cl \\ \midrule \endhead \input{muQ-LIGO-CuspShort} \caption{Q as a function of string tension for the LIGO detection rate $dR/d\log\rho=1$ yr$^{-1}$ of cusps for 3 cases: FT (field theory strings, ${\cal G}=1$), SS (superstrings, ${\cal G}=10^2$) and SS$^*$ (superstrings with improved PSD). Separate unclustered (Hmg) and clustered (Cl) calculations are reported for each type of source/experiment. Entries with $Q<0$ are suppressed. } \end{longtable} \begin{longtable}[c]{ccccccc} \caption{LIGO Kink} \label{tab:LIGOKink}\\ \toprule $\log_{10} G \mu$ & \multicolumn{6}{c}{Q}\\ & \multicolumn{2}{c}{Field Thy} & \multicolumn{2}{c}{Superstrings} & \multicolumn{2}{c}{Enhanced S/N} \\ & Hmg & Cl & Hmg & Cl & Hmg & Cl \\ \midrule \endfirsthead \toprule $\log_{10} G \mu$ & \multicolumn{6}{c}{Q}\\ & \multicolumn{2}{c}{Field Thy} & \multicolumn{2}{c}{Superstrings} & \multicolumn{2}{c}{Enhanced S/N} \\ & Hmg & Cl & Hmg & Cl & Hmg & Cl \\ \midrule \endhead \input{muQ-LIGO-KinkShort} \caption{Q as a function of string tension for the LIGO detection rate $dR/d\log\rho=1$ yr$^{-1}$ of kinks for 3 cases: FT (field theory strings, ${\cal G}=1$), SS (superstrings, ${\cal G}=10^2$) and SS$^*$ (superstrings with improved PSD). Separate unclustered (Hmg) and clustered (Cl) calculations are reported for each type of source/experiment. Entries with $Q<0$ are suppressed.
} \end{longtable} \begin{longtable}[c]{ccccccc} \caption{LISA Cusp} \label{tab:LISACusp}\\ \toprule $\log_{10} G \mu$ & \multicolumn{6}{c}{Q}\\ & \multicolumn{2}{c}{Field Thy} & \multicolumn{2}{c}{Superstrings} & \multicolumn{2}{c}{Enhanced S/N} \\ & Hmg & Cl & Hmg & Cl & Hmg & Cl \\ \midrule \endfirsthead \toprule $\log_{10} G \mu$ & \multicolumn{6}{c}{Q}\\ & \multicolumn{2}{c}{Field Thy} & \multicolumn{2}{c}{Superstrings} & \multicolumn{2}{c}{Enhanced S/N} \\ & Hmg & Cl & Hmg & Cl & Hmg & Cl \\ \midrule \endhead \input{muQ-LISA-CuspShort} \caption{Q as a function of string tension for the LISA detection rate $dR/d\log\rho=1$ yr$^{-1}$ of cusps for 3 cases: FT (field theory strings, ${\cal G}=1$), SS (superstrings, ${\cal G}=10^2$) and SS$^*$ (superstrings with improved PSD). Separate unclustered (Hmg) and clustered (Cl) calculations are reported for each type of source/experiment. Entries with $Q<0$ are suppressed. } \end{longtable} \begin{longtable}[c]{ccccccc} \caption{LISA Kink} \label{tab:LISAKink}\\ \toprule $\log_{10} G \mu$ & \multicolumn{6}{c}{Q}\\ & \multicolumn{2}{c}{Field Thy} & \multicolumn{2}{c}{Superstrings} & \multicolumn{2}{c}{Enhanced S/N} \\ & Hmg & Cl & Hmg & Cl & Hmg & Cl \\ \midrule \endfirsthead \toprule $\log_{10} G \mu$ & \multicolumn{6}{c}{Q}\\ & \multicolumn{2}{c}{Field Thy} & \multicolumn{2}{c}{Superstrings} & \multicolumn{2}{c}{Enhanced S/N} \\ & Hmg & Cl & Hmg & Cl & Hmg & Cl \\ \midrule \endhead \input{muQ-LISA-KinkShort} \caption{Q as a function of string tension for the LISA detection rate $dR/d\log\rho=1$ yr$^{-1}$ of kinks for 3 cases: FT (field theory strings, ${\cal G}=1$), SS (superstrings, ${\cal G}=10^2$) and SS$^*$ (superstrings with improved PSD). Separate unclustered (Hmg) and clustered (Cl) calculations are reported for each type of source/experiment. Entries with $Q<0$ are suppressed. } \end{longtable} \newpage
\section{Introduction} \label{sec:intro} The history of the formation of Mars' moons Phobos and Deimos is still an open question. It has been the subject of several studies, which point to a capture origin, in-situ formation or impact-generated formation \citep[and references therein]{2011A&ARv..19...44R,2015Icar..252..334C,2016NatGe...9..581R}. Accretion within an impact-generated disk \citep[]{2011Icar..211.1150C,2016NatGe...9..581R} is gaining more support as it can explain several properties of the Mars' moons such as their masses and orbital parameters \citep{2016NatGe...9..581R,hesselbrock2017ongoing,2017ApJ...845..125H,2017ApJ...845..125HBIS}. Phobos has a very peculiar infrared spectrum. Although mid-infrared (MIDIR) spectra show different features, the visible (VIS) and near-infrared (NIR) spectra are characterized by a lack of absorption features \citep[]{1999JGR...104.9069M,2011P&SS...59.1308G,2011A&ARv..19...44R,2015aste.book..451M}. \citet{1999JGR...104.9069M} isolated two main regions, named ``red'' and ``blue'', on Phobos' surface that have different spectral characteristics, which are best matched by D- and T-type asteroids, respectively \citep[]{1991JGR....96.5925M,1999JGR...104.9069M,2002Icar..156...64R}. \citet{2011P&SS...59.1308G} presented a detailed investigation of the possible chemistry of Phobos' surface. They found that the ``blue'' region can be fitted with a phyllosilicate-rich material, while the ``red'' region is best fitted when tectosilicates, mainly feldspar, are included in the model. Moreover, they found that no class of chondritic material can match the observed spectra. Nevertheless, they pointed out that different, more complex mixtures of dust could be able to reproduce the observed trends. The featureless VIS-NIR spectra are often associated with strong space weathering \citep{1999JGR...104.9069M,2011A&ARv..19...44R}.
However, \citet{2011P&SS...59.1308G}, following the spectroscopic studies of \citet{1981JGR....86.7967S}, \citet{1989JGR....94.9192S}, \citet{1990JGR....95.8323C}, \citet{1990JGR....95..281C}, \citet{1990Icar...84..315C}, \citet{1990Icar...86..383C}, \citet{burns1993mineralogical} and \citet{2007M&PS...42..235K}, listed a series of possible mechanisms that can reduce the strength of the spectra and match the observations: i) since the 1-2$\mu$m feature arises from iron-bearing materials such as pyroxene and olivine, the absence of those compounds may weaken the spectrum; ii) a mixture of opaque materials such as metallic iron, iron oxides and amorphous carbon mixed with olivine and pyroxene can dramatically reduce the VIS/NIR bands; iii) solids that result from quenching of the liquid state may have their reflectance properties reduced as they lack a perfect crystalline structure; iv) the reflectance of fine-grained materials decreases as the size of the grains decreases. \citet{2017ApJ...845..125H} presented detailed Smoothed Particle Hydrodynamics (SPH) simulations in which they determined the dynamical, physical and thermodynamical properties of an impact-generated disk. They found that the material that populates the disk is initially a mixture of gas ($\sim 5$\%) and melts ($\sim 95$\%). This information, together with the Martian composition and hypotheses on the impactor, can be used to model the building blocks of Phobos and Deimos. In this work we present a study of the bulk composition of Mars' moons following the giant-impact scenario. Our aim is to provide more clues on the origin of the moons, their chemical composition, their infrared spectra, and the nature of the impactor itself. Furthermore, JAXA's MMX\footnote{http://mmx.isas.jaxa.jp/en/index.html} mission plans to observe Phobos and Deimos in detail and return samples (at least 10~g) from the surface of Phobos.
Our results could then be used as guidelines to help in their analysis and interpretation. Starting from different initial compositions of the impactor (from Mars-like to chondritic-like), we compute the thermodynamic equilibrium \citep{DeHoff1993} to solve for the stable phases that may condense from the gas in the impact-generated disk. Additionally, we compute the composition of the cooling melt to investigate how it will eventually differ from the condensates. The resulting condensates and solidified melt are then taken as proxies for the building blocks of Phobos and Deimos and discussed further. In this work we will mainly focus on Phobos, as more observations are available and as it will be the main sampling target of JAXA's MMX mission. Nevertheless, the formation of Deimos follows the same proposed scenario. The paper is structured as follows: in section~\ref{methods} we describe the techniques and the model we use in our calculations. In section~\ref{results} we present our results, which will be discussed in section~\ref{discussion}. Conclusions are summarized in section~\ref{conclusions}. \section{Model and methods} \label{methods} \citet{2017ApJ...845..125H} calculated that the temperature in the Mars moon-forming region of the disk reaches $T\sim2000$~K just after the impact. The value of $P\sim10^{-4}$~bar is chosen as our fiducial pressure as it is, for the given temperature, the average saturation pressure for several mixtures calculated in \citet{2013ApJ...767L..12V} and the average pressure in the disk profiles of \citet{2016ApJ...828..109R} and \citet{2017ApJ...845..125H} where gas and melt coexist. Under these conditions the material in the disk that comes from Mars and from the impactor will result in a mixture composed of gas and melt \citep{2017ApJ...845..125H}. \begin{figure} \centering {\includegraphics[width=0.5\columnwidth]{fig1.pdf}} \caption{This cartoon describes the considered scenario.
After the impact, part of the Mars material, as well as part of the impactor, will be ejected at high temperature and will vaporize into gas. The gas mixture will then condense into dust. On the other hand, the unvaporized material from Mars and the impactor will form a melt and then solidify. Phobos and Deimos will be the result of the accretion of these two components. The yellow region represents the part of the disk within the Roche limit \citep{2017ApJ...845..125H}.} \label{fig1} \end{figure} \citet{2017ApJ...845..125H} showed that the building blocks of Phobos and Deimos would be composed of a mixture of about half Martian material and half impactor material. We, thus, assume that the gas is a well-mixed combination of two components: the gas released by heating up Mars material plus the gas released by heating up impactor material. We then assume that the melt is a mixture of the unvaporized material from the two bodies \citep{2017ApJ...845..125H}. Figure~\ref{fig1} shows a cartoon of the proposed model. As the disk cools down, the gas will eventually re-condense and the melt will solidify. In this work, we define, for ease of understanding, {\it dust} as the condensates from the gas phase and {\it solids} as the material that results from the solidification of the melt. In order to determine the composition of the dust that will condense from the gas phase we assume thermodynamic equilibrium \citep{DeHoff1993}: at constant temperature and pressure, the stable state of a system is the composition that minimizes its Gibbs free energy. Although it is an approximation, thermodynamic equilibrium is a powerful tool to understand the evolution of the chemical composition of complex systems.
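The constrained minimization just described can be sketched in a few lines. The following toy example minimizes $G/RT$ for an ideal-gas H--O system subject to element conservation; the species list and the dimensionless chemical potentials are illustrative assumptions, not values from the HSC database or from our calculations.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative standard chemical potentials mu0/RT at the chosen T, P
# (assumed numbers for a toy system, not HSC-database data).
species = ["H2(g)", "O2(g)", "H2O(g)"]
mu0_RT = np.array([0.0, 0.0, -10.0])

# Element-conservation matrix: rows = elements (H, O), columns = species.
A = np.array([[2, 0, 2],    # H atoms per molecule
              [0, 2, 1]])   # O atoms per molecule
b = np.array([2.0, 1.0])    # total moles of H and O in the mixture

def gibbs(n):
    """G/RT = sum_i n_i (mu0_i/RT + ln x_i) for mole numbers n."""
    n = np.clip(n, 1e-12, None)          # keep logs finite
    return float(np.sum(n * (mu0_RT + np.log(n / n.sum()))))

res = minimize(gibbs, x0=np.full(3, 0.3),
               bounds=[(1e-12, None)] * 3,
               constraints={"type": "eq", "fun": lambda n: A @ n - b},
               method="SLSQP")
n_eq = res.x   # equilibrium mole numbers; H2O dominates for these mu0
```

In the real calculation the species list runs over all the gas and dust compounds of Table~\ref{table2} and the element totals come from Table~\ref{table3}.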
This technique has been extensively used in the study of the chemistry of gas and dust in several astrophysical environments: from the Solar Nebula, meteorites and protoplanetary disks \citep{1979Icar...40..446L,Yoneda1995,2003ApJ...591.1220L,2006mess.book..253E,2016MNRAS.457.1359P} to the dusty envelopes of stars \citep{1999A&A...347..594G,1997AIPC..402..391L,2001GeCoA..65..469E} and the composition of exoplanets \citep{2010ApJ...715.1050B}. To compute the thermodynamic equilibrium we use the HSC software package (version 8) \citep{roine2002outokumpu}, which includes the Gibbs free energy minimisation method of \citet{White1958}. Thermodynamic data for each compound are taken from the database provided by HSC \citep[and references therein]{roine2002outokumpu}. HSC has been widely used in material science and it has already been tested in astrophysics, showing very good reliability in predicting the composition of different systems \citep{2005Icar..175....1P,2011MNRAS.414.2386P,2010ApJ...715.1050B,2012ApJ...759L..40M}. To calculate the composition of the solids from the cooling melts we use the normative mineralogy (CIPW-norm) \citep{10.2307/30060535} and the work of \citet{2016ApJ...828..109R} as a benchmark. The CIPW-norm is one of the most widely used techniques to determine, to a first approximation, the equilibrium composition of a multicomponent melt \citep{10.2307/30060535}. \citet{2017ApJ...845..125H} showed that the melt phases of Mars and the impactor will likely never completely equilibrate with each other. Mars-only and impactor-only melts, with different degrees of equilibration in between, are indeed expected. Nevertheless, calculating the resulting composition of an equilibrated melt represents an interesting first step to investigate the differences that condensation and solidification would bring to the final Phobos bulk composition. Moreover, our model suggests that MMX may be confronted with two distinguishable families of material, the {\it dust} and the {\it solids}.
As a consequence, this investigation can bring further information and clues that can be used in the MMX sample analysis. During planet formation, Mars and the other inner rocky planets experienced impacts with other bodies. The impact histories strongly depend on the timing and location of the planets \citep{2017E&PSL.468...85B,2017Icar..297..134R,2017GeoRL..44.5978B,bottke2017post}. The nature of the impactors is unconstrained, as the dynamical interactions of Jupiter and Saturn with the surrounding minor bodies may have scattered and delivered into the inner Solar System material of different natures, including material of chondritic origin \citep{2017Icar..297..134R,2017GeoRL..44.5978B,bottke2017post}. Our aim is to determine the changes that different impactors would bring to the chemical composition of Phobos, and whether these differences are traceable. In order to keep our selection as chemically heterogeneous as possible we, thus, consider the following types of impactor: Mars$_{type}$, CV$_{type}$, EH$_{type}$, CI$_{type}$, comet$_{type}$. As a proxy of the Mars composition we take the Bulk-Silicates-Mars (BSM) from \citet{2013ApJ...767L..12V}. Compositions for the EH, CV and CI chondrites are taken from \citet{1988RSPTA.325..535W}. The elemental composition for the comet is taken from \citet{Huebner1997}. Table~\ref{table1} shows the elemental distribution for all considered impactors. To help understand the differences between the impactors we also report several elemental ratios such as the Mg/Si, Fe/O and C/O ratios. These ratios play an important role in determining the resulting chemical composition of a mixture. This will be discussed in section~\ref{discussion}. We also report values from the solar photosphere\footnote{We take the elemental abundances from \citet{2009ARA&A..47..481A} for the given set of elements.
Please note that \ce{He} is not included in the system and, as a consequence, the abundance of \ce{H} rises to $\sim$99\% of the total.} as a reference \citep{2009ARA&A..47..481A}. Note the H/O ratio of the solar nebula (Sun) and the abundances of other elements relative to O and C. Looking at table~\ref{table1}, we can already notice that we will deal with widely different environments. Moreover, the relative abundances between elements clearly indicate that our systems will return chemical distributions that are far from the one predicted for a solar composition. \begin{table*} \centering \caption{Elemental abundances (mol\%) for single impactors. Abundances of the solar photosphere (Sun) are also shown.} \begin{tabular} {l c c c c c c } \hline & Mars & CV & CI & EH & comet & Sun \\ \hline Element & \multicolumn{4}{c}{Abundances (mol\%)} & & \\ \hline Al & 1.250E+00 & 1.356E+00 & 4.668E-01 & 7.596E-01 & 7.000E-02 & 2.82E-04 \\ C & 0.000E+00 & 9.747E-01 & 3.902E+00 & 8.427E-01 & 1.137E+01 & 2.69E-02 \\ Ca & 9.300E-01 & 9.911E-01 & 3.362E-01 & 5.367E-01 & 6.000E-02 & 2.19E-04\\ Fe & 5.440E+00 & 8.797E+00 & 4.773E+00 & 1.314E+01 & 5.200E-01 & 3.16E-03 \\ H & 0.000E+00 & 5.808E+00 & 2.906E+01 & 0.000E+00 & 5.464E+01 & 99.9 \\ K & 4.800E-02 & 1.658E-02 & 2.098E-02 & 5.177E-02 & 0.000E+00 & 1.07E-05 \\ Mg & 1.632E+01 & 1.247E+01 & 5.845E+00 & 1.104E+01 & 9.900E-01 & 3.98E-03 \\ Na & 7.000E-01 & 3.001E-01 & 3.121E-01 & 7.484E-01 & 1.000E-01 & 1.74E-04 \\ O & 5.815E+01 & 5.157E+01 & 4.491E+01 & 4.724E+01 & 2.834E+01 & 4.89E-02\\ S & 0.000E+00 & 1.434E+00 & 2.695E+00 & 4.577E+00 & 7.100E-01 & 1.32E-03 \\ Si & 1.673E+01 & 1.624E+01 & 7.656E+00 & 2.104E+01 & 1.830E+00 & 3.23E-03 \\ Ti & 4.000E-02 & 4.280E-02 & 1.285E-02 & 2.379E-02 & 0.000E+00 & 8.90E-06 \\ Zn & 2.000E-04 & 3.709E-03 & 6.988E-03 & 9.674E-03 & 0.000E+00 & 3.63E-06 \\ \hline Mg/Si & 0.98 & 0.77 & 0.76 & 0.52 & 0.54 & 1.25 \\ Fe/O & 0.09 & 0.17 & 0.11 & 0.28 & 0.02 & 0.06 \\ Fe/Si & 0.33 & 0.54 & 0.62 & 0.62 &
0.28 & 0.98 \\ (Fe+Si)/O & 0.38 & 0.49 & 0.28 & 0.72 & 0.08 & 0.13 \\ C/O & 0.00 & 0.02 & 0.09 & 0.02 & 0.40 & 0.54 \\ H/O & 0.00 & 0.11 & 0.65 & 0.00 & 1.93 & 2041 \\ \end{tabular} \label{table1} \end{table*} The 13 elements considered in Table~\ref{table1} can form $\sim$6800 possible compounds, including complex organics, gases and solids (and excluding liquids). Most of these compounds are not stable at our chosen $T$ and $P$. We derive our fiducial list of compounds starting from the list reported in \citet{2013ApJ...767L..12V} and the set of compounds in \citet{2011MNRAS.414.2386P}. Complex organics have been excluded from the calculations as their chemistry is driven more by kinetics than by thermodynamic equilibrium. \ce{C}-graphite is taken as representative of the main carbon condensates, together with \ce{Fe3C}, \ce{Fe2C} and \ce{SiC}. Calcium and aluminium refractory species, all main oxides and the main silicates (Mg and Fe silicates) have been taken into account. Sulfides are included, as well as water vapour and water ice. We report the complete list of considered species in Table~\ref{table2}. The following nomenclature will be used: olivine (forsterite, \ce{Mg2SiO4}, and fayalite, \ce{Fe2SiO4}), pyroxene (enstatite, \ce{MgSiO3}, and ferrosilite, \ce{FeSiO3}), plagioclase (anorthite, \ce{CaAl2Si2O8}, and albite, \ce{NaAlSi3O8}), melilite (gehlenite, \ce{Ca2Al2SiO7}, and akermanite, \ce{Ca2MgSi2O7}), fassaite (Ca-Tschermak, \ce{CaAl2SiO6}, and diopside, \ce{CaMgSi2O6}), spinel (\ce{MgAl2O4} and \ce{FeAl2O4}), magnesiowustite (\ce{MgO} and \ce{FeO}), sulfide (\ce{FeS}, \ce{MgS} and \ce{CaS}), metal (\ce{Fe}, \ce{Al} and \ce{Zn}). Only the endmembers of each solid solution are considered and no predictions of intermediate compositions are made. To summarize, we calculate the thermodynamic equilibrium for each of the considered cases in table~\ref{table1} at the given temperature ($T=2000$~K) and pressure ($P=10^{-4}$~bar).
The resulting gas phase of Mars plus the gas phase of the selected impactor will constitute the gas mixture from which the {\it dust} will condense. The material that is not in the gas phase will form the melt from which the {\it solids} will form. To derive the {\it dust} composition we then proceed to the computation of the condensation sequence in the interval of temperatures $150< T\rm(K)<2000$ at a constant pressure of $P=10^{-4}$~bar. To derive the {\it solids} composition we compute the CIPW-norm. In order to test our thermodynamic model we also ran equilibrium calculations using the solar abundances in Table~\ref{table1} and compared the results with previous calculations available in the literature. Results of the test and a brief discussion are presented in Appendix~\ref{solar}. \begin{table*} \centering \caption{Complete list of gas and dust species in the equilibrium calculations.} \begin{tabular} {c} \hline Gas \\ \hline Al(g) Al2O2(g) Al2O3(g) AlO(g) AlO2(g) \\ C(g) Ca(g) CaO(g) CH4(g) CO(g) \\ Fe Fe(g) FeO(g) FeS(g) \\ H(g) H2(g) H2O(g) H2S(g) HS(g) \\ K(g) K2(g) K2O(g) KO(g) Mg(g) MgO(g) \\ Na(g) Na2(g) Na2O(g) NaO(g) \\ O(g) O2(g) OH(g) \\ S(g) Si(g) SiO(g) SiO2(g) \\ Ti(g) TiO(g) TiO2(g) \\ Zn(g) ZnO(g) \\ \hline Dust \\ \hline Al2O3 \\ C \\ Ca2Al2SiO7 Ca2MgSi2O7 Ca2SiO4 CaAl12O19 CaAl2O4 CaAl2Si2O8 \\ CaAl2SiO6 CaAl4O7 CaMgSi2O6 CaO CaS CaSiO3 CaTiSiO5 \\ Fe Fe2C Fe2O3 Fe2SiO4 Fe3C Fe3O4 \\ FeAl2O4 FeO FeSiO3 FeTiO3\\ H2O \\ K K2O K2Si4O9 KAlSi2O6 KAlSi3O8 KAlSiO4\\ Mg2Al4Si5O18 Mg2SiO4 Mg2TiO4 MgAl2O4 MgO MgS MgSiO3 MgTi2O5 MgTiO3\\ Na2O Na2SiO3 NaAlSi3O8 \\ Si SiC SiO2 \\ TiO2 \\ Zn Zn2SiO4 Zn2TiO4 ZnO ZnSiO3\\ \end{tabular} \label{table2} \end{table*} \section{Results} \label{results} Table~\ref{table3} shows the elemental abundances (mol\%) of the gas mixture that results from the equilibrium calculation at $T=2000$~K and $P=10^{-4}$~bar of Mars plus the considered impactor (Mars+Mars, Mars+CV, Mars+CI, Mars+EH, Mars+comet).
These abundances are used as input to compute the condensation sequence. Table~\ref{table4} shows the oxide budgets of the different melt mixtures in the case of complete equilibration between Mars and the given impactors. These budgets are used to compute the CIPW-norm. \begin{table*} \centering \caption{Elemental abundances (mol\%) of the gas mixture which is released after the impact assuming $T=2000$~K, $P=10^{-4}$~bar, different types of impactors, and equal contributions from Mars and the considered impactor.} \begin{tabular} {l c c c c c} \hline Gas mixture & +Mars & +CV & +CI & +EH & +comet \\ \hline Element & \multicolumn{4}{c}{Abundances (mol\%)} & \\ \hline Al & 8.059E-06 & 1.382E-05 & 5.058E-06 & 2.336E-05 & 1.412E-05 \\ C & 0.000E+00 & 3.762E+00 & 5.791E+00 & 2.060E+00 & 1.141E+01 \\ Ca & 2.348E-05 & 4.508E-05 & 9.962E-06 & 8.897E-05 & 1.804E-04 \\ Fe & 1.974E+01 & 2.972E+01 & 6.652E+00 & 3.117E+01 & 8.041E-01 \\ H & 0.000E+00 & 2.241E+01 & 4.314E+01 & 0.000E+00 & 5.481E+01 \\ K & 3.360E+00 & 2.489E-01 & 1.008E-01 & 2.423E-01 & 4.808E-02 \\ Mg & 3.539E-01 & 4.512E-01 & 1.000E-01 & 8.458E-01 & 9.982E-01 \\ Na & 4.905E+01 & 3.859E+00 & 1.502E+00 & 3.543E+00 & 8.026E-01 \\ O & 2.608E+01 & 2.501E+01 & 3.695E+01 & 2.775E+01 & 2.858E+01 \\ S & 0.000E+00 & 5.519E+00 & 4.000E+00 & 1.120E+01 & 7.121E-01 \\ Si & 1.146E+00 & 8.663E+00 & 1.735E+00 & 2.310E+01 & 1.825E+00 \\ Ti & 2.663E-01 & 3.370E-01 & 2.347E-02 & 6.733E-02 & 3.844E-03 \\ Zn & 1.051E-02 & 1.505E-02 & 1.158E-02 & 2.398E-02 & 2.007E-04 \\ \hline Mg/Si & 0.31 & 0.05 & 0.06 & 0.04 & 0.55 \\ Fe/O & 0.76 & 1.19 & 0.18 & 1.12 & 0.03 \\ Fe/Si & 17.23 & 3.43 & 3.83 & 1.35 & 0.44 \\ (Fe+Si)/O & 0.80 & 1.53 & 0.23 & 1.96 & 0.09 \\ C/O & 0.00 & 0.15 & 0.16 & 0.07 & 0.40 \\ H/O & 0.00 & 0.90 & 1.17 & 0.00 & 1.92 \\ \end{tabular} \label{table3} \end{table*} \begin{table*} \centering \caption{Oxide composition (wt\%) of the melt that results after the impact assuming $T=2000$~K, $P=10^{-4}$~bar and different types of
impactor. Total equilibration between Mars and the impactor is also assumed. The Mg/Si and Fe/O ratios are the elemental mole ratios.} \begin{tabular} {l c c c c c} \hline & +Mars & +CV & +CI & +EH & +comet \\ \hline \ce{Al2O3} & 2.96 & 3.55 & 3.06 & 3.02 & 3.10 \\ \ce{CO2} & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ \ce{CaO} & 2.40 & 2.87 & 2.47 & 2.41 & 0.28 \\ \ce{FeO} & 17.16 & 12.55 & 14.42 & 12.35 & 17.06 \\ \ce{H2O} & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ \ce{K2O} & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ \ce{MgO} & 30.60 & 30.88 & 31.15 & 32.04 & 31.09 \\ \ce{Na2O} & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ \ce{SiO2} & 46.75 & 49.35 & 48.80 & 50.10 & 45.11 \\ \ce{TiO2} & 0.13 & 0.80 & 0.10 & 0.09 & 3.37 \\ \ce{ZnO} & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ \ce{SO3} & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ \hline Mg/Si & 0.98 & 0.93 & 0.95 & 0.95 & 1.03 \\ Fe/O & 0.09 & 0.06 & 0.07 & 0.06 & 0.09 \\ \end{tabular} \label{table4} \end{table*} \subsection{Dust from condensing gas} \label{condensationsequence} Figure~\ref{fig2} shows the dust distribution for all the considered impactors in mol\% (with gas+dust totalling 100) as a function of temperature. From left to right and from top to bottom, the different cases are ordered by decreasing Fe/O ratio of the initial gas mixture (see Table~\ref{table3}). The Mars+CV impact results in large quantities of metallic iron, \ce{FeS} and \ce{SiO_2}. A small amount of pyroxene (enstatite (\ce{MgSiO3}) and ferrosilite (\ce{FeSiO3})), $\sim1$~mol\%, is present across the whole temperature range. At $T\sim700$~K, we see the appearance of \ce{Fe2C} and C (graphite). Similarly to Mars+CV, the Mars+EH impact shows large quantities of metallic iron, \ce{FeS} and \ce{SiO_2}. Moreover, we also see small percentages of Si, MgS and SiC. Traces of pyroxenes are seen at high temperatures only. The Mars+Mars impact produces several oxides such as FeO and \ce{Fe3O4}, metallic iron and volatiles such as \ce{Na2O} and \ce{Na2SiO3}.
Traces of olivine (forsterite (\ce{Mg2SiO4}) and fayalite (\ce{Fe2SiO4})) are present at high temperature. The Mars+CI impact returns iron-rich olivine such as fayalite (\ce{Fe2SiO4}), then FeO, \ce{Fe3O4}, \ce{Fe2O3}, \ce{FeS} and a smaller amount of \ce{SiO2}. At lower temperature we see the condensation of C and \ce{H2O}. The dust from the Mars+comet impact is mainly made of pyroxene, \ce{SiO2} and FeS. The Mars+comet impact is, as expected, the one that produces a large amount of water ice together with solid carbon. Figure~\ref{fig3} shows the condensation sequence for the more volatile species. All the considered cases return a very similar behaviour as these volatiles are less affected by changes in the other elemental ratios. Na(g) has a higher condensation temperature than K(g), and Zn(g) is the last one to condense. \ce{Na2SiO3}, \ce{K2Si4O9} and \ce{Zn2SiO4} are the main respective condensates, together with Zn in the cases of Mars+CV and Mars+EH, and Zn and K for Mars+Mars. \begin{figure} {\includegraphics[width=0.5\columnwidth]{fig2a.pdf}} {\includegraphics[width=0.5\columnwidth]{fig2b.pdf}} \\ {\includegraphics[width=0.5\columnwidth]{fig2c.pdf}} {\includegraphics[width=0.5\columnwidth]{fig2d.pdf}} \\ {\includegraphics[width=0.5\columnwidth]{fig2e.pdf}} \caption{Condensation sequences for the major dust species (mol\%) that result from the gas mixtures in Table~\ref{table3}. Note the changes of colours when different compounds are considered.} \label{fig2} \end{figure} \begin{figure} {\includegraphics[width=0.5\columnwidth]{fig3a.pdf}} {\includegraphics[width=0.5\columnwidth]{fig3b.pdf}} \\ {\includegraphics[width=0.5\columnwidth]{fig3c.pdf}} {\includegraphics[width=0.5\columnwidth]{fig3d.pdf}} \\ {\includegraphics[width=0.5\columnwidth]{fig3e.pdf}} \caption{Condensation sequences for K, Zn, and Na compounds (mol\%) that result from the gas mixtures in Table~\ref{table3}.
Sodium compounds for Mars+Mars were included in Fig.~\ref{fig2} as, in that case, they represent major species. Note the changes of scales in the y-axis.} \label{fig3} \end{figure} \subsection{Solids from cooling melts} \label{CIPWmelts} \begin{table*} \centering \caption{Resulting CIPW-norm of the melt phase. Calculations for the BSM are also performed to compare the resulting CIPW-norm with values derived by \citet{2016ApJ...828..109R}.} \begin{tabular} {l c c c c c c c} \hline &+Mars & +CV & +EH & +CI & +comet & BSM & BSM \citep{2016ApJ...828..109R}\\ \hline Anorthite &8.08 & 9.69 & 8.24 & 8.35 & 1.39 & 3.16 & \\ Diopside &3.08 & 3.63 & 2.97 & 3.13 & 0.00 & 6.89 & 6.97 \\ Pyroxene & 43.41 & 55.48 & 57.58 & 52.35 & 54.97 & 21.03& 21.29\\ Albite& 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 8.29 & \\ Orthoclase& 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.65& 0.66\\ Olivine& 45.19 & 29.68 & 31.05 & 35.98 & 34.66 & 58.50& 59.22 \\ Ilmenite& 0.25 & 1.52 & 0.17& 0.19 & 6.40 & 0.27&0.00\\ Corundum &0.00 & 0.00 & 0.00 & 0.00 & 2.59 & 0.00 & 0.00\\ Anorth+Alb& 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & (11.45) & 11.59\\ \hline Oli/Pyr & 1.04 & 0.53 & 0.62 & 0.68 & 0.63 & 2.78 & 2.78\\ \end{tabular} \label{table5} \end{table*} Table~\ref{table5} reports the resulting CIPW-norm if complete equilibration between the melt belonging to Mars and that of the impactor occurs (see Table~\ref{table4}). To establish the reliability of our CIPW-algorithm, we performed calculations using the BSM and compared our results with those in \citet{2016ApJ...828..109R}, finding very good agreement (see the last two columns in Table~\ref{table5}). The resulting solids will generally be characterized by pyroxene\footnote{hypersthene in \citet{2016ApJ...828..109R}.} and olivine, with the former in larger abundance, except for the Mars+Mars case, for which olivine is slightly more abundant. Although our selected impactors initially have different chemical compositions, the resulting CIPW-norm is quite similar for all cases.
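The Oli/Pyr diagnostic in the last row of Table~\ref{table5} is simply the ratio of the Olivine and Pyroxene entries of each column; a quick arithmetic check for the +Mars and BSM columns (values copied from the table):

```python
# Olivine-to-pyroxene ratio, the diagnostic quoted in the last row of Table 5.
def oli_pyr(olivine, pyroxene):
    return round(olivine / pyroxene, 2)

print(oli_pyr(45.19, 43.41))  # 1.04  (+Mars column)
print(oli_pyr(58.50, 21.03))  # 2.78  (BSM column)
```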
It is interesting to note that diopside (\ce{CaMgSi2O6}) is not predicted for a cometary impactor, while corundum (\ce{Al2O3}) is a tracer of that impact. Enstatite and forsterite will be largely stable and common compounds for all the considered cases. Albite (\ce{NaAlSi3O8}) and orthoclase (\ce{KAlSi3O8}) are not present in the solids because Na and K are totally vaporized after the impact (see Tables~\ref{table3} and~\ref{table4}). \section{Discussion} \label{discussion} \subsection{Dust composition} \label{outlook} Our calculations clearly show a different behaviour compared with the classical condensation sequence for a solar composition (see Fig.~\ref{fig5}). One of the main reasons is the amount of H, C and O in our systems, which is very different from the solar values. Moreover, in our calculations the amount of Fe, Mg, and Si is of the same order of magnitude as O. This is not the case for the Solar Nebula, where H is predominant, C is comparable with O, and Fe, Mg, Si are orders of magnitude smaller than O \citep{2009ARA&A..47..481A}. Here we try to qualitatively understand our results and emphasize the differences from the well known condensation sequence of the Solar Nebula. The stability of forsterite (\ce{Mg2SiO4}) and enstatite (\ce{MgSiO3}) is driven by the Mg/Si ratio: higher Mg/Si ratios ($>1$) favour forsterite, while lower Mg/Si ratios ($<1$) favour enstatite \citep{2001A&A...371..133F}. However, at very high temperature, forsterite can still be the first magnesium silicate to condense out before being converted into enstatite \citep{2001A&A...371..133F}. From Table~\ref{table3} we see that the Mg/Si ratio is well below 1 in all cases and, as expected, the dust is generally enstatite-rich (see Fig.~\ref{fig2}). The excess Si that is not consumed in the magnesium silicates will then bind with O to form stable \ce{SiO2}. Generally, \ce{SiO2} tends to be more stable than iron oxides (see, for example, the Ellingham diagrams in \citet{DeHoff1993}).
Fe and \ce{SiO2} can then react to form iron-rich silicates. If oxygen is still available for reaction, it will start to bind iron to form iron oxides. If there is a lack of oxygen, iron will remain mainly in the metallic form. The presence of sulfur further modifies the expected composition, as sulfidation of Fe occurs. As a consequence, with the elemental ratios reported in Table~\ref{table3} in mind, the behaviours found in Fig.~\ref{fig2} become clearer. Let us consider the two extreme cases of the Fe/O ratio, Mars+comet (Fe/O=0.03) and Mars+CV (Fe/O=1.19). In the case of Mars+comet we have Mg/Si=0.55. As such, we expect the oxygen to form mainly enstatite \ce{MgSiO3}. Then we expect the appearance of \ce{SiO2}, as there is Si in excess, with Si more abundant than Fe (Fe/Si=0.44). As Fe/O=0.03 and (Fe+Si)/O=0.09, there is a large reservoir of oxygen to oxidise the iron which, indeed, is found in ferrosilite \ce{FeSiO3}. On the other hand, let us focus on the Mars+CV case. Here the small amount of Mg will form magnesium silicates, then the excess of Si will form \ce{SiO2} and \ce{FeSiO3}. The ratio Fe/Si=3.43 tells us that there is iron in excess compared to Si. The ratios Fe/O=1.19 and (Fe+Si)/O=1.53 also tell us that there is not enough oxygen available to oxidise all the iron. As a consequence, we expect to see a certain amount of iron stable in its metallic form, as well as \ce{FeS}, given the presence of sulfur in the mixture. Sulfur is also present in the cases of Mars+EH, Mars+CI and Mars+comet. If a large amount of sulfur is available, as in the Mars+EH impact, MgS also becomes a stable sulfide. Interesting cases are also Mars+Mars and Mars+CI. Here, the high Fe/Si ratios but low Fe/O ratios return large amounts of several iron oxides: there is not enough Si to form large amounts of iron silicates, so the O binds directly to the iron in several iron oxides. The Mars+EH impact clearly shows the effect of the sulfide-rich impactor, given the presence of MgS together with FeS.
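The reasoning above can be condensed into a rough decision rule. The sketch below (our own labels and thresholds, not a substitute for the full equilibrium calculation) reproduces the expected dominant Si and Fe carriers for the two extreme cases, using the ratios quoted in the text:

```python
# Rough qualitative rules distilled from the discussion above: given elemental
# ratios of the initial gas, guess the dominant Si and Fe carriers in the dust.
def dominant_carriers(mg_si, fe_plus_si_o):
    carriers = ["MgSiO3" if mg_si < 1 else "Mg2SiO4"]  # enstatite vs forsterite
    if mg_si < 1:
        carriers.append("SiO2")          # Si left over after silicate formation
    if fe_plus_si_o < 1:
        carriers.append("oxidised Fe")   # enough O to oxidise the iron
    else:
        carriers.append("metallic Fe")   # O exhausted before all Fe is oxidised
    return carriers

# Mars+comet: Mg/Si = 0.55, (Fe+Si)/O = 0.09 -> enstatite, SiO2, oxidised Fe
print(dominant_carriers(0.55, 0.09))
# Mars+CV: Mg/Si < 1 (Table 3; 0.93 used here for illustration), (Fe+Si)/O = 1.53
print(dominant_carriers(0.93, 1.53))
```

With the presence of sulfur, the metallic-Fe branch additionally implies FeS, as found for Mars+CV and Mars+EH.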
Our calculations show that different impactors result in dust with traceably different compositions. This opens up the possibility of identifying the composition of the impactor from the determination of the dust composition of Phobos and from the samples collected by the MMX mission. In conclusion, dust from different bodies will be characterized by different i) degrees of iron oxidation, ii) presence of iron silicates and/or iron oxides, iii) amounts of sulfides, iv) amounts of carbon and ice. All the condensation sequences return a generally poor content of olivine and pyroxene, with a preference for the latter. A qualitative analysis of different elemental ratios can then be useful to derive the chemistry of impactors that are not considered in this work. \subsubsection{Carbon, water ice and other volatiles} \label{carbonwater} In our calculations we see the appearance of solid C in the cases of Mars+CV, Mars+CI and Mars+comet, while \ce{SiC} is the most stable C-bearing compound in the case of Mars+EH. Mars+CV, Mars+CI and Mars+comet have carbon and hydrogen in the gas mixture, while in the Mars+EH case there is carbon only. The carbon-to-oxygen ratio (C/O) is an important parameter that determines the presence of solid carbon, water vapour and other oxides. At high temperature CO(g) will consume all the C available before allowing the formation of water vapour \citep{1975GeCoA..39..389L}. This has strong implications for the formation of complex organics and water (if hydrogen is present in the system). The chemistry of carbon cannot be totally determined by thermodynamic equilibrium. The behaviour of CO(g) in a \ce{H2O}(g)-\ce{H2}(g) gas in the temperature range $100< T\rm{(K)}<700$ is ruled by kinetics and the environmental conditions. The classical transition around $T\sim700$~K, where \ce{CO}(g) is transformed into \ce{CH4}(g), can be described, for example, by a reaction of the type $n\ce{CO}+(1+2n)\ce{H2} \rightleftharpoons \ce{C_{n}H_{2n+2}} + n\ce{H2O}$.
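This one-parameter family of reactions is element-balanced for every $n$; a two-line check:

```python
# Element balance of the generalized reaction
#   n CO + (1+2n) H2  <->  C_n H_{2n+2} + n H2O
# Verify that C, H and O are conserved for any n.
def balanced(n):
    lhs = {"C": n, "H": 2 * (1 + 2 * n), "O": n}
    rhs = {"C": n, "H": (2 * n + 2) + 2 * n, "O": n}
    return lhs == rhs

print(all(balanced(n) for n in range(1, 10)))  # True; n=1 gives CO + 3 H2 <-> CH4 + H2O
```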
This can be noticed in Fig.~\ref{fig5}(right), where $n=1$ and the reaction is $\ce{CO}(g)+3\ce{H2}(g)\rightleftharpoons \ce{CH4}(g) + \ce{H2O}(g)$. In fact, this reaction is just a first simplified transcription of the many Fischer-Tropsch-like reactions that can occur in this temperature range and at different \ce{H2}/\ce{CO} ratios \citep{2006M&PS...41..715S,2008ApJ...673L.225N}. Fischer-Tropsch processes produce complex organics on the surface of dusty grains in the presence of the right catalyst. There are different catalysts, each with their own properties, but usually, in an astrophysical context, the iron-based Fischer-Tropsch process is the most considered because of the abundance of this element in the Solar Nebula \citep{1988oss..conf...51F,2009ARA&A..47..481A}. There are numerous competitive reactions that determine the rate and the production of organics \citep{1988oss..conf...51F}, and several theoretical models have been described at length in materials science (see, for example, the work of \citet{Zimmerman1990}). However, the resulting amount of organics is extremely difficult to calculate theoretically when an astrophysical environment is considered. This comes from the large uncertainty in determining, for example, the amount and the surface of catalyst available. \begin{figure} \center {\includegraphics[width=0.5\columnwidth]{fig4.pdf}} \caption{Thermodynamic calculations in the +comet case show a possible pathway for the condensation of graphite. In fact, together with the well known transformation of \ce{CO}(g) into \ce{CH4}(g) ($\ce{CO}(g)+3\ce{H2}(g)\rightleftharpoons \ce{CH4}(g) + \ce{H2O}(g)$), we do see the following reaction: $\ce{CO}(g)+\ce{H2}(g)\rightleftharpoons \ce{C} + \ce{H2O}(g)$, which occurs at temperatures lower than $T=600$~K.} \label{fig4} \end{figure} Nevertheless, the possible pathways to the formation of carbon-rich material are vastly more numerous.
In our Mars+comet case, for example, the reaction $\ce{CO}(g)+\ce{H2}(g)\rightleftharpoons \ce{C} + \ce{H2O}(g)$ becomes active as well. This is clearly seen in Fig.~\ref{fig4}, where \ce{H2}(g) and \ce{CO}(g) are depleted as \ce{CH4}(g), \ce{H2O}(g) and C become more stable. In this work we do not perform any kinetics calculation and, since we have only a few carbon solids in our list, we can only suggest that the presence of C and H in the Mars+CI, Mars+comet and Mars+CV impacts can produce complex organics and carbon-enriched dust. The case of Mars+CV is then extremely interesting because there may be enough metallic iron, together with carbon, to enhance the production of complex organics. Equilibrium calculations return an efficient evaporation of the carbon-rich dust present in the impactor. However, the rate of vaporisation will be driven by the physical and chemical properties of the carbon species. For example, carbon-rich insoluble organic material (IOM), if present, could survive the impact as it is refractory \citep{2006mess.book..625P}. This could reduce the amount of carbon released in the gas phase. Nevertheless, the presence of carbon in the MMX samples, in the form of new condensates and/or in IOM, will still be a tracer of the nature of the impactor. In section~\ref{results} we pointed out that the gas mixture is volatile-enriched. Figure~\ref{fig3} shows that K-, Na-, and Zn-silicates are stable as the temperature drops. Monoatomic Zn is also predicted for the Mars+EH and Mars+Mars cases. There are no dramatic differences between the considered impactors when K, Na and Zn are taken into account. In conclusion, the condensing dust will be volatile- (and in some cases, also carbon-) enriched compared to the solids that result from the cooling melts. The presence of volatile-rich dust in MMX samples will thus indicate that vaporization followed by condensation occurred and no volatiles left the system.
Moreover, since Na, K and Zn condense at different temperatures, their presence/absence could return information on the temperature at which aggregation of a given sample occurred. Mars+CI and Mars+comet are the only cases for which, at low temperature, we see condensation of water vapour into ice. The presence of ice will favour secondary alteration of the dust, allowing, for example, the formation of phyllosilicates \citep{1998M&PS...33.1113B}. \subsection{Solids composition} \label{melts} In Table~\ref{table5} we report the resulting CIPW-norm of the solids if complete equilibration between Mars and the impactor occurred. The composition of the solids generally comprises olivine (forsterite and fayalite) and pyroxene (enstatite and ferrosilite). \citet{2016ApJ...828..109R} calculated the CIPW normative mineralogy for Mars-, Moon- and IDP-like impactors. They found that the resulting composition for a Mars-like impactor would be olivine- and pyroxene-rich. In particular, their CIPW-norm for Mars is characterized by a high olivine content (olivine/pyroxene $> 1$). Our results for the Mars+Mars case show that the solids (deprived of all the vaporized material) will have olivine/pyroxene $\sim 1$, whereas the other cases return olivine/pyroxene $< 1$. There are no dramatic differences between the solids that result from different impactors (except for the aforementioned corundum in the comet case). It is interesting to note that the solids that result from cooling melts do not show as much variation in their composition as the dust does. The resulting composition of solids appears, thus, to contain less information on the origin of the impactor compared to the large quantity of clues that can be derived from the condensed dust. Nonetheless, the composition of melts can be affected by different cooling conditions, microgravity and gas fugacities. \citet{NAGASHIMA2006193} and \citet{2008070620c} performed laboratory experiments on cooling forsterite and enstatite melts.
They found that different cooling rates and microgravity can alter and even suppress crystallisation, only allowing the formation of glassy material. Further experimental investigations are already planned in order to derive predictions of the composition of the cooling melts. From the physical point of view, condensates and solids from melts may be distinguished by their different crystalline structure, microporosity, zoning, and interconnections between different phases. Indeed, the resulting physical properties of dust from gas and of solids from melts are determined by many factors \citep{nishinaga2014handbook}. Furthermore, \citet{2017ApJ...845..125H} showed that while the size of the dust would be of the order of 0.1--10~$\mu$m, solids from melts can reach 1--10~m in size and can then be ground down to $\sim$100~$\mu$m. We could, thus, expect to find different size distributions when dust and solids are compared. \subsection{Infrared spectra of Phobos} \label{infraredspectra} \citet{2011P&SS...59.1308G} presented a detailed investigation of the possible composition of the dust and rocks present on the Phobos surface. They suggested that the ``blue'' part of Phobos is consistent with phyllosilicates, while the ``red'' region is compatible with the presence of feldspar. No bulk chondrite composition is able to reproduce the current observations \citep{2011P&SS...59.1308G}. Phyllosilicates are not products of condensation but derive from secondary alteration of silicates \citep{1998M&PS...33.1113B} and, as a consequence, they are not predictable with our calculations, although we do have all the dust (silicates) at the base of their formation. The major feldspar compounds are orthoclase (\ce{KAlSi3O8}), albite (\ce{NaAlSi3O8}), and anorthite (\ce{CaAl2Si2O8}). Not all of them are compatible with our model, as Na and K are separated from Al after the impact and other compounds are the predicted stable condensates.
On the other hand, we do see the formation of anorthite (see Table~\ref{table5}). Nevertheless, \citet{2011P&SS...59.1308G} stressed that finer modelling is needed, as a mixture of different materials made of fine grains could also produce the observed trends. This becomes important as previous modelling focused on the analysis of the spectral properties of ``external'' objects (such as different types of asteroids) to match the observed spectra, as the capture scenario suggests. The impact-generated scenario requires re-thinking this approach. \citet{2016ApJ...828..109R} analysed the resulting composition of the melts generated by different impactors in order to find a match with the observations. They concluded that, rather than the melt, the gas-to-dust condensation in the outer part of an impact-generated disk could be able to explain the Phobos and Deimos spectral properties. This further suggests that our derived dust may play an important part in producing the observed trends. Moreover, since the \citet{2017ApJ...845..125H} disc model shows that melt would be mixed with the gas, the combined effect of {\it dust} and {\it solids} should be taken into account. In this section we try to predict the effects of our mixed material (dust plus solids) on the infrared spectra. In the introduction we presented possible mechanisms listed by \citet{2011P&SS...59.1308G} that could be able to reproduce the observed trend in the VIS-NIR. Here we recall them and compare with our results: i) {\it Low percentage of iron-rich olivine and pyroxene can reduce the spectra}. Our resulting dust mixtures generally have a very low concentration ($\sim 1$~mol\%) of iron-rich olivine (fayalite, \ce{Fe2SiO4}) and pyroxene (ferrosilite, \ce{FeSiO3}). Only the high-temperature region of Mars+CI shows a larger amount of fayalite (see Fig.~\ref{fig2}). ii) {\it A mixture of opaque material (metal iron, iron-oxide, and carbon) reduce the emissivity}.
We do have metallic-iron-rich dust resulting from several impactors (Mars+CV, Mars+EH, and Mars+Mars only at low temperature). Carbon dust is also seen in our calculations (Mars+CV, Mars+CI, Mars+comet). Moreover, together with Fe, iron sulfide (\ce{FeS}), which we see in Mars+CV, Mars+EH, Mars+CI and Mars+comet, is opaque and featureless in the NIR, but may be recognizable in the mid-IR \citep{Wooden2008,2011ppcd.book..114H}. iii) {\it Quenched material lacks of perfect crystalline structure and, thus, reflectance}. Solids from the melts, in our final assemblage, can have the characteristics suggested by \citet{2011P&SS...59.1308G}. iv) {\it The reflectance of fine grains is reduced}. The average size of the condensed dust is of the order of 0.1--10~$\mu$m \citep{2017ApJ...845..125H}. Our proposed model of Phobos as a result of the accretion of dust from gas condensation and of solids from melts, together with our derived chemical composition, looks promising when discussing spectra. Nevertheless, there are some aspects that need further investigation: i) it is important, at this point, to derive the mid-IR spectra of our proposed mixtures, and then ii) estimate the effect of space weathering on them and on the resulting albedo. These points can be set as the main topics for future work. \subsection{Limitations} \label{limitations} In this work we assume thermodynamic equilibrium (where all the reaction timescales are much shorter than the disk cooling timescale) and mass conservation. All the material is available for reaction until equilibrium is reached at any given temperature. This may not always be the case. As dust condenses out from the gas, it can be subject to different drag forces and it can separate from the current environment. This process can lead to the so-called ``fractionated'' condensation sequence.
If this is the case, a given dust grain can become representative of the temperature, pressure, gas mixture and dust species that were present when its condensation occurred. Moreover, dust from secondary condensations (from a now fractionated gas) may then form. These fractionated and incomplete condensation sequences have been the subject of several studies \citep{2000M&PS...35..601H,2009M&PS...44..531B,2016MNRAS.457.1359P}, which all show that different pathways of condensation can depart from the main line. For example, \citet{2016MNRAS.457.1359P} showed that, starting with a gas of solar composition, a mixture of enstatite-rich and \ce{SiO2}-rich dust can be produced in the case of a systematic sharp separation between dust and gas. \ce{SiO2} is a condensate that is not predicted (``incompatible'') when solar abundances are considered. In this work we do not perform fractionated condensation sequences, as the possible pathways are numerous. However, in the same way, the presence of some ``incompatible'' condensates together with ``predictable'' dust in the same MMX sample may point to incomplete or secondary condensation. As mentioned, \citet{2017ApJ...845..125H} demonstrated that there will likely be no complete equilibration between the melts of Mars and the melts of the impactor. What is likely to occur is a wide spectrum of different degrees of equilibration. A random sampling of solids may, thus, show material that comes from Mars, from the impactor, or from several degrees of mixing. In our calculations we kept the pressure constant and fixed at $P=10^{-4}$~bar, as in \citet{2017ApJ...845..125H}. In general, lower pressures (by orders of magnitude) move the condensation of the dust toward lower temperatures \citep{Yoneda1995,1998A&A...332.1099G}. In this case, for our given temperature, more material could vaporize and go into the gas phase. Increasing the pressure (by orders of magnitude) has the opposite effect, as the condensation temperature increases.
As a consequence, we could observe different amounts of Fe, Mg, and Si moving to the gas phase. Changes in the disk pressure may also occur if large amounts of volatiles are injected into the system after the impact. This could be the case for the Mars+comet impact, where the release of \ce{H2O}(g) and \ce{CO}(g) could increase the total pressure in the disk, or when a water-rich Mars is considered \citep{2017ApJ...845..125H}. Observed deviations from the predicted trends could then be associated with strong variations of the pressure in the Mars' moons formation region of the disk or, in fact, with a radial gradient of temperature and pressure in the disk. As a reference for future experimental work, we report in appendix B (see Fig.~\ref{fig6}) the partial pressures of the major gas components for the Mars+CI, Mars+CV, and Mars+comet impacts. These are the impacts that produce the largest amounts of gas such as \ce{H2O}(g) and \ce{CO}(g). These values can be used to set up the conditions in which experiments can be performed. \section{Conclusions} \label{conclusions} In this work we used thermodynamic equilibrium calculations to investigate the chemical composition of dust (from condensing gas) and solids (from cooling melts) as the building blocks of Phobos and Deimos in the impact-generated scenario, with the thermodynamic conditions of \citet{2017ApJ...845..125H}. We found that dust and solids have different chemical and physical properties. Dust carries more information on the impactor than the solids. Our results show that it would be possible to distinguish between different types of impactors, as each case returns several unique tracers in the dust: Mars+CV has large quantities of metallic iron, \ce{SiO2}, iron sulfides and carbon; Mars+comet has pyroxenes and the largest carbon and ice reservoir. The Mars+EH impact has dust with a high metallic iron content, \ce{SiO2}, sulfides (\ce{FeS} and \ce{MgS}) and traces of \ce{SiC}.
Impacts with Mars-like objects return several iron oxides, and the dust in Mars+CI has iron oxides, water ice and carbon. The presence/absence of metallic iron, iron silicates, iron oxides, sulfides, carbon and water ice can be considered as clues to different impactors. Deviations from the derived compositions can then be ascribed to fractionated condensation sequences and/or strong variations in the disk pressure and/or impactors with a different elemental composition than those investigated in this study. The giant impact scenario requires re-thinking the dust modelling for the infrared spectra, as Phobos, in this case, would be made of a complex mixture of dust and solids and not of a pre-built object as the capture scenario suggests. A qualitative analysis suggests that our derived composition of dust and solids can be compatible with the characteristics of the Phobos VIS-NIR spectra. In conclusion, the proposed scenario of Phobos as the result of the accretion of dust and solids in an impact-generated disk can be reconciled with both the dynamical and spectral properties of the Mars' moons. Our dust tracers can then be used in the analysis of the samples returned by JAXA's MMX mission. \software{HSC (v8; \citet{roine2002outokumpu})} \acknowledgments The authors wish to thank the anonymous referee for their comments and suggestions that led us to investigate our assumptions in more detail, which improved the manuscript. The authors wish to acknowledge the financial support of ANR-15-CE31-0004-1 (ANR CRADLE), INFINITI (INterFaces Interdisciplinaires Num\'erIque et Th\'eorIque), and the UnivEarthS Labex program at Sorbonne Paris Cit\'e (ANR-10-LABX-0023 and ANR-11-IDEX-0005-02). PR has been financially supported, for his preliminary contribution to this work, by the Belgian PRODEX programme managed by the European Space Agency in collaboration with the Belgian Federal Science Policy Office.
SC, RH, and HG acknowledge the financial support of the JSPS-MAEDI bilateral joint research project (SAKURA program). HG also acknowledges JSPS KAKENHI Grant Nos. JP17H02990 and JP17H06457, and thanks the Astrobiology Center of the National Institutes of Natural Sciences (NINS). \vspace{5mm}
\section{Relativistic heavy-ion collisions} \label{sec:1} \sectionmark{Relativistic heavy-ion collision} Relativistic heavy-ion collision experiments are a perfect tool for studying, in a controlled and reproducible manner, the properties of strongly interacting matter at high energies. Unlike the collisions of more elementary particles, they provide a unique opportunity to reach the thermodynamic equilibrium needed for investigating the phase diagram and transport properties of the QCD matter. Assuming that (local) thermal equilibrium is achieved promptly, and that the interactions are strong enough to maintain this state throughout the subsequent evolution, the expansion of such a system should, in principle, follow the laws of relativistic fluid dynamics \cite{Yan:2017ivm} \footnote{Note that several recent studies indicate that fluid dynamics may be applicable also in situations where the produced system is locally far off equilibrium \cite{Florkowski:2010cf,Martinez:2010sc,Alqahtani:2017jwl,Strickland:2014pga,Alqahtani:2017mhy}, see also \cite{Florkowski:2013lya,Florkowski:2013lza,Denicol:2014xca,Denicol:2014tha}.}. Given initial conditions for the initialization of the fluid dynamical fields, and a prescription for the hadron emission from the fluid, fluid dynamics provides a straightforward and intuitive way to study the properties of the produced matter, as encoded in its equation of state. Although some important questions still remain, \emph{e.g.}, concerning the formulation of the theory itself \cite{Florkowski:2017olj} (in particular related to the form of the transport coefficients \cite{Jaiswal:2014isa,Florkowski:2015lra,Tinti:2016bav}), the successes of the models employing fluid dynamics concepts have already shown the (almost) perfect fluidity of the quark-gluon plasma and established a sort of hydro-like ``Standard Model'' of heavy-ion collisions.
While significant progress has recently been made in the determination of the initial state of heavy-ion collisions, using, \emph{i.a.}, non-equilibrium effective field theory and the gauge/gravity correspondence, the hadronic production from such a system is still poorly understood. The huge majority of the approaches use (to some level heuristic, yet surprisingly successful) prescriptions for the hadronization process dating back to the times of Fermi, Landau, and Hagedorn. In these lectures we will briefly review some of the concepts of particle decoupling and statistical hadronization as applied to heavy-ion collisions showing, in the end, their remarkable efficacy in describing some of the experimentally observed phenomena. In this work we use natural units where $c=k_B=\hbar =1$. The bold font denotes vectors in the transverse $x-y$ plane. \section{Relativistic perfect fluid dynamics} \label{sec:2a} \sectionmark{Relativistic perfect fluid dynamics} The simplest and, at the same time, the only unambiguously~\footnote{Some sort of ambiguity arises in the case when dissipative effects are present in the system. In such a situation additional assumptions on the evolution of dissipative quantities (\emph{e.g.} the shear stress tensor and the bulk viscous pressure) are required, resulting in additional equations of motion. The latter may differ significantly in various approaches~\cite{Florkowski:2016kjj}.} derivable relativistic fluid dynamical equations are those of \emph{relativistic perfect fluid dynamics}~\cite{Landau:1959,Misner:1974qy,deGroot:1980,rezzolla2013relativistic}. Due to their simplicity they are extensively applied to various systems in physics, including the evolution of strongly interacting matter produced in relativistic heavy-ion collisions.
Although the literature on the subject is rather extensive (see, \emph{e.g.}, Refs.~\cite{Florkowski:2017olj,Stoecker:1986ci,Rischke1999,Kolb:2003dz,Huovinen:2006jp,Florkowski:2010zz,Romatschke:2009im,Gale:2013da,Jaiswal:2016hex} and the references therein), we will review herein its basic aspects to set the stage for further discussion. The equations of relativistic perfect fluid dynamics for (net) charge-free matter follow solely from the local conservation laws of energy and momentum~\cite{Landau:1959,Misner:1974qy,deGroot:1980,rezzolla2013relativistic}, which in Minkowski coordinates may be formulated in the following covariant form \footnote{In the case of curvilinear coordinates, even if the space-time is considered to be flat (in the sense of a globally vanishing Riemann tensor), one should replace the partial derivative $\partial_\mu=(\partial_t, - \nabla)$ in Eq.~(\ref{emc}) with the \emph{covariant derivative} $d_\mu$.} \begin{equation} \partial_\mu T^{\mu\nu}(x) = 0, \label{emc} \end{equation} where $T^{\mu\nu}(x)$ is the energy-momentum tensor and $x^\mu = (t, \xT, z)$. In addition, for a multicomponent system which possesses $N$ conserved charges $Q_i$ one should supplement Eq.~(\ref{emc}) with $N$ continuity equations for the respective charge currents $N_i^\mu$ \begin{equation} \partial_\mu N_i^{\mu}(x) = 0 \qquad \left(i=1\dots \,N\right). \label{nc} \end{equation} One may consider, \emph{e.g.}, $Q_i=\{B, I_3, S, C\}$, where $B, I_3, S$ and $C$ denote the baryon number, the third component of the isospin, strangeness and charm, respectively \footnote{Equivalently, instead of the third component of the isospin one may use the electric charge.}. The main assumption defining the perfect fluid is that each fluid element, when considered in its local rest frame (LRF), is exactly in a state of \emph{thermal and chemical equilibrium}.
This is expressed by the static equilibrium (isotropic) form of the energy-momentum tensor \begin{equation} T^{\mu\nu}_{\rm LRF} (x) = {\rm diag}\left(\vphantom{\frac{}{}} \e(x), \p(x), \p(x), \p(x) \right), \label{emtLRF} \end{equation} where $\e(x)$ and $\p(x)$ denote the equilibrium energy density and pressure, respectively. One may also easily convince oneself that in the perfect fluid case the charge currents must all have the following LRF form \begin{equation} N^\mu_{i, {\rm LRF}} (x) = \left(\vphantom{\frac{}{}}\!\!{\cal N}_i(x), 0, 0, 0 \right), \label{ncLRF} \end{equation} where ${\cal N}_i(x)$ represents the density of the charge $Q_i$; otherwise dissipative effects have to occur. In a general (laboratory) frame each fluid element moves with a fluid four-velocity $u^\mu (x) \equiv \gamma \left(1, \textbf{v}_T, v_z\right)$, satisfying the normalization condition $u^\mu u_\mu=1$. The form of the energy-momentum tensor in this frame can be obtained by applying a general (canonical) Lorentz boost transformation to Eq.~(\ref{emtLRF}) \begin{equation} T^{\mu\nu}(x) = \Lambda^\mu_{\,\,\,\alpha}(u^\lambda)\,\Lambda^\nu_{\,\,\,\beta} (u^\lambda)\,T_{\rm LRF}^{\alpha\beta}(x), \label{transform} \end{equation} where the boost matrix $\Lambda^\mu_{\,\,\,\nu}$ is defined as follows % \begin{equation} \Lambda^{\mu}_{ \,\,\,\nu}(u^{\lambda}) \equiv \left( \begin{array}{rrrr} \gamma & -\gamma v_x & -\gamma v_y & -\gamma v_z \\ -\gamma v_x & 1 + (\gamma - 1) \frac{v_x^2}{v^2} & (\gamma - 1) \frac{v_x v_y}{v^2} & (\gamma - 1) \frac{v_x v_z}{v^2} \\ -\gamma v_y & (\gamma - 1) \frac{v_x v_y}{v^2} & 1 + (\gamma - 1) \frac{v_y^2}{v^2} & (\gamma - 1) \frac{v_y v_z}{v^2} \\ -\gamma v_z & (\gamma - 1) \frac{v_x v_z}{v^2} & (\gamma - 1) \frac{v_y v_z}{v^2} & 1 + (\gamma - 1) \frac{v_z^2}{v^2} \end{array} \right).\nonumber \label{genboost} \end{equation} Using covariant notation the result of Eq.~(\ref{transform}) may be expressed in the following form \begin{eqnarray} T^{\mu \nu} =\e
u^\mu u^\nu - \p\Delta^{\mu\nu}, \label{emt} \end{eqnarray} where we introduced the symmetric projection operator on the space orthogonal to the fluid four-velocity, $\Delta^{\mu\nu} \equiv g^{\mu\nu} - u^\mu u^\nu$, which satisfies the conditions $u_{\mu} \Delta^{\mu\nu} = 0$, $\Delta^{\mu}_{\,\,\,\alpha}\Delta^{\alpha\nu}=\Delta^{\mu\nu}$ and $\Delta^{\mu}_{\,\,\,\mu}=3$, and $g^{\mu\nu} = {\rm diag} \left(1,-1,-1,-1\right)$ is the metric tensor. Similarly, the Lorentz boost transformation applied to Eq.~(\ref{ncLRF}) leads to the following tensor decomposition of $N_i^\mu$ in the general frame \begin{equation} N_i^{\mu} = {\cal N}_i \,u^\mu, \label{pflux} \end{equation} so that in the LRF, where $u^\mu_{\rm LRF} =\left(1, \textbf{0}, 0\right)$, one has ${\cal N}_i=N_i^\mu u_{\mu, \rm LRF}$. Equation (\ref{emc}) may be rewritten in a somewhat more familiar form using Eq.~(\ref{emt}) and performing projections perpendicular and parallel to the fluid four-velocity \begin{eqnarray} \label{emc1} \Delta^\alpha_{\,\,\,\nu} \partial_\mu T^{\mu \nu} = (\e+\p)D u^\alpha-\nabla^\alpha \p &=&0, \\ \label{emc2} u_\nu \partial_\mu T^{\mu \nu}= D\e + (\e+\p)\theta &=&0, \end{eqnarray} where $D\equiv u^\mu \partial_\mu$ is the co-moving time derivative, $\nabla^\mu \equiv \Delta^{\mu \alpha} \partial_\alpha$ is the spatial gradient, and $\theta\equiv\partial_\mu u^\mu$ is the expansion scalar. Equations (\ref{emc1})-(\ref{emc2}) are relativistic analogs of the Euler and continuity equations, respectively. Similarly, inserting the decomposition (\ref{pflux}) into Eqs.~(\ref{nc}) yields \begin{eqnarray} \label{nc2} \partial_\mu N^{\mu}_i = D {\cal N}_i+ {\cal N}_i \theta&=&0 . \end{eqnarray} Equations (\ref{emc1})-(\ref{nc2}) contain altogether $4+N$ independent partial differential equations for the space-time evolution of $5+N$ quantities (three components of four-velocity, energy density, pressure and $N$ charge densities).
In order for the system to be closed one has to introduce a material-specific \emph{equation of state} relating the pressure, the energy density and the charge densities in the system, $\p=\p(\e,{\cal N}_i)$. Since the system is locally in equilibrium such a relation exists and has to follow from the underlying microscopic theory describing the system. Henceforth, we will assume that the system created in heavy-ion collisions is charge-free, ${\cal N}_i(x) \equiv0$, which is a reasonable assumption for the central rapidity region at ultra-relativistic energies. The energy density and pressure for the charge-free matter in equilibrium may be directly related to the temperature of the system, $\e=\e(T)$, $\p=\p(T)$, see Sec.~\ref{sec:2c}. A number of studies show that the successful description of the experimental data requires the use of a ``cross-over''-type equation of state of strongly interacting matter with the transition from the quark-gluon plasma phase to the hadron gas phase. For the numerical results presented in the remaining part of these lectures we will use the results of Ref.~\cite{Chojnacki:2007jc}. The temperature dependence of the squared speed of sound $c_s^2 (T)=d\p/d\e$ obtained in Ref.~\cite{Chojnacki:2007jc} is shown in Fig.~\ref{fig:eos} \footnote{Note that in the case of dissipative fluid dynamics due to the existence of transport coefficients the inclusion of the equation of state is usually more involved \cite{Tinti:2016bav}.}. \begin{figure}[t] \begin{center} \includegraphics[angle=0,width=0.6 \textwidth]{figcs2.pdf} \end{center} \caption{Temperature dependence of the squared speed of sound $c_s^2=d\p/d\e$ for the hadron gas, lattice QCD quark-gluon plasma, and interpolation thereof as found in Ref.~\cite{Chojnacki:2007jc}.} \label{fig:eos} \end{figure} \begin{figure} \begin{center} \includegraphics[angle=0,width=0.67 \textwidth]{milne.pdf} \end{center} \caption{The Milne coordinates shown in the Minkowski space-time.
Directions $x$ and $y$ are suppressed.} \label{fig:milne} \end{figure} Except for a quite limited number of special cases of highly-symmetric flow patterns, such as the Bjorken \cite{Bjorken:1982qr} or Gubser \cite{Gubser:2010ze} flows, Eqs. (\ref{emc1})-(\ref{nc2}) have to be solved numerically. When modeling collisions at relativistic energies, where the system is approximately boost-invariant (with respect to Lorentz boosts along the beam ($z$) direction) in the central rapidity region, the hydrodynamic evolution is preferably performed in \emph{Milne coordinates} $x^\mu = (\tau,\xT,\varsigma)$ instead of Minkowski coordinates $\tilde{x}^\mu = (t,\xT,z)$ \cite{Bjorken:1982qr}. The relation between the two is given by the following coordinate transformation \begin{eqnarray} t&=&\tau \cosh \varsigma,\\ z&=& \tau \sinh \varsigma, \label{coord} \end{eqnarray} with $\tau= \sqrt{t^2-z^2}$ and $\varsigma = \tanh^{-1} \left(z/t\right)$ denoting the longitudinal proper time and space-time rapidity, respectively; see Fig.~\ref{fig:milne}. In this coordinate system it is also convenient to parametrize the fluid four-velocity in the following form \begin{equation} u^\mu = \left(u_0 \cosh {\rm y}_u, \textbf{u}_T, u_0 \sinh {\rm y}_u \right), \label{umilne} \end{equation} where ${\rm y}_u$ is the longitudinal rapidity of the fluid, and $u_0 = \sqrt{1+u_T^2}$, with $u_T=\sqrt{u_x^2+u_y^2}$. \section{Fluid dynamics from kinetic theory} \label{sec:2c} \sectionmark{Fluid dynamics from kinetic theory} It is instructive to see the relation between fluid dynamics, as a general classical field theory, and relativistic kinetic theory.
The latter is based on the knowledge of the \emph{single-particle distribution function} $f(x,p)$, which is defined through the number of (on-shell) particles $d N$ in the phase-space volume $d^3 \!x\,\dP$ located at the phase-space point $(x^\mu, p^\mu)$, where $p^\mu=(E_p, \textbf{p}, p_z)$ with $E_p=\sqrt{m^2+\textbf{p}^2+p_z^2}$. The evolution of $f(x,p)$ follows from the standard relativistic Boltzmann equation \begin{equation} p^\alpha \partial_\alpha f=-C[f], \label{BE} \end{equation} where $C[f]$ is the collisional kernel, which, in general, may have a highly complicated form. In global equilibrium $f(x,p)$ is stationary; note that $C[f]$ vanishes in two very different regimes: free-streaming (no interactions) and equilibrium (strongest possible interactions). Often the collisional kernel is treated in the relaxation-time approximation, \begin{equation} C[f]=p_\mu u^\mu\frac{ f-f_{\rm eq} }{\tau_{\rm eq}}, \label{ccRTA} \end{equation} where $\tau_{\rm eq}$ is the relaxation time, and $f_{\rm eq}$ is the equilibrium distribution. Equations of motion for the soft modes of the system, identified with the fluid dynamical sector of the theory, may be derived by taking the lowest $n$ momentum moments \cite{Denicol:2014loa,Strickland:2014pga,Alqahtani:2017mhy}, \begin{equation} \hat{{\cal I}}^{\mu_1\cdots\mu_n}\equiv \int \!dP\, p^{\mu_1}p^{\mu_2}\cdots p^{\mu_n}, \qquad \qquad \int dP \equiv \int \frac{\dP}{E_p}, \end{equation} of the Boltzmann equation (\ref{BE}), which gives \begin{equation} \partial_{\alpha} {\cal I} ^{\alpha \mu_1\cdots\mu_n}= -{\cal C}^{\mu_1\cdots\mu_n}[f], \label{EOM} \end{equation} where we defined \begin{eqnarray} {\cal I} ^{\alpha\mu_1\cdots\mu_n} &\equiv& \hat{{\cal I}} ^{\alpha\mu_1\cdots\mu_n} f , \label{mom1} \\{\cal C}^{\mu_1\cdots\mu_n}[f] &\equiv& \hat{{\cal I}} ^{\mu_1\cdots\mu_n} C[f].
\label{mom2} \end{eqnarray} Explicitly, the first two moments lead to the following set of dynamical equations, \begin{equation} \partial_\mu {\cal I} ^\mu = u_\mu \frac{{\cal I}^\mu_{\rm eq}-{\cal I} ^\mu}{\tau_{\rm eq}}, \label{ncK} \end{equation} \begin{equation} \partial_\mu {\cal I} ^{\mu\nu}=u_\mu \frac{{\cal I}^{\mu\nu}_{\rm eq}-{\cal I} ^{\mu\nu}}{\tau_{\rm eq}}. \label{emcK} \end{equation} One may immediately identify the zeroth and first moments of the distribution function with the particle four-current and the energy-momentum tensor, \begin{eqnarray} N^\mu &\equiv& {\cal I}^{\mu}, \label{ident1} \\T^{\mu\nu} &\equiv& {\cal I}^{\mu\nu}. \label{ident2} \end{eqnarray} The conservation of the particle current, $u_\mu ({\cal I}^\mu_{\rm eq}-{\cal I} ^\mu)=0$, and of the energy-momentum tensor, $u_\mu ({\cal I}^{\mu\nu}_{\rm eq}-{\cal I} ^{\mu\nu})=0$, thus leads to Eqs.~(\ref{emc})-(\ref{nc}). Using decompositions of the particle four-current (\ref{pflux}) and the energy-momentum tensor (\ref{emt}), and the knowledge of the local thermal equilibrium (LTE) distribution function $f_{\rm eq} = f(p_\mu u^\mu, T, \mu_{\rm i})$, see Eq.~(\ref{eqdistr}) in Sec.~\ref{sec:6}, one may find explicit forms of the thermodynamic variables, $\e=\e(T,\mu_{\rm i})$, $\p=\p(T,\mu_{\rm i})$, and ${\cal N}={\cal N}(T,\mu_{\rm i})$. The latter define the equation of state of the system within kinetic theory. For the conformal charge-free system one gets $\e(T) =3\p(T) = 3 T {\cal N} (T) \sim T^4$ \cite{Florkowski:2010zz}. \section{Event-averaged initial conditions for fluid dynamics} \label{sec:2b} \sectionmark{Initial conditions for fluid dynamics} In general, Eqs.~(\ref{emc1})-(\ref{nc2}) have to be supplemented with proper \emph{initial conditions}, specified on the hypersurface of constant longitudinal proper time $\tau=\tau_{\rm i}$, which usually defines the beginning of the fluid dynamical evolution.
In particular, one has to provide $\e(\tau_{\rm i},\xT,\varsigma)$, $u_x(\tau_{\rm i}, \xT,\varsigma)$, $u_y(\tau_{\rm i}, \xT,\varsigma)$, $y_u(\tau_{\rm i},\xT,\varsigma)$, and ${\cal N}_i(\tau_{\rm i},\xT,\varsigma)$. These should follow from some microscopic models of the initial state created in heavy-ion collisions, though usually they are, to some extent, just fitted to reproduce the data. \begin{figure} [t] \begin{center} \includegraphics[angle=0,width=0.67 \textwidth]{coll.pdf} \end{center} \caption{The geometry of heavy-ion collision in the Milne coordinates.} \label{fig:coll} \end{figure} For the initial energy density profile, we will use the \emph{tilted source} model \cite{Bozek:2010bi} of the initial state, which was applied quite successfully to describe various experimental observables measured at RHIC, including the so-called directed flow $v_1$ component of the Fourier decomposition of the azimuthal particle spectra. The initial energy density profile within this model is proportional to the density of sources $n({\bf x}_T,\varsigma,b)$ \begin{equation} \e(\tau_{\rm i},{\bf x}_T, \varsigma,b) = \e_{\rm i} \, \frac{ n({\bf x}_T,\varsigma,b)}{n(\textbf{0},0,0)} , \label{sig2} \end{equation} where \begin{equation} n({\bf x}_T,\varsigma,b) = G(\varsigma)\left\{(1-\kappa) \Big[ W_A({\bf x}_T,b) F(\varsigma) + W_B({\bf x}_T,b) F(-\varsigma)\Big] + \kappa B({\bf x}_T,b) \right\}. \label{denofsourc} \end{equation} The functions $W_{A(B)}$ and $B$ are the density of wounded nucleons from nucleus A (B) and the density of binary collisions, respectively, both specified at a certain value of the impact parameter $b=|\textbf{b}|$, see Fig.~\ref{fig:coll}. These quantities are determined entirely from the optical limit of the \emph{Glauber model} \cite{Miller:2007ri}. The admixture of binary collisions is controlled by the parameter $\kappa =0.14$, which is typically fitted to reproduce the centrality dependence of the charged hadron multiplicity.
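The structure of Eqs.~(\ref{sig2})-(\ref{denofsourc}) may be sketched as follows (a minimal illustration only: the Glauber densities $W_A$, $W_B$ and $B$ are replaced by hypothetical Gaussian placeholders, the value $y_{\rm N}=4$ is assumed for the nucleon rapidity, and the longitudinal profiles $G$ and $F$ are those introduced below):

```python
import math

KAPPA, DELTA_VS, SIGMA_VS, Y_N = 0.14, 2.3, 1.6, 4.0   # Y_N assumed for illustration

def G(vs):
    # longitudinal plateau of half-width DELTA_VS with Gaussian tails
    if abs(vs) <= DELTA_VS:
        return 1.0
    return math.exp(-(abs(vs) - DELTA_VS) ** 2 / (2.0 * SIGMA_VS ** 2))

def F(vs):
    # tilt profile: emission preferred in the forward hemisphere of a participant
    if vs < -Y_N:
        return 0.0
    if vs > Y_N:
        return 1.0
    return (vs + Y_N) / (2.0 * Y_N)

def gauss2d(x, y, x0, width):
    # placeholder transverse density (NOT the actual optical Glauber result)
    return math.exp(-((x - x0) ** 2 + y ** 2) / (2.0 * width ** 2))

def n(x, y, vs, b):
    # density of sources, Eq. (denofsourc); nucleus A assumed centred at x = -b/2
    W_A = gauss2d(x, y, -b / 2.0, 3.0)
    W_B = gauss2d(x, y, +b / 2.0, 3.0)
    Bin = W_A * W_B
    return G(vs) * ((1.0 - KAPPA) * (W_A * F(vs) + W_B * F(-vs)) + KAPPA * Bin)

# at mid space-time rapidity the profile is symmetric in x ...
assert abs(n(2.0, 0.0, 0.0, 6.0) - n(-2.0, 0.0, 0.0, 6.0)) < 1e-12
# ... while at forward rapidity the emission is dominated by one nucleus: the tilt
assert n(-2.0, 0.0, 2.0, 6.0) > n(2.0, 0.0, 2.0, 6.0)
```

The asymmetry probed by the last assertion is precisely what generates the directed flow $v_1$ in this model.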
Since herein we assume that the system is charge-free, ${\cal N}_i(\tau_{\rm i}, x,y,\varsigma)\equiv 0$, the initial central energy density $\e_{\rm i}$ of the system may be translated to its initial central temperature $T_{\rm i}=T_{\rm i}(\e_{\rm i})$, which is fitted to reproduce the total number of charged particles produced in the experiment. \begin{figure}[t] \begin{center} \includegraphics[angle=0,width=0.7 \textwidth]{ini3d.pdf}\\ \includegraphics[angle=0,width=0.47 \textwidth]{ini-xy.pdf} \includegraphics[angle=0,width=0.47 \textwidth]{ini-xs.pdf} \end{center} \caption{(Top panel) The isothermal surfaces $(T\in\{0.4,0.3,0.2,0.1\} {\rm \,GeV})$ of the tilted source in the Milne coordinates. (Bottom panels) The isothermal contours of the initial temperature profile of fluid dynamic evolution in the Milne coordinates in the $x-y$ (left) and $x-\varsigma$ (right) planes.} \label{fig:ini} \end{figure} The functional form of the density profile in space-time rapidity in Eq.~(\ref{denofsourc}) is \begin{equation} G(\varsigma) \equiv \exp \left[ - \frac{(|\varsigma| - \Delta \varsigma)^2}{ 2 \sigma_\varsigma^2 } \, \Theta (|\varsigma| - \Delta \varsigma) \right] \, . \label{eq:rhofunc} \end{equation} The parameters in Eq.~(\ref{eq:rhofunc}) are deduced from the fits to the final rapidity spectrum of charged hadrons. For RHIC the fit results in $\Delta\varsigma = 2.3$ and $\sigma_{\varsigma} = 1.6$. The tilt of the source results from the preferred particle emission from the moving participant nucleon into its forward hemisphere, and may be parametrized as follows \cite{Bozek:2010bi} \begin{eqnarray} F(\varsigma) = \left\{ \begin{array}{lcl} 0 & \mbox{for} & \varsigma < -y_{\rm N} \, , \\ (\varsigma+y_{\rm N})/(2 y_{\rm N}) & \mbox{for} & -y_{\rm N} \leq \varsigma \leq y_{\rm N}\, , \\ 1 & \mbox{for} & \varsigma > y_{\rm N}\, , \end{array}\right.
\end{eqnarray} where $y_{\rm N} = \log(2\sqrt{s_{\rm NN}}/(m_{\rm N}))-\varsigma_{\rm shift}$ is the nucleon rapidity shifted by the value $\varsigma_{\rm shift}=2$ (treated as a phenomenological parameter), $\sqrt{s_{\rm NN}}$ is the center-of-mass energy per nucleon pair, and $m_{\rm N}$ is the nucleon mass. The resulting initial temperature profiles in the $x-y$ and $x-\varsigma$ planes are shown in the bottom panels of Fig.~\ref{fig:ini} (left and right, respectively). Finally, the flow in the transverse plane, as usual, is assumed to vanish initially, $u_x(\tau_{\rm i}, \xT,\varsigma) =0$, $u_y(\tau_{\rm i}, \xT,\varsigma)=0$, and the flow in the longitudinal direction is assumed to have the Bjorken-type scaling form $y_u(\tau_{\rm i}, \xT,\varsigma)=\varsigma$. \section{Particle decoupling} \label{sec:3} \sectionmark{Particle decoupling} \begin{figure}[t] \begin{center} \includegraphics[angle=0,width=0.47 \textwidth]{ini-xtau.pdf} \includegraphics[angle=0,width=0.47 \textwidth]{ini-stau.pdf} \end{center} \caption{The isothermal contours of the fluid dynamical evolution in the Milne coordinates.} \label{fig:evo} \end{figure} Relativistic perfect fluid dynamics describes, by definition, an infinitely strongly-coupled system of particles evolving from one local thermal equilibrium state to another. When applied to relativistic heavy-ion collisions one immediately realizes that the latter assumption breaks down as the evolution proceeds. The rapid expansion of the created fireball into the vacuum leads to its cooling and dilution, see Fig.~\ref{fig:evo}. Eventually, the particle scatterings become too rare to prevent the particles from leaving the fluid. As a result, local thermal equilibrium cannot be maintained anymore and the fluid description breaks down.
This complicated gradual process of particle decoupling from the fluid is often called the \emph{freeze-out} \cite{Stoecker:1986ci,Rischke1999,Kolb:2003dz,Huovinen:2006jp,Florkowski:2010zz,Romatschke:2009im,Gale:2013da,Jaiswal:2016hex}. As the interactions cease and the system becomes rarefied, the kinetic theory description in terms of hadronic degrees of freedom and scattering cross sections becomes more adequate. A possible way to describe this process is to compare locally the time scale of the expansion of the fluid $\tau_{\rm exp}$ (which drives the system out of equilibrium) and the time scale characterizing collisions between the particles $\tau_{{\rm coll}}^k$ (which tend to restore it) \cite{Bondorf:1978kz}. In the differential form, the decoupling may be formulated as the following inequality \cite{Heinz:2007in} \begin{eqnarray} \tau_{{\rm coll}}^{k} \ge \tau_{\rm exp} , \label{freezeout} \end{eqnarray} where $\tau_{\rm exp}\sim 1/\theta(x)$ (see Sec.~\ref{sec:2a}), and $\tau_{\rm coll}^{k}\sim 1/\left[\sum_l\langle \sigma_{kl} v_{kl} \rangle\, \tilde{{\cal N}}_l(x)\right]$, with $\sigma_{kl}$ denoting the scattering cross section between the particle species $k$ and $l$, $v_{kl}$ being their relative velocity in the center of mass frame, and $\tilde{{\cal N}}_l(x)$ describing respective particle densities. If the condition (\ref{freezeout}) is satisfied the particle species $k$ starts to decouple from the fluid. A few comments are in order here: \begin{itemize} \item Both the flow velocity $u^\mu(x)$ and the particle densities $\tilde{{\cal N}}_l(x)$ are, in general, space-time dependent quantities, which means that the freeze-out process begins at different space-time points of the fluid. \item In general, the cross sections $\sigma_{kl}$ depend on the particle species, thus some particle species decouple ``before'' others.
In the case of ultra-relativistic heavy-ion collisions the scattering cross section is usually dominated by a single species (pions), whose freeze-out triggers that of the others. \item In perfect fluid dynamics, to which we restrict ourselves, the particle density $\tilde{{\cal N}}_l(x)\sim T^3(x)$. As a result, the condition (\ref{freezeout}) is usually significantly simplified to the requirement that the temperature drops below a certain freeze-out temperature \cite{Rischke1999} \begin{eqnarray} T_{\rm freeze} \ge T(x). \label{freezeoutT} \end{eqnarray} \item The total cross section $\sigma_{kl}=\sigma_{kl}^{\rm el}+\sigma_{kl}^{\rm in}$ is always larger than its inelastic part $\sigma_{kl}^{\rm in}$, which alone drives the particle-number changing processes, implying that these cease before the momentum changing processes. This results in the distinction between the \emph{chemical freeze-out} (inelastic collisions stop), and the \emph{kinetic/thermal freeze-out} (elastic collisions stop). This means that, typically, the chemical freeze-out takes place at a higher temperature than the thermal one, \begin{eqnarray} T_{\rm chem} \ge T_{\rm therm}. \label{freezeout2} \end{eqnarray} \item Equation (\ref{freezeout}) may, in general, involve an additional unknown parameter of the order of $1$, which sets the overall scale for the freeze-out processes \cite{Heinz:2007in}. \end{itemize} \section{\emph{Single-freeze-out} scenario} \label{sec:4} The dynamical description of the particle decoupling according to Eq.~(\ref{freezeout}) is quite difficult to realize in practice. Instead, a significant simplification of the freeze-out dynamics, often called the \emph{single-freeze-out} model, is usually adopted \cite{Broniowski:2001we,Broniowski:2001uk,Broniowski:2002nf,Broniowski:2002wp,Bozek:2003qi,Broniowski:2003ax,Kisiel:2006is}. The latter relies on the assumption that the chemical and thermal freeze-out occur simultaneously.
Within this framework one assumes that once the temperature $T(x)$ in the fluid decreases locally below a certain value $T_{\rm freeze}$ all particle species decouple completely from the fluid \footnote{Although the assumption of isothermal freeze-out seems to be crude, it was shown to give a quite reasonable approximation of the differential freeze-out condition, Eq.~(\ref{freezeout}) \cite{Rischke1999,Kolb:2003dz}.}. Mathematically, the condition $T(x)=T_{\rm freeze}$ defines a three-dimensional freeze-out hypersurface $\Sigma$ in the four-dimensional Minkowski space-time ${\cal M}$ (see Sec.~\ref{sec:5}). The thickness of $\Sigma$ is idealized as infinitesimal, which means that the freeze-out process takes place instantaneously. Just before crossing $\Sigma$, in the fluid phase, the matter is considered to be in local thermal and chemical equilibrium, so that the phase-space distributions of the microscopic constituents follow the statistical ones (see Sec.~\ref{sec:6}). It is assumed that the freeze-out process itself, due to its instantaneous character, does not affect the phase-space distributions, so that the equilibrium distributions are also shared by the particles emitted from the surface $\Sigma$. Outside the fluid region various approximations, usually based on some sort of transport theory, may apply. In the original single-freeze-out model \cite{Broniowski:2001we,Broniowski:2001uk,Broniowski:2002nf} the particles created on $\Sigma$, termed \emph{primordial}, are assumed to form an \emph{ideal non-interacting gas of hadrons}, which undergo \emph{free-streaming} to the detectors. However, the (primordial) hadron gas contains the full mass spectrum of hadronic resonances, which subsequently decay into \emph{stable} particles \footnote{The inverse processes, due to their low probabilities, are usually neglected in this framework.}. Due to decays, the abundances of stable particles, as well as their spectra, are modified (see Sec.~\ref{sec:9}).
As a result, the temperature $T_{\rm freeze}$ should, in general, be interpreted as a \emph{switching temperature} (from fluid to particle description), rather than the true chemical and kinetic freeze-out temperature. Another possibility is to perform the switching at a somewhat higher temperature, $T_{\rm part}>T_{\rm freeze}$, often called the \emph{particlization} temperature \cite{Huovinen:2012is}, when the system is considered to be still strongly interacting, but the transport theory description is more adequate. The particle ensemble, together with their phase-space coordinates, is then passed to the quantum molecular transport models describing various phenomena in the dense hadronic medium. For the sake of simplicity, in this work we focus solely on the \emph{single-freeze-out} scenario. Finally, one should note here that it is commonly assumed that the fluid dynamical description applies to the whole forward light-cone of the system, and the part of the space-time evolution of the fluid satisfying $T(x)< T_{\rm freeze}$ is \emph{a posteriori} neglected and replaced with the transport theory description. This procedure obviously introduces some inconsistency at the boundary $\Sigma$, as outside the fluid regime the system is a non-interacting hadron gas (as opposed to the strongly-interacting fluid). Consequently, the boundary conditions at $\Sigma$ for the fluid evolution are different in the two cases. Possible consequences of these problems will be neglected herein. We only note here that these problems may be largely reduced when large anisotropies are included already at the fluid dynamics stage, which would describe its gradual break-up \cite{Florkowski:2010cf,Martinez:2010sc,Alqahtani:2017jwl}. In this way the switching should introduce less uncertainty.
\section{Freeze-out hypersurface extraction} \label{sec:5} \sectionmark{Freeze-out hypersurface extraction} \begin{figure}[t] \begin{center} \includegraphics[angle=0,width=0.6 \textwidth]{fo3D.pdf} \end{center} \caption{Isothermal freeze-out surface in Milne coordinates ($\varsigma=0$).} \label{fig:fo3d} \end{figure} When initialized with proper initial conditions, perfect fluid dynamics determines the evolution of the temperature, flow velocity, and the chemical potentials (if a charge-conserving theory is considered) in the entire forward light-cone of the Minkowski space-time ${\cal M}$, whose points satisfy the requirement $\tau \ge \tau_{\rm i}$. Usually, due to the specific shape of the isothermal freeze-out hypersurface $\Sigma$ (see Figs.~\ref{fig:evo} and \ref{fig:fo3d}), it is convenient to parametrize the ambient space-time ${\cal M}$ with three angles, say $\theta,\zeta$ and $\phi$, and the distance $\rho$ from the coordinate system's origin $(\tau=\tau_{\rm i},x=0,y=0,\varsigma=0)$~\footnote{Note from Figs.~\ref{fig:evo} and \ref{fig:fo3d} that $\tau$ is not always a function of the remaining space-time coordinates.}.
The resulting parametrization reads \cite{Bozek:2009ty, Ryblewski:2013jsa} \begin{figure}[t] \begin{center} \includegraphics[angle=0,width=0.49 \textwidth]{Hiperxyt.pdf} \includegraphics[angle=0,width=0.49 \textwidth]{Hiperret.pdf} \end{center} \caption{Space-time parametrization with angles $\theta,\zeta$ and $\phi$, and the distance $\rho$.} \label{fig:parametr} \end{figure} \begin{eqnarray} x^0=t(\rho,\theta,\zeta,\phi) &=& \tau \cosh \varsigma , \\ x^1=x(\rho,\theta,\zeta,\phi) &=& r \sin\theta\cos\phi , \\ x^2=y(\rho,\theta,\zeta,\phi) &=& r \sin\theta \sin\phi ,\\ x^3=z(\rho,\theta,\zeta,\phi) &=& \tau \sinh \varsigma, \label{freezepar} \end{eqnarray} with \begin{eqnarray} \Lambda\,\varsigma &=& \rho \cos\theta , \\ r &=& \rho \cos\zeta, \label{freezepar2} \end{eqnarray} so that $\tau -\tau_{\rm i} = \rho \sin\theta \sin\zeta$, and \begin{eqnarray} 0 \leq \zeta \leq \pi/2, \qquad 0 \leq \phi < 2 \pi, \quad {\rm and} \quad 0 \leq \theta \leq \pi; \label{angleslim} \end{eqnarray} see Fig.~{\ref{fig:parametr}}. The isothermal freeze-out condition defines the hypersurface $\Sigma$ embedded in ${\cal M}$ in the following way \begin{eqnarray} \Sigma(\theta,\zeta,\phi) = \{x \in {\cal M}: T(\rho,\theta,\zeta,\phi) = T_{\rm freeze} \}, \label{constT} \end{eqnarray} so that one has implicitly $\rho=\rho(\theta,\zeta,\phi)$. One should note here that, while at $\Sigma$, by definition, the temperature is fixed, the flow four-velocity is not, $u^{\mu}=u^{\mu}(\theta,\zeta,\phi)$. The infinitesimal element of the hypersurface $\Sigma$ is defined by the covariant four-vector \cite{Misner:1974qy} \begin{equation} \dS_\mu= \varepsilon_{\mu \alpha \beta \gamma} \frac{\partial x^\alpha}{\partial \zeta} \frac{\partial x^\beta}{\partial \phi} \frac{\partial x^\gamma}{\partial \theta } d\zeta d\phi d\theta, \label{d3Sigma} \end{equation} where $\varepsilon_{\mu \alpha \beta \gamma}$ is the totally antisymmetric tensor in four dimensions with $\varepsilon_{0123} = +1$. 
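The contraction in Eq.~(\ref{d3Sigma}) is easy to check numerically; below is a minimal pure-Python sketch (the tangent vectors are taken along the spatial axes of a constant-time slice, for which the normal four-vector must be purely temporal and future-pointing):

```python
from itertools import permutations

def levi_civita():
    # 4D totally antisymmetric symbol with eps_{0123} = +1,
    # stored only on its nonzero entries (the 24 permutations of 0..3)
    eps = {}
    for perm in permutations(range(4)):
        sign, p = 1, list(perm)
        for i in range(4):             # count inversions to get the parity
            for j in range(i + 1, 4):
                if p[i] > p[j]:
                    sign = -sign
        eps[perm] = sign
    return eps

EPS = levi_civita()

def surface_element(e1, e2, e3):
    # dSigma_mu = eps_{mu a b c} e1^a e2^b e3^c for three tangent vectors,
    # i.e. the integrand of Eq. (d3Sigma) with finite coordinate increments
    dS = [0.0] * 4
    for (m, a, b, c), s in EPS.items():
        dS[m] += s * e1[a] * e2[b] * e3[c]
    return dS

# sanity check: tangents along x, y, z span a constant-time slice,
# whose covariant normal is (1, 0, 0, 0)
dS = surface_element([0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1])
assert dS[0] == 1.0 and dS[1] == dS[2] == dS[3] == 0.0
```

Swapping two tangent vectors flips the orientation of the element, as expected from the total antisymmetry of $\varepsilon_{\mu\alpha\beta\gamma}$.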
\section{Cooper-Frye formalism} \label{sec:6} \sectionmark{Cooper-Frye formalism} The change from fluid elements to hadrons at the switching hypersurface $\Sigma(x)$ is usually performed with the use of kinetic theory concepts. In the transport theory framework the flux of particle species $k$ is expressed with the formula (see Eqs.~(\ref{mom1}) and (\ref{ident1})) \cite{Rischke1999} \begin{equation} N_k^\mu(x) = \int\frac{\dP}{E_p} p^\mu f_k(x,p), \label{partflux} \end{equation} where $p^\mu=(E_p, \textbf{p}, p_z)$ is the four-momentum of the (on-shell) particle with mass $m_k$, and $f_k(x,p)$ is its phase-space distribution function. The number of world-lines of particles of species $k$ crossing the infinitesimal element $\dS$ of the surface $\Sigma$ is then calculated from the expression \begin{equation} N^k_{\Sigma} \equiv \dS_\mu N_k^\mu = \int\frac{\dP}{E_p} \dS_\mu p^\mu f_k(x,p). \label{partnum} \end{equation} The invariant momentum spectrum of particles produced at the surface element $\dS$ is \begin{equation} E_p\frac{dN^k_{\Sigma}}{\dP} = \dS_\mu p^\mu f_k(x, p). \label{momspec} \end{equation} Therefore, the invariant momentum distribution of hadrons emitted on the entire freeze-out hypersurface $\Sigma$ is given by the integral \begin{equation} E_p\frac{dN^k}{\dP} = \int_\Sigma \dS_\mu p^\mu f_k(x, p). \label{cfform} \end{equation} Equation~(\ref{cfform}) is commonly known as the Cooper-Frye formula \cite{Cooper:1974mv}. It is usually assumed that just before decoupling from the fluid, \emph{i.e.}, before the particles cross the switching surface $\Sigma$, they are in local thermal and chemical equilibrium such that their phase-space distributions are described by the equilibrium ones. 
In such a case one assumes that produced hadrons follow either Fermi--Dirac ($a=-1$) or Bose--Einstein ($a=+1$) distributions, \begin{equation} f_k(x, p)= f_k\left(p_\mu u^\mu(x), T(x), \tilde{\mu}_k\right) = \frac{g_k}{(2\pi)^3}\left\{ \exp\left[\frac{p_\mu u^\mu(x) - \tilde{\mu}_k(x) }{T(x)}\right] -a \right\}^{-1}\!\!\!, \label{eqdistr} \end{equation} where the factor $g_k = 2s_k+1$ takes into account the spin degeneracy of hadron species $k$. The chemical potential $\tilde{\mu}_k$ is given by the linear combination of the chemical potentials $\mu_i$ (see Sec.~\ref{sec:2a}) and the respective charges of the hadron species $k$ \begin{equation} \tilde{\mu}_k = \sum \limits_i Q_i^k \mu_i. \label{chempot} \end{equation} The particle's four-momentum, as measured by the experiment, is usually parametrized in the following way \begin{equation} p^{\mu} = \left( m_T \cosh {\rm y}_p, p_T \cos \phi_p, p_T \sin \phi_p, m_T \sinh {\rm y}_p\right), \label{particlemom} \end{equation} where $p_T=\sqrt{p_x^2+p_y^2}$ is the transverse momentum, $m_T=\sqrt{p_T^2+m_k^2}$ is the transverse mass, $y_p= \ln\left[ (E_p+p_z)/(E_p-p_z)\right]/2$ is the longitudinal rapidity and $\phi_p = \tan^{-1} \left(p_y/p_x\right)$ is the momentum azimuthal angle in the plane transverse to the beam axis. With definitions introduced above the integration measure $ \dS_\mu p^\mu $ in Eq.~(\ref{cfform}) takes the form \begin{eqnarray} \dS_\mu p^\mu &=& \frac{\sin\theta \tau \rho^2 }{\Lambda} \Biggl[ \Biggr. \frac{\partial\rho}{\partial\zeta} \cos\zeta \left( p_T \sin\zeta \cos\left(\phi_p - \phi\right)- m_T \cos\zeta \cosh\left({\rm y}_p - \varsigma \right) \right) \nonumber\\ &+& \!\! \cos\zeta \sin\theta \left(\rho \sin\theta - \frac{\partial \rho}{\partial\theta} \cos\theta \right) \nonumber\\ &\times& \left( p_T \cos\zeta \cos\left(\phi_p - \phi\right) + m_T \sin\zeta \cosh\left({\rm y}_p- \varsigma \right) \right) \nonumber\\ &+& \!\! 
\cos\zeta \sin\theta \left(\rho \cos\theta + \frac{\partial \rho}{\partial\theta} \sin\theta \right) \frac{\Lambda}{\tau} m_T \sinh\left({\rm y}_p - \varsigma \right) \nonumber\\ &-& \!\! \frac{\partial \rho}{\partial\phi}\,\, p_T\, \sin(\phi_p - \phi) \Biggl. \Biggr] d \zeta d\phi d\theta \equiv h_k(\zeta,\phi,\theta, p_T, \phi_p, {\rm y}_p)d \zeta d\phi d\theta, \label{intmeas} \end{eqnarray} and the (Lorentz-boosted) energy is \begin{equation} p_\mu u^\mu = u_0\, m_T \cosh ({\rm y}_p-{\rm y}_u) - p_T u_T \cos(\phi_p-\phi_u), \label{pu} \end{equation} where we introduced yet another variable $\phi_u$ such that $u_x = u_T \cos \phi_u$ and $u_y = u_T \sin \phi_u$. One should note that in cases where some symmetries are present in the system the formulas (\ref{intmeas})-(\ref{pu}) may be simplified accordingly \cite{Florkowski:2010zz,Chojnacki:2007rq}. Using expressions (\ref{intmeas})-(\ref{pu}) in the Cooper-Frye formula (\ref{cfform}) one obtains a six-dimensional particle distribution, which can be used directly to generate particles (both stable hadrons and unstable resonances) on $\Sigma$ \begin{eqnarray} \frac{d^6N^k}{p_T d p_T d\phi_p d{\rm y_p} d\zeta d \phi d \theta} &&= \frac{g_k}{(2 \pi)^3} h_k (\zeta,\phi,\theta, p_T, \phi_p, {\rm y}_p) f_k(\zeta,\phi,\theta, p_T, \phi_p, {\rm y}_p) \nonumber \\ &&\equiv {\cal F}_k (\zeta,\phi,\theta, p_T, \phi_p, {\rm y}_p). \label{cfformTH} \end{eqnarray} The Cooper-Frye formula, Eq.~(\ref{cfform}), is nowadays commonly used in fluid dynamical simulations of heavy-ion collisions to describe hadron production on the freeze-out hypersurface. There are, however, well-known limitations of this prescription. An immediate problem with Eq.~(\ref{cfform}) arises if the freeze-out hypersurface contains both time-like and space-like parts.
In particular, if the freeze-out element is time-like, so that the associated normal vector is space-like, for certain directions of the momentum $p^\mu$ the invariant measure $\dS_\mu p^\mu$ may be negative. Thus, the particle number generated in this region will become ill-defined (the Cooper-Frye formula would in such a case describe the back-flow of particles into the fluid) \cite{Rischke1999}. One may show that at high energies the negative emission is usually negligible \cite{Chojnacki:2011hb}, so that it may be safely removed by introducing the step function $\Theta(\dS_\mu p^\mu)$ on the right-hand side of Eq.~(\ref{cfform}) \cite{Rischke1999}. Another issue connected with Eq.~(\ref{cfform}) is its insensitivity to the fact that particles with large momenta are in general more likely to leave the fluid than the soft ones \cite{Kolb:2003dz}. \section{Hadron abundances} \label{sec:7} \sectionmark{Hadron abundances} For the system in local thermal equilibrium the flux $N_k^\mu$ of particle species $k$ from the fluid cell is proportional to its four-velocity, see Eq.~(\ref{pflux}). In this case, using formula (\ref{partnum}), the total number of particles emitted on the entire hypersurface may be expressed as follows \begin{equation} N_k \equiv \int_\Sigma \dS_\mu N_k^\mu = \int_\Sigma \dS_\mu u^\mu(x) {\cal N}_k\left(T(x), \tilde{\mu}_k(x)\right). \label{avpart} \end{equation} One should stress here that the particle density ${\cal N}_k$ in Eq.~(\ref{avpart}) is expressed solely through the local temperature $T(x)$ and chemical potentials $\tilde{\mu}_k(x)$. It is straightforward to see that, if $T$ and $\tilde{\mu}_k$ are constant along $\Sigma$, the integral of the flow pattern on the freeze-out manifold factorizes in Eq.~(\ref{avpart}), giving the so-called effective comoving volume $V_{\rm eff}\equiv\int_\Sigma \dS_\mu u^\mu(x)$.
When one considers ratios of particle multiplicities of different species, say $a$ and $b$, the factor $V_{\rm eff}$ cancels out completely \cite{Heinz:1998st,Cleymans:1998yf}, giving \begin{equation} \frac{N_a}{N_b} = \frac{{\cal N}_a(T, \tilde{\mu}_k)}{{\cal N}_b(T, \tilde{\mu}_k)}. \label{ratios} \end{equation} The arguments presented above gave rise to a wide variety of analyses under the common name of \emph{thermal} or \emph{statistical} models \cite{BraunMunzinger:2001ip,Florkowski:2001fp,Rafelski:2001hp,Baran:2003nm,Turko:2007ri}. They focus mainly on the extraction of the thermodynamic properties of the matter at the chemical freeze-out, based on a thermal analysis of the multiplicities of the experimentally measured particles and ratios thereof. Using the grand canonical version of the thermal approach, the fits usually yield a chemical freeze-out temperature of the order of the quark-hadron phase-transition temperature obtained from lattice QCD calculations \begin{equation} T_{\rm chem} \sim T_{\rm c} \sim 170 \,\,\,{\rm MeV} , \label{Tchem} \end{equation} which suggests a possible relation between the hadronization process and chemical equilibration \cite{BraunMunzinger:2003zz} \footnote{Note that the new lattice QCD simulations suggest somewhat lower values of the critical temperature, $T_{\rm c} \sim 155 \,\,\,{\rm MeV}$ \cite{Borsanyi:2010cj}.}. Equation (\ref{ratios}) requires a few important remarks: (i) the integrals quoted above are performed over the full momentum space, which means that a reliable analysis would require $4\pi$ acceptance for identified particles, (ii) at low energies and forward rapidities the freeze-out conditions are usually quite different from the baryon-free midrapidity region, suggesting that thermal analyses based on Eq.~(\ref{ratios}) yield, in this case, only approximate (averaged over the entire hypersurface) values of the thermal parameters at freeze-out.
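For Boltzmann statistics the equilibrium density entering Eq.~(\ref{ratios}) reduces to ${\cal N}_k = \frac{g_k}{2\pi^2}\, m_k^2\, T\, K_2(m_k/T)\, e^{\tilde{\mu}_k/T}$, so a ratio can be evaluated directly. The following Python sketch illustrates this (a simplified stand-alone estimate, ignoring quantum statistics and resonance feed-down; the function names and the numerical evaluation of $K_2$ are ours):

```python
import numpy as np

def K2(x):
    """Modified Bessel function K_2(x) from its integral representation,
    K_2(x) = int_0^inf exp(-x cosh t) cosh(2t) dt (simple trapezoid rule)."""
    t = np.linspace(0.0, 20.0, 4001)
    f = np.exp(-x * np.cosh(t)) * np.cosh(2.0 * t)
    return float(np.sum(f[1:] + f[:-1]) * 0.5 * (t[1] - t[0]))

def boltzmann_density(m, g, T, mu=0.0):
    """Equilibrium density in the Boltzmann approximation,
    n_k = g/(2 pi^2) m^2 T K_2(m/T) exp(mu/T); natural units (GeV^3)."""
    return g / (2.0 * np.pi**2) * m**2 * T * K2(m / T) * np.exp(mu / T)

# primordial K+/pi+ ratio at T = 170 MeV and vanishing chemical potentials
T = 0.170  # GeV
ratio = boltzmann_density(0.494, 1, T) / boltzmann_density(0.140, 1, T)
```

The primordial ratio obtained this way decreases as $T$ is lowered, reflecting the Boltzmann suppression of the heavier species.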
Nevertheless, keeping in mind its simplicity, the precision of the thermal approach is quite remarkable. \section{Decays of resonances} \label{sec:9} \sectionmark{Decays of resonances} When discussing the thermal/statistical approach we neglected an important aspect of the modeling connected with the role of resonances. It is usually assumed that the particles created at freeze-out form an ideal non-interacting gas of hadrons, which includes, in principle, the entire mass spectrum of unstable resonances. In reality, the latter subsequently decay, populating the spectrum of stable hadrons, which is then observed in the detector. Although the Boltzmann factor tends to suppress heavy states, one should keep in mind that, according to the Hagedorn hypothesis \cite{Hagedorn:1965st}, their mass spectrum increases exponentially. While at low temperatures the role of the resonance feed-down is diminished, at temperatures of the order of $T_{\rm chem}$ it is quite significant. In fact, the successful description of the available data on the hadronic abundances within the statistical approach, as quoted in Sec.~\ref{sec:7}, was possible largely due to the inclusion of the full mass spectrum of the hadronic resonances \cite{Amsler:2008zzb} \footnote{In practice, all resonances from the Particle Data Group tables \cite{Amsler:2008zzb}, whose properties are known well enough, were included.}. As was shown within the so-called Cracow model \cite{Broniowski:2001uk} (see next Section), which includes a hydrodynamic-like expansion of the system, the role of the resonance feed-down turns out to be equally important for the description of the momentum spectra of stable hadrons as the flow itself. This is largely due to the fact that decays of heavy resonances populate mainly the soft region of the stable hadron transverse-momentum spectra, leading to their steeper slopes.
Effectively, the observed inverse slope parameter, usually interpreted as the thermal freeze-out temperature, is much smaller than the chemical freeze-out temperature inferred from the analysis of the hadronic ratios \begin{equation} T_{\rm therm} \sim 130 \,\,\,{\rm MeV}; \label{Ttherm} \end{equation} compare Eq.~(\ref{Tchem}). This observation gave further support to the single-freeze-out model and explained the apparent mismatch between the two freeze-out temperatures. \section{Hydro-inspired parameterizations of freeze-out} \label{sec:10} \sectionmark{Hydro-inspired parameterizations} According to Eq.~(\ref{eqdistr}) the spectrum of particles produced in a single fluid cell is thermal. However, even if the thermal parameters $T$ and $\tilde{\mu}$ are constant along $\Sigma$, the total momentum spectrum, as calculated with Eq.~(\ref{cfform}), includes contributions from different fluid cells, each boosted with a different velocity $u^\mu(x)$. Therefore, the resulting total spectrum is modified due to the combination of \emph{redshift} and \emph{blueshift} effects \cite{Broniowski:2001we}. These effects are observed in the experiment in the form of the characteristic concave shape of the transverse-momentum spectrum. In view of these arguments, a realistic description of the momentum spectra of stable hadrons must include the effects of some kind of collective evolution, reflected in the finite flow of the matter at freeze-out. The most natural way to include flow at the freeze-out is to perform full fluid dynamical simulations along the lines presented in Sec.~\ref{sec:2a}. Unfortunately, the numerical solution of the fluid dynamical equations of motion is rather complicated and computationally intensive. Moreover, the fluid evolution requires at least \emph{some} knowledge of the initial conditions, which are rather poorly known, and thus always bias the final results.
In order to avoid such problems, one may follow a different strategy, which results in the so-called \emph{hydro-inspired} models. Within these models the conditions at the freeze-out hypersurface $\left(T(x), u^\mu(x), \mu_i(x)\right)$, as well as the shape of the freeze-out surface $\Sigma(x)$, are simply assumed, or \emph{inspired} by the results of full numerical fluid dynamical simulations. Among these models the most successful ones are the Blast-wave \cite{Kisiel:2006is,Schnedermann:1993ws} and Cracow \cite{Broniowski:2001we} models. In particular, within the Cracow model \cite{Broniowski:2001we} (as well as in the Blast-wave model) it is assumed that the freeze-out takes place on a hypersurface that is boost-invariant and cylindrically symmetric (in the transverse $x-y$ plane). The hypersurface $\Sigma$ is defined by the requirement that the particles freeze out at a surface of constant proper time (${\tilde{\tau}}$) \begin{equation} {\tilde{\tau}}^2 = x^\mu x_\mu = t^2 - x^2 - y^2 - z^2 = \tau^2 -r^2 = {\tilde{\tau}}_{\rm freeze}^2 = \hbox{const.} \label{hubtau} \end{equation} According to Eq.~(\ref{hubtau}) the particles decouple starting from the center of the \emph{fire-cylinder} towards its edge, so that $0\le r \le r_{\rm max}$. At the same time the fluid four-velocity is assumed to have the Hubble-like form \cite{Chojnacki:2004ec} \begin{equation} u^\mu = \gamma (1, {\bf v}_T, v_z) = {x^\mu \over \tilde{\tau}_{\rm freeze}}, \label{hubumu} \end{equation} which leads to \begin{eqnarray} p_\mu u^\mu &=& \frac{\sqrt{\tilde{\tau}_{\rm freeze}^2 + r^2}}{\tilde{\tau}_{\rm freeze}}\,\, m_T \cosh(y_p-\varsigma) - p_T \frac{r}{\tilde{\tau}_{\rm freeze}} \cos(\phi_p-\phi), \label{Cracowpu} \end{eqnarray} compare Eq.~(\ref{pu}). Conditions (\ref{hubtau}) and (\ref{hubumu}) imply that the freeze-out hypersurface element is proportional to the four-velocity \begin{eqnarray} \dS^\mu &=& u^\mu \tilde{\tau}_{\rm freeze}\, r dr\, d\varsigma\, d\phi.
\label{CracowdS} \end{eqnarray} Within the Cracow model, and with the inclusion of all known hadronic resonances, it was possible to describe both the particle ratios and the spectra of particles \cite{Broniowski:2001uk} within a single framework. Due to the fast increase of computer power during the last decade, full numerical solutions of the fluid dynamical evolution equations, including dissipative effects, became easily achievable. As a result, the importance of hydro-inspired models was largely reduced, leaving them mainly as useful tools for estimating the flow and thermal properties at the freeze-out in analyses of the experimental data; although see \cite{Begun:2013nga}. \section{Monte-Carlo statistical hadronization} \label{sec:11} \sectionmark{Statistical hadronization with \texttt{THERMINATOR\,} } Careful analysis of the experimental data on flow and correlations measured at RHIC and the LHC energies has shown that a realistic description of the dynamics of heavy-ion collisions requires, as in the experimental analysis, \emph{event-by-event} simulations of such reactions. Due to event-by-event initial state fluctuations (of various kinds), such a modeling usually involves running the fluid dynamical evolution separately for each event. As a result, in each event the flow pattern on $\Sigma$, as well as the shape of $\Sigma$ itself, does not exhibit any symmetries. Hence, in general, the calculation of the particle spectra using the Cooper-Frye formula, Eq.~(\ref{cfform}), has to be done entirely numerically. Moreover, precision studies usually require various experimental cuts and feed-down corrections to be applied to reproduce the data correctly.
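The Lorentz-invariant energy of Eq.~(\ref{Cracowpu}) is simple enough to be evaluated directly. A minimal Python sketch (the function name and the numerical values are ours, purely illustrative):

```python
import numpy as np

def pu_cracow(tau_f, r, varsigma, phi, m, pT, y_p, phi_p):
    """p_mu u^mu for the Hubble-like flow u^mu = x^mu / tau_f of the
    Cracow model, Eq. (Cracowpu); natural units (GeV for momenta)."""
    mT = np.sqrt(m**2 + pT**2)              # transverse mass
    u0 = np.sqrt(tau_f**2 + r**2) / tau_f   # Lorentz factor of the flow
    return u0 * mT * np.cosh(y_p - varsigma) - pT * (r / tau_f) * np.cos(phi_p - phi)

# at r = 0 the fluid is transversally at rest and p.u reduces to
# m_T cosh(y_p - varsigma), as expected
pu_center = pu_cracow(tau_f=10.0, r=0.0, varsigma=0.0, phi=0.0,
                      m=0.140, pT=0.5, y_p=0.0, phi_p=0.0)
```

Since both $p^\mu$ and $u^\mu$ are time-like and future-oriented, $p_\mu u^\mu \ge m$ for any freeze-out point, so the Boltzmann factor in Eq.~(\ref{eqdistr}) is always well defined.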
Thus, it is crucial to have access to the full information on the phase-space properties of the produced hadrons \footnote{Access to the entire phase-space information on the created particle ensemble is of great importance (note that the experimental analysis may access only the four-momentum properties of the particles). As a result, one may, for instance, relate the HBT radii of the system, calculated in the same way as in the experiment, with its \emph{actual} space-time size in the simulations.}. In view of these arguments, the development of Monte-Carlo generators for the simulation of physical events became necessary. One of the first numerical open-source codes devoted to this task was \texttt{THERMINATOR\,} (THERMal heavy-IoN generATOR) \cite{Kisiel:2005hn} (for its new, extended version -- \texttt{THERMINATOR2\,} -- see \cite{Chojnacki:2011hb} \footnote{The \texttt{THERMINATOR2\,} code \cite{Chojnacki:2011hb} was also supplemented with another (separate) code, \texttt{FEMTO-\texttt{THERMINATOR\,}}, which is provided to carry out the analysis of the pion--pion femtoscopic correlations.}). The main functionality of \texttt{THERMINATOR\,} is to perform hadronization in relativistic heavy-ion collisions using the concepts of the statistical approach and the single-freeze-out model. In its latest version \cite{Chojnacki:2011hb}, the code performs event-by-event generation of an ensemble of particles (an equivalent of the physical event), using the Cooper-Frye formula~(\ref{cfform}), for any freeze-out conditions (shape of the hypersurface, flow, and thermodynamic parameters) typically created in fluid dynamical models. Within the code one may choose either one of the predefined hydro-inspired freeze-out parameterizations (such as Blast-wave or Cracow; see Sec.~\ref{sec:10}) or the output of any realistic fluid dynamical simulation.
Since fluid dynamical modeling has nowadays become the common standard in the field of heavy-ion collisions (at least with event-averaged initial conditions), in what follows we focus mainly on the latter case. When the fluid dynamical simulations are used directly, the freeze-out hypersurface first has to be extracted from the fluid evolution, which is outside the scope of the \texttt{THERMINATOR\,} code. In particular, in the case of perfect fluid dynamics, one has to supply the distance $\rho(\zeta,\phi,\theta)$, the components of the flow velocity $u_x(\zeta,\phi,\theta)$, $u_y(\zeta,\phi,\theta)$ and $y_u(\zeta,\phi,\theta)$, the temperature $T(\zeta,\phi,\theta)$, and the chemical potentials $\mu_i(\zeta,\phi,\theta)$ (although the values of the chemical potentials are usually neglected in the hydrodynamic stage) \footnote{In the case of isothermal freeze-out the temperature $T(\zeta,\phi,\theta)=T_{\rm freeze}$ is constant. The chemical potentials $\mu_i(\zeta,\phi,\theta)$ are usually included at the freeze-out, based on the results of thermal model fits.}. For the perfect fluid case, it is assumed that on the hypersurface $\Sigma(x)$ the system is in local thermal and chemical equilibrium, which means that the phase-space distributions of the particles have the form given by Eq.~(\ref{eqdistr}) \footnote{When the output of dissipative fluid dynamics is used, one should correct the distribution function (\ref{eqdistr}) for the non-equilibrium effects, as well as supply the \texttt{THERMINATOR2\,} code with the respective dissipative quantities at $\Sigma$.}. The \texttt{THERMINATOR\,} code is written in the object-oriented C++ programming language and conforms to the standards of CERN's \texttt{ROOT} framework \cite{Brun:1997pa}. Once the proper input is provided, the generation of hadrons (stable ones and resonances) is done using a straightforward Monte Carlo method according to the Cooper-Frye formula (\ref{cfformTH}).
The detailed description of the theoretical background of \texttt{THERMINATOR\,}, its code structure and functionalities, as well as a short introduction to its usage, may be found in the original papers \cite{Kisiel:2005hn,Chojnacki:2011hb} and on the project's website \cite{THERMINATORweb}. Herein we will just briefly review its main aspects. Each run of the code proceeds in two stages. The first (preliminary) stage, which is performed once per set of parameters, for each particle species, and whose results are recorded and used for all subsequently generated events, involves: \begin{itemize} \item Calculation of the global maximum ${\cal F}^k_{\rm max}$ of the right-hand side of Eq.~(\ref{cfformTH}). \item Calculation of the average multiplicity $\bar{N}_k$ by integrating Eq.~(\ref{cfformTH}) over the entire phase space (\emph{i.e.}, over $\zeta,\phi,\theta, p_T, \phi_p$ and ${\rm y}_p$). \end{itemize} The second (main) stage consists of the generation of the (primordial) particle ensemble and the decays of unstable resonances, and proceeds on an event-by-event basis. In each event: \begin{itemize} \item It is assumed that the generated ensemble of particles corresponds to the grand canonical ensemble. Hence, the number $N_k$ of particles of species $k$ in the event is generated randomly according to the probability given by the Poisson distribution \begin{equation} P(N_k)=\frac{ \left(\bar{N}_k\right)^{N_k}}{N_k!}\exp(-\bar{N}_k). \label{Poisson} \end{equation} For each particle species $k$, the $N_k$ particles are generated using the von Neumann acceptance/rejection procedure: a space-time point $(\zeta,\phi,\theta)$ on $\Sigma$, the momentum components of the particle ($p_T, \phi_p$ and ${\rm y}_p$), and a test variable ${\cal F}^k_{\rm test}$ in the range $\langle 0, {\cal F}^k_{\rm max}\rangle$ are generated randomly. The particle is accepted if ${\cal F}^k_{\rm test}<{\cal F}^k(\zeta,\phi,\theta,p_T, \phi_p, {\rm y}_p)$, otherwise it is rejected.
The generation of particles goes over all species (stable ones and resonances) which are listed in the Particle Data Group tables \cite{Amsler:2008zzb} and whose properties are known well enough. For that purpose the \texttt{SHARE\,} particle database is used \cite{Torrieri:2004zz}. \item Once the ensemble of primordial particles is generated, the code performs decays of unstable resonances, which, in general, may proceed in cascades. Each resonance evolves along the classical trajectory starting from its initial position $x^\mu_{\rm origin}$ according to its momentum \begin{equation} x^\mu_{\rm decay} = x^\mu_{\rm origin} + {p^\mu \over m_k} \Delta \tau, \label{decpt} \end{equation} and decays after its lifetime $\Delta\tau$, which is randomly generated with the probability density $\Gamma_k \exp(-\Gamma_k \Delta\tau)$, where $\Gamma_k$ is the width of the particle of species $k$. The particular decay channel is selected randomly with the probability corresponding to its branching ratio. Decays of sub-threshold type are not allowed. Two-body and three-body decays follow simple kinematic formulas \cite{Kisiel:2005hn} and are treated on an equal footing. All required data on the decays are taken from the \texttt{SHARE\,} particle decay database \cite{Torrieri:2004zz}. \item Once all unstable particles in the event have decayed, the event generation is completed. \end{itemize} Exemplary space-time emission points, obtained with the tilted initial source and \texttt{THERMINATOR\,} simulations, are presented in Fig.~\ref{fig:emiss}. In the next Section, based on the data on the emitted particles, we will calculate some physical observables.
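The two-stage procedure above can be sketched in a few lines of Python. The one-dimensional density $F(p_T)$ below is a toy stand-in for the full six-dimensional ${\cal F}_k$ of Eq.~(\ref{cfformTH}), and all numerical values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def F(pT, T=0.150, m=0.140):
    """Toy 1-D stand-in for the full 6-D density F_k of Eq. (cfformTH):
    a thermal-like transverse-momentum weight pT * mT * exp(-mT/T)."""
    mT = np.sqrt(m**2 + pT**2)
    return pT * mT * np.exp(-mT / T)

# Stage 1 (once per species): global maximum and average multiplicity
pT_grid = np.linspace(0.0, 3.0, 3001)
F_max = float(F(pT_grid).max())
N_bar = 200.0  # in the real code: the integral of F_k over the phase space

# Stage 2 (per event): Poisson multiplicity, then von Neumann rejection
N_k = int(rng.poisson(N_bar))
particles = []
while len(particles) < N_k:
    pT_test = rng.uniform(0.0, 3.0)
    F_test = rng.uniform(0.0, F_max)
    if F_test < F(pT_test):    # accept the candidate particle
        particles.append(pT_test)

# resonance lifetime drawn from the density Gamma * exp(-Gamma * Delta_tau);
# here Gamma is the rho(770) width, 149 MeV, converted to fm^-1
Gamma = 0.149 / 0.1973
delta_tau = float(rng.exponential(1.0 / Gamma))
```

The accepted values of `pT_test` are then distributed according to $F(p_T)/\int F$, which is exactly the point of the acceptance/rejection step.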
\begin{figure}[t] \begin{center} \includegraphics[angle=0,width=0.49 \textwidth]{fig-hsxytp.pdf} \includegraphics[angle=0,width=0.49 \textwidth]{fig-hsxytd.pdf} \end{center} \caption{Emission points of primordial $\pi^{+}$ (left panel), and $\pi^{+}$ from decays (right panel) in the $x-y-\tau$ plane in Au-Au collisions at $\sqrt{s_{\rm NN}}= 200~{\rm GeV}$ and the impact parameter of $7.16$ fm. } \label{fig:emiss} \end{figure} \section{Performing analysis with \texttt{THERMINATOR2\,}} \label{sec:12} \sectionmark{} Due to the detailed record of the properties of all produced particles, including their space-time and momentum coordinates $(x^\mu, p^\mu)$ and their decay chains, the \texttt{THERMINATOR\,}~code becomes a versatile tool, allowing for the calculation of various observables. Once the generation of events is completed, various experimental observables may be calculated either by using the figure macros supplied with the code or by preparing user macros. In this Section we will present some of its capabilities. \begin{figure}[t] \begin{center} \includegraphics[angle=0,width=0.65 \textwidth]{fig-distpt.pdf} \includegraphics[angle=0,width=0.65 \textwidth]{fig-distpt-exotic.pdf} \end{center} \caption{Transverse-momentum spectra of $\pi^+$, $K^+$, and protons (top panel) and $\rho$ mesons, $K^*_0$ mesons, $\phi$ mesons, $\Lambda^0$ baryons, and $\Omega^-$ baryons (bottom panel) for Au-Au collisions at $\sqrt{s_{\rm NN}}= 200~{\rm GeV}$ and the impact parameter of $7.16$ fm (protons from the weak decays of $\Lambda$'s are excluded). The statistics of 3000 events was used. The errors are statistical only. } \label{fig:spec} \end{figure} \subsection{Single-particle spectra} \label{ssec:121} \sectionmark{hs} One of the most straightforward observables to calculate is the single-particle spectrum of a given particle species $k$. The most copiously produced particles are the lightest mesons (pions and kaons) and baryons (protons). They form more than $90\%$ of all charged particles.
In Fig.~\ref{fig:spec} we present the transverse-momentum spectra of $\pi^+$, $K^+$, and protons (top panel), as well as $\rho$ mesons, $K^*_0$ mesons, $\phi$ mesons, $\Lambda^0$ baryons, and $\Omega^-$ baryons (bottom panel) for Au-Au collisions at $\sqrt{s_{\rm NN}}= 200~{\rm GeV}$ and the impact parameter of $7.16$ fm. All results were obtained using fluid dynamical input generated with event-averaged ``tilted'' initial conditions and the freeze-out temperature $T_{\rm freeze} = 150\,{\rm MeV}$. The chemical potentials $\mu_B =28.5\,{\rm MeV}$, $\mu_{I_3} =-0.9\,{\rm MeV}$, and $\mu_S =6.9\,{\rm MeV}$ were included solely at the freeze-out. The presentation of results is limited to the ``soft'' transverse momenta ($p_T<3\,{\rm GeV}$), where the fluid dynamical models are expected to be applicable, and to the midrapidity region, $|y_p|<1$. One observes that the slopes of the spectra are species-dependent, which is mainly due to their different masses. At intermediate momenta the spectra have exponential shapes, which is characteristic of thermal systems. At low $p_T$ various effects play a role, see the next section. In Fig.~\ref{fig:specpseu} we present the respective $p_T$-integrated pseudorapidity distribution of charged particles, where $\eta = \ln\left[ (p + p_{z})/(p - p_{z})\right]/2$. The latter is compared to the contributions from $\pi^+$, $K^+$, and protons. One observes that, while the central rapidity region $y_p\approx \eta$ is approximately boost-invariant, the forward/backward rapidity regions are not. This is a result of using full four-dimensional fluid dynamical simulations of the emitting source.
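The change of variables from rapidity to pseudorapidity used above is easily checked numerically. A small Python sketch (function names are ours; for $p_T \gg m$ one has $\eta \approx y_p$):

```python
import numpy as np

def pseudorapidity(px, py, pz):
    """eta = 0.5 * ln[(p + pz)/(p - pz)], with p = |vec p|."""
    p = np.sqrt(px**2 + py**2 + pz**2)
    return 0.5 * np.log((p + pz) / (p - pz))

def rapidity(E, pz):
    """y_p = 0.5 * ln[(E + pz)/(E - pz)]."""
    return 0.5 * np.log((E + pz) / (E - pz))

# a pion with p_T well above its mass: eta is a good proxy for y_p
m_pi, px, py, pz = 0.140, 1.0, 0.0, 0.5      # GeV
E = np.sqrt(m_pi**2 + px**2 + py**2 + pz**2)
eta = pseudorapidity(px, py, pz)
y = rapidity(E, pz)
```

Since $E \ge p$, one always has $\eta \ge y_p$ for $p_z>0$, with equality in the massless limit.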
\begin{figure}[t] \begin{center} \includegraphics[angle=0,width=0.65 \textwidth]{fig-disteta.pdf} \end{center} \caption{Pseudorapidity spectra of $\pi^{+}$, $K^{+}$, and protons shown separately, and summed together with their antiparticles, as well as of all charged particles, for Au-Au collisions at $\sqrt{s_{\rm NN}}= 200~{\rm GeV}$ and the impact parameter of $7.16$ fm. The statistics of 3000 events was used. The errors are statistical only.} \label{fig:specpseu} \end{figure} \subsection{Impact of resonance decays} \label{ssec:122} \sectionmark{rd} One of the effects which significantly affect the shapes of the single-particle spectra is the decay of resonances. Due to the available phase space, the decay products populate mainly the low-$p_T$ region of the spectra. In Fig.~\ref{fig:specpion} we present the ``anatomy'' of the transverse-momentum spectrum of $\pi^+$. One observes that the low-$p_T$ part of the spectrum of primordial pions (produced directly at the freeze-out hypersurface) and the total spectrum (including the contribution from all resonance decays) differ significantly. The primordial pions develop a characteristic knee in the soft region. The decays of heavy resonances feed the spectrum in this region, see the contribution from $\omega$ decays in Fig.~\ref{fig:specpion}. As a result, the total pion spectrum takes the characteristic concave shape. Moreover, the effective slope of the final spectrum becomes steeper, which manifests itself in a lower effective temperature of the spectrum, see Sec.~\ref{sec:9}. \begin{figure}[t] \begin{center} \includegraphics[angle=0,width=0.65 \textwidth]{fig-distpt-pion.pdf} \end{center} \caption{The anatomy of the transverse-momentum spectrum of $\pi^+$ for Au-Au collisions at $\sqrt{s_{\rm NN}}= 200~{\rm GeV}$, and the impact parameter of $7.16$ fm. The spectrum of primordial pions, as well as the contributions from $\rho^0$, $\rho^+$ and $\omega$ resonance decays, are presented. The statistics of 3000 events was used.
The errors are statistical only.} \label{fig:specpion} \end{figure} \subsection{Experimental feed-down corrections} \label{ssec:123} \sectionmark{rd} The experimental proton spectra are usually feed-down corrected for $\Lambda^0 \to p^+ + \pi^-$ weak decays. Such corrections are straightforward to include in the \texttt{THERMINATOR\,}\, analysis, and they were also applied in Fig.~\ref{fig:spec}. In Fig.~\ref{fig:specfeed} we present a comparison of the proton spectra with and without these corrections. We observe that the feed-down from weak decays is at the level of $30\%$, which is a significant correction. \begin{figure}[h] \begin{center} \includegraphics[angle=0,width=0.65 \textwidth]{fig-distpt-feed.pdf} \end{center} \caption{Transverse-momentum spectra of protons, with, and without feed-down correction for $\Lambda^0\to p^+ + \pi^-$ weak decays. The statistics of 3000 events was used. The errors are statistical only.} \label{fig:specfeed} \end{figure} \subsection{Ratios of particle yields} \label{ssec:124} \sectionmark{rd} As discussed in Sec.~\ref{sec:9}, the inclusion of resonance decays was crucial for the proper description of the ratios of particle yields, yielding chemical freeze-out temperatures of the order of the critical temperature of the phase transition in QCD. Following some recent studies, which focus on reproducing the shapes of the spectra rather than the yield ratios, the freeze-out temperatures extracted from the data can be as low as 150 MeV (sometimes even lower). One should expect that, in such a case, the quality of the fits of the particle abundances decreases. To see this, in Table~\ref{table:ratios} we present the $K^+/\pi^+$ and $p/\pi^+$ total yield ratios calculated at various freeze-out temperatures $T_{\rm freeze}$.
One observes that, as the freeze-out temperature is reduced, the ratios decrease, which is a consequence of the fact that heavy particles are produced most copiously at high temperatures. The decrease is more significant for protons than for kaons. The results suggest that the fitting of particle spectra should always be accompanied by fits of the particle yields. \begin{table}[t] \begin{center} \begin{small} \begin{tabular}{lccc} \hline \\ [-1ex] $T_{\rm freeze} \,{\rm [MeV]}$ & 130 & 150 & 170 \\ [1ex] \hline \\ $K^+/\pi^+$ & 0.199 & 0.263 & 0.326 \\ [2ex] $p/\pi^+$ & 0.033 & 0.065 & 0.110 \\ \\ \hline \end{tabular} \end{small} \end{center} \caption{\small The $K^+/\pi^+$ and $p/\pi^+$ total yield ratios calculated at various freeze-out temperatures $T_{\rm freeze}\in\{130, 150, 170\}\, {\rm MeV}$. } \label{table:ratios} \end{table} \begin{acknowledgement} The author would like to express his gratitude to the Organizers of the \emph{53rd Karpacz Winter School of Theoretical Physics and THOR COST Action Training School} for their help and hospitality. This work was supported by the THOR COST Action CA15213, the ExtreMe Matter Institute EMMI at the GSI Helmholtzzentrum f\"ur Schwerionenforschung, Darmstadt, Germany, and the Polish National Science Center grants No. DEC-2012/07/D/ST2/02125 and DEC-2016/23/B/ST2/00717. \end{acknowledgement} \bibliographystyle{unsrt} \section{Relativistic heavy-ion collisions} \label{sec:1} \sectionmark{Relativistic heavy-ion collision} The relativistic heavy-ion collision experiments are a perfect tool for studying, in a controlled and reproducible manner, the properties of strongly interacting matter at high energies.
Unlike the collisions of more elementary particles, they provide a unique opportunity to reach the thermodynamic equilibrium needed for investigating the phase diagram and transport properties of the QCD matter. Assuming that (local) thermal equilibrium is achieved promptly, and that the interactions are strong enough to maintain this state throughout the subsequent evolution, the expansion of such a system should, in principle, follow the laws of relativistic fluid dynamics \cite{Yan:2017ivm} \footnote{Note that several recent studies indicate that fluid dynamics may be applicable also in situations where the produced system is locally far off equilibrium \cite{Florkowski:2010cf,Martinez:2010sc,Alqahtani:2017jwl,Strickland:2014pga,Alqahtani:2017mhy}, see also \cite{Florkowski:2013lya,Florkowski:2013lza,Denicol:2014xca,Denicol:2014tha}.}. Given the initial conditions for the fluid dynamical fields and a prescription for hadron emission from the fluid, fluid dynamics provides a straightforward and intuitive way to study the properties of the produced matter, as encoded in its equation of state. Although some important questions still remain, \emph{e.g.}, concerning the formulation of the theory itself \cite{Florkowski:2017olj} (in particular related to the form of the transport coefficients \cite{Jaiswal:2014isa,Florkowski:2015lra,Tinti:2016bav}), the successes of the models employing fluid dynamics concepts have already demonstrated the (almost) perfect fluidity of the quark-gluon plasma and established a sort of hydro-like ``Standard Model'' of heavy-ion collisions. While significant progress in the determination of the initial state of heavy-ion collisions has recently been made, using, \emph{i.a.}, non-equilibrium effective field theory and the gauge/gravity correspondence, hadron production from such a system is still poorly understood.
The vast majority of the approaches use (somewhat heuristic, yet surprisingly successful) prescriptions for the hadronization process dating back to the times of Fermi, Landau, and Hagedorn. In these lectures we will briefly review some of the concepts of particle decoupling and statistical hadronization as applied to heavy-ion collisions, showing, in the end, their remarkable efficacy in describing some of the experimentally observed phenomena. In this work we use natural units where $c=k_B=\hbar =1$. Bold font denotes vectors in the transverse $x-y$ plane. \section{Relativistic perfect fluid dynamics} \label{sec:2a} \sectionmark{Relativistic perfect fluid dynamics} The simplest and, at the same time, the only unambiguously~\footnote{Some sort of ambiguity arises in the case when dissipative effects are present in the system. In such a situation additional assumptions on the evolution of dissipative quantities (\emph{e.g.} the shear stress tensor and the bulk viscous pressure) are required, resulting in additional equations of motion. The latter may differ significantly in various approaches~\cite{Florkowski:2016kjj}.} derivable relativistic fluid dynamical equations are those of \emph{relativistic perfect fluid dynamics}~\cite{Landau:1959,Misner:1974qy,deGroot:1980,rezzolla2013relativistic}. Due to their simplicity they are extensively applied to various systems in physics, including the evolution of strongly interacting matter produced in relativistic heavy-ion collisions. Although the literature on the subject is rather extensive (see, \emph{e.g.}, Refs.~\cite{Florkowski:2017olj,Stoecker:1986ci,Rischke1999,Kolb:2003dz,Huovinen:2006jp,Florkowski:2010zz,Romatschke:2009im,Gale:2013da,Jaiswal:2016hex} and the references therein), we will review herein its basic aspects to set the stage for further discussion.
The equations of relativistic perfect fluid dynamics for the (net) charge-free matter follow solely from the local conservation laws of energy and momentum~\cite{Landau:1959,Misner:1974qy,deGroot:1980,rezzolla2013relativistic}, which in the Minkowski coordinates may be formulated in the following covariant form \footnote{In the case of curvilinear coordinates, even if the space-time is considered to be flat (in the sense of globally vanishing Riemann tensor), one should replace the partial derivative $\partial_\mu=(\partial_t, - \nabla)$ in Eq.~(\ref{emc}) with the \emph{covariant derivative} $d_\mu$.} \begin{equation} \partial_\mu T^{\mu\nu}(x) = 0, \label{emc} \end{equation} where $T^{\mu\nu}(x)$ is the energy-momentum tensor and $x^\mu = (t, \xT, z)$. In addition, for a multicomponent system which possesses $N$ conserved charges $Q_i$ one should supplement Eq.~(\ref{emc}) with $N$ continuity equations for the respective charge currents $N_i^\mu$ \begin{equation} \partial_\mu N_i^{\mu}(x) = 0 \qquad \left(i=1\dots \,N\right). \label{nc} \end{equation} One may consider, \emph{e.g.}, $Q_i=\{B, I_3, S, C\}$, where $B, I_3, S$ and $C$ denote baryon number, third component of the isospin, strangeness and charm, respectively \footnote{Equivalently, instead of the third component of the isospin one may use the electric charge.}. The main assumption defining the perfect fluid is that each fluid element, when considered in its local rest frame (LRF), is exactly in the state of \emph{thermal and chemical equilibrium}. This is expressed by the static equilibrium (isotropic) form of the energy-momentum tensor \begin{equation} T^{\mu\nu}_{\rm LRF} (x) = {\rm diag}\left(\vphantom{\frac{}{}} \e(x), \p(x), \p(x), \p(x) \right), \label{emtLRF} \end{equation} where $\e(x)$ and $\p(x)$ denote the equilibrium energy density and pressure, respectively.
One may also easily convince oneself that in the perfect fluid case the charge currents must all have the following LRF form \begin{equation} N^\mu_{i, {\rm LRF}} (x) = \left(\vphantom{\frac{}{}}\!\!{\cal N}_i(x), 0, 0, 0 \right), \label{ncLRF} \end{equation} where ${\cal N}_i(x)$ represents the density of the charge $Q_i$; otherwise dissipative effects have to occur. In the general (laboratory) frame each fluid element moves with a fluid four-velocity $u^\mu (x) \equiv \gamma \left(1, \textbf{v}_T, v_z\right)$, satisfying the normalization condition $u^\mu u_\mu=1$. The form of the energy-momentum tensor in this frame can be obtained by applying a general (canonical) Lorentz boost transformation to Eq.~(\ref{emtLRF}) \begin{equation} T^{\mu\nu}(x) = \Lambda^\mu_{\,\,\,\alpha}(u^\lambda)\,\Lambda^\nu_{\,\,\,\beta} (u^\lambda)\,T_{\rm LRF}^{\alpha\beta}(x), \label{transform} \end{equation} where the boost matrix $\Lambda^\mu_{\,\,\,\nu}$ is defined as follows % \begin{equation} \Lambda^{\mu}_{ \,\,\,\nu}(u^{\lambda}) \equiv \left( \begin{array}{rrrr} \gamma & -\gamma v_x & -\gamma v_y & -\gamma v_z \\ -\gamma v_x & 1 + (\gamma - 1) \frac{v_x^2}{v^2} & (\gamma - 1) \frac{v_x v_y}{v^2} & (\gamma - 1) \frac{v_x v_z}{v^2} \\ -\gamma v_y & (\gamma - 1) \frac{v_x v_y}{v^2} & 1 + (\gamma - 1) \frac{v_y^2}{v^2} & (\gamma - 1) \frac{v_y v_z}{v^2} \\ -\gamma v_z & (\gamma - 1) \frac{v_x v_z}{v^2} & (\gamma - 1) \frac{v_y v_z}{v^2} & 1 + (\gamma - 1) \frac{v_z^2}{v^2} \end{array} \right).\nonumber \label{genboost} \end{equation} Using covariant notation, the result of Eq.~(\ref{transform}) may be expressed in the following form \begin{eqnarray} T^{\mu \nu} =\e u^\mu u^\nu - \p\Delta^{\mu\nu}, \label{emt} \end{eqnarray} where we introduced the symmetric projection operator onto the space orthogonal to the fluid four-velocity, $\Delta^{\mu\nu} \equiv g^{\mu\nu} - u^\mu u^\nu$, which satisfies the conditions $u_{\mu} \Delta^{\mu\nu} = 0$,
$\Delta^{\mu}_{\,\,\,\alpha}\Delta^{\alpha\nu}=\Delta^{\mu\nu}$ and $\Delta^{\mu}_{\,\,\,\mu}=3$, and $g^{\mu\nu} = {\rm diag} \left(1,-1,-1,-1\right)$ is the metric tensor. Similarly, the Lorentz boost transformation applied to Eq.~(\ref{ncLRF}) leads to the following tensor decomposition of $N_i^\mu$ in the general frame \begin{equation} N_i^{\mu} = {\cal N}_i \,u^\mu, \label{pflux} \end{equation} so that in the LRF, where $u^\mu_{\rm LRF} =\left(1, \textbf{0}, 0\right)$, one has ${\cal N}_i=N_i^\mu u_{\mu, \rm LRF}$. Equation (\ref{emc}) may be rewritten in a somewhat more familiar form using Eq.~(\ref{emt}) and performing projections perpendicular and parallel to the fluid four-velocity \begin{eqnarray} \label{emc1} \Delta^\alpha_{\,\,\,\nu} \partial_\mu T^{\mu \nu} = (\e+\p)D u^\alpha-\nabla^\alpha \p &=&0, \\ \label{emc2} u_\nu \partial_\mu T^{\mu \nu}= D\e + (\e+\p)\theta &=&0, \end{eqnarray} where $D\equiv u^\mu \partial_\mu$ is the co-moving time derivative, $\nabla^\mu \equiv \Delta^{\mu \alpha} \partial_\alpha$ is the spatial gradient, and $\theta\equiv\partial_\mu u^\mu$ is the expansion scalar. Equations (\ref{emc1})-(\ref{emc2}) are relativistic analogs of the Euler and continuity equations, respectively. Similarly, inserting the decomposition (\ref{pflux}) into Eq.~(\ref{nc}) yields \begin{eqnarray} \label{nc2} \partial_\mu N^{\mu}_i = D {\cal N}_i+ {\cal N}_i \theta&=&0 . \end{eqnarray} Equations (\ref{emc1})-(\ref{nc2}) contain altogether $4+N$ independent partial differential equations for the space-time evolution of $5+N$ quantities (three components of the four-velocity, energy density, pressure and $N$ charge densities). In order for the system to be closed one has to introduce a material-specific \emph{equation of state} relating the pressure, the energy density and the charge densities in the system, $\p=\p(\e,{\cal N}_i)$.
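As a sanity check, the decomposition (\ref{emt}) and the projector identities quoted above can be verified numerically. The sketch below uses an arbitrary sample velocity and hypothetical values of $\e$ and $\p$; the boost is written in the $+\gamma v$ sign convention (which maps the LRF time axis onto $u^\mu$; conventions differ between texts):

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])      # metric tensor, signature (+,-,-,-)

v = np.array([0.3, 0.1, 0.5])             # sample fluid three-velocity (arbitrary)
v2 = v @ v
gamma = 1.0 / np.sqrt(1.0 - v2)
u = gamma * np.array([1.0, *v])           # u^mu = gamma*(1, v), so that u.u = 1

# Boost matrix mapping the LRF time axis onto u^mu (+gamma*v convention)
L = np.empty((4, 4))
L[0, 0] = gamma
L[0, 1:] = L[1:, 0] = gamma * v
L[1:, 1:] = np.eye(3) + (gamma - 1.0) * np.outer(v, v) / v2

eps, p = 1.0, 0.3                         # hypothetical energy density and pressure
T_lrf = np.diag([eps, p, p, p])           # Eq. (emtLRF)
Delta = g - np.outer(u, u)                # projector Delta^{mu nu}

# The boosted tensor, Eq. (transform), reproduces the covariant form of Eq. (emt)
T = L @ T_lrf @ L.T
assert np.allclose(T, eps * np.outer(u, u) - p * Delta)

# Projector identities: orthogonality, idempotency, trace
u_low = g @ u
assert np.allclose(u_low @ Delta, 0.0)                   # u_mu Delta^{mu nu} = 0
assert np.allclose(Delta @ g @ Delta, Delta)             # Delta^mu_a Delta^{a nu} = Delta^{mu nu}
assert np.isclose(np.einsum('ab,ab->', g, Delta), 3.0)   # Delta^mu_mu = 3
```
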
Since the system is locally in equilibrium such a relation exists and has to follow from the underlying microscopic theory describing the system. Henceforth, we will assume that the system created in heavy-ion collisions is charge-free, ${\cal N}_i(x) \equiv0$, which is a reasonable assumption for the central rapidity region at ultra-relativistic energies. The energy density and pressure for the charge-free matter in equilibrium may be directly related to the temperature of the system, $\e=\e(T)$, $\p=\p(T)$, see Sec.~\ref{sec:2c}. A number of studies show that the successful description of the experimental data requires the use of a ``cross-over''-type equation of state of strongly interacting matter with the transition from the quark-gluon plasma phase to the hadron gas phase. For the numerical results presented in the remaining part of these lectures we will use the results of Ref.~\cite{Chojnacki:2007jc}. The temperature dependence of the squared speed of sound $c_s^2 (T)=d\p/d\e$ obtained in Ref.~\cite{Chojnacki:2007jc} is shown in Fig.~\ref{fig:eos} \footnote{Note that in the case of dissipative fluid dynamics, due to the existence of transport coefficients, the inclusion of the equation of state is usually more involved \cite{Tinti:2016bav}.}. \begin{figure}[t] \begin{center} \includegraphics[angle=0,width=0.6 \textwidth]{figcs2.pdf} \end{center} \caption{Temperature dependence of the squared speed of sound $c_s^2=d\p/d\e$ for the hadron gas, lattice QCD quark-gluon plasma, and interpolation thereof as found in Ref.~\cite{Chojnacki:2007jc}.} \label{fig:eos} \end{figure} \begin{figure} \begin{center} \includegraphics[angle=0,width=0.67 \textwidth]{milne.pdf} \end{center} \caption{The Milne coordinates shown in the Minkowski space-time.
Directions $x$ and $y$ are suppressed.} \label{fig:milne} \end{figure} Except for a rather limited number of special cases of highly-symmetric flow patterns, such as the Bjorken \cite{Bjorken:1982qr} or Gubser \cite{Gubser:2010ze} flows, Eqs. (\ref{emc1})-(\ref{nc2}) have to be solved numerically. When modeling collisions at relativistic energies, where the system is approximately boost-invariant (with respect to Lorentz boosts along the beam ($z$) direction) in the central rapidity region, the hydrodynamic evolution is preferably performed in \emph{Milne coordinates} $x^\mu = (\tau,\xT,\varsigma)$ instead of Minkowski coordinates $\tilde{x}^\mu = (t,\xT,z)$ \cite{Bjorken:1982qr}. The relation between the two is given by the following coordinate transformation \begin{eqnarray} t&=&\tau \cosh \varsigma,\\ z&=& \tau \sinh \varsigma, \label{coord} \end{eqnarray} with $\tau= \sqrt{t^2-z^2}$ and $\varsigma = \tanh^{-1} \left(z/t\right)$ denoting longitudinal proper time and space-time rapidity, respectively; see Fig.~\ref{fig:milne}. In this coordinate system it is also convenient to parametrize the fluid four-velocity in the following form \begin{equation} u^\mu = \left(u_0 \cosh {\rm y}_u, \textbf{u}_T, u_0 \sinh {\rm y}_u \right), \label{umilne} \end{equation} where ${\rm y}_u$ is the longitudinal rapidity of the fluid, and $u_0 = \sqrt{1+u_T^2}$, with $u_T=\sqrt{u_x^2+u_y^2}$. \section{Fluid dynamics from kinetic theory} \label{sec:2c} \sectionmark{Fluid dynamics from kinetic theory} It is instructive to see the relation between fluid dynamics, as a general classical field theory, and relativistic kinetic theory.
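As a quick numerical check of the kinematics introduced above, the Milne map (\ref{coord}) and the four-velocity parametrization (\ref{umilne}) can be verified in a few lines (a minimal sketch; all numerical values are arbitrary):

```python
import numpy as np

# Milne <-> Minkowski map, Eq. (coord): t = tau*cosh(varsigma), z = tau*sinh(varsigma)
def to_minkowski(tau, varsigma):
    return tau * np.cosh(varsigma), tau * np.sinh(varsigma)

def to_milne(t, z):
    return np.sqrt(t**2 - z**2), np.arctanh(z / t)

t, z = to_minkowski(2.0, 0.7)
tau, vs = to_milne(t, z)           # round trip recovers (tau, varsigma)
assert np.isclose(tau, 2.0) and np.isclose(vs, 0.7)

# Four-velocity parametrization, Eq. (umilne), with arbitrary u_T and y_u
ux, uy, y_u = 0.4, 0.2, 1.1
uT = np.hypot(ux, uy)
u0 = np.sqrt(1.0 + uT**2)
u = np.array([u0 * np.cosh(y_u), ux, uy, u0 * np.sinh(y_u)])

# Normalization u^mu u_mu = 1 in the (+,-,-,-) metric
g = np.diag([1.0, -1.0, -1.0, -1.0])
assert np.isclose(u @ g @ u, 1.0)
```
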
The latter is based on the knowledge of the \emph{single-particle distribution function} $f(x,p)$, which is defined through the number of (on-shell) particles $d N$ in the phase-space volume $d^3 \!x\,\dP$ located at the phase-space point $(x^\mu, p^\mu)$, where $p^\mu=(E_p, \textbf{p}, p_z)$ with $E_p=\sqrt{m^2+\textbf{p}^2+p_z^2}$. The evolution of $f(x,p)$ follows from the standard relativistic Boltzmann equation \begin{equation} p^\alpha \partial_\alpha f=-C[f], \label{BE} \end{equation} where $C[f]$ is the collisional kernel, which, in general, may have a highly complicated form. Note that $C[f]$ vanishes in two very different regimes: free streaming (no interactions) and global equilibrium (strongest possible interactions), in which $f(x,p)$ becomes stationary. Often the collisional kernel is treated in the relaxation-time approximation, \begin{equation} C[f]=p_\mu u^\mu\frac{ f-f_{\rm eq} }{\tau_{\rm eq}}, \label{ccRTA} \end{equation} where $\tau_{\rm eq}$ is the relaxation time, and $f_{\rm eq}$ is the equilibrium distribution. Equations of motion for the soft modes of the system, identified with the fluid dynamical sector of the theory, may be derived by taking the lowest momentum moments \cite{Denicol:2014loa,Strickland:2014pga,Alqahtani:2017mhy}, \begin{equation} \hat{{\cal I}}^{\mu_1\cdots\mu_n}\equiv \int \!dP\, p^{\mu_1}p^{\mu_2}\cdots p^{\mu_n}, \qquad \qquad \int dP \equiv \int \frac{\dP}{E_p}, \end{equation} of the Boltzmann equation (\ref{BE}), which gives \begin{equation} \partial_{\alpha} {\cal I} ^{\alpha \mu_1\cdots\mu_n}= -{\cal C}^{\mu_1\cdots\mu_n}[f], \label{EOM} \end{equation} where we defined \begin{eqnarray} {\cal I} ^{\alpha\mu_1\cdots\mu_n} &\equiv& \hat{{\cal I}} ^{\alpha\mu_1\cdots\mu_n} f , \label{mom1} \\{\cal C}^{\mu_1\cdots\mu_n}[f] &\equiv& \hat{{\cal I}} ^{\mu_1\cdots\mu_n} C[f].
\label{mom2} \end{eqnarray} Explicitly, the first two moments lead to the following set of dynamical equations, \begin{equation} \partial_\mu {\cal I} ^\mu = u_\mu \frac{{\cal I}^\mu_{\rm eq}-{\cal I} ^\mu}{\tau_{\rm eq}}, \label{ncK} \end{equation} \begin{equation} \partial_\mu {\cal I} ^{\mu\nu}=u_\mu \frac{{\cal I}^{\mu\nu}_{\rm eq}-{\cal I} ^{\mu\nu}}{\tau_{\rm eq}}. \label{emcK} \end{equation} One may immediately identify the zeroth and first moments of the distribution function with the particle four-current and the energy-momentum tensor, \begin{eqnarray} N^\mu &\equiv& {\cal I}^{\mu} \label{ident1} \\T^{\mu\nu} &\equiv& {\cal I}^{\mu\nu}. \label{ident2} \end{eqnarray} The conservation of the particle current, $u_\mu ({\cal I}^\mu_{\rm eq}-{\cal I} ^\mu)=0$, and of the energy-momentum tensor, $u_\mu ({\cal I}^{\mu\nu}_{\rm eq}-{\cal I} ^{\mu\nu})=0$, thus leads to Eqs.~(\ref{emc})-(\ref{nc}). Using the decompositions of the particle four-current (\ref{pflux}) and the energy-momentum tensor (\ref{emt}), and the knowledge of the LTE distribution function $f_{\rm eq} = f(p_\mu u^\mu, T, \mu_{\rm i})$, see Eq.~(\ref{eqdistr}) in Sec.~\ref{sec:6}, one may find explicit forms of the thermodynamic variables, $\e=\e(T,\mu_{\rm i})$, $\p=\p(T,\mu_{\rm i})$, and ${\cal N}={\cal N}(T,\mu_{\rm i})$. The latter define the equation of state of the system within kinetic theory. For the conformal charge-free system one gets $\e(T) =3\p(T) = 3 T {\cal N} (T) \sim T^4$ \cite{Florkowski:2010zz}. \section{Event-averaged initial conditions for fluid dynamics} \label{sec:2b} \sectionmark{Initial conditions for fluid dynamics} In general, Eqs.~(\ref{emc1})-(\ref{nc2}) have to be supplemented with proper \emph{initial conditions}, specified on the hypersurface of constant longitudinal proper time $\tau=\tau_{\rm i}$, which usually defines the beginning of the fluid dynamical evolution.
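The conformal relation $\e(T) = 3\p(T) = 3T{\cal N}(T)\sim T^4$ quoted above is easy to reproduce by direct quadrature of the momentum moments, here in the massless Boltzmann limit with unit degeneracy (a sketch only; this is not the lattice-based equation of state used in the simulations):

```python
import numpy as np

# Trapezoidal quadrature helper (kept explicit for clarity)
def trap(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Moments of the massless classical distribution f ~ exp(-E/T), E = |p|
def moments(T, n=100000):
    p = np.linspace(0.0, 60.0 * T, n)            # |p| grid, tail negligible
    f = np.exp(-p / T)                           # Boltzmann limit of f_eq
    w = 4.0 * np.pi * p**2 / (2.0 * np.pi)**3    # d^3p/(2 pi)^3, angles done
    N = trap(w * f, p)                           # particle density
    e = trap(w * p * f, p)                       # energy density
    P = trap(w * p / 3.0 * f, p)                 # pressure, p^2/(3E) = p/3
    return N, e, P

T = 0.3  # hypothetical temperature in GeV
N, e, P = moments(T)
assert np.isclose(e, 3.0 * P, rtol=1e-4)              # conformal: eps = 3 p
assert np.isclose(e, 3.0 * T * N, rtol=1e-4)          # eps = 3 T N(T)
assert np.isclose(moments(2 * T)[1], 16.0 * e, rtol=1e-3)  # eps ~ T^4
```
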
In particular, one has to provide $\e(\tau_{\rm i},\xT,\varsigma)$, $u_x(\tau_{\rm i}, \xT,\varsigma)$, $u_y(\tau_{\rm i}, \xT,\varsigma)$, $y_u(\tau_{\rm i},\xT,\varsigma)$, and ${\cal N}_i(\tau_{\rm i},\xT,\varsigma)$. These should follow from some microscopic models of the initial state created in heavy-ion collisions, though usually they are, to some extent, just fitted to reproduce the data. \begin{figure} [t] \begin{center} \includegraphics[angle=0,width=0.67 \textwidth]{coll.pdf} \end{center} \caption{The geometry of heavy-ion collision in the Milne coordinates.} \label{fig:coll} \end{figure} For the initial energy density profile, we will use the \emph{tilted source} model \cite{Bozek:2010bi} of the initial state, which was applied quite successfully to describe various experimental observables measured at RHIC, including the so-called directed flow $v_1$ component of the Fourier decomposition of the azimuthal particle spectra. The initial energy density profile within this model is proportional to the density of sources $n({\bf x}_T,\varsigma,b)$ \begin{equation} \e(\tau_{\rm i},{\bf x}_T, \varsigma,b) = \e_{\rm i} \, \frac{ n({\bf x}_T,\varsigma,b)}{n(\textbf{0},0,0)} , \label{sig2} \end{equation} where \begin{equation} n({\bf x}_T,\varsigma,b) = G(\varsigma)\left\{(1-\kappa) \Big[ W_A({\bf x}_T,b) F(\varsigma) + W_B({\bf x}_T,b) F(-\varsigma)\Big] + \kappa B({\bf x}_T,b) \right\}. \label{denofsourc} \end{equation} The functions $W_{A(B)}$, and $B$ are the density of wounded nucleons from the nucleus A (B), and the density of binary collisions, respectively, both specified at a certain value of the impact parameter $b=|\textbf{b}|$, see Fig.~\ref{fig:coll}. These quantities are determined entirely from the optical limit of the \emph{Glauber model} \cite{Miller:2007ri}. The admixture of binary collisions is controlled by the parameter $\kappa =0.14$, which is typically fitted to reproduce the centrality dependence of the charged-hadron multiplicity.
Since herein we assume that the system is charge-free, ${\cal N}_i(\tau_{\rm i}, x,y,\varsigma)\equiv 0$, the initial central energy density $\e_{\rm i}$ of the system may be translated to its initial central temperature $T_{\rm i}=T_{\rm i}(\e_{\rm i})$, which is fitted to reproduce the total number of charged particles produced in the experiment. \begin{figure}[t] \begin{center} \includegraphics[angle=0,width=0.7 \textwidth]{ini3d.pdf}\\ \includegraphics[angle=0,width=0.47 \textwidth]{ini-xy.pdf} \includegraphics[angle=0,width=0.47 \textwidth]{ini-xs.pdf} \end{center} \caption{(Top panel) The isothermal surfaces $(T\in\{0.4,0.3,0.2,0.1\} {\rm \,GeV})$ of the tilted source in the Milne coordinates. (Bottom panels) The isothermal contours of the initial temperature profile of fluid dynamic evolution in the Milne coordinates in $x-y$ (left) and $x-\varsigma$ (right) plane.} \label{fig:ini} \end{figure} The functional form of the density profile in rapidity in Eq.~(\ref{denofsourc}) is \begin{equation} G(\varsigma) \equiv \exp \left[ - \frac{(|\varsigma| - \Delta \varsigma)^2}{ 2 \sigma_\varsigma^2 } \, \Theta (|\varsigma| - \Delta \varsigma) \right] \, . \label{eq:rhofunc} \end{equation} The parameters in Eq.~(\ref{eq:rhofunc}) are deduced from fits to the final rapidity spectrum of charged hadrons. For RHIC the fit results in $\Delta\varsigma = 2.3$ and $\sigma_{\varsigma} = 1.6$. The tilt of the source results from the preferred particle emission from the moving participant nucleon into its forward hemisphere, and may be parametrized as follows \cite{Bozek:2010bi} \begin{eqnarray} F(\varsigma) = \left\{ \begin{array}{lcl} 0 & \quad & \varsigma < -y_{\rm N} \, , \\ (\varsigma+y_{\rm N})/(2 y_{\rm N}) & & -y_{\rm N} \leq \varsigma \leq y_{\rm N}\, , \\ 1 & & \varsigma > y_{\rm N}\, , \end{array}\right.
\end{eqnarray} where $y_{\rm N} = \log(2\sqrt{s_{\rm NN}}/(m_{\rm N}))-\varsigma_{\rm shift}$ is the nucleon rapidity shifted by the value $\varsigma_{\rm shift}=2$ (treated as a phenomenological parameter), $\sqrt{s_{\rm NN}}$ is the center-of-mass energy per nucleon pair, and $m_{\rm N}$ is the nucleon mass. The resulting initial temperature profile in the $x-y$ and $x-\varsigma$ planes is shown in the bottom panels of Fig.~\ref{fig:ini}. Finally, the flow in the transverse plane, as usual, is assumed to vanish initially, $u_x(\tau_{\rm i}, \xT,\varsigma) =0$, $u_y(\tau_{\rm i}, \xT,\varsigma)=0$, and the flow in the longitudinal direction is assumed to have the Bjorken-type scaling form $y_u(\tau_{\rm i}, \xT,\varsigma)=\varsigma$. \section{Particle decoupling} \label{sec:3} \sectionmark{Particle decoupling} \begin{figure}[t] \begin{center} \includegraphics[angle=0,width=0.47 \textwidth]{ini-xtau.pdf} \includegraphics[angle=0,width=0.47 \textwidth]{ini-stau.pdf} \end{center} \caption{The isothermal contours of the fluid dynamical evolution in the Milne coordinates.} \label{fig:evo} \end{figure} Relativistic perfect fluid dynamics describes, by definition, an infinitely strongly-coupled system of particles evolving from one local thermal equilibrium state to another. When applied to relativistic heavy-ion collisions one immediately realizes that the latter assumption breaks down as the evolution proceeds. The rapid expansion of the created fireball into the vacuum leads to its cooling and dilution, see Fig.~\ref{fig:evo}. Eventually, the particle scatterings become too rare to prevent the particles from leaving the fluid. As a result, local thermal equilibrium cannot be maintained anymore and the fluid description breaks down.
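Returning for a moment to the initial condition, a minimal sketch of the tilted-source profile, Eqs.~(\ref{sig2})-(\ref{eq:rhofunc}), may look as follows. The Glauber densities $W_{A,B}$ and $B$ are replaced here by hypothetical Gaussian stand-ins, so only the structure of the formula, not its quantitative output, is meaningful:

```python
import numpy as np

kappa, dvs, svs = 0.14, 2.3, 1.6                 # RHIC-like parameters from the text
yN = np.log(2.0 * 200.0 / 0.938) - 2.0           # y_N for sqrt(s_NN) = 200 GeV

def G(vs):                                       # longitudinal profile, Eq. (eq:rhofunc)
    return np.exp(-(np.abs(vs) - dvs)**2 / (2.0 * svs**2) * (np.abs(vs) > dvs))

def F(vs):                                       # tilt function, piecewise linear in vs
    return np.clip((vs + yN) / (2.0 * yN), 0.0, 1.0)

def W(x, y, x0):                                 # placeholder nucleon density (NOT Glauber)
    return np.exp(-((x - x0)**2 + y**2) / 2.0)

def n(x, y, vs, b):                              # density of sources, Eq. (denofsourc)
    WA, WB = W(x, y, -b / 2.0), W(x, y, b / 2.0)
    B = WA * WB                                  # crude stand-in for the binary density
    return G(vs) * ((1.0 - kappa) * (WA * F(vs) + WB * F(-vs)) + kappa * B)

eps_i = 10.0                                     # central energy density, arbitrary units
def eps(x, y, vs, b):                            # Eq. (sig2)
    return eps_i * n(x, y, vs, b) / n(0.0, 0.0, 0.0, 0.0)

assert np.isclose(eps(0.0, 0.0, 0.0, 0.0), eps_i)
# The tilt: at forward varsigma the source leans toward one of the nuclei
assert n(-1.0, 0.0, 2.0, 2.0) > n(1.0, 0.0, 2.0, 2.0)
```
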
This complicated gradual process of particle decoupling from the fluid is often called the \emph{freeze-out} \cite{Stoecker:1986ci,Rischke1999,Kolb:2003dz,Huovinen:2006jp,Florkowski:2010zz,Romatschke:2009im,Gale:2013da,Jaiswal:2016hex}. As the interactions cease and the system becomes rarefied the kinetic theory description in terms of hadronic degrees of freedom and scattering cross sections becomes more adequate. A possible way to describe this process is to compare locally the time scale of the expansion of the fluid $\tau_{\rm exp}$ (which drives the system out of equilibrium), and the time scale characterizing collisions between the particles $\tau_{{\rm coll}}^k$ (which tends to restore it) \cite{Bondorf:1978kz}. In differential form, the decoupling may be formulated as the following inequality \cite{Heinz:2007in} \begin{eqnarray} \tau_{{\rm coll}}^{k} \ge \tau_{\rm exp} , \label{freezeout} \end{eqnarray} where $\tau_{\rm exp}\sim 1/\theta(x)$ (see Sec.~\ref{sec:2a}), and $\tau_{\rm coll}^{k}\sim 1/\sum\limits_l\langle \sigma_{kl} v_{kl} \rangle\, \tilde{{\cal N}}_l(x)$, with $\sigma_{kl}$ denoting the scattering cross section between the particle species $k$ and $l$, $v_{kl}$ being their relative velocity in the center of mass frame, and $\tilde{{\cal N}}_l(x)$ describing the respective particle densities. If the condition (\ref{freezeout}) is satisfied the particle species $k$ starts to decouple from the fluid. A few comments are in order here: \begin{itemize} \item Both the flow velocity $u^\mu(x)$ and the particle densities $\tilde{{\cal N}}_l(x)$ are, in general, space-time dependent quantities, which means that the freeze-out process begins at different space-time points of the fluid. \item In general, the cross sections $\sigma_{kl}$ depend on the particle species, thus some particle species decouple ``before'' others.
In the case of ultra-relativistic heavy-ion collisions the scattering cross section is usually dominated by a single species (pions), whose freeze-out triggers that of the others. \item In perfect fluid dynamics, to which we restrict ourselves, the particle density $\tilde{{\cal N}}_l(x)\sim T^3(x)$. As a result, the condition (\ref{freezeout}) is usually significantly simplified to the condition that the temperature drops below a certain freeze-out temperature \cite{Rischke1999} \begin{eqnarray} T_{\rm freeze} \ge T(x). \label{freezeoutT} \end{eqnarray} \item The total cross section $\sigma_{kl}=\sigma_{kl}^{\rm el}+\sigma_{kl}^{\rm in}$ is always larger than the elastic one $\sigma_{kl}^{\rm el}$, which implies that the particle-number changing processes cease before the momentum changing processes. This results in the distinction between the \emph{chemical freeze-out} (inelastic collisions stop), and the \emph{kinetic/thermal freeze-out} (elastic collisions stop). It means that, typically, the chemical freeze-out takes place at a higher temperature than the thermal one, \begin{eqnarray} T_{\rm chem} \ge T_{\rm therm}. \label{freezeout2} \end{eqnarray} \item Equation (\ref{freezeout}) may, in general, involve an additional unknown parameter of order $1$, which sets the overall scale for the freeze-out processes \cite{Heinz:2007in}. \end{itemize} \section{\emph{Single-freeze-out} scenario} \label{sec:4} The dynamical description of the particle decoupling according to Eq.~(\ref{freezeout}) is quite difficult to realize in practice. Instead, a significant simplification of the freeze-out dynamics, often called the \emph{single-freeze-out} model, is usually adopted \cite{Broniowski:2001we,Broniowski:2001uk,Broniowski:2002nf,Broniowski:2002wp,Bozek:2003qi,Broniowski:2003ax,Kisiel:2006is}. The latter relies on the assumption that the chemical and thermal freeze-out occur simultaneously.
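The reduction of the criterion (\ref{freezeout}) to the temperature condition (\ref{freezeoutT}) can be illustrated with order-of-magnitude numbers; all inputs below are assumed round values, not fitted ones:

```python
import numpy as np

# tau_coll ~ 1/(<sigma v> n(T)) with n(T) ~ T^3 (charge-free hadron gas),
# compared with the expansion time scale tau_exp ~ 1/theta.
hbarc = 0.197                                  # GeV fm, unit conversion
sigma_v = 2.0                                  # <sigma v> in fm^2 (assumed)
a = 0.4                                        # n(T) = a T^3 / hbarc^3 (assumed)

def tau_coll(T):                               # collision time in fm, T in GeV
    return hbarc**3 / (sigma_v * a * T**3)

tau_exp = 10.0                                 # fm, a typical 1/theta

# Decoupling sets in when tau_coll(T) = tau_exp; solve for T_freeze
T_freeze = (hbarc**3 / (sigma_v * a * tau_exp)) ** (1.0 / 3.0)
assert np.isclose(tau_coll(T_freeze), tau_exp)
assert tau_coll(0.5 * T_freeze) > tau_exp      # cooler, more dilute: decoupled
assert tau_coll(2.0 * T_freeze) < tau_exp      # hotter, denser: still coupled
```

With these round numbers $T_{\rm freeze}$ comes out at roughly $100$ MeV, illustrating why an isothermal condition is a sensible proxy for the differential criterion.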
Within this framework one assumes that once the temperature $T(x)$ in the fluid decreases locally below a certain value $T_{\rm freeze}$ all particle species decouple completely from the fluid \footnote{Although the assumption of isothermal freeze-out seems to be crude, it was shown to give a quite reasonable approximation of the differential freeze-out condition, Eq.~(\ref{freezeout}) \cite{Rischke1999,Kolb:2003dz}.}. Mathematically, the condition $T(x)=T_{\rm freeze}$ defines a three-dimensional freeze-out hypersurface $\Sigma$ in the four-dimensional Minkowski space-time ${\cal M}$ (see Sec.~\ref{sec:5}). The thickness of $\Sigma$ is, as an idealization, assumed to be infinitesimal, which means that the freeze-out process takes place instantaneously. Just before crossing $\Sigma$, in the fluid phase, the matter is considered to be in local thermal and chemical equilibrium, so that the phase-space distributions of the microscopic constituents follow the statistical ones (see Sec.~\ref{sec:6}). It is assumed that the freeze-out process itself, due to its instantaneous character, does not affect the phase-space distributions, so that the equilibrium distributions are also shared by the particles emitted from the surface $\Sigma$. Outside the fluid region various approximations, usually based on some sort of transport theory, may apply. In the original single-freeze-out model \cite{Broniowski:2001we,Broniowski:2001uk,Broniowski:2002nf} the particles created on $\Sigma$, termed \emph{primordial}, are assumed to form an \emph{ideal non-interacting gas of hadrons}, which undergoes \emph{free-streaming} to the detectors. However, the (primordial) hadron gas contains the full mass spectrum of hadronic resonances, which subsequently decay into \emph{stable} particles \footnote{The inverse processes, due to low probabilities, are usually neglected in this framework.}. Due to decays, the abundances of stable particles, as well as their spectra, are modified (see Sec.~\ref{sec:9}).
As a result, the temperature $T_{\rm freeze}$ should, in general, be interpreted as a \emph{switching temperature} (from fluid to particle description), rather than the true chemical and kinetic freeze-out temperature. Another possibility is to perform the switching, often called \emph{particlization} \cite{Huovinen:2012is}, at a somewhat higher temperature $T_{\rm part}>T_{\rm freeze}$, when the system is considered to be still strongly interacting, but the transport theory description is more adequate. The particle ensemble, together with the phase-space coordinates of the particles, is then passed to quantum molecular transport models describing various phenomena in the dense hadronic medium. For the sake of simplicity, in this work we focus solely on the \emph{single-freeze-out} scenario. Finally, one should note here that it is commonly assumed that the fluid dynamical description applies to the whole forward light-cone of the system; \emph{a posteriori}, the part of the space-time evolution of the fluid satisfying $T(x)< T_{\rm freeze}$ is neglected and replaced with the transport theory description. This procedure obviously introduces some inconsistency at the boundary $\Sigma$, as outside the fluid regime the system is a non-interacting hadron gas (as opposed to the strongly-interacting fluid). Consequently, the boundary conditions at $\Sigma$ for the fluid evolution are different in the two cases. Possible consequences of these problems will be neglected herein. We only note here that these problems may be largely reduced when large anisotropies are included already at the fluid dynamics stage, which would describe the gradual break-up of the fluid \cite{Florkowski:2010cf,Martinez:2010sc,Alqahtani:2017jwl}. In this way the switching should introduce less uncertainty.
\section{Freeze-out hypersurface extraction} \label{sec:5} \sectionmark{Freeze-out hypersurface extraction} \begin{figure}[t] \begin{center} \includegraphics[angle=0,width=0.6 \textwidth]{fo3D.pdf} \end{center} \caption{Isothermal freeze-out surface in Milne coordinates ($\varsigma=0$).} \label{fig:fo3d} \end{figure} When initialized with proper initial conditions the perfect fluid dynamics determines the evolution of the temperature, flow velocity, and the chemical potentials (if a charge conserving theory is considered) in the entire forward light-cone of the Minkowski space-time ${\cal M}$, whose points satisfy the requirement $\tau \ge \tau_{\rm i}$. Usually, due to the specific shape of the isothermal freeze-out hypersurface $\Sigma$ (see Figs.~\ref{fig:evo} and \ref{fig:fo3d}), it is convenient to parametrize the ambient space-time ${\cal M}$ with three angles, say $\theta,\zeta$ and $\phi$, and the distance $\rho$ from the coordinate system's origin $(\tau=\tau_{\rm i},x=0,y=0,\varsigma=0)$~\footnote{Note from Figs.~\ref{fig:evo} and \ref{fig:fo3d} that $\tau$ is not always a function of the remaining space-time coordinates.}.
The resulting parametrization reads \cite{Bozek:2009ty, Ryblewski:2013jsa} \begin{figure}[t] \begin{center} \includegraphics[angle=0,width=0.49 \textwidth]{Hiperxyt.pdf} \includegraphics[angle=0,width=0.49 \textwidth]{Hiperret.pdf} \end{center} \caption{Space-time parametrization with angles $\theta,\zeta$ and $\phi$, and the distance $\rho$.} \label{fig:parametr} \end{figure} \begin{eqnarray} x^0=t(\rho,\theta,\zeta,\phi) &=& \tau \cosh \varsigma , \\ x^1=x(\rho,\theta,\zeta,\phi) &=& r \sin\theta\cos\phi , \\ x^2=y(\rho,\theta,\zeta,\phi) &=& r \sin\theta \sin\phi ,\\ x^3=z(\rho,\theta,\zeta,\phi) &=& \tau \sinh \varsigma, \label{freezepar} \end{eqnarray} with \begin{eqnarray} \Lambda\,\varsigma &=& \rho \cos\theta , \\ r &=& \rho \cos\zeta, \label{freezepar2} \end{eqnarray} so that $\tau -\tau_{\rm i} = \rho \sin\theta \sin\zeta$, and \begin{eqnarray} 0 \leq \zeta \leq \pi/2, \qquad 0 \leq \phi < 2 \pi, \quad {\rm and} \quad 0 \leq \theta \leq \pi; \label{angleslim} \end{eqnarray} see Fig.~{\ref{fig:parametr}}. The isothermal freeze-out condition defines the hypersurface $\Sigma$ embedded in ${\cal M}$ in the following way \begin{eqnarray} \Sigma(\theta,\zeta,\phi) = \{x \in {\cal M}: T(\rho,\theta,\zeta,\phi) = T_{\rm freeze} \}, \label{constT} \end{eqnarray} so that one has implicitly $\rho=\rho(\theta,\zeta,\phi)$. One should note here that, while at $\Sigma$, by definition, the temperature is fixed, the flow four-velocity is not, $u^{\mu}=u^{\mu}(\theta,\zeta,\phi)$. The infinitesimal element of the hypersurface $\Sigma$ is defined by the covariant four-vector \cite{Misner:1974qy} \begin{equation} \dS_\mu= \varepsilon_{\mu \alpha \beta \gamma} \frac{\partial x^\alpha}{\partial \zeta} \frac{\partial x^\beta}{\partial \phi} \frac{\partial x^\gamma}{\partial \theta } d\zeta d\phi d\theta, \label{d3Sigma} \end{equation} where $\varepsilon_{\mu \alpha \beta \gamma}$ is the totally antisymmetric tensor in four dimensions with $\varepsilon_{0123} = +1$. 
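Equation (\ref{d3Sigma}) translates directly into code. The sketch below builds the totally antisymmetric symbol and evaluates the normal element by central finite differences; the constant-time test surface at the end is only a sanity check, not the freeze-out surface itself:

```python
import numpy as np
from itertools import permutations

# Totally antisymmetric symbol with eps_{0123} = +1
eps = np.zeros((4, 4, 4, 4))
for perm in permutations(range(4)):
    inv = sum(perm[i] > perm[j] for i in range(4) for j in range(i + 1, 4))
    eps[perm] = (-1) ** inv                      # sign = parity of the permutation

def dSigma(x, zeta, phi, theta, h=1e-5):
    """Normal element of Eq. (d3Sigma), per unit d zeta d phi d theta,
    for a map x(zeta, phi, theta) -> R^4, via central finite differences."""
    q = np.array([zeta, phi, theta])
    tangents = [(np.array(x(*(q + d))) - np.array(x(*(q - d)))) / (2.0 * h)
                for d in np.eye(3) * h]          # d x^alpha / d(zeta, phi, theta)
    return np.einsum('mabc,a,b,c->m', eps, *tangents)

# Sanity check: a constant-time slice x = (t0, zeta, phi, theta) must have a
# purely time-like unit normal, dSigma_mu = (1, 0, 0, 0)
flat = lambda z, f, t: (5.0, z, f, t)
assert np.allclose(dSigma(flat, 0.3, 1.0, 2.0), [1.0, 0.0, 0.0, 0.0])
```

In a realistic application `x(zeta, phi, theta)` would be the parametrization above with $\rho(\theta,\zeta,\phi)$ interpolated from the extracted hypersurface.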
\section{Cooper-Frye formalism} \label{sec:6} \sectionmark{Cooper-Frye formalism} The change from fluid elements to hadrons at the switching hypersurface $\Sigma(x)$ is usually performed with the use of kinetic theory concepts. In the transport theory framework the flux of particle species $k$ is expressed with the formula (see Eqs.~(\ref{mom1}) and (\ref{ident1})) \cite{Rischke1999} \begin{equation} N_k^\mu(x) = \int\frac{\dP}{E_p} p^\mu f_k(x,p), \label{partflux} \end{equation} where $p^\mu=(E_p, \textbf{p}, p_z)$ is the four-momentum of the (on-shell) particle with mass $m_k$, and $f_k(x,p)$ is its phase-space distribution function. The number of world-lines of particles of species $k$ crossing the infinitesimal element $\dS$ of the surface $\Sigma$ is then calculated from the expression \begin{equation} N^k_{\Sigma} \equiv \dS_\mu N_k^\mu = \int\frac{\dP}{E_p} \dS_\mu p^\mu f_k(x,p). \label{partnum} \end{equation} The invariant momentum spectrum of particles produced at the surface element $\dS$ is \begin{equation} E_p\frac{dN^k_{\Sigma}}{\dP} = \dS_\mu p^\mu f_k(x, p). \label{momspec} \end{equation} Therefore, the invariant momentum distribution of hadrons emitted on the entire freeze-out hypersurface $\Sigma$ is given by the integral \begin{equation} E_p\frac{dN^k}{\dP} = \int_\Sigma \dS_\mu p^\mu f_k(x, p). \label{cfform} \end{equation} Equation~(\ref{cfform}) is commonly known as the Cooper-Frye formula \cite{Cooper:1974mv}. It is usually assumed that just before decoupling from the fluid, \emph{i.e.}, before the particles cross the switching surface $\Sigma$, they are in local thermal and chemical equilibrium such that their phase-space distributions are described by the equilibrium ones. 
In such a case one assumes that produced hadrons follow either Fermi--Dirac ($a=-1$) or Bose--Einstein ($a=+1$) distributions, \begin{equation} f_k(x, p)= f_k\left(p_\mu u^\mu(x), T(x), \tilde{\mu}_k\right) = \frac{g_k}{(2\pi)^3}\left\{ \exp\left[\frac{p_\mu u^\mu(x) - \tilde{\mu}_k(x) }{T(x)}\right] -a \right\}^{-1}\!\!\!, \label{eqdistr} \end{equation} where the factor $g_k = 2s_k+1$ takes into account the spin degeneracy of hadron species $k$. The chemical potential $\tilde{\mu}_k$ is given by the linear combination of the chemical potentials $\mu_i$ (see Sec.~\ref{sec:2a}) and the respective charges of the hadron species $k$ \begin{equation} \tilde{\mu}_k = \sum \limits_i Q_i^k \mu_i. \label{chempot} \end{equation} The particle's four-momentum, as measured by the experiment, is usually parametrized in the following way \begin{equation} p^{\mu} = \left( m_T \cosh {\rm y}_p, p_T \cos \phi_p, p_T \sin \phi_p, m_T \sinh {\rm y}_p\right), \label{particlemom} \end{equation} where $p_T=\sqrt{p_x^2+p_y^2}$ is the transverse momentum, $m_T=\sqrt{p_T^2+m_k^2}$ is the transverse mass, $y_p= \ln\left[ (E_p+p_z)/(E_p-p_z)\right]/2$ is the longitudinal rapidity and $\phi_p = \tan^{-1} \left(p_y/p_x\right)$ is the momentum azimuthal angle in the plane transverse to the beam axis. With definitions introduced above the integration measure $ \dS_\mu p^\mu $ in Eq.~(\ref{cfform}) takes the form \begin{eqnarray} \dS_\mu p^\mu &=& \frac{\sin\theta \tau \rho^2 }{\Lambda} \Biggl[ \Biggr. \frac{\partial\rho}{\partial\zeta} \cos\zeta \left( p_T \sin\zeta \cos\left(\phi_p - \phi\right)- m_T \cos\zeta \cosh\left({\rm y}_p - \varsigma \right) \right) \nonumber\\ &+& \!\! \cos\zeta \sin\theta \left(\rho \sin\theta - \frac{\partial \rho}{\partial\theta} \cos\theta \right) \nonumber\\ &\times& \left( p_T \cos\zeta \cos\left(\phi_p - \phi\right) + m_T \sin\zeta \cosh\left({\rm y}_p- \varsigma \right) \right) \nonumber\\ &+& \!\! 
\cos\zeta \sin\theta \left(\rho \cos\theta + \frac{\partial \rho}{\partial\theta} \sin\theta \right) \frac{\Lambda}{\tau} m_T \sinh\left({\rm y}_p - \varsigma \right) \nonumber\\ &-& \!\! \frac{\partial \rho}{\partial\phi}\,\, p_T\, \sin(\phi_p - \phi) \Biggl. \Biggr] d \zeta d\phi d\theta \equiv h_k(\zeta,\phi,\theta, p_T, \phi_p, {\rm y}_p)d \zeta d\phi d\theta, \label{intmeas} \end{eqnarray} and the (Lorentz-boosted) energy is \begin{equation} p_\mu u^\mu = u_0\, m_T \cosh ({\rm y}_p-{\rm y}_u) - p_T u_T \cos(\phi_p-\phi_u), \label{pu} \end{equation} where we introduced yet another variable $\phi_u$ such that $u_x = u_T \cos \phi_u$ and $u_y = u_T \sin \phi_u$. One should note that in cases where some symmetries are present in the system the formulas (\ref{intmeas})-(\ref{pu}) may be simplified accordingly \cite{Florkowski:2010zz,Chojnacki:2007rq}. Using expressions (\ref{intmeas})-(\ref{pu}) in the Cooper-Frye formula (\ref{cfform}) one obtains a six-dimensional particle distribution, which can be used directly to generate particles (both stable hadrons and unstable resonances) on $\Sigma$ \begin{eqnarray} \frac{d^6N^k}{p_T d p_T d\phi_p d{\rm y_p} d\zeta d \phi d \theta} &&= \frac{g_k}{(2 \pi)^3} h_k (\zeta,\phi,\theta, p_T, \phi_p, {\rm y}_p) f_k(\zeta,\phi,\theta, p_T, \phi_p, {\rm y}_p) \nonumber \\ &&\equiv {\cal F}_k (\zeta,\phi,\theta, p_T, \phi_p, {\rm y}_p). \label{cfformTH} \end{eqnarray} The Cooper-Frye formula, Eq.~(\ref{cfform}), is nowadays commonly used in fluid dynamical simulations of heavy-ion collisions to describe hadron production on the freeze-out hypersurface. There are, however, well-known limitations of this prescription. An immediate problem with Eq.~(\ref{cfform}) arises if the freeze-out hypersurface contains both time-like and space-like parts.
In particular, if the freeze-out element is time-like, so that the associated normal vector is space-like, for certain directions of the momentum $p^\mu$ the invariant measure $\dS_\mu p^\mu$ may be negative. Thus, the particle number generated in this region becomes ill-defined (the Cooper-Frye formula would in such a case describe the back-flow of particles into the fluid) \cite{Rischke1999}. One may show that at high energies the negative emission is usually negligible \cite{Chojnacki:2011hb}, so that it may be safely removed by introducing the step function $\Theta(\dS_\mu p^\mu)$ on the right-hand side of Eq.~(\ref{cfform}) \cite{Rischke1999}. Another issue connected with Eq.~(\ref{cfform}) is its insensitivity to the fact that particles with large momenta are in general more likely to leave the fluid than the soft ones \cite{Kolb:2003dz}. \section{Hadron abundances} \label{sec:7} \sectionmark{Hadron abundances} For the system in local thermal equilibrium the flux $N_k^\mu$ of particle species $k$ from the fluid cell is proportional to its four-velocity, see Eq.~(\ref{pflux}). In this case, using formula (\ref{partnum}), the total number of particles emitted on the entire hypersurface may be expressed as follows \begin{equation} N_k \equiv \int_\Sigma \dS_\mu N_k^\mu = \int_\Sigma \dS_\mu u^\mu(x) {\cal N}_k\left(T(x), \tilde{\mu}_k(x)\right). \label{avpart} \end{equation} One should stress here that the particle density ${\cal N}_k$ in Eq.~(\ref{avpart}) is expressed solely through the local temperature $T(x)$ and chemical potentials $\tilde{\mu}_k(x)$. It is straightforward to see that, if $T$ and $\tilde{\mu}_k$ are constant along $\Sigma$, the integral of the flow pattern on the freeze-out manifold factorizes in Eq.~(\ref{avpart}), giving the so-called effective comoving volume $V_{\rm eff}\equiv\int_\Sigma \dS_\mu u^\mu(x)$.
When one considers ratios of particle multiplicities of different species, say $a$ and $b$, the factor $V_{\rm eff}$ cancels out completely in the ratios \cite{Heinz:1998st,Cleymans:1998yf}, giving \begin{equation} \frac{N_a}{N_b} = \frac{{\cal N}_a(T, \tilde{\mu}_k)}{{\cal N}_b(T, \tilde{\mu}_k)}. \label{ratios} \end{equation} The arguments presented above gave rise to a wide variety of analyses under the common name of \emph{thermal} or \emph{statistical} models \cite{BraunMunzinger:2001ip,Florkowski:2001fp,Rafelski:2001hp,Baran:2003nm,Turko:2007ri}. They focus mainly on the extraction of the thermodynamic properties of the matter at the chemical freeze-out based on thermal analysis of the multiplicities of the experimentally measured particles and ratios thereof. Using the grand canonical version of the thermal approach, the fits usually yield a chemical freeze-out temperature of the order of the quark-hadron phase-transition temperature obtained from lattice QCD calculations, \begin{equation} T_{\rm chem} \sim T_{\rm c} \sim 170 \,\,\,{\rm MeV} , \label{Tchem} \end{equation} which suggests a possible relation between the hadronization process and chemical equilibration \cite{BraunMunzinger:2003zz} \footnote{Note that the new lattice QCD simulations suggest somewhat lower values of the critical temperature, $T_{\rm c} \sim 155 \,\,\,{\rm MeV}$ \cite{Borsanyi:2010cj}.}. Equation (\ref{ratios}) requires a few important remarks: (i) the integrals quoted above are performed over the full momentum space, which means that a reliable analysis would require $4\pi$ acceptance for identified particles; (ii) at low energies and forward rapidities the freeze-out conditions are usually quite different from the baryon-free midrapidity region, suggesting that thermal analyses based on Eq.~(\ref{ratios}) yield, in this case, only approximate (averaged over the entire hypersurface) values of the thermal parameters at freeze-out.
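As an illustration of Eq.~(\ref{ratios}), a thermal ratio can be evaluated in the Boltzmann limit, where the equilibrium density of species $k$ is $n_k = \frac{g_k}{2\pi^2}\, m_k^2\, T\, K_2(m_k/T)\, e^{\tilde{\mu}_k/T}$ (natural units, $\hbar=c=k_B=1$). The sketch below is an assumption-laden toy calculation (Boltzmann statistics, no resonance feed-down, unit degeneracies, vanishing chemical potentials), not the statistical-model fit machinery; $K_2$ is evaluated from its integral representation with a simple trapezoidal rule.

```python
import math

def bessel_k2(x, tmax=20.0, n=4000):
    # K_2(x) = integral_0^inf cosh(2t) exp(-x cosh t) dt (integral representation)
    h = tmax / n
    s = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0   # trapezoidal weights
        s += w * math.cosh(2.0 * t) * math.exp(-x * math.cosh(t))
    return s * h

def boltzmann_density(g, m, T, mu=0.0):
    # n = g/(2 pi^2) m^2 T K_2(m/T) exp(mu/T)  (Boltzmann approximation)
    return g / (2.0 * math.pi**2) * m**2 * T * bessel_k2(m / T) * math.exp(mu / T)

# toy K+/pi+ ratio at T = 170 MeV (masses in MeV, g = 1 for both species)
ratio = boltzmann_density(1, 494.0, 170.0) / boltzmann_density(1, 139.6, 170.0)
```

With realistic spin/isospin degeneracies, chemical potentials, and resonance feed-down this is exactly the kind of quantity that enters the thermal fits quoted above.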
Nevertheless, keeping in mind its simplicity, the precision of the thermal approach is quite remarkable. \section{Decays of resonances} \label{sec:9} \sectionmark{Decays of resonances} When discussing the thermal/statistical approach we neglected an important aspect of the modeling connected with the role of resonances. It is usually assumed that the particles created at freeze-out form an ideal non-interacting gas of hadrons, which includes, in principle, the entire mass spectrum of unstable resonances. In reality, the latter subsequently decay, populating the spectrum of stable hadrons, which is then observed in the detector. Although the Boltzmann factor tends to suppress heavy states, one should keep in mind that, according to the Hagedorn hypothesis \cite{Hagedorn:1965st}, their mass spectrum increases exponentially. While at low temperatures the role of the resonance feed-down is diminished, at temperatures of the order of $T_{\rm chem}$ it is quite significant. In fact, the successful description of the available data on the hadronic abundances within the statistical approach, as quoted in Sec.~\ref{sec:7}, was possible largely due to the inclusion of the full mass spectrum of the hadronic resonances \cite{Amsler:2008zzb} \footnote{In practice, all resonances from the Particle Data Group tables \cite{Amsler:2008zzb}, whose properties are known well enough, were included.}. As was shown within the so-called Cracow model \cite{Broniowski:2001uk} (see next Section), which includes hydrodynamic-like expansion of the system, the resonance feed-down turns out to be equally important for the description of the momentum spectra of stable hadrons as the flow itself. This is mainly because decays of heavy resonances populate mostly the soft region of the stable-hadron transverse-momentum spectra, leading to their steeper slopes.
Effectively, the observed inverse slope parameter, usually interpreted as the thermal freeze-out temperature, is much smaller than the chemical freeze-out temperature inferred from the analysis of the hadronic ratios, \begin{equation} T_{\rm therm} \sim 130 \,\,\,{\rm MeV}; \label{Ttherm} \end{equation} compare Eq.~(\ref{Tchem}). This observation gave further support to the single-freeze-out model, and explained the apparent mismatch between the two freeze-out temperatures. \section{Hydro-inspired parameterizations of freeze-out} \label{sec:10} \sectionmark{Hydro-inspired parameterizations} According to Eq.~(\ref{eqdistr}) the spectrum of particles produced in a single fluid cell is thermal. However, even if the thermal parameters $T$ and $\tilde{\mu}$ are constant along $\Sigma$, the total momentum spectrum, as calculated with Eq.~(\ref{cfform}), includes contributions from different fluid cells, each boosted with a different velocity $u^\mu(x)$. Therefore, the resulting total spectrum is modified due to the combination of \emph{redshift} and \emph{blueshift} effects \cite{Broniowski:2001we}. These effects are observed in the experiment in the form of the characteristic concave shape of the transverse-momentum spectrum. In view of these arguments, a realistic description of the momentum spectra of stable hadrons must include the effects of some kind of collective evolution, reflected in the finite flow of the matter at freeze-out. The most natural way to include flow at the freeze-out is to perform full fluid dynamical simulations along the lines presented in Sec.~\ref{sec:2a}. Unfortunately, the numerical solution of the fluid dynamical equations of motion is rather complicated and computationally intensive. Moreover, the fluid evolution requires at least \emph{some} knowledge of the initial conditions, which are rather poorly known and thus always bias the final results.
In order to avoid such problems, one may follow a different strategy, which results in the so-called \emph{hydro-inspired} models. Within these models the conditions at the freeze-out hypersurface $\left(T(x), u^\mu(x), \mu_i(x)\right)$, as well as the shape of the freeze-out surface $\Sigma(x)$, are simply assumed, or \emph{inspired} by the full numerical fluid dynamical simulations. Among these models the most successful ones are the Blast-wave \cite{Kisiel:2006is,Schnedermann:1993ws} and Cracow \cite{Broniowski:2001we} models. In particular, within the Cracow model \cite{Broniowski:2001we} (as well as in the Blast-wave model) it is assumed that the freeze-out takes place on a hypersurface that is boost-invariant and cylindrically symmetric (in the transverse $x-y$ plane). The hypersurface $\Sigma$ is defined by the requirement that the particles freeze out at the surface of constant proper time (${\tilde{\tau}}$) \begin{equation} {\tilde{\tau}}^2 = x^\mu x_\mu = t^2 - x^2 - y^2 - z^2 = \tau^2 -r^2 = {\tilde{\tau}}_{\rm freeze}^2 = \hbox{const.} \label{hubtau} \end{equation} According to Eq.~(\ref{hubtau}) the particles decouple starting from the center of the \emph{fire-cylinder} towards its edge, so that $0\le r \le r_{\rm max}$. At the same time the fluid four-velocity is assumed to have the Hubble-like form \cite{Chojnacki:2004ec} \begin{equation} u^\mu = \gamma (1, {\bf v}_T, v_z) = {x^\mu \over \tilde{\tau}_{\rm freeze}}, \label{hubumu} \end{equation} which leads to \begin{eqnarray} p_\mu u^\mu &=& \frac{\sqrt{\tilde{\tau}_{\rm freeze}^2 + r^2}}{\tilde{\tau}_{\rm freeze}}\,\, m_T \cosh(y_p-\varsigma) - p_T \frac{r}{\tilde{\tau}_{\rm freeze}} \cos(\phi_p-\phi), \label{Cracowpu} \end{eqnarray} compare Eq.~(\ref{pu}). Conditions (\ref{hubtau}) and (\ref{hubumu}) imply that the freeze-out hypersurface element is proportional to the four-velocity \begin{eqnarray} \dS_\mu &=& u_\mu \tilde{\tau}_{\rm freeze}\, r dr\, d\varsigma\, d\phi.
\label{CracowdS} \end{eqnarray} Within the Cracow model, and with the inclusion of all known hadronic resonances, it was possible to describe both the particle ratios and the spectra of particles \cite{Broniowski:2001uk} within a single framework. Due to the fast increase of computer power during the last decade, full numerical solutions of the fluid dynamical evolution equations, including dissipative effects, became easily achievable. As a result the importance of hydro-inspired models was largely reduced, leaving them mainly as useful tools for estimating the flow and thermal properties at the freeze-out in analyses of the experimental data; although see \cite{Begun:2013nga}. \section{Monte-Carlo statistical hadronization} \label{sec:11} \sectionmark{Statistical hadronization with \texttt{THERMINATOR\,} } Careful analysis of the experimental data on flow and correlations measured at RHIC and the LHC energies has shown that a realistic description of the dynamics of heavy-ion collisions requires experiment-like, \emph{event-by-event} simulations of such reactions. Due to event-by-event initial state fluctuations (of various kinds), such modeling usually involves running the fluid dynamical evolution separately for each event. As a result, in each event the flow patterns at $\Sigma$, as well as the shape of $\Sigma$, do not exhibit any symmetries. Hence, in general, the calculation of the particle spectra using the Cooper-Frye formula, Eq.~(\ref{cfform}), has to be done entirely numerically. Moreover, precision studies usually require various experimental cuts and feed-down corrections to be applied to reproduce the data correctly.
Thus, it is crucial to have access to the full information on the phase-space properties of the produced hadrons \footnote{The access to the entire phase-space properties of the created particle ensemble is of great importance (note that the experimental analysis may access only the four-momentum properties of the particles). As a result, one may, for instance, relate the experimentally calculated HBT radii of the system to its \emph{actual} space-time size in simulations.}. In view of these arguments, the development of Monte-Carlo generators for the simulation of physical events became necessary. One of the first numerical open-source codes devoted to this task was \texttt{THERMINATOR\,} (THERMal heavy-IoN generATOR) \cite{Kisiel:2005hn} (for its new, extended version -- \texttt{THERMINATOR2\,} -- see \cite{Chojnacki:2011hb} \footnote{The \texttt{THERMINATOR2\,} code \cite{Chojnacki:2011hb} was also supplemented with another (separate) code, \texttt{FEMTO-\texttt{THERMINATOR\,}}, which is provided to carry out the analysis of the pion--pion femtoscopic correlations.}). The main functionality of \texttt{THERMINATOR\,} is to perform hadronization in relativistic heavy-ion collisions using the concepts of the statistical approach and the single-freeze-out model. In its latest version \cite{Chojnacki:2011hb}, the code performs event-by-event generation of an ensemble of particles (an equivalent of the physical event) using the Cooper-Frye formula~(\ref{cfform}), given any freeze-out conditions (shape of the hypersurface, flow, and thermodynamic parameters), typically produced by fluid dynamical models. Within the code one may choose either one of the predefined hydro-inspired freeze-out parameterizations (such as Blast-wave or Cracow; see Sec.~\ref{sec:10}) or an output from any realistic fluid dynamical simulation.
Since nowadays fluid dynamical modeling has become the common standard in the field of heavy-ion collisions (at least with event-averaged initial conditions), in what follows we focus mainly on the latter case. When the fluid dynamics simulations are used directly, the freeze-out hypersurface first has to be extracted from the fluid evolution, which is outside the scope of the \texttt{THERMINATOR\,} code. In particular, in the case of perfect fluid dynamics, one has to supply the distance $\rho(\zeta,\phi,\theta)$, the components of the flow velocity $u_x(\zeta,\phi,\theta)$, $u_y(\zeta,\phi,\theta)$ and $y_u(\zeta,\phi,\theta)$, the temperature $T(\zeta,\phi,\theta)$, and the chemical potentials $\mu_i(\zeta,\phi,\theta)$ (although the values of the chemical potentials are usually neglected in the hydrodynamic stage) \footnote{In the case of isothermal freeze-out the temperature $T(\zeta,\phi,\theta)=T_{\rm freeze}$ is constant. The chemical potentials $\mu_i(\zeta,\phi,\theta)$ are usually included at the freeze-out, based on the results of the thermal model fits.}. For the perfect fluid case, it is assumed that on the hypersurface $\Sigma(x)$ the system is in local thermal and chemical equilibrium, which means that the phase-space distributions of the particles have the forms given by Eq.~(\ref{eqdistr}) \footnote{In the case of using output from dissipative fluid dynamics one should correct the distribution function (\ref{eqdistr}) for the non-equilibrium effects, as well as supply the \texttt{THERMINATOR2\,} code with the respective dissipative quantities at $\Sigma$.}. The \texttt{THERMINATOR\,} code is written in the object-oriented C++ programming language and conforms to CERN's \texttt{ROOT} framework standards \cite{Brun:1997pa}. Once the proper input is provided, the generation of hadrons (stable ones and resonances) is done using a straightforward Monte Carlo method according to the Cooper-Frye formula (\ref{cfformTH}).
The detailed description of the \texttt{THERMINATOR\,} theoretical background, code structure, and functionalities, as well as a short introduction to its usage, may be found in the original papers \cite{Kisiel:2005hn,Chojnacki:2011hb} and on the project's website \cite{THERMINATORweb}. Herein we will just briefly review its main aspects. Each run of the code proceeds in two stages. The first (preliminary) stage, which is performed once per set of parameters, for each particle species, and whose results are recorded and used for all subsequently generated events, involves: \begin{itemize} \item Calculation of the global maximum ${\cal F}^k_{\rm max}$ of the right-hand side of Eq.~(\ref{cfformTH}). \item Calculation of the average multiplicity $\bar{N}_k$ by integrating Eq.~(\ref{cfformTH}) over the entire phase-space (\emph{i.e.} over $\zeta,\phi,\theta, p_T, \phi_p$ and ${\rm y}_p$). \end{itemize} The second (main) stage consists of the generation of the (primordial) particle ensemble and the decays of unstable resonances, and proceeds on an event-by-event basis. In each event: \begin{itemize} \item It is assumed that the generated ensemble of particles corresponds to the grand canonical ensemble. Hence, the number $N_k$ of particles of species $k$ in the event is generated randomly according to the Poisson distribution \begin{equation} P(N_k)=\frac{ \left(\bar{N}_k\right)^{N_k}}{N_k!}\exp(-\bar{N}_k). \label{Poisson} \end{equation} For each particle species $k$, the $N_k$ particles are generated according to the von Neumann acceptance/rejection procedure: a space-time point $(\zeta,\phi,\theta)$ on $\Sigma$, the momentum components of the particle ($p_T, \phi_p$ and ${\rm y}_p$), and a test variable ${\cal F}^k_{\rm test}$ in the range $[0, {\cal F}^k_{\rm max}]$ are generated randomly. The particle is accepted if ${\cal F}^k_{\rm test}<{\cal F}^k(\zeta,\phi,\theta,p_T, \phi_p, {\rm y}_p)$; otherwise it is rejected.
The generation of particles goes over all species (stable ones and resonances) which are listed in the Particle Data Group tables \cite{Amsler:2008zzb} and whose properties are known well enough. For that purpose the \texttt{SHARE\,} particle database is used \cite{Torrieri:2004zz}. \item Once the ensemble of primordial particles is generated, the code performs the decays of unstable resonances, which, in general, may proceed in cascades. Each resonance evolves along the classical trajectory, starting from its initial position $x^\mu_{\rm origin}$, according to its momentum, \begin{equation} x^\mu_{\rm decay} = x^\mu_{\rm origin} + {p^\mu \over m_k} \Delta \tau, \label{decpt} \end{equation} and decays after its lifetime $\Delta\tau$, which is randomly generated with the probability density $\Gamma_k \exp(-\Gamma_k \Delta\tau)$, where $\Gamma_k$ is the width of the particle of species $k$. The particular decay channel is selected randomly with the probability corresponding to its branching ratio. Sub-threshold decays are not allowed. Two-particle and three-particle decays follow simple kinematic formulas \cite{Kisiel:2005hn} and are treated on an equal footing. All required data on the decays are taken from the \texttt{SHARE\,}~particle decays database \cite{Torrieri:2004zz}. \item Once all particles in the event have decayed, the calculation is completed. \end{itemize} Exemplary space-time emission points, obtained with the tilted initial source and \texttt{THERMINATOR\,} simulations, are presented in Fig.~\ref{fig:emiss}. In the next Section, based on the data on the emitted particles, we will calculate some physical observables.
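The event-generation steps above (Poisson-distributed multiplicity, von Neumann acceptance/rejection, free streaming followed by decay) can be sketched compactly. This is a schematic illustration of the algorithm, not the actual \texttt{THERMINATOR\,} implementation; the function names, the uniform proposal over the sampling ranges, and the flat test distribution are assumptions.

```python
import math
import random

def sample_poisson(mean, rng):
    # Knuth's multiplicative method for a Poisson-distributed multiplicity
    L = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= L:
            return k - 1

def generate_event(F, F_max, N_mean, ranges, rng):
    """Von Neumann sampling of the six-dimensional distribution F of
    Eq. (cfformTH). ranges: (lo, hi) pairs for (zeta, phi, theta,
    pT, phi_p, y_p); F_max is the precomputed global maximum of F."""
    N = sample_poisson(N_mean, rng)
    particles = []
    while len(particles) < N:
        x = tuple(lo + (hi - lo) * rng.random() for lo, hi in ranges)
        if rng.random() * F_max < F(*x):   # accept/reject test
            particles.append(x)
    return particles

def decay_vertex(x_origin, p, m, Gamma, rng):
    """Free streaming followed by decay, Eq. (decpt): the lifetime dtau is
    drawn from Gamma*exp(-Gamma*dtau), then x_decay = x_origin + (p/m)*dtau."""
    dtau = -math.log(1.0 - rng.random()) / Gamma   # inverse-CDF sampling
    return [xi + (pi / m) * dtau for xi, pi in zip(x_origin, p)]
```

In the real code the first stage additionally precomputes $\bar{N}_k$ and ${\cal F}^k_{\rm max}$ per species, and the decay step iterates until no unstable particles remain.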
\begin{figure}[t] \begin{center} \includegraphics[angle=0,width=0.49 \textwidth]{fig-hsxytp.pdf} \includegraphics[angle=0,width=0.49 \textwidth]{fig-hsxytd.pdf} \end{center} \caption{Emission points of primordial $\pi^{+}$ (left panel), and $\pi^{+}$ from decays (right panel) in the $x-y-\tau$ plane in Au-Au collisions at $\sqrt{s_{\rm NN}}= 200~{\rm GeV}$ and the impact parameter of $7.16$ fm. } \label{fig:emiss} \end{figure} \section{Performing analysis with \texttt{THERMINATOR2\,}} \label{sec:12} \sectionmark{} Due to the detailed record of the properties of all produced particles, including their space-time and momentum coordinates $(x^\mu, p^\mu)$ and their decay chains, the \texttt{THERMINATOR\,}~code is a versatile tool, allowing for the calculation of various observables. Having completed the generation of events, various experimental observables may be calculated either by using the figure macros supplied with the code or by preparing user macros. In this Section we will present some of its capabilities. \begin{figure}[t] \begin{center} \includegraphics[angle=0,width=0.65 \textwidth]{fig-distpt.pdf} \includegraphics[angle=0,width=0.65 \textwidth]{fig-distpt-exotic.pdf} \end{center} \caption{Transverse-momentum spectra of $\pi^+$, $K^+$, and protons (top panel) and $\rho$ mesons, $K^*_0$ mesons, $\phi$ mesons, $\Lambda^0$ baryons, and $\Omega^-$ baryons (bottom panel) for Au-Au collisions at $\sqrt{s_{\rm NN}}= 200~{\rm GeV}$ and the impact parameter of $7.16$ fm (protons from the weak decays of $\Lambda$'s are excluded). The statistics of 3000 events was used. The errors are statistical only. } \label{fig:spec} \end{figure} \subsection{Single-particle spectra} \label{ssec:121} \sectionmark{hs} One of the most straightforward observables to calculate is the single-particle spectrum of a given particle species $k$. The most copiously produced particles are the lightest mesons (pions and kaons) and baryons (protons). They form more than $90\%$ of all charged particles.
In Fig.~\ref{fig:spec} we present the transverse-momentum spectra of $\pi^+$, $K^+$, and protons (top panel), as well as $\rho$ mesons, $K^*_0$ mesons, $\phi$ mesons, $\Lambda^0$ baryons, and $\Omega^-$ baryons (bottom panel) for Au-Au collisions at $\sqrt{s_{\rm NN}}= 200~{\rm GeV}$ and the impact parameter of $7.16$ fm. All results were obtained using the fluid dynamical input obtained with event-averaged ``tilted'' initial conditions and the freeze-out temperature $T_{\rm freeze} = 150\,{\rm MeV}$. The chemical potentials $\mu_B =28.5\,{\rm MeV}$, $\mu_{I_3} =-0.9\,{\rm MeV}$, and $\mu_S =6.9\,{\rm MeV}$ were included only at the freeze-out. The presentation of results is limited to the ``soft'' transverse momenta ($p_T<3\,{\rm GeV}$), where the fluid dynamical models are expected to be applicable, and to the midrapidity region, $|y_p|<1$. One observes that the slopes of the spectra are species-dependent, which is mainly due to their different masses. At intermediate momenta the spectra have exponential shapes, which is characteristic of thermal systems. At low $p_T$ various effects play a role, see the next section. In Fig.~\ref{fig:specpseu} we present the respective $p_T$-integrated pseudorapidity distribution of charged particles, where $\eta = \ln\left[ (p + p_{z})/(p - p_{z})\right]/2$. The latter is compared to the contributions from $\pi^+$, $K^+$, and protons. One observes that, while the central rapidity region $y_p\approx \eta$ is approximately boost-invariant, the forward/backward rapidity regions are not. This is a result of using full four-dimensional fluid dynamical simulations of the emitting source.
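The pseudorapidity definition quoted above translates directly into a small helper (a minimal sketch; momentum components in any consistent units, valid whenever $|p_z|<|{\bf p}|$):

```python
import math

def pseudorapidity(px, py, pz):
    # eta = 0.5 * ln[(|p| + p_z)/(|p| - p_z)], i.e. artanh(p_z/|p|)
    p = math.sqrt(px * px + py * py + pz * pz)
    return 0.5 * math.log((p + pz) / (p - pz))
```

At midrapidity the function vanishes for purely transverse momenta, and it is odd under $p_z \to -p_z$, which is why the distributions in Fig.~\ref{fig:specpseu} are symmetric about $\eta=0$ for a symmetric collision system.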
\begin{figure}[t] \begin{center} \includegraphics[angle=0,width=0.65 \textwidth]{fig-disteta.pdf} \end{center} \caption{Pseudorapidity spectra of $\pi^{+}$, $K^{+}$, and protons shown separately, and summed together with their antiparticles, as well as of all charged particles, for Au-Au collisions at $\sqrt{s_{\rm NN}}= 200~{\rm GeV}$ and the impact parameter of $7.16$ fm. The statistics of 3000 events was used. The errors are statistical only.} \label{fig:specpseu} \end{figure} \subsection{Impact of resonance decays} \label{ssec:122} \sectionmark{rd} One of the effects which significantly affect the shapes of the single-particle spectra is the decay of resonances. Due to the available phase space, the decay products populate mainly the low-$p_T$ region of the spectra. In Fig.~\ref{fig:specpion} we present the ``anatomy'' of the transverse-momentum spectrum of $\pi^+$. One observes that the low-$p_T$ part of the spectrum of primordial pions (produced directly at the freeze-out hypersurface) and the total spectrum (including the contribution from all resonance decays) differ significantly. The primordial pions develop a characteristic knee in the soft region. The decays of heavy resonances fill up the spectrum in this region, see the contribution from $\omega$ decays in Fig.~\ref{fig:specpion}. As a result, the total pion spectrum takes on its characteristic concave shape. Moreover, the effective slope of the final spectrum becomes steeper, which manifests itself in a lower effective temperature of the spectrum, see Sec.~\ref{sec:9}. \begin{figure}[t] \begin{center} \includegraphics[angle=0,width=0.65 \textwidth]{fig-distpt-pion.pdf} \end{center} \caption{The anatomy of the transverse-momentum spectrum of $\pi^+$ for Au-Au collisions at $\sqrt{s_{\rm NN}}= 200~{\rm GeV}$, and the impact parameter of $7.16$ fm. The spectrum of primordial pions, as well as the contributions from $\rho^0$, $\rho^+$ and $\omega$ resonance decays, is presented. The statistics of 3000 events was used.
The errors are statistical only.} \label{fig:specpion} \end{figure} \subsection{Experimental feed-down corrections} \label{ssec:123} \sectionmark{rd} The experimental proton spectra are usually feed-down corrected for the $\Lambda^0 \to p^+ + \pi^-$ weak decays. Such corrections are straightforward to include in the \texttt{THERMINATOR\,}\, analysis, and they were also applied in Fig.~\ref{fig:spec}. In Fig.~\ref{fig:specfeed} we present the comparison of the proton spectra with and without these corrections. We observe that the feed-down from weak decays is at the level of $30\%$, which is a significant correction. \begin{figure}[h] \begin{center} \includegraphics[angle=0,width=0.65 \textwidth]{fig-distpt-feed.pdf} \end{center} \caption{Transverse-momentum spectra of protons, with and without the feed-down correction for $\Lambda^0\to p^+ + \pi^-$ weak decays. The statistics of 3000 events was used. The errors are statistical only.} \label{fig:specfeed} \end{figure} \subsection{Ratios of particle yields} \label{ssec:124} \sectionmark{rd} As discussed in Sec.~\ref{sec:9}, the inclusion of resonance decays was crucial for the proper description of the ratios of particle yields, giving chemical freeze-out temperatures of the order of the critical temperature of the phase transition in QCD. Following some recent studies, which focus on reproducing the shapes of the spectra rather than the yield ratios, the freeze-out temperatures extracted from the data can be as low as 150 MeV (sometimes even lower). One should expect, in such a case, a decrease in the quality of the fits of the particle abundances. To see this, in Table~\ref{table:ratios} we present the $K^+/\pi^+$ and $p/\pi^+$ total yield ratios calculated at various freeze-out temperatures $T_{\rm freeze}$.
One observes that the ratios decrease with decreasing freeze-out temperature, which is a consequence of the fact that heavy particles are more copiously produced at higher temperatures. The decrease is more significant for protons than for kaons. The results suggest that the fitting of particle spectra should always be accompanied by fits of the particle yields. \begin{table}[t] \begin{center} \begin{small} \begin{tabular}{lcccc@{\hskip 1cm}ccc} \hline \\ [-1ex] $T_{\rm freeze} \,{\rm [MeV]}$ & 130 & \multicolumn{3}{c}{150} & \multicolumn{3}{c}{170} \\ [1ex] \hline \\ $K^+/\pi^+$ & 0.199 & \multicolumn{3}{c}{0.263} & \multicolumn{3}{c}{0.326} \\ [2ex] $p/\pi^+$ & 0.033 & \multicolumn{3}{c}{0.065} & \multicolumn{3}{c}{0.110} \\ \\ \hline \end{tabular} \end{small} \end{center} \caption{\small The $K^+/\pi^+$ and $p/\pi^+$ total yield ratios calculated at various freeze-out temperatures $T_{\rm freeze}\in\{130, 150, 170\}\, {\rm MeV}$. } \label{table:ratios} \end{table} \begin{acknowledgement} The author would like to express his gratitude to the Organizers of the \emph{53rd Karpacz Winter School of Theoretical Physics and THOR COST Action Training School} for their help and hospitality. This work was supported by the THOR COST Action CA15213, ExtreMe Matter Institute EMMI at the GSI Helmholtzzentrum f\"ur Schwerionenforschung, Darmstadt, Germany, and the Polish National Science Center grants No. DEC-2012/07/D/ST2/02125 and DEC-2016/23/B/ST2/00717. \end{acknowledgement} \bibliographystyle{unsrt}
\section{Introduction}\label{sec:1} Black holes in more than four spacetime dimensions have been one of the most interesting subjects of general relativity in the last two decades. For example, the statistical counting of black-hole entropy was performed in string theory~\cite{Strominger:1996sh}, which needs higher-dimensional gravity. Also, the AdS/CFT correspondence~\cite{Aharony:1999ti} relates the dynamics of higher-dimensional black holes to those of a quantum field theory in one lower dimension. Moreover, the production of higher-dimensional black holes at an accelerator was predicted in the scenario of large extra dimensions~\cite{Argyres:1998qn}. In spite of many efforts by physicists, our understanding of higher-dimensional black holes cannot be said to be sufficient, even in five-dimensional pure gravity. \medskip The topology theorems for stationary black holes in five dimensions~\cite{Galloway:2005mf,Cai:2001su,Hollands:2007aj,Hollands:2010qy} state that the allowed topology of the spatial cross section of the event horizon is either a sphere $S^3$, a ring $S^1\times S^2$, or a lens space $L(p,q)$, if the spacetime is asymptotically flat and admits biaxisymmetry. For both the sphere and the ring, the corresponding exact solutions have been found as stationary solutions to the five-dimensional vacuum Einstein equations~\cite{Tangherlini:1963bw,Myers:1986un,Emparan:2001wn,Pomeransky:2006bd}, whereas for the lens spaces, they have not yet been found, although some authors have tried to find them as regular vacuum solutions~\cite{Evslin:2008gx,Chen:2008fa}. Recently, however, for the class of supersymmetric solutions in the bosonic sector of five-dimensional minimal ungauged supergravity, the first black lens solution, whose horizon topology is $L(2,1)=S^3/{\mathbb Z}_2$, has been constructed by Kunduri and Lucietti~\cite{Kunduri:2014kja} with the help of the well-known construction developed by Gauntlett {\it et al.}~\cite{Gauntlett:2002nw}.
Moreover, this solution has been extended to those that admit horizons with the more general lens space topologies $L(n,1)=S^3/{\mathbb Z}_n$~\cite{Tomizawa:2016kjh} and multiple horizons~\cite{Tomizawa:2017suc}. \medskip The next step that one should consider may be whether a black lens exists in a de Sitter space and an anti-de Sitter space. To find such a solution may seem to be a considerably difficult problem, since not only do we lack solution-generating methods for the Einstein equations with a cosmological constant, but we do not even know a (regular) vacuum black lens solution. However, if we start with an extremal black hole solution to the Einstein-Maxwell-Chern-Simons (EMCS) equations, it is not too difficult to find such an exact solution. For instance, the Kastor-Traschen (KT) solution, which describes colliding charged black holes for a contracting phase, was found by adding a positive cosmological constant to the Majumdar-Papapetrou (MP) multi-black-hole solutions. Applying the same method to higher dimensions, London has also found the KT solutions in arbitrary higher dimensions that describe the coalescence of an arbitrary number of spherical black holes into a single spherical black hole, starting from the MP solutions in arbitrary higher dimensions. Moreover, using the method for the rotating case and deforming the well-known supersymmetric Breckenridge-Myers-Peet-Vafa (BMPV) solution~\cite{Breckenridge:1996is} so that it has a cosmological constant, Klemm and Sabra have obtained the five-dimensional rotating KT solution that describes the coalescence of any number of rotating black holes.
\medskip The regular metric on the lens space $L(n,1)=S^3/{\mathbb Z}_n$ with unit radius can be written as \begin{eqnarray} ds^2=\frac 14 \left[\left(\frac{d\psi}{n}+\cos\theta d\phi\right)^2+d\theta^2+\sin^2\theta d\phi^2\right] , \label{eq:lens} \end{eqnarray} where $0\le \psi<4\pi$, $0\le \phi<2\pi$ and $0\le \theta\le\pi$, and $n$ is a positive integer parametrizing the Chern class of the principal bundle over $S^2$. In particular, for $n=1$, this coincides with the metric on a three-dimensional sphere written in terms of the Euler angle coordinates. The purpose of this paper is to seek an exact black lens solution, such that the spatial cross section of the horizon can be written in the above form, in the five-dimensional EMCS theory with a positive cosmological constant, by adding a positive cosmological constant to the known supersymmetric black lens solution. The method is essentially based on the previous works in Refs.~\cite{Kunduri:2014kja,Tomizawa:2016kjh} and on that of Klemm and Sabra~\cite{Klemm:2000gh}. It is worthy of special mention that while the cosmological black hole with an $S^3$ horizon in Ref.~\cite{Klemm:2000gh} is stationary, the cosmological black lens solution is dynamical, in the sense that the horizon topology changes from the lens space $L(n,1)=S^3/{\mathbb Z}_n$ into a sphere $S^3$. In fact, using the cosmological chart, we can analytically show that at early time the horizon is isometric to the lens space $L(n,1)=S^3/{\mathbb Z}_n$, whereas at late time it is isometric to a sphere $S^3$. For the stationary supersymmetric black lens solution, a black lens with zero angular momenta cannot be realized~\cite{Kunduri:2014kja,Tomizawa:2016kjh}, while for the cosmological black lens solution (at least at early time) it can be realized, due to the existence of a cosmological constant. \medskip This paper is organized as follows.
In Sec.~\ref{sec:Review}, we first review the supersymmetric black lens solution~\cite{Tomizawa:2016kjh} in the bosonic sector of five-dimensional minimal supergravity. Next, we review the Klemm-Sabra (KS) solution of Ref.~\cite{Klemm:2000gh}, which can be regarded as a cosmological BMPV solution. In Sec.~\ref{sec:solution}, we explicitly present the cosmological black lens solution in the five-dimensional EMCS theory with a positive cosmological constant. In Sec.~\ref{sec:analysis}, it is shown that---unlike the Klemm-Sabra solution---the solution obtained here describes a dynamical spacetime, which can be seen by showing that the horizon topology changes from the lens space $L(n,1)$ into a sphere $S^3$. In Sec.~\ref{sec:summary}, we give a summary and some discussion. \section{Review}\label{sec:Review} \subsection{Review of the black lens solution}\label{sec:black lens} We give a brief review of the supersymmetric black lens solution~\cite{Kunduri:2014kja,Tomizawa:2016kjh} in five-dimensional minimal ungauged supergravity, whose bosonic Lagrangian is described by the Einstein-Maxwell-Chern-Simons theory: \begin{eqnarray} \mathcal L=R \star 1 -2 F \wedge \star F -\frac 8{3\sqrt 3}A \wedge F \wedge F \,, \label{eq:Lagrangian} \end{eqnarray} where $F=d A$ is the Maxwell field. The metric and gauge potential $1$-form are given, respectively, by \begin{eqnarray} \label{metric} ds^2&=&-f^2(dt+\omega)^2+f^{-1}ds_{M}^2,\\ A&=&\frac{\sqrt 3}{2} \left[f(d t+\omega)-\frac KH(d \psi+\chi)-\xi \right]\,, \end{eqnarray} where $ds^2_M$ is the Gibbons-Hawking metric, which can be written in terms of the spherical coordinates and the harmonic function $H$ on $\mathbb E^3$ that has $n$ point sources as \begin{eqnarray} ds^2_M&=&H^{-1}(d\psi+\chi)^2+H[dr^2+r^2(d\theta^2+\sin^2\theta d\phi^2)],\\ H&=&\sum_{i=1}^n\frac{h_i}{r_i}:=\frac{n}{r_1}-\sum_{i=2}^n\frac{1}{r_i}, \label{Hdef}\\ \chi&=&\sum_{i=1}^nh_i \frac{z-z_i}{r_i}.
\end{eqnarray} Here, $r_i:=|{\bm r}-{\bm r_i}|=\sqrt{r^2-2rz_i\cos\theta +z_i^2}$, where the constants $z_i$ denote the positions of the point sources on the $z$ axis and $n$ takes positive integer values. Furthermore, the function $f^{-1}$ and the $1$-form $\omega$ in the metric are given, respectively, by \begin{eqnarray} f^{-1}&=&H^{-1}K^2+L,\label{eq:fi}\\ \omega&=&\omega_\psi(d\psi+\chi)+\hat \omega, \end{eqnarray} where the function $\omega_\psi$ and the $1$-form $\hat \omega$ are written as \begin{eqnarray} \omega_\psi&=&H^{-2}K^3+\frac{3}{2} H^{-1}KL+M, \\ \hat \omega&=&\Biggl[\sum_{i,j=1(i\not=j)}^n\left(h_im_j+\frac{3}{2}k_i l_j \right)\frac{r^2-(z_i+z_j)r\cos\theta+z_iz_j}{z_{ji}r_ir_j}\nonumber\\ &&-\sum_{i=1}^n\left(m_0h_i+\frac{3}{2}l_0k_i\right)\frac{z-z_i}{r_i}+c\Biggr]d\phi, \end{eqnarray} and $ K, L, M$ are the harmonic functions with $n$ point sources on $\mathbb E^3$, which are given by \begin{eqnarray} K&=&\sum_{i=1}^n\frac{k_i}{r_i},\label{Kdef}\\ L&=&l_0+\sum_{i=1}^n\frac{l_i}{r_i},\label{Ldef}\\ M&=&m_0+\sum_{i=1}^n\frac{m_i}{r_i},\label{Mdef} \end{eqnarray} and $c$ is a constant and $z_{ji}:=z_j-z_i$. The $1$-form $\xi$ in the gauge potential $A$ is written as \begin{eqnarray} \xi&=&-\sum_{i=1}^nk_i \frac{z-z_i}{r_i}d\phi. \end{eqnarray} The point source ${\bm r}={\bm r}_1$ corresponds to a degenerate Killing horizon which has the lens space topology $L(n,1)$ and each point source ${\bm r}={\bm r}_i\ (i=2,\ldots,n)$ denotes a regular timelike surface as an origin in the Minkowski spacetime under the appropriate conditions on the parameters.
\medskip The solution contains $4n+1$ parameters ($c, l_0, m_0, k_i, l_i, m_{i\ge 2}, z_{i\ge 2}$) but must satisfy the equations \begin{eqnarray} l_0&=&1,\label{eq:l0}\\ c&=&-\sum_{i,j(i\not =j)}\frac{h_im_j+\frac{3}{2}k_il_j}{z_{ji}},\label{eq:c}\\ m_0&=&-\frac{3}{2}\sum_{i}k_il_0,\label{eq:m02} \end{eqnarray} \begin{eqnarray} l_i&=&k_i^2\ (i=2,\ldots,n),\label{eq:condition1}\\ m_i&=&\frac{1}{2}k_i^3\ (i=2,\ldots,n),\label{eq:condition2}\\ c_2&:=&m_0-\frac{3}{2}k_i\nonumber \\ &&+\sum_{j(\not =i)}\frac{1}{|z_{ji}|}[3k_i^2k_j+2k_i^3h_j-\frac{3}{2}(k_il_j+l_ik_j+k_il_ih_j)+m_j]=0\ (i=2,\ldots,n)\label{eq:c2} \end{eqnarray} and the inequalities \begin{eqnarray} R_1^2&:=&k_1^2+nl_1>0,\label{eq:R1ineq}\\ R_2^2&:=&\frac{l_1^2(3k_1^2+4nl_1)}{R_1^4}>0,\label{eq:R2ineq}\\ c_1&:=&1+\sum_{j(\not=i)}\frac{1}{|z_{ji}|}(l_j-2k_ik_j-k_i^2h_j)<0\ (i=2,\ldots,n),\label{eq:c1ineq} \end{eqnarray} where we assume $z_i>z_j$ for $i>j$ and $z_{ij}:=z_i-z_j$. Equations~(\ref{eq:l0})--(\ref{eq:m02}) are required by asymptotic flatness and Eqs.~(\ref{eq:condition1})--(\ref{eq:c2}) remove curvature singularities and causal violation around each ${\bm r}_i\ (i=2,\ldots,n)$. In particular, the conditions (\ref{eq:c2}) are referred to as ``bubble equations'' in Refs.~\cite{Bena:2005va,Bena:2007kg}. The inequalities (\ref{eq:R1ineq}) and (\ref{eq:R2ineq}) exclude closed timelike curves (CTCs) around the horizon ${\bm r}_1$, and the inequality (\ref{eq:c1ineq}) ensures that the spacetime metric is Lorentzian around each ${\bm r}_i\ (i=2,\ldots,n)$. Thus, the physical requirements of regularity and causality reduce the number of parameters to $n+1$.
The Arnowitt-Deser-Misner (ADM) mass and two ADM angular momenta can be computed, respectively, as \begin{eqnarray} M&=&\frac{\sqrt{3}}{2}Q=3\pi\left[\left(\sum_{i}k_i\right)^2+\sum_{i}l_i\right],\\ J_\psi&=&4\pi \left[ \left(\sum_{i}k_i\right)^3 +\frac{1}{2}\sum_{i=2}^nk_i^3+\frac{3}{2}\left(\sum_{i}k_i\right)\left(l_1+\sum_{i=2}^nk_i^2\right)\right],\label{eq:jpsi}\\ J_\phi&=&6\pi\left[ \left(\sum_{i}k_i\right) \left( \sum_{j=2}^nz_j\right)+\left(\sum_{i=2}^nk_iz_i\right) \right], \label{eq:jphi} \end{eqnarray} where $Q$ is the electric charge, which saturates the Bogomol'nyi bound. Let $(x,y,z)$ be Euclidean coordinates on ${\mathbb E}^3$ in the Gibbons-Hawking space. The $z$ axis of ${\mathbb E}^3$ is split into the $(n+1)$ intervals: $I_-=\{(x,y,z)|x=y=0, z<z_1\}$, $I_i=\{(x,y,z)|x=y=0,z_i<z<z_{i+1}\}\ (i=1,\ldots,n-1)$, and $I_+=\{(x,y,z)|x=y=0,z>z_n\}$. The magnetic fluxes through $I_i\ (i=1,\ldots,n-1)$ are defined as \begin{eqnarray} q[I_i]:=\frac{1}{4\pi}\int_{I_i}F\,, \end{eqnarray} which gives \begin{eqnarray} q[I_1]=\frac{\sqrt{3}}{2}\left[ \frac{k_1l_1}{2(k_1^2+nl_1)} -k_2 \right]\,, \qquad q[I_i]=\frac{\sqrt{3}}{2}(k_i-k_{i+1})~~ (i=2,\ldots,n-1). \label{mag_fluxes} \end{eqnarray} In particular, for $n=1$, this solution recovers the BMPV black hole solution. \subsection{Review of the Klemm-Sabra solution}\label{sec:Klemm} Next, we briefly review the Klemm-Sabra solution from Ref.~\cite{Klemm:2000gh}, which can be regarded as the BMPV black hole \cite{Breckenridge:1996is} in de Sitter or anti-de Sitter spacetime in the EMCS theory with a cosmological constant, whose Lagrangian is given by \begin{eqnarray} \mathcal L=(R+2\Lambda) \star 1 -2 F \wedge \star F -\frac 8{3\sqrt 3}A \wedge F \wedge F \,.
\label{eq:Lagrangian2} \end{eqnarray} In particular, for $\Lambda>0$, the solution can be expressed in the cosmological coordinates as follows: \begin{eqnarray} ds^2 &=&- \left( \lambda \tau + \frac{m}{\rho^2} \right)^{-2} \left[ d\tau + \frac{j}{2 \rho^2} \left( d\psi + \cos \theta d\phi \right) \right] ^2 \notag \\ &&+ \left( \lambda \tau + \frac{m}{\rho^2} \right) \left[ d\rho^2 + \frac{\rho^2}{4} \left\{ d\theta^2+\sin^2\theta d\phi^2 +\left( d\psi + \cos \theta d\phi \right)^2 \right\} \right], \label{KSmet} \\ A&=&\frac{\sqrt{3}}{2}\left(\lambda\tau+\frac{m}{\rho^2} \right)^{-1}\left[ d\tau+ \frac{j}{2\rho^2}(d\psi+\cos\theta d\phi)\right].\label{KSA} \end{eqnarray} The constants $m$ and $j$ are the mass parameter and angular momentum parameter, respectively, and the constant $\lambda$ is related to the positive cosmological constant by $\lambda=\pm2\sqrt{\Lambda/3}$. Although this metric appears to be time dependent through the coordinate $\tau$, it can be shown that the spacetime admits a stationary region (see Ref.~\cite{Matsuno:2007ts}). \medskip To see the locations of the apparent horizons for the Klemm-Sabra black hole spacetime, let us define $x:=\lambda\tau \rho^2$. In terms of this $x$, the expansions of the outgoing and ingoing null geodesic congruences for $\psi,\phi,\theta={\rm constant}$ can be computed as \begin{eqnarray} \theta_\pm=\lambda\pm\frac{2x}{\sqrt{(x+m)^3-j^2}}. \end{eqnarray} Hence, it turns out that three horizons exist at $x=x_\pm,x_c$ $(x_-<x_+<x_c)$, which are the three roots of the cubic equation \begin{eqnarray} \lambda^2[(x+m)^3-j^2]-4x^2=0 \label{eq:cubic} \end{eqnarray} when the two parameters satisfy the inequalities \begin{eqnarray} 0\le m \lambda^2 \le \frac{2}{3}, \quad j_-^2 (m) \le j^2 \le j_+^2 (m), \label{hanni} \end{eqnarray} where \begin{eqnarray} j_\pm^2 (m) = \frac{4}{27 \lambda ^6} \left[9 m\lambda ^2 (8-3 m\lambda ^2) -32 \pm 8 \sqrt{2} (2-3 m\lambda ^2 )^{3/2}\right].
\end{eqnarray} In the case of $j=j_+$, the black hole horizon $x_+$ coincides with the inner horizon $x_-$, and in the case of $j=j_-$, the black hole horizon $x_+$ coincides with the cosmological horizon $x_c$. A naked singularity appears if $m$ and $j$ lie outside the ranges (\ref{hanni}). The curvature singularity exists at $x=-m$. Moreover, we can show the absence of CTCs on/outside the black hole horizon since the two-dimensional $(\psi,\phi)$ part of the metric can be shown to be positive definite within the ranges (\ref{hanni}). Finally, for use in the next section, we should keep in mind that the BMPV solution can be formally obtained by replacing $ ``\lambda\tau"$ of $\lambda\tau+m/\rho^2$ in Eqs.~(\ref{KSmet}) and (\ref{KSA}) with the constant $ ``1."$ \section{Cosmological black lens solution} \label{sec:solution} To obtain a cosmological black lens solution, we consider the special case of $k_i=\alpha h_i$ \ (where $\alpha$ is a constant) for $i=1,\ldots,n$ in the supersymmetric black lens solution, namely, \begin{eqnarray} K=\alpha H. \label{eq:assume} \end{eqnarray} Then, the conditions~(\ref{eq:c})--(\ref{eq:c2}) are written in simpler forms, respectively, as \begin{eqnarray} c&=&-\sum_{i,j(i\not =j)}\frac{h_i(m_j+\frac{3}{2}\alpha l_j)}{z_{ji}},\label{eq:ca}\\ m_0&=&-\frac{3}{2}\alpha, \label{eq:m0a}\\ l_i&=&\alpha^2\ (i=2,\ldots,n) ,\label{eq:condition1a}\\ m_i&=&-\frac{1}{2}\alpha^3\ (i=2,\ldots,n),\label{eq:condition2a}\\ l_1&=&-\frac{2}{3}n\alpha^2.
\label{eq:c2a} \end{eqnarray} Also, the inequalities (\ref{eq:R1ineq}), (\ref{eq:R2ineq}), and (\ref{eq:c1ineq}) can be written, respectively, as \begin{eqnarray} R_1^2&:=&\alpha^2n^2+nl_1>0,\label{eq:R1ineqa}\\ R_2^2&:=&\frac{l_1^2(3\alpha^2n^2+4nl_1)}{R_1^4}>0,\label{eq:R2ineqa}\\ (c_1&:=&)\ 1+\frac{l_1+n\alpha^2}{z_{i1}}<0.\label{eq:c1ineqa} \end{eqnarray} We should note that when the condition (\ref{eq:c2a}) holds, both of the inequalities (\ref{eq:R1ineqa}) and (\ref{eq:R2ineqa}) are automatically satisfied [indeed, substituting $l_1=-\frac{2}{3}n\alpha^2$ gives $R_1^2=n^2\alpha^2/3>0$ and $R_2^2=4\alpha^2/3>0$], whereas the inequality (\ref{eq:c1ineqa}) cannot be satisfied since the left-hand side is positive. Therefore, the metric never becomes Lorentzian near each point of ${\bm r}={\bm r}_i\ (i=2,\ldots,n)$ under the parameter setting (\ref{eq:assume}). This fact implies that a physical black lens with two zero angular momenta cannot be realized since it can be shown from Eq.~(\ref{eq:c2a}) that the two angular momenta [Eqs.~(\ref{eq:jpsi}) and (\ref{eq:jphi})] vanish when Eq.~(\ref{eq:assume}) is imposed. As will be seen later, however, the existence of a positive cosmological constant changes this situation since a negative term related to the cosmological constant appears on the left-hand side. We will show that, at least at early time and late time, the parameters are such that all of these conditions can be satisfied. \medskip Let us recall that the Klemm-Sabra solution~(\ref{KSmet}) expressed in terms of the cosmological coordinates can be obtained by replacing the constant $1$ in the harmonic function $f^{-1}=1+m/\rho^2$ of the BMPV solution with $\lambda \tau$, where $\tau$ is a cosmological time coordinate and $\lambda=\pm2\sqrt{\Lambda/3}$. In the same way, let us formally replace the constant $l_0$ in Eq.~(\ref{eq:fi}) with $\lambda \tau$. Then, we can see that the resulting metric and gauge potential $1$-form give a solution of the five-dimensional EMCS theory with a positive cosmological constant, whose Lagrangian is given by Eq.~(\ref{eq:Lagrangian2}).
\medskip The metric and gauge potential obtained here for the cosmological black lens are presented, respectively, as \begin{eqnarray} ds^2&=&-f^2(d\tau+\omega)^2+f^{-1}ds_{M}^2,\\ A&=&\frac{\sqrt 3}{2} \left[f(d \tau+\omega)-\alpha (d \psi+\chi)-\xi \right]\,, \end{eqnarray} where $ds^2_M$ is the metric of the Gibbons-Hawking space \begin{eqnarray} ds^2_M&=&H^{-1}(d\psi+\chi)^2+H[dr^2+r^2(d\theta^2+\sin^2\theta d\phi^2)],\\ H&=&\sum_{i=1}^n\frac{h_i}{r_i}:=\frac{n}{r_1}-\sum_{i=2}^n\frac{1}{r_i},\\ \chi&=&\sum_{i=1}^nh_i \frac{z-z_i}{r_i}d\phi. \end{eqnarray} The function $f^{-1} $ and the $1$-form $\omega$ are written as \begin{eqnarray} f^{-1} &=&\lambda \tau +\sum_i\frac{l_i+\alpha^2 h_i}{r_i},\\ \omega&=&\omega_\psi(d\psi+\chi)+\hat \omega. \end{eqnarray} The function $\omega_\psi$ and $1$-forms $(\hat \omega,\xi)$ are given, respectively, by \begin{eqnarray} \omega_\psi &=&m_0+\frac{3}{2}\alpha l_0+\sum_{i}\frac{\alpha^3h_i+\frac{3}{2}\alpha l_i+m_i}{r_i}, \end{eqnarray} \begin{eqnarray} \hat \omega&=&\Biggl[\sum_{i,j=1(i\not=j)}^nh_i\left(m_j+\frac{3}{2}\alpha l_j \right)\frac{r^2-(z_i+z_j)r\cos\theta+z_iz_j}{z_{ji}r_ir_j} \notag \\ && -\sum_{i=1}^nh_i\left(m_0+\frac{3}{2}l_0\alpha\right)\frac{r\cos\theta-z_i}{r_i}+c\Biggr]d\phi \,, \end{eqnarray} \begin{eqnarray} \xi&=&-\alpha\sum_{i=1}^nh_i \frac{z-z_i}{r_i}d\phi. \end{eqnarray} This solution has $(3n+4)$ constants $(l_0,l_{i\ge 1},m_0,m_{i\ge1},z_{i\ge1},\alpha,c)$ but, as will be explained later, appropriate conditions must be imposed on these parameters in order that the solution can be regarded as physical. \section{Analysis} \label{sec:analysis} In this section, we analyze the cosmological black lens solution obtained in the previous section, focusing on the contracting phase $\lambda<0$. Using the cosmological coordinate $\tau$, we can see that the spatial cross section of the apparent horizon changes from the lens space $L(n,1)=S^3/{\mathbb Z}_n$ into a sphere $S^3$.
It is a hard task to see the process of this topology change analytically, but it is easy to see the asymptotic behaviors of the solution at early time $\tau=-\infty$ and late time $\tau=0$, since the spacetime near ${\bm r}={\bm r}_1$ and $r=\infty$ becomes (locally) the Klemm-Sabra spacetime, namely, it asymptotically becomes stationary. \subsection{Early time} First of all, we show that the spacetime around $r:=|\bm r-\bm r_1|=0$ can be locally approximated by the Klemm-Sabra black hole spacetime, namely, the spacetime asymptotically becomes stationary, at least around this region. In fact, it turns out that for $r=|{\bm r}-{\bm r}_1| \to 0$, the metric behaves as \begin{eqnarray} ds^2&\simeq&- \left(\lambda\tau +\frac{l_1+n\alpha^2}{r}\right)^{-2}\left[d\tau+\left(\frac{n\alpha^3+\frac{3}{2}\alpha l_1+m_1}{r}\right)(d\psi+n\cos\theta d\phi)\right]^2 \notag\\ &+&\left(\lambda\tau +\frac{l_1+n\alpha^2}{r}\right)\left[\frac{r}{n}(d\psi+n\cos\theta d\phi)^2+\frac{n}{r}\left(dr^2+r^2(d\theta^2+\sin^2\theta d\phi^2)\right)\right], \end{eqnarray} where we have shifted $\tau$ appropriately. In terms of the five-dimensional radial coordinate $\rho:=\sqrt{4nr}$, the asymptotic form of the metric can be rewritten as \begin{eqnarray} ds^2&\simeq &- \left(\lambda\tau +4n\frac{l_1+n\alpha^2}{\rho^2}\right)^{-2}\left[d\tau+\left(4n^2\frac{n\alpha^3+\frac{3}{2}\alpha l_1+m_1}{\rho^2}\right)\left(\frac{d\psi}{n}+\cos\theta d\phi \right)\right]^2 \notag\\ &+&\left(\lambda\tau +4n\frac{l_1+n\alpha^2}{\rho^2}\right)\left[d\rho^2+\frac{\rho^2}{4}\left\{\left(\frac{d\psi}{n}+\cos\theta d\phi\right)^2+d\theta^2+\sin^2\theta d\phi^2\right\}\right].\label{eq:latetime} \end{eqnarray} We observe from Eq.~(\ref{KSmet}) that this metric is locally isometric to that of the Klemm-Sabra solution obtained by identifying $(m,j)$ in Eq.~(\ref{KSmet}) with $(4n(l_1+n\alpha^2),8n^2(n\alpha^3+\frac{3}{2}\alpha l_1+m_1))$.
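This identification involves nothing more than the change of the radial coordinate $\rho=\sqrt{4nr}$, under which \begin{eqnarray} \frac{n}{r}dr^2=d\rho^2,\qquad nr\left(d\theta^2+\sin^2\theta d\phi^2\right)=\frac{\rho^2}{4}\left(d\theta^2+\sin^2\theta d\phi^2\right),\qquad \frac{r}{n}(d\psi+n\cos\theta d\phi)^2=\frac{\rho^2}{4}\left(\frac{d\psi}{n}+\cos\theta d\phi\right)^2, \end{eqnarray} while each $1/r$ pole picks up a factor of $4n$, $1/r=4n/\rho^2$.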
Therefore, a sufficiently small closed surface centered at $r=0\ ({\bm r}={\bm r}_1)$ turns out to be outer trapped at early time $\tau=-\infty$ since $\theta_+=0$ holds at $\rho^2=x_+(4n(l_1+n\alpha^2),8n^2(n\alpha^3+\frac{3}{2}\alpha l_1+m_1))/(\lambda\tau)$ for sufficiently large negative $\tau$. From Eq.~(\ref{eq:lens}), we must note that the spatial cross section of the apparent horizon is topologically not a sphere $S^3$, but rather the lens space $L(n,1)=S^3/{\mathbb Z}_n$, since $\psi$ in the asymptotic metric~(\ref{eq:latetime}) is divided by $n$. \medskip Second, we show that the spacetime near each $\bm r=\bm r_i\ (i=2,\dots,n)$ behaves as the origin of Minkowski spacetime written in polar coordinates. For $r:=|{\bm r}-{\bm r}_i| \to 0$ $(i=2,\ldots,n)$ and $\tau\to-\infty $ with $\lambda \tau r=:\beta$ kept fixed (where $\beta$ is a positive constant), the functions $(f^{-1},\omega_\psi)$ are approximated as \begin{eqnarray} f^{-1}&\simeq& \frac{l_i+\alpha^2h_i+\beta}{r}+c_1,\\ \omega_\psi &\simeq& \frac{\alpha^3h_i+\frac{3}{2}\alpha l_i+m_i}{r}+c_2, \end{eqnarray} where the constants $c_1$ and $c_2$ are defined by \begin{eqnarray} c_1&:=& \sum_{j(\not=i)}\frac{l_j+\alpha^2 h_j}{|z_{ji}|},\\ c_2&:=& m_0+\frac{3}{2}\alpha l_0+\sum_{j(\not=i)}\frac{\alpha^3h_j+\frac{3}{2}\alpha l_j+m_j}{|z_{ji}|}.
\end{eqnarray} The metric behaves as \begin{eqnarray} ds^2&\simeq& -\left(\frac{l_i+\alpha^2h_i+\beta}{r}+c_1\right)^{-2} \biggl[d\tau+\left(\frac{\alpha^3h_i+\frac{3}{2}\alpha l_i+m_i}{r}+c_2\right)\nonumber\\ &\times& \left\{d\psi+(-\cos\theta +\chi_{(0)})d\phi\right\}+(\hat\omega_{(1)}\cos\theta+\hat \omega_{(0)})d\phi \biggr]^2\nonumber\\ &-&\left(\frac{l_i+\alpha^2h_i+\beta}{r}+c_1\right)r\left[\left\{d\psi+(-\cos\theta +\chi_{(0)})d\phi\right\}^2+\frac{dr^2}{r^2}+d\theta^2+\sin^2\theta d\phi^2\right], \end{eqnarray} where $\chi_{(0)}$, $\hat\omega_{(0)}$, and $\hat\omega_{(1)}$ are given, respectively, by \begin{eqnarray} \chi_{(0)} & := &- \sum_{j(\not=i)}\frac{h_jz_{ji}}{|z_{ji}|} \,,\\ \hat \omega_{(0)} & := & \sum_{k,j(\not=i,k\not=j)}\left(h_km_j+\frac{3}{2}\alpha h_kl_j\right)\frac{z_{ji}z_{ki}}{|z_{ji}z_{ki}|z_{jk}}+\sum_{j(\not= i)}\left(m_0h_j+\frac{3}{2}\alpha h_jl_0\right)\frac{z_{ji}}{|z_{ji}|}+c\,, \\ \hat \omega_{(1)}& := & -\sum_{j(\not=i)}\left(h_im_j-h_jm_i+\frac{3}{2}\alpha(h_il_j-h_jl_i)\right)\frac{1}{|z_{ji}|}-\left(m_0h_i+\frac{3}{2}\alpha h_il_0\right)\,. \end{eqnarray} \medskip To remove the divergence of the metric at $r=|{\bm r}-{\bm r}_i|=0\ (i=2,\ldots,n)$, we impose the following conditions on the set of parameters $(l_i,m_i)\ (i=2,\ldots,n)$: \begin{eqnarray} l_i&=&\alpha^2-\beta, \label{eq:li}\\ m_i&=&-\frac{1}{2}\alpha^3+\frac{3}{2}\alpha \beta,\label{eq:mi} \end{eqnarray} which imply \begin{eqnarray} c_2=\hat\omega_{(1)}. \end{eqnarray} Therefore, the metric is \begin{eqnarray} ds^2&\simeq& -c_1^{-2}\biggl[d\tau +c_2\left\{d\psi+\chi_{(0)}d\phi\right\}+\hat \omega_{(0)}d\phi \biggr]^2\\ &-&c_1r\left[\frac{dr^2}{r^2}+\left\{d\psi+(-\cos\theta +\chi_{(0)})d\phi\right\}^2+d\theta^2+\sin^2\theta d\phi^2\right]. 
\end{eqnarray} Furthermore, since the existence of $c_2$ and $\hat\omega_{(0)}$ yields CTCs around ${\bm r}={\bm r}_i\ (i=2,\ldots,n)$, we must impose the following additional condition \begin{eqnarray} (c_2=)\ m_0+\frac{3}{2}\alpha l_0+\frac{n\alpha^3+\frac{3}{2}l_1\alpha+m_1}{z_{i1}}=0. \label{eq:c20} \end{eqnarray} It can be shown from $c_2=0$ that $\hat \omega_{(0)}=0$ automatically holds. In addition, in order that the metric has Lorentzian signature, we need to require \begin{eqnarray} (c_1=)\ \frac{l_1+n\alpha^2}{z_{i1}}-\beta\sum_{2\le j(\not=i)}\frac{1}{|z_{ji}|}<0. \label{eq:c1ineq2} \end{eqnarray} Under this parameter setting, the asymptotic form of the metric can be written as \begin{eqnarray} ds^2 &=&-(dt')^2+d\rho^2+\frac{\rho^2}{4}\left[(d\psi'-\cos\theta d\phi)^2+d\theta^2+\sin^2\theta d\phi^2\right], \end{eqnarray} where we have introduced the new coordinates $(t',\psi',\rho)$ \begin{eqnarray} t'=c_1^{-1}\tau,\quad \psi'=\psi+\chi_{(0)}\phi,\quad \rho=2\sqrt{-c_1r}. \end{eqnarray} As we showed in Sec.~\ref{sec:Review}, for the stationary black lens solution with $k_i=\alpha h_i\ (i=1,\ldots, n)$, the inequality (\ref{eq:c1ineq2}) with $\beta=0$ cannot be satisfied. In contrast to this, for the cosmological black lens solution, the parameter region satisfying the inequality (\ref{eq:c1ineq2}) indeed exists since the first term on the left-hand side of Eq.~(\ref{eq:c1ineq2}) is still positive, while the second term---which appears due to the existence of a positive cosmological constant---is negative. Hence, under these conditions, the region around ${\bm r} ={\bm r}_i\ (i=2,\ldots,n)$ at early time is locally isometric to Minkowski spacetime.
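Note that the flatness of the spatial part follows directly from the coordinate change $\rho=2\sqrt{-c_1r}$ (recall that $c_1<0$): since $\rho^2=-4c_1r$, one has \begin{eqnarray} -c_1r\,\frac{dr^2}{r^2}=-c_1\,\frac{dr^2}{r}=d\rho^2,\qquad -c_1r=\frac{\rho^2}{4}, \end{eqnarray} so that the spatial part of the metric reduces to $d\rho^2$ plus $\rho^2/4$ times the metric of a unit $S^3$ written in the Euler angle coordinates.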
\subsection{Late time} For $\rho:=2\sqrt{r}\to \infty$, under the conditions~(\ref{eq:li}) and (\ref{eq:mi}), the two functions $(f^{-1},\omega_\psi)$ and the $1$-form $\hat \omega$ behave, respectively, as \begin{eqnarray} f^{-1} &\simeq&\lambda\tau +\frac{l_1+n\alpha^2-(n-1)\beta}{r}+{\rm const.},\label{eq:fi}\\ \omega_\psi&\simeq& m_0+\frac{3}{2}\alpha l_0+\frac{\alpha^3\sum_ih_i+\frac{3}{2}\alpha \sum_il_i+\sum_im_i}{r}\\ &=&\frac{\alpha^3+\frac{3}{2}\alpha l_1+m_1}{r}, \end{eqnarray} and \begin{eqnarray} \hat \omega_\phi&\simeq &\sum_{i,j=1(i\not=j)}^n\left(h_im_j+\frac{3}{2}k_i l_j \right)\frac{1}{z_{ji}}\left(1-\frac{z_{ji}^2\sin^2\theta}{2r^2}+{\cal O}(r^{-3})\right) \notag \\ && -\sum_{i=1}^n\left(m_0h_i+\frac{3}{2}l_0k_i\right)\left(\cos\theta -\frac{z_i\sin^2\theta}{r}+{\cal O}(r^{-2})\right)+c\\ &=& \sum_{i,j=1(i\not=j)}^n\left(h_im_j+\frac{3}{2}k_i l_j \right)\frac{1}{z_{ji}}+c -\sum_{i=1}^n\left(m_0h_i+\frac{3}{2}l_0k_i\right)\cos\theta \notag \\ && +\sum_{i=1}^n\left(m_0h_i+\frac{3}{2}l_0k_i\right)\frac{z_i\sin^2\theta}{r}+{\cal O}(r^{-2})\\ &=& -\sum_{i=1}^n\left(m_0+\frac{3}{2}l_0\alpha\right)h_i\left(\cos\theta-\frac{z_i\sin^2\theta}{r}\right)+{\cal O}(r^{-2})\\ &=&{\cal O}(r^{-2}), \end{eqnarray} where to ensure an asymptotically de Sitter space (i.e., the absence of CTCs) at $r\to \infty$, we have imposed \begin{eqnarray} c&=&-\sum_{i,j}\left(h_im_j+\frac{3}{2}k_il_j\right)\frac{1}{z_{ij}},\\ m_0+\frac{3}{2}\alpha l_0&=&0.
\label{eq:m0} \end{eqnarray} \medskip Then, the metric can be approximately written as \begin{eqnarray} ds^2&\simeq&- \left(\lambda\tau +\frac{l_1+n\alpha^2-(n-1)\beta}{r}\right)^{-2}\left[d\tau+\frac{\alpha^3+\frac{3}{2}\alpha l_1+m_1 }{r}(d\psi+\cos\theta d\phi)\right]^2 \notag\\ &+& \left(\lambda\tau +\frac{l_1+n\alpha^2-(n-1)\beta}{r}\right)\left[r(d\psi+\cos\theta d\phi)^2+\frac{1}{r}\left(dr^2+r^2d\Omega_{S^2}^2\right)\right]\\ &=&- \left(\lambda\tau +\frac{4[l_1+n\alpha^2-(n-1)\beta]}{\rho^2}\right)^{-2}\left[d\tau+\frac{4\{\alpha^3+\frac{3}{2}\alpha l_1+m_1 \}}{\rho^2}(d\psi+\cos\theta d\phi)\right]^2\notag\\ &+&\left(\lambda\tau +\frac{4[l_1+n\alpha^2-(n-1)\beta]}{\rho^2}\right)\left[d\rho^2+\frac{\rho^2}{4}\left\{(d\psi+\cos\theta d\phi)^2+d\theta^2+\sin^2\theta d\phi^2\right\}\right], \end{eqnarray} where we have shifted $\tau$ so that the constant term in Eq.~(\ref{eq:fi}) vanishes. Compared with Eq.~(\ref{KSmet}), we immediately find that this asymptotic metric coincides with the metric of the Klemm-Sabra solution with the mass parameter $m=4[l_1+n\alpha^2-(n-1)\beta]$ and angular momentum parameter $j=8[\alpha^3+\frac{3}{2}\alpha l_1+m_1]$. Therefore, a sufficiently large closed surface $r={\rm const}$ turns out to be outer trapped at late time $\tau\simeq -0$ since $\theta_+=0$ holds at $\rho^2=x_+(4(l_1+n\alpha^2-(n-1)\beta),8(\alpha^3+\frac{3}{2}\alpha l_1+m_1))/(\lambda\tau)$ if $\tau$ is negative and sufficiently close to $0$. Here, we must note that the spatial cross section of the apparent horizon is topologically a sphere $S^3$, whereas it is topologically the lens space $L(n,1)=S^3/{\mathbb Z}_n$ at early time. \medskip From the behaviors at early time and at late time, we can make the physical interpretation that the solution obtained here describes the topology change of a black hole from the lens space $L(n,1)$ into a sphere $S^3$.
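The value $x_+$ entering the trapped-surface condition above is a root of the cubic equation~(\ref{eq:cubic}). As a purely numerical illustration (the parameter values below are arbitrary, chosen only to obey the bounds~(\ref{hanni}); they are not derived from the solution), the three ordered roots can be located by a simple sign-change scan refined with bisection:

```python
def horizon_roots(m, j, lam, x_max=100.0, steps=100000):
    """Locate the real roots of lam^2 [(x + m)^3 - j^2] - 4 x^2 = 0 by scanning
    for sign changes and refining each one with bisection."""
    def g(x):
        return lam ** 2 * ((x + m) ** 3 - j ** 2) - 4.0 * x ** 2

    lo_end = -m + 1e-9                     # start just above the singularity x = -m
    xs = [lo_end + (x_max - lo_end) * k / steps for k in range(steps + 1)]
    roots = []
    for a, b in zip(xs, xs[1:]):
        if g(a) * g(b) < 0.0:
            lo, hi = a, b
            for _ in range(80):            # bisection refinement
                mid = 0.5 * (lo + hi)
                if g(lo) * g(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
    return sorted(roots)

# Arbitrary illustrative parameters obeying 0 <= m lam^2 <= 2/3 and j^2 <= j_+^2(m):
m, j, lam = 0.3, 0.1, 1.0
x_minus, x_plus, x_c = horizon_roots(m, j, lam)
```

For these values the three ordered roots are identified with $x_-$, $x_+$, and $x_c$, respectively.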
We note from the conditions~(\ref{eq:m0}) and (\ref{eq:c20}) that at early time, the angular momentum $j$ along $\partial/\partial\psi$ vanishes, whereas at late time it does not. Hence the angular momentum apparently seems not to be conserved, but this is not so surprising since at early time the Maxwell field exists outside the black hole horizon, which can be considered to contribute to the mass and angular momenta. \subsection{$\tau=$const.} It seems difficult to see analytically the behavior of the solution for $-\infty<\tau<0$ since the spacetime is not stationary. However, we can see the behaviors around the points ${\bm r}={\bm r}_i\ (i=2,\ldots,n)$ at which the metric diverges for finite $\tau\ (-\infty<\tau<0)$. For $r:=|{\bm r}-{\bm r}_i| \to 0$ and finite $\tau\ (-\infty<\tau<0)$, the metric functions $(f^{-1},\omega_\psi,\hat\omega_\phi)$ behave as \begin{eqnarray} f^{-1}&\simeq& \frac{-\beta}{r}+\lambda\tau+c_1,\\ \omega_\psi &\simeq& \frac{\alpha^3h_i+\frac{3}{2}\alpha l_i+m_i}{r}+c_2={\cal O}(r) ,\\ \hat \omega_\phi&\simeq& \hat \omega_{(1)}\cos\theta+\hat\omega_{(0)}= 0, \end{eqnarray} where we have used the conditions~(\ref{eq:li}), (\ref{eq:mi}), and (\ref{eq:c20}). The metric is approximated by \begin{eqnarray} ds^2&\simeq&-\frac{r^2}{\beta^2}d\tau^2+d\rho^2+\beta \left[(d\psi'-\cos\theta d\phi)^2+d\theta^2+\sin^2\theta d\phi^2 \right], \end{eqnarray} where we have introduced the new coordinates $(\psi',\rho)$ \begin{eqnarray} \psi'=\psi+\chi_{(0)}\phi,\quad \rho=\sqrt{\beta}\log r. \end{eqnarray} Thus, the divergence of the metric at $r=0$ has been removed. It turns out from the above asymptotic form that for $-\infty<\tau<0$, each point ${\bm r}={\bm r}_i$ approximately behaves like a Killing horizon with the spatial cross section of a sphere $S^3$ of radius $2\sqrt{\beta}$.
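The asymptotic behaviors quoted in this subsection rest on the following algebraic identities: with $h_i=-1$, the conditions~(\ref{eq:li}) and (\ref{eq:mi}) give \begin{eqnarray} l_i+\alpha^2h_i=-\beta,\qquad \alpha^3h_i+\frac{3}{2}\alpha l_i+m_i=-\alpha^3+\frac{3}{2}\alpha^3-\frac{3}{2}\alpha\beta-\frac{1}{2}\alpha^3+\frac{3}{2}\alpha\beta=0 \qquad (i=2,\ldots,n), \end{eqnarray} so that near each ${\bm r}={\bm r}_i$ the function $f^{-1}$ keeps only the $-\beta/r$ pole and the $1/r$ pole of $\omega_\psi$ cancels.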
\section{Summary} \label{sec:summary} In this paper, we have obtained the cosmological black lens solution in the five-dimensional EM theory with a Chern-Simons term and a positive cosmological constant. We have also discussed some properties of the rotating charged black lens solution in terms of the cosmological coordinates. It has been shown that this solution can be regarded as a dynamical black hole spacetime such that at early time, the horizon cross section is isometric to the lens space $L(n,1)=S^3/{\mathbb Z}_n$, while at late time, it is isometric to a sphere $S^3$. This solution has been obtained as a special limit of the extreme black lens in Refs.~\cite{Kunduri:2014kja,Tomizawa:2016kjh} in the same way that Klemm and Sabra's cosmological charged black hole was obtained from the BMPV black hole solution. In this restricted limit, the supersymmetric black lens spacetime behaves pathologically near the point sources $\bm r=\bm r_i\ (i=2,\ldots,n)$ outside the horizon, whereas the cosmological solution does not show this behavior (at least at early and late times) due to the presence of a cosmological constant. \medskip For $n=1$, our solution exactly coincides with the Klemm-Sabra solution, which can be physically regarded as a stationary, rotating black hole in de Sitter space. It hence follows that the angular momentum along $\partial/\partial \psi$ is conserved. For $n\ge 2$, in turn, it apparently seems to describe a dynamical spacetime that violates the conservation of angular momentum. In fact, it can be seen from Eqs.~(\ref{eq:c20}) and (\ref{eq:m0}) that at early time, ${\bm r}={\bm r}_1$ locally behaves like the five-dimensional Reissner-Nordstr\"om de Sitter black hole ({\it i.e.,} the Klemm-Sabra black hole with zero angular momentum), whereas at late time, $r=\infty$ behaves as the Klemm-Sabra black hole with nonzero angular momentum.
We may consider that this is because at early time, the Maxwell field outside the black hole horizon carries the same amount of the angular momentum. \medskip The cosmological multi-black-hole solutions were first obtained by Kastor and Traschen~\cite{Kastor:1992nn} for the four-dimensional EM theory with a positive cosmological constant. Furthermore, these solutions were immediately generalized to the five-dimensional EM theory with a cosmological constant and a Chern-Simons term in Ref.~\cite{London:1995ib} for the nonrotating case and in Ref.~\cite{Klemm:2000gh} for the rotating case. For the present solution, one can consider a multicentered black hole solution if the conditions (\ref{eq:li}), (\ref{eq:mi}), and (\ref{eq:c20}) are not imposed. It is naturally expected that such a solution describes the coalescence of black lenses since multiple black holes with various lens space topologies exist at early time, while only a single black hole with spherical topology exists at late time. \medskip From the viewpoint of the AdS/CFT correspondence, one of the most interesting generalizations is to look for an anti-de Sitter black lens. Concerning the spherical topology, the charged rotating black hole solutions were obtained in the five-dimensional EMCS theory with a negative cosmological constant in Refs.~\cite{Klemm:2000gh,Gutowski:2004ez,Gutowski:2004yv,Chong:2004na,Chong:2005da,Chong:2005hr,Chong:2006zx}. As for the ring topology $S^1\times S^2$, it was shown that there is no supersymmetric black ring solution in five-dimensional minimal gauged supergravity, although it is not yet known if one exists in nonminimal supergravity. These results do not seem to prohibit the existence of a black lens solution even in five-dimensional minimal gauged supergravity. Therefore, to construct such solutions is also an interesting and challenging problem. This issue deserves further study.
\acknowledgments This work was supported by the Grant-in-Aid for Scientific Research (C) (Grant No.~17K05452) from the Japan Society for the Promotion of Science (S.T.).
\section{Proposed Algorithm} \label{sec:proposed} We claim that an action can be recognized from a video by identifying a set of key segments that present important action components. We therefore design a neural network that learns to measure the importance of each segment in a video and automatically selects a sparse subset of representative segments to predict the video-level class labels. Only ground-truth video-level class labels are required for training the model. For action localization at inference time, we first identify relevant classes in each video and then generate temporal action proposals from temporal class activations and attentions to find the temporal location of each relevant class. The network architecture for our weakly supervised action recognition component is illustrated in Figure~\ref{fig:architecture}. We describe each step of our algorithm in the rest of this section. \subsection{Action Classification} \label{sub:weakly} To predict class labels in each video, we sample a set of segments and extract feature representations from each segment using pretrained convolutional neural networks. Each feature vector is then fed to an attention module that consists of two fully connected (FC) layers with a ReLU layer between them. The output of the second FC layer is given to a sigmoid function, which constrains the generated attention weights to lie between 0 and 1. These class-agnostic attention weights are then used to modulate the temporal average pooling---a weighted sum of the feature vectors---to create a video-level representation. We pass this representation through an FC layer followed by a sigmoid layer to obtain class scores. Formally, let ${\bf x}_t \in \mathbb{R}^m$ be the $m$-dimensional feature representation extracted from a video segment centered at time $t$, and $\lambda_t$ be the corresponding attention weight.
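The attention module and the attention-weighted temporal pooling described above can be sketched in a few lines (a minimal pure-Python illustration with toy dimensions and random weights; in the actual model the features come from pretrained two-stream networks and the FC parameters are learned by backpropagation):

```python
import math
import random

random.seed(0)

def fc(x, W, b):
    """Fully connected layer: W has shape (out, in), b has shape (out)."""
    return [sum(w * xi for w, xi in zip(row, x)) + bi for row, bi in zip(W, b)]

def attention_weight(x, W1, b1, W2, b2):
    """FC -> ReLU -> FC -> sigmoid, giving a class-agnostic weight in (0, 1)."""
    h = [max(0.0, v) for v in fc(x, W1, b1)]       # ReLU between the two FC layers
    s = fc(h, W2, b2)[0]                           # scalar output of the second FC layer
    return 1.0 / (1.0 + math.exp(-s))              # sigmoid

def video_representation(segments, W1, b1, W2, b2):
    """Attention-weighted temporal average pooling over T segment features."""
    lams = [attention_weight(x, W1, b1, W2, b2) for x in segments]
    dim = len(segments[0])
    xbar = [sum(lam * x[k] for lam, x in zip(lams, segments)) for k in range(dim)]
    return xbar, lams

# Toy setup: T = 5 segments, m = 8 feature dimensions, hidden size 4 (all hypothetical).
T, m_dim, hidden = 5, 8, 4
W1 = [[random.gauss(0.0, 0.5) for _ in range(m_dim)] for _ in range(hidden)]
b1 = [0.0] * hidden
W2 = [[random.gauss(0.0, 0.5) for _ in range(hidden)]]
b2 = [0.0]
segments = [[random.gauss(0.0, 1.0) for _ in range(m_dim)] for _ in range(T)]
xbar, lams = video_representation(segments, W1, b1, W2, b2)
```

Here \texttt{xbar} plays the role of $\bar{\bf x}$ and would subsequently be passed through the classification FC and sigmoid layers.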
The video level representation, denoted by $\bar{{\bf x}}$, corresponds to an attention weighted temporal average pooling, which is given by \begin{equation} \bar{{\bf x}} = \sum_{t=1}^T \lambda_t {\bf x}_t, \label{eq:video_feature} \end{equation} where ${\bm \lambda} = (\lambda_1, \dots, \lambda_T)^{\top}$ is a vector of scalar outputs from the attention module and $T$ is the total number of sampled video segments. The attention weight vector ${\bm \lambda}$ is defined in a class-agnostic way, which is useful to identify segments relevant to all the actions of interest and estimate the temporal intervals of the detected actions. The loss function in the proposed network is composed of two terms, the classification loss and the sparsity loss, which is given by% \begin{equation} \mathcal{L} = \mathcal{L}_\text{class} + \beta \cdot \mathcal{L}_\text{sparsity}, \label{eq:loss} \end{equation} where $\mathcal{L}_\text{class}$ denotes the classification loss computed on the video-level class labels, $\mathcal{L}_\text{sparsity}$ is the sparsity loss on the attention weights, and $\beta$ is a constant to control the trade-off between the two terms. The classification loss is based on the standard multi-label cross-entropy loss between ground-truth and $\bar{{\bf x}}$ (after passing through a few layers as illustrated in Figure~\ref{fig:architecture}), while the sparsity loss is given by the $\ell_1$ norm on attention weights $|| {\bm \lambda} ||_1$. Because of the use of the sigmoid function and the $\ell_1$ loss, all the attention weights tend to have values close to either 0 or 1. Note that integrating the sparsity loss is aligned with our claim that an action can be recognized with a sparse subset of key segments in a video. 
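Continuing the sketch, the two terms of Eq.~(\ref{eq:loss}) can be written as follows (the class scores, attention weights, and $\beta$ below are illustrative values, not the paper's settings):

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def classification_loss(scores, labels):
    """Multi-label cross-entropy between sigmoid class scores and ground truth."""
    eps = 1e-12
    return -sum(y * math.log(sigmoid(s) + eps)
                + (1 - y) * math.log(1.0 - sigmoid(s) + eps)
                for s, y in zip(scores, labels))

def sparsity_loss(lams):
    """l1 norm of the attention weights; together with the sigmoid this pushes
    each weight toward 0 or 1."""
    return sum(abs(lam) for lam in lams)

def total_loss(scores, labels, lams, beta):
    return classification_loss(scores, labels) + beta * sparsity_loss(lams)

# Illustrative numbers: 3 action classes and 5 sampled segments.
scores = [2.0, -1.5, 0.3]        # pre-sigmoid class scores for one video
labels = [1, 0, 0]               # ground-truth video-level labels
lams = [0.9, 0.1, 0.8, 0.05, 0.0]
loss = total_loss(scores, labels, lams, beta=1e-4)
```

The constant \texttt{beta} trades off classification accuracy against sparsity of the selected segments, exactly as $\beta$ does in Eq.~(\ref{eq:loss}).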
\subsection{Temporal Class Activation Mapping} \label{sub:temporal} \begin{figure*}[ht] \captionsetup{font=small} \centering \includegraphics[width=1\linewidth]{figures/cam_attention_example2.pdf} \caption{Illustration of the ground-truth temporal intervals for the {\it ThrowDiscus} class, the temporal attentions, and the T-CAM for an example video in the THUMOS14 dataset~\cite{jiang14thumos}. The horizontal axis in the plots denotes the timestamps. In this example, the T-CAM values for {\it ThrowDiscus} provide accurate action localization information. Note that the temporal attention weights are large at several locations that do not correspond to the ground-truth annotations. This is because temporal attention weights are trained in a class-agnostic way.} \label{fig:tcam_example} \end{figure*} To identify the time intervals corresponding to target actions, we extract a number of action interval candidates. Based on the idea in \cite{zhou16learning}, we derive a one-dimensional class-specific activation map in the temporal domain, referred to as the Temporal Class Activation Map (T-CAM). Let ${\bf w}^{c}(k)$ denote the $k$-th element in the weight parameter ${\bf w}^c$ of the final fully connected layer, where the superscript $c$ represents the index of a particular class. The input to the final sigmoid layer for class $c$ is \begin{align} s^c & = \sum_{k=1}^m {\bf w}^{c}(k) \bar{\bf x}(k) \nonumber \\ & = \sum_{k=1}^m {\bf w}^{c}(k) \sum_{t=1}^T \lambda_t {\bf x}_t(k) \\ & = \sum_{t=1}^T \lambda_t \sum_{k=1}^m {\bf w}^{c}(k) {\bf x}_t(k). \nonumber \end{align} T-CAM, denoted by ${\bf a}_t = (a_t^1, a_t^2, \dots, a_t^{C})^{\top}$, indicates the relevance of the representations to each class at time step $t$, where each element $a_t^c$ for class $c$ ($c = 1, \dots, C$) is given by \begin{equation} a_t^c = \sum_{k=1}^m {\bf w}^c(k) {\bf x}_t(k).
\label{eq:t-cam} \end{equation} Figure~\ref{fig:tcam_example} illustrates an example of the attention weights and the T-CAM outputs in a video given by the proposed algorithm. We can observe that the discriminative temporal regions are effectively highlighted by the attention weights and the T-CAMs. Also, some temporal intervals with large attention weights do not correspond to large T-CAM values because such intervals may represent other actions of interest. The attention weights measure the generic actionness of temporal video segments, while the T-CAMs present class-specific information. \subsection{Two-stream CNN Models} \label{sub:two-stream} We employ the recently proposed I3D model~\cite{carreira17quo} to compute feature representations for the sampled video segments. Using multiple streams of information such as RGB and optical flow has become a standard practice in action recognition and detection~\cite{carreira17quo,feichtenhofer16convolutional,simonyan14two} as it often provides a significant boost in performance. We train two action recognition networks separately, with identical settings as illustrated in Figure~\ref{fig:architecture}, for the RGB and the flow streams. Note that our I3D networks are pretrained on the Kinetics dataset~\cite{kay2017kinetics}, and we use them only as feature extractors without any fine-tuning on our target datasets. Our two-stream networks are then fused to localize actions in an input video. The procedure is discussed in the following subsection. \subsection{Temporal Action Localization} \label{sub:localization} For an input video, we identify relevant class labels based on video-level classification scores (Section~\ref{sub:weakly}). For each relevant action, we generate temporal proposals, \ie, one-dimensional time intervals, with their class-specific confidence scores, corresponding to segments that potentially enclose the target actions.
To generate temporal proposals, we compute the T-CAMs for both the RGB and the flow streams, denoted by $a_{t,\text{RGB}}^c$ and $a_{t,\text{FLOW}}^c$ respectively, based on \eqref{eq:t-cam}, and use them to derive the weighted T-CAMs, $\psi^c_{t,\text{RGB}}$ and $\psi^c_{t,\text{FLOW}}$, as \begin{align} \psi^c_{t,\text{RGB}} & = \lambda_{t, \text{RGB}} \cdot \text{sigmoid}(a_{t, \text{RGB}}^c) \\ \psi^c_{t, \text{FLOW}} & = \lambda_{t, \text{FLOW}} \cdot \text{sigmoid}(a_{t, \text{FLOW}}^c). \end{align} Note that $\lambda_t$ is an element of the sparse vector $\bm \lambda$, and multiplying by $\lambda_t$ can be interpreted as a soft selection of the values from the following sigmoid function. Similar to~\cite{zhou16learning}, we threshold the weighted T-CAMs, $\psi^c_{t,\text{RGB}}$ and $\psi^c_{t,\text{FLOW}}$, to segment these signals. The temporal proposals are then the one-dimensional connected components extracted from each stream. It is intuitive to generate action proposals using the weighted T-CAMs, instead of directly from the attention weights, because each proposal should contain a single kind of action. Optionally, we linearly interpolate the weighted T-CAM signals between sampled segments before thresholding to improve the temporal resolution of the proposals at minimal additional computation. Unlike the original CAM-based bounding box proposals~\cite{zhou16learning}, where only the largest bounding box is retained, we keep all the connected components that pass the predefined threshold.
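A minimal NumPy sketch of this proposal-generation step (purely illustrative; the threshold value and array shapes are assumptions, and the optional interpolation step is omitted):

```python
import numpy as np

def weighted_t_cam(X, w_c, lam):
    """psi^c_t = lambda_t * sigmoid(a^c_t), where a^c_t = sum_k w^c(k) x_t(k).

    X : (T, m) segment features; w_c : (m,) final-layer weights for class c;
    lam : (T,) class-agnostic attention weights for one stream.
    """
    a_c = X @ w_c                              # T-CAM for class c, Eq. (t-cam)
    return lam * (1.0 / (1.0 + np.exp(-a_c)))  # soft selection by attention

def temporal_proposals(psi, threshold):
    """One-dimensional connected components where psi > threshold.

    Returns a list of (t_start, t_end) segment-index pairs (inclusive).
    """
    above = psi > threshold
    proposals, start = [], None
    for t, flag in enumerate(above):
        if flag and start is None:
            start = t
        elif not flag and start is not None:
            proposals.append((start, t - 1))
            start = None
    if start is not None:
        proposals.append((start, len(psi) - 1))
    return proposals
```

Keeping every connected component, rather than only the strongest one, is what allows multiple instances of the same action in one video to survive this step.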
Each proposal $[t_{\text{start}},t_{\text{end}}]$ is assigned a score for each class $c$, given by the weighted average T-CAM of all the frames within the proposal: \begin{equation} \sum_{t=t_{\text{start}}}^{t_{\text{end}}}\lambda_{t, *} \frac{\alpha \cdot a^c_{t,\text{RGB}}+ (1-\alpha) \cdot a^c_{t,\text{FLOW}}}{t_{\text{end}}-t_{\text{start}}+1}\label{eq:box_score}, \end{equation}% where $* \in \{ \text{RGB}, \text{FLOW} \}$ and $\alpha$ is a parameter controlling the relative magnitudes of the two modality signals. Finally, we perform non-maximum suppression among the temporal proposals of each class independently to remove highly overlapped detections. \subsection{Discussion} \label{sub:discussion} Our algorithm attempts to temporally localize actions in untrimmed videos by estimating sparse attention weights and T-CAMs for generic and specific actions, respectively. The proposed method is principled and novel compared to the existing UntrimmedNet~\cite{wang17untrimmednets} for the following reasons. \begin{itemize} \item Our model has a unique deep neural network architecture with classification and sparsity losses. \item Our action localization procedure is based on a completely different pipeline that leverages class-specific action proposals using T-CAMs. \end{itemize} Note that~\cite{wang17untrimmednets} follows a framework similar to~\cite{bilen16weakly}, where softmax functions are employed across both action classes and proposals; this has a critical limitation in handling multiple action classes and instances in a single video. Similar to pretraining on the ImageNet dataset~\cite{deng09imagenet} for weakly supervised learning problems in images, we utilize features from I3D models~\cite{carreira17quo} pretrained on the Kinetics dataset~\cite{kay2017kinetics} for video representation.
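Stepping back to the localization procedure, the proposal score of \eqref{eq:box_score} and the class-wise temporal non-maximum suppression can be sketched as follows (illustrative NumPy; the IoU threshold value is an assumption, as it is not stated here):

```python
import numpy as np

def proposal_score(t_start, t_end, lam, a_rgb, a_flow, alpha=0.5):
    """Eq. (box_score): attention-weighted average of the fused T-CAM over the proposal."""
    t = np.arange(t_start, t_end + 1)
    fused = alpha * a_rgb[t] + (1 - alpha) * a_flow[t]
    return float((lam[t] * fused).sum() / (t_end - t_start + 1))

def temporal_nms(proposals, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression on 1-D intervals, run per class."""
    order = np.argsort(scores)[::-1]   # highest score first
    keep = []
    for i in order:
        s_i, e_i = proposals[i]
        ok = True
        for j in keep:
            s_j, e_j = proposals[j]
            inter = max(0, min(e_i, e_j) - max(s_i, s_j) + 1)
            union = (e_i - s_i + 1) + (e_j - s_j + 1) - inter
            if inter / union > iou_threshold:
                ok = False
                break
        if ok:
            keep.append(i)
    return [proposals[i] for i in keep]
```

Running NMS independently per class preserves overlapping detections of different actions, which matters when several actions co-occur in the same interval.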
Although the Kinetics dataset has considerable class overlap with our target datasets, its video clips are mostly short and contain only parts of actions, which makes their characteristics different from those of our untrimmed target datasets. Moreover, since we do not fine-tune the I3D models, our network may not be optimized for the classes in the target tasks and datasets. \section{Conclusion} \label{sec:conclusion} We presented a novel weakly supervised temporal action localization algorithm based on deep neural networks. The classification is performed by evaluating a video-level representation given by a sparsely weighted mean of segment-level features, where the sparse coefficients are learned with a sparsity loss in our deep neural network. For weakly supervised temporal action localization, one-dimensional action proposals are extracted, and the proposals relevant to the target classes are selected to identify the time intervals of actions. Our proposed approach achieved state-of-the-art performance on the THUMOS14 dataset, and we reported weakly supervised temporal action localization results on the ActivityNet1.3 dataset for the first time. \section{Experiments} \label{sec:experiment} This section first describes the details of the benchmark datasets and the evaluation setup. Our algorithm, referred to as Sparse Temporal Pooling Network (STPN), is compared with other state-of-the-art techniques based on fully and weakly supervised learning. Finally, we analyze the contribution of individual components of our algorithm. \subsection{Datasets and Evaluation Method} \label{sub:datasets} We evaluate STPN on two popular action localization benchmark datasets, THUMOS14~\cite{jiang14thumos} and ActivityNet1.3~\cite{heilbron15activitynet}. Both datasets consist of untrimmed videos, i.e., the videos include frames that contain no target actions, and we do not exploit the temporal annotations for training.
Note that multiple actions may exist in a single video, and even in a single frame, in these datasets. The THUMOS14 dataset has video-level annotations of 101 action classes in its training, validation, and testing sets, and temporal annotations for a subset of videos in the validation and testing sets for 20 classes. We train our model with the 20-class validation subset, which consists of 200 untrimmed videos, without using the temporal annotations. We evaluate our algorithm using the 212 videos in the 20-class testing subset with temporal annotations. This dataset is challenging, as some videos are relatively long (up to 26 minutes) and contain multiple action instances. The length of an action varies significantly, from less than a second to minutes. The ActivityNet dataset is a recently introduced benchmark for action recognition and localization in untrimmed videos. We use ActivityNet1.3, which originally consisted of 10,024 videos for training, 4,926 for validation, and 5,044 for testing\footnote{In our experiments, there were 9,740, 4,791, and 4,911 videos accessible from YouTube in the training, validation, and testing sets, respectively.}, with 200 activity classes. This dataset contains a large number of natural videos that involve various human activities under a semantic taxonomy. We follow the standard evaluation protocol based on mean average precision (mAP) values at several different levels of intersection over union (IoU) thresholds. The evaluation on both datasets is conducted using the benchmarking code for the temporal action localization task provided by ActivityNet\footnote{\url{https://github.com/activitynet/ActivityNet/blob/master/Evaluation/}}. The result on the ActivityNet1.3 testing set is obtained by submitting results to the evaluation server. \subsection{Implementation Details} We use two-stream I3D networks~\cite{carreira17quo} trained on the Kinetics dataset~\cite{kay2017kinetics} to extract features for video segments.
For the RGB stream, we rescale the smallest dimension of a frame to $256$ and perform a center crop of size $224 \times 224$. For the flow stream, we apply the TV-$L1$ optical flow algorithm~\cite{wedel09improved}. The inputs to the I3D models are stacks of $16$ (RGB or flow) frames sampled at $10$ frames per second. We sample $400$ segments at uniform intervals from each video in both training and testing. During training, we perform stratified random perturbation on the sampled segments for data augmentation. The network is trained using the Adam optimizer with a learning rate of $10^{-4}$. At testing time, we first reject classes whose video-level probabilities are below $0.1$, and then retrieve one-dimensional temporal proposals for the remaining classes. We set the modality balance parameter $\alpha$ in \eqref{eq:box_score} to $0.5$. Our algorithm is implemented in TensorFlow. \subsection{Results} \label{sub:results} Table~\ref{table:comparison_thumos14} summarizes the test results on THUMOS14 for action localization methods from the past two years. We include both fully and weakly supervised approaches in the table. Our algorithm outperforms the other two existing approaches based on weakly supervised learning~\cite{wang17untrimmednets,singh17hide}. Even with the significant difference in the level of supervision, our algorithm presents performance competitive with several recent fully supervised approaches. We also report the performance of our model using features extracted from the pretrained UntrimmedNet~\cite{wang17untrimmednets} two-stream models, to evaluate our algorithm on top of weakly supervised representation learning. For this experiment, we adjust $\alpha$ to $0.1$ to handle the heterogeneous signal magnitudes of the two modalities. From Table~\ref{table:comparison_thumos14}, we can see that STPN also outperforms the UntrimmedNet~\cite{wang17untrimmednets} and the Hide-and-Seek algorithm~\cite{singh17hide} in this setting.
We also present the performance of our algorithm on the validation and the testing sets of the ActivityNet1.3 dataset in Tables~\ref{table:comparison_activitynet_validation} and \ref{table:comparison_activitynet_test}, respectively. We can see that our algorithm outperforms some fully supervised approaches on both the validation and the testing set. Note that most of the action localization results available on the leaderboard are specifically tuned for the ActivityNet Challenge and may not be directly comparable with our algorithm. To our knowledge, this is the first attempt to evaluate weakly supervised action localization performance on this dataset, and we report the results as a baseline for future reference. \begin{figure*}[t] \captionsetup{font=small} \centering \begin{subfigure}[b]{\textwidth} \includegraphics[width=\textwidth]{figures/qualitative_result_1_1058_impressive_new2} \caption{An example of the {\it HammerThrow} action.} \label{fig:qualitative_impressive} \end{subfigure} % \begin{subfigure}[b]{\textwidth} \includegraphics[width=\textwidth]{figures/qualitative_result_4_450_impressive_new2} \caption{An example of the {\it VolleyballSpiking} action.} \label{fig:qualitative_impressive_2} \end{subfigure} % \begin{subfigure}[b]{\textwidth} \includegraphics[width=\textwidth]{figures/qualitative_result_2_45_multi_new2} \caption{An example of the {\it ThrowDiscus} (blue) and {\it Shotput} (red) actions.} \label{fig:qualitative_multi} \end{subfigure} % \begin{subfigure}[b]{\textwidth} \includegraphics[width=\textwidth]{figures/qualitative_result_3_1209_failure_new2} \caption{An example of the {\it JavelinThrow} action.} \label{fig:qualitative_failure} \end{subfigure} \caption{Qualitative results on THUMOS14. The horizontal axis in the plots denotes the timestamps (in seconds). (a) There are many action instances in the input videos, and our algorithm shows good action localization performance.
(b) The appearance of the video remains similar from the beginning to the end, and there is little motion between frames. Our model is still able to localize the time window where the action actually happens. (c) Two different actions appear in a single video, and their appearance and motion patterns are similar. Even in this case, the proposed algorithm identifies the two actions accurately, despite some false positives. (d) Our results have several false positives, but they often stem from missing ground-truth annotations. Another source of false alarms is the similarity of the observed actions to the target action.} \label{fig:qualitative_results} \end{figure*} Figure~\ref{fig:qualitative_results} demonstrates qualitative results on the THUMOS14 dataset. As mentioned in Section~\ref{sub:datasets}, videos in this dataset are often long and contain many action instances, which may be composed of multiple categories. Figure~\ref{fig:qualitative_impressive} presents an example with a number of action instances along with our predictions and the corresponding T-CAM signals. Our algorithm effectively pinpoints the temporal boundaries of many action instances. In Figure~\ref{fig:qualitative_impressive_2}, the appearance of all the frames is similar and there is little motion between frames. Despite these challenges, our model still localizes the target action fairly well. Figure~\ref{fig:qualitative_multi} illustrates an example of a video containing action instances from two different classes. Visually, the two involved action classes---{\it Shotput} and {\it ThrowDiscus}---are similar in their appearance (green grass, person with blue shirt, on a gray platform) and motion patterns (circular throwing). STPN is able not only to localize the target actions but also to classify the action categories successfully, despite several short-term false positives.
Figure~\ref{fig:qualitative_failure} shows an instructional video for {\it JavelinThrow}, where our algorithm detects most of the ground-truth action instances but also generates many false positives. There are two causes for the false alarms. First, the ground-truth annotations for {\it JavelinThrow} are often missing, causing true detections to be counted as false positives. The second source is related to the segments in which the instructors demonstrate javelin throwing but only parts of such actions are visible. These segments resemble a real {\it JavelinThrow} action in both appearance and motion. \begin{table}[t] \captionsetup{font=small} \centering \caption{Results on the ActivityNet1.3 validation set. The entries with an asterisk (*) are from the ActivityNet Challenge submissions. Note that~\cite{shou17cdc} is the result of post-processing based on~\cite{wang16anet}, making the comparison difficult.} \label{table:comparison_activitynet_validation} \vspace{-0.2cm} \small \scalebox{0.9}{ \begin{tabular}{c|c||ccc} \multirow{2}{*}{} & \multirow{2}{*}{Method} & \multicolumn{3}{c}{AP@IoU} \\ & & 0.5 & 0.75 & 0.95 \\ \hline \multirow{6}{*}{\shortstack{Fully \\ supervised}} & Singh \& Cuzzolin~\cite{singh16untrimmed}* & 34.5 & -- & -- \\ & Wang \& Tao~\cite{wang16anet}* & 45.1 & {\color{white}0}4.1 & {\color{white}0}0.0 \\ & Shou~\etal~\cite{shou17cdc}* & 45.3 & 26.0 & {\color{white}0}0.2 \\ & Xiong~\etal~\cite{xiong17pursuit}* & 39.1 & 23.5 & {\color{white}0}5.5 \\ \cline{2-5} & Montes~\etal~\cite{montes16temporal} & 22.5 & -- & -- \\ & Xu~\etal~\cite{xu17r} & 26.8 & -- & -- \\ \hline \shortstack{Weakly \\ supervised} & \shortstack{ \vspace{0.15cm} \\ STPN \\ \vspace{0.1cm}} & \shortstack{ \vspace{0.15cm} \\ 29.3 \\ \vspace{0.1cm}} & \shortstack{ \vspace{0.15cm} \\ 16.9 \\ \vspace{0.1cm}} & \shortstack{ \vspace{0.15cm} \\ {\color{white}0}2.6 \\ \vspace{0.1cm}} \\ \hline \end{tabular}} \end{table} \begin{table}[t] \captionsetup{font=small} \centering \caption{Results
on the ActivityNet1.3 testing set. The entries with an asterisk (*) are from the ActivityNet Challenge submissions.} \label{table:comparison_activitynet_test} \vspace{-0.2cm} \small \scalebox{0.9}{ \begin{tabular}{c|c||c} & Method & mAP\\ \hline \multirow{5}{*}{\shortstack{Fully \\ supervised}} & Singh \& Cuzzolin~\cite{singh16untrimmed}* & 17.83 \\ & Wang \& Tao~\cite{wang16anet}* & 14.62\\ & Xiong~\etal~\cite{xiong17pursuit}* & 26.05 \\ \cline{2-3} & Singh~\etal~\cite{singh16multi} & 17.68\\ & Zhao~\etal~\cite{zhao17temporal} & 28.28\\ \hline \shortstack{Weakly \\ supervised} & \shortstack{ \vspace{0.15cm} \\ STPN \\ \vspace{0.1cm}} & \shortstack{ \vspace{0.15cm} \\ 20.07 \\ \vspace{0.1cm}} \\ \hline \end{tabular}} \end{table} \subsection{Ablation Study} \label{sub:ablation} We investigate the contribution of several components proposed in our weakly supervised architecture and its implementation variations. All the experiments in our ablation study are performed on the THUMOS14 dataset. \paragraph{Choice of architectures} Our premise is that an action can be recognized from a sparse subset of segments in a video. When we learn our action classification network, two loss terms---classification and sparsity losses---are employed. Our baseline is the architecture without the attention module and the sparsity loss, which shares its motivation with the architecture in~\cite{zhou16learning}. We also test another baseline with the attention module but without the sparsity loss. Figure~\ref{fig:experiment_architecture_choice} shows the comparisons between our baselines and the full model. We observe that both the sparsity loss and the attention-weighted pooling make substantial contributions to the performance improvement. \begin{figure}[t] \captionsetup{font=small} \centering \includegraphics[width=0.9\linewidth]{figures/architecture_choice.png}\\ \vspace{-0.1cm} \caption{Performance with respect to architectural variations.
The attention module is useful as it allows the model to explicitly focus on important parts of input videos. Enforcing sparsity in action recognition via the $\ell_1$ loss gives a significant boost to the performance.} \label{fig:experiment_architecture_choice} \end{figure} \begin{figure}[t] \captionsetup{font=small} \centering \includegraphics[width=0.9\linewidth]{figures/experiment_feature_choice.png}\\ \vspace{-0.1cm} \caption{Performance with respect to modality choices. Optical flow offers stronger cues than the RGB frames for action localization, and the combination of the two features leads to significant performance improvement.} \label{fig:experiment_feature_choice} \end{figure} \paragraph{Choice of modalities} As mentioned in Section~\ref{sub:two-stream}, we use two-stream I3D networks for generating temporal action proposals and computing the attention weights. We also combine the two modalities for scoring the proposals. Figure~\ref{fig:experiment_feature_choice} illustrates the effectiveness of each modality and their combination. When comparing the individual performance of each modality, the flow stream offers stronger performance than the RGB stream. As in action recognition, the combination of these modalities provides a significant performance improvement. \section{Introduction} \label{sec:introduction} Action recognition and localization in videos are crucial problems for high-level video understanding tasks including, but not limited to, event detection, video summarization, and visual question answering. Researchers have been investigating these problems extensively over the last decades, but the main challenge remains the lack of appropriate methods for representing videos.
Contrary to the almost immediate success of convolutional neural networks (CNNs) in many visual recognition tasks for images, applying deep neural networks to videos is not straightforward due to the inherently complex structure of video data, the high computational demand, the lack of knowledge for modeling temporal information, and so on. Some attempts at using representations obtained purely from deep learning~\cite{karpathy14large,simonyan14two,tran15learning,wang16temporal} were not significantly better than methods relying on hand-crafted visual features~\cite{laptive05on,wang13action,wang13motionlets}. As a result, many existing algorithms seek to achieve state-of-the-art performance by combining hand-crafted and learned features. Many existing video understanding techniques rely on trimmed videos as their inputs. However, most videos in the real world are untrimmed and contain large numbers of frames irrelevant to the target actions, and these techniques are prone to fail due to the challenges in extracting salient information. While action localization algorithms are designed to operate on untrimmed videos, they usually require temporal annotations of action intervals, which are prohibitively expensive and time-consuming to obtain at large scale. Therefore, it is more practical to develop competitive localization algorithms that require minimal temporal annotations for training. \begin{figure}[t] \captionsetup{font=small} \centering \includegraphics[width=0.95\linewidth]{figures/overview.pdf} \caption{Overview of the proposed algorithm. Our algorithm takes a two-stream input---RGB frames and optical flow between frames---from a video, and performs action classification and localization concurrently.
For localization, Temporal Class Activation Maps (T-CAMs) are computed from the two streams and employed to generate one-dimensional temporal action proposals, from which the target actions are localized in the temporal domain.} \label{fig:overview} \end{figure} Our goal is to temporally localize actions in untrimmed videos. To this end, we propose a novel deep neural network that learns to select a sparse subset of useful video segments for action recognition in each video, using a loss function that measures the video-level classification error and the sparsity of the selected segments. Temporal Class Activation Maps (T-CAMs) are employed to generate one-dimensional temporal proposals used to localize target actions. Note that we do not exploit temporal annotations of the actions in the target datasets during training; our models are trained only with video-level class labels. An overview of our algorithm is shown in Figure~\ref{fig:overview}. The contributions of this paper are summarized below. \begin{itemize} \item We introduce a principled deep neural network architecture for weakly supervised action localization in untrimmed videos, where actions are detected from a sparse subset of segments identified by the network. \item We present a method for computing and combining temporal class activation maps and class-agnostic attentions for temporal localization of target actions. \item The proposed weakly supervised action localization technique achieves state-of-the-art results on THUMOS14~\cite{jiang14thumos} and outstanding performance on the ActivityNet1.3~\cite{heilbron15activitynet} action localization task. \end{itemize} The rest of this paper is organized as follows. We discuss related work in Section~\ref{sec:related} and describe our action localization algorithm in Section~\ref{sec:proposed}. Section~\ref{sec:experiment} presents the details of our experiments, and Section~\ref{sec:conclusion} concludes this paper.
\section{Related Work} \label{sec:related} Action recognition aims to identify a single or multiple actions per video and is often formulated as a simple classification problem. Before the success of CNNs, the algorithm based on improved dense trajectories~\cite{wang13action} presented outstanding performance. In the era of deep learning, convolutional neural networks have been widely used: two-stream networks~\cite{simonyan14two} and 3D convolutional neural networks (C3D)~\cite{tran15learning} are popular solutions for learning video representations, and these techniques, including their variations, are extensively used for action recognition. Recently, a combination of two-stream networks and 3D convolutions, referred to as I3D~\cite{carreira17quo}, was proposed as a generic video representation learning method. On the other hand, many algorithms develop techniques to recognize actions based on existing representation methods~\cite{wang16temporal,wang17spatiotemporal,feichtenhofer16spatiotemporal,girdhar17actionvlad,feichtenhofer17spatiotemporal,shi17learning}. Action localization differs from action recognition in that it requires the detection of temporal or spatiotemporal volumes containing target actions. There are various existing methods based on deep learning, including structured segment networks~\cite{zhao17temporal}, contextual relation learning~\cite{soomro15action}, multi-stage CNNs~\cite{shou16temporal}, temporal association of frame-level action detections~\cite{gkioxari15finding}, and techniques using recurrent neural networks~\cite{yeung16end,ma16learning}. Most of these approaches rely on supervised learning and employ temporal or spatiotemporal annotations to train the models. To facilitate action detection and localization, many algorithms use action proposals~\cite{buch17sst,escorcia16daps,wang16actioness}, which are an extension of the object proposals used for object detection in images.
There are only a few approaches based on weakly supervised learning that rely solely on video-level class labels to localize actions in the temporal domain. UntrimmedNet~\cite{wang17untrimmednets} learns attention weights on precut video segments using a temporal softmax function and thresholds the attention weights to generate action proposals. The algorithm improves the video-level classification performance. However, generating action proposals solely from class-agnostic attention weights is suboptimal, and the use of the softmax function across proposals may not be effective for detecting multiple instances. Hide-and-Seek~\cite{singh17hide} proposes a technique that randomly hides regions to force residual attention learning and thresholds class activation maps at inference time for weakly supervised spatial object detection and temporal action localization. While working well on spatial localization tasks, this method fails to show satisfactory performance on temporal action localization tasks in videos. Both algorithms are motivated by the recent success of weakly supervised object localization in images; in particular, the formulation of UntrimmedNet for action localization heavily relies on the idea proposed in \cite{bilen16weakly}. There are some other approaches~\cite{bojanowski14weakly,huang16connectionist,rechard17weakly} that learn to localize or segment actions in a weakly supervised setting by exploiting the temporal order of subactions during training. The main objective of these studies is to find the boundaries of sequentially presented subactions, whereas our approach aims to extract the temporal intervals of full actions from input videos. There are several publicly available datasets for action recognition, including UCF101~\cite{soomro12ucf101}, Sports-1M~\cite{karpathy14large}, HMDB51~\cite{kuehne11hmdb}, Kinetics~\cite{kay2017kinetics}, and AVA~\cite{gu17ava}. The videos in these datasets are trimmed, so the target actions appear throughout each clip.
In contrast, the THUMOS14 dataset~\cite{jiang14thumos} and ActivityNet~\cite{heilbron15activitynet} provide untrimmed videos that contain background frames, along with temporal annotations of which frames are relevant to the target actions. Note that each video in THUMOS14 and ActivityNet may have multiple actions happening in a single frame.
\section{Introduction} The research field of anomalous diffusion and transport \cite{Shlesinger74,Scher75,Bouchaud1990,AvrahamHavlin,Hughes, BouchaudAnnPhys90,Bouchaud92,Metzler01,FractDyn2011,GoychukACP12,MetzlerPCCP} currently flourishes, getting ever more experimental support and manifestations in such diverse research areas as transport processes in living cells and polymeric solutions \cite{MasonPRL,AmblardPRL,Saxton97,Waigh,Seisenberg,Caspi,WeissBJ04,Tolic,BanksBJ05,Barkai,Bruno11,BrunoPRE,Golding,Guigas,SzymanskiPRL,HoflingReview,Jeon11,Luby,PanPRL,WongPRL,Harrison,Parry,Robert,PLoSONE14,PCCP14,PhysBio15,TabeiPNAS,WeigelPNAS,WeissPRE13,Santamaria,Bertzeva12}, colloidal systems \cite{MasonPRL,Waigh,EversReview,HanesPRE}, dust plasmas \cite{NukomuraPRL}, organic photoconductors \cite{SchubertPRB13}, conformational diffusion in proteins \cite{Yang03,KouPRL,GoychukPRE04,Kneller04,MinPRL,Calladrini10,Calligari11,Calligari15,GoychukPRE15,HuNatPhys16}, self-diffusion in lipid bilayers \cite{KnellerJCP11,JeonPRL12,JeonPRX}, and diffusion of proteins on DNA strands \cite{Kong16,Kong17,Liu17}, to name just a few. In contrast to normal diffusion, $\alpha=1$, the variance of the diffusing particle's position, $\langle \delta x^2(t)\rangle\propto t^\alpha$, often grows sublinearly, $\alpha<1$, or superlinearly, $\alpha>1$, in time, with some power-law exponent $\alpha$. Accordingly, anomalous diffusion is classified into subdiffusion, $\alpha<1$, and superdiffusion, $\alpha>1$. This classification is, however, not complete. For example, Sinai diffusion \cite{Sinai82,Bouchaud1990,AvrahamHavlin} is characterized by a logarithmically slow growth, $\langle \delta x^2(t)\rangle\propto \ln^4 t$. This is clearly a subdiffusion, which is sometimes called ultraslow \cite{Bouchaud1990,AvrahamHavlin}.
The research field of anomalous diffusion remains rather controversial because one and the same phenomenon is often described by very different theories \cite{GoychukACP12,GoychukFractDyn11}, such as continuous time random walks (CTRWs) with a divergent mean residence time in local traps \cite{Shlesinger74,Scher75,Bouchaud1990,AvrahamHavlin,Hughes,Metzler01}, and generalized Langevin dynamics with sub-Ohmic memory friction \cite{WeissBook,Pottier,Kupferman,GoychukPRE09,GoychukACP12}. Such different theories of fractional diffusion and transport can look very similar at first \cite{GoychukFractDyn11}. However, a deeper analysis reveals fundamental differences in ergodic \textit{vs.} weakly non-ergodic behavior \cite{Bouchaud92,MetzlerPCCP}, in major features of nonlinear diffusion and transport in tilted periodic potentials \cite{GoychukPRE06Rapid,HeinsaluPRE06,GoychukPRE09,GoychukACP12,GoychukFractDyn11}, as well as in the response to external time-periodic modulations \cite{BarbiPRL05,SokolovPRL06,HeinsaluPRL07,HeinsaluPRE09,GoychukPRE07Rapid,GoychukACP12,GoychukCTP14}. The pertinent diffusion in the cytosol of biological cells is three-dimensional, and that in the cell plasma membrane is two-dimensional. However, the insights obtained from simplified one-dimensional theoretical models have proved their usefulness over the years of research \cite{Shlesinger74,Scher75,Bouchaud1990,AvrahamHavlin,Hughes,Metzler01}. Hence, in this paper we concentrate on a very simplified, minimal 1d model, which is nevertheless rich and complex enough. It is well suited for gaining such important insights and is based on two theoretical approaches to anomalous diffusion, which are especially important in view of their profound dynamical origin. 
One is based on the Bogolyubov-Ford-Kac-Mazur-Kubo-Zwanzig \cite{Bogolyubov,Ford65,Kubo66,Zwanzig73} generalized Langevin equation (GLE) \begin{equation}\label{GLEA} m\ddot x+\int_{0}^{t}\eta(t-t')\dot x(t')dt'=f(x,t)+\xi(t), \end{equation} for the particle position $x(t)$ with an algebraically decaying memory kernel $\eta(t)\propto t^{-\alpha}$ \cite{WangTokuyama,LutzFLE,Pottier,GoychukPRL07,GoychukPRE09,GoychukACP12}. The other relies on normal, memoryless Langevin diffusion in random potentials \cite{Bouchaud1990,BouchaudAnnPhys90}. In Eq. (\ref{GLEA}), $m$ is the mass of the particle, $f(x,t)=-\partial U(x,t)/\partial x$ is an external force acting on the particle, which can be random both in space and in time, and $\xi(t)$ is an equilibrium thermal noise with zero mean value. It has Gaussian statistics and hence is named Gaussian. As any zero-mean stationary Gaussian process, it is completely characterized by its autocorrelation function (ACF), $\langle \xi(t)\xi(t')\rangle$. The ACF is related to the memory kernel $\eta(t)$ by the classical fluctuation-dissipation relation (FDR), $\langle \xi(t)\xi(t')\rangle=k_BT\eta(|t-t'|)$ \cite{Kubo66,Zwanzig73,WeissBook}, also named the second fluctuation-dissipation theorem (FDT) by Kubo \cite{Kubo66}. Here, $T$ is temperature and $k_B$ is the Boltzmann constant. The standard Langevin equation corresponds to the particular memoryless case $\eta(t)=2\eta_0\delta(t)$, where $\eta_0$ is a viscous friction coefficient, yielding $\eta_0 \dot x$ for the friction term in (\ref{GLEA}). There are many studies of GLE dynamics, both potential-free and in some regular potentials, as well as of normal Langevin dynamics in random potentials. However, viscoelastic GLE subdiffusion in random potentials remains a practically unexplored topic, despite its obvious relevance for diffusion processes in the cytosol of living cells and other inhomogeneous viscoelastic media. 
Such diffusion was only partially addressed recently, in a weakly corrugated parabolic trapping potential \cite{DuanEPJB}. The main purpose of this paper is to present the first systematic study of viscoelastic GLE subdiffusion in stationary Gaussian potentials $U(x)$ with decaying correlations. We investigate two such models of general interest: (i) with exponentially decaying correlations (Ornstein-Uhlenbeck process in space), and (ii) with algebraically decaying correlations possessing no effective correlation length. Starting from the classical works by Bogolyubov \cite{Bogolyubov}, Ford, Kac, Mazur \cite{Ford65}, and Zwanzig \cite{Zwanzig73}, the GLE (\ref{GLEA}) has repeatedly been derived \cite{LindenbergSeshardi,Ford88,Kupferman,WeissBook,GoychukACP12} from a fully dynamical system, where the environment is modeled by a large system of harmonic oscillators forming a thermal bath. The only non-dynamical element entering this theory is that the initial positions and momenta of those oscillators are canonically distributed at a given fixed temperature. In this respect, this dynamical theory of Brownian motion is a precursor and companion of molecular dynamics \cite{AlderJCP,AlderPRL}, in a very simplified, model fashion. It is also easily generalized towards quantum-mechanical Brownian motion \cite{Magalinskii59,Caldeira83,Ford88,WeissBook} and to nonlinear models of coupling between the Brownian particle and its linear environment \cite{LindenbergSeshardi,WeissBook}. The influence of the environment in this approach is fully characterized by its spectral density \cite{Caldeira83,Ford88,WeissBook}, $J(\omega)$. It yields the memory kernel as \cite{WeissBook,GoychukACP12} $\eta(t)=(2/\pi)\int_0^{\infty}d\omega J(\omega)\cos(\omega t)/\omega$. Here, a very insightful basic model is $J(\omega)=\eta_\alpha|\sin(\pi\alpha/2)|\omega^\alpha\exp(-\omega/\omega_c)$ \cite{Caldeira83,Ford88,WeissBook}. 
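As an illustrative side check (not part of the original analysis, with arbitrary parameter values), the quadrature of $J(\omega)$ can be compared with the closed form $\eta(t)=\eta_\alpha(2/\pi)|\sin(\pi\alpha/2)|\Gamma(\alpha)\,{\rm Re}(it+1/\omega_c)^{-\alpha}$, which follows from $\int_0^\infty \omega^{\alpha-1}e^{-s\omega}d\omega=\Gamma(\alpha)s^{-\alpha}$ with $s=1/\omega_c-it$:

```python
# Illustrative cross-check (not from the paper): the memory kernel by numerical
# quadrature of eta(t) = (2/pi) int_0^inf J(w) cos(w t)/w dw, with the model
# J(w) = eta_a*|sin(pi a/2)|*w^a*exp(-w/wc), against the closed form
# eta(t) = eta_a*(2/pi)*|sin(pi a/2)|*Gamma(a)*Re(i t + 1/wc)^(-a).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def eta_quad(t, a=0.5, wc=5.0, eta_a=1.0):
    # the w^(a-1) endpoint singularity (a < 1) is integrable; the exponential
    # cutoff makes the finite upper limit 50*wc numerically exact
    f = lambda w: w**(a - 1) * np.exp(-w / wc) * np.cos(w * t)
    val, _ = quad(f, 0.0, 50.0 * wc, limit=500)
    return eta_a * (2.0 / np.pi) * abs(np.sin(np.pi * a / 2)) * val

def eta_closed(t, a=0.5, wc=5.0, eta_a=1.0):
    return eta_a * (2.0 / np.pi) * abs(np.sin(np.pi * a / 2)) \
        * gamma(a) * np.real((1j * t + 1.0 / wc) ** (-a))

for t in (0.2, 1.0):
    assert abs(eta_quad(t) - eta_closed(t)) < 1e-5 * abs(eta_closed(t))
```

The same closed form is quoted below as Eq. (\ref{reg}).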
In accordance with it, the environment is customarily classified by the low-frequency behavior of $J(\omega)$ as Ohmic ($\alpha=1$), sub-Ohmic ($0<\alpha<1$) and super-Ohmic ($\alpha>1$) \cite{WeissBook}. Here, $\eta_\alpha$ is a fractional friction coefficient and $\omega_c$ is a frequency cutoff, which must be present in any condensed medium beyond the continuous-medium approximation (the latter is, however, often used). This spectral model yields \cite{GoychukACP12,SiegleEPL11} \begin{eqnarray}\label{reg} \eta(t)&=&\eta_{\alpha}\frac{|\sin(\pi\alpha/2)|}{\pi/2}\Gamma(\alpha) {\rm Re}(it+1/\omega_c)^{-\alpha}\\ &=&\eta_{\alpha}\frac{|\sin(\pi\alpha/2)|}{\pi/2}\frac{\Gamma(\alpha)\omega_c^\alpha}{(1+\omega_c^2t^2)^{\alpha/2}} \cos[\alpha\arctan(\omega_c t)], \nonumber\; \end{eqnarray} where $\Gamma(z)$ is the gamma function. Asymptotically, in the limit $t\to\infty$, and for potential-free diffusion, this model yields $\langle \delta x^2(t)\rangle\propto t^\alpha$, for $0<\alpha<2$. It covers both sub- and superdiffusion. For $\alpha>2$, diffusion is ballistic \cite{WeissBook}, $\langle \delta x^2(t)\rangle\propto t^2$. In the singular limit of an unbounded energy spectrum, $\omega_c\to\infty$, and neglecting quantum effects, the Ohmic model leads exactly \cite{WeissBook} to the standard classical Langevin equation with viscous friction and thermal white Gaussian noise, which are related by the FDR. By the same token, the sub-Ohmic model in the singular limit $\omega_c\to\infty$ leads to a subdiffusive fractional Langevin equation (FLE) \cite{MainardiFLE,LutzFLE,Pottier,Kupferman,GoychukPRL07,GoychukACP12}. It is a GLE (\ref{GLEA}) with an algebraically decaying memory kernel, $\eta(t)=\eta_{\alpha} t^{-\alpha}/\Gamma(1-\alpha)$. The corresponding frictional term with memory in Eq. 
(\ref{GLEA}) can be abbreviated as $\eta_\alpha d^\alpha x/dt^\alpha$ \cite{Caputo67,Mainardi97,Mathai17} using the notion of the Caputo fractional derivative, $\frac{{\rm d}^\alpha x}{{\rm d}t^\alpha}:=\int_0^t (t-t')^{-\alpha} \dot x(t')dt'/\Gamma(1-\alpha)$. The thermal noise entering this FLE is fractional Gaussian noise (fGn) \cite{Mandelbrot68}. This model of subdiffusion and its further generalizations emerge naturally in the context of anomalous diffusion in complex viscoelastic media such as complex liquids, including dense polymeric solutions, dust plasmas, colloids, etc., with a prominent application to anomalous diffusion in the cytosol of biological cells, as well as in the intrinsic conformational dynamics of proteins. Likewise, the super-Ohmic model with $1<\alpha<2$ leads to a superdiffusive GLE with a sign-changing and mostly negative memory kernel \cite{MainardiFLE,GoychukACP12,SiegleEPL11}. Its integral is always positive and tends to zero as the upper limit of integration grows (vanishing integral friction). Its absolute value also decays algebraically slowly at long times. All these features can be understood from Eq. (\ref{reg}). For $1<\alpha<2$, the sign changes at $t_c=\tan(\pi/(2\alpha))/\omega_c$. The limit $\omega_c\to \infty$ is singular, $t_c\to 0+$, and for $t>0$, $\eta(t) = -\eta_\alpha t^{-\alpha}/|\Gamma(1-\alpha)|$. It must be handled with care because $\eta(t)$ is not a function but a distribution in this limit. Then, the corresponding integral term in (\ref{GLEA}) can be short-handed \cite{MainardiFLE,GoychukACP12,SiegleEPL11} as $\eta_{\alpha}\sideset{_{0}}{_t}{\mathop{\hat D}^{\alpha-1}} \dot x(t)$ using the notion of the fractional Riemann-Liouville derivative \cite{Mathai17}, $\sideset{_{0}}{_t}{\mathop{\hat D}^{\gamma}}v(t):=\frac{1}{\Gamma(1-\gamma)} \frac{d}{dt}\int_{0}^t dt' \frac{v(t')}{(t-t')^\gamma}$, with $\gamma=\alpha-1$ and $v=\dot x$. 
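The Caputo definition can be sanity-checked numerically (an illustrative sketch, not part of the original derivation) on the simplest test function $x(t)=t$, for which $\dot x=1$ and hence ${\rm d}^\alpha x/{\rm d}t^\alpha=t^{1-\alpha}/\Gamma(2-\alpha)$:

```python
# Illustrative numerical check of the Caputo fractional derivative
#   d^a x/dt^a = (1/Gamma(1-a)) * int_0^t (t-s)^(-a) x'(s) ds
# on the test function x(t) = t, where x'(s) = 1 and the exact result
# is t^(1-a)/Gamma(2-a). Parameter values (a=0.5, t=2) are arbitrary.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def caputo_of_identity(t, a):
    # quad integrates through the integrable (t-s)^(-a) endpoint singularity
    val, _ = quad(lambda s: (t - s) ** (-a), 0.0, t)
    return val / gamma(1 - a)

a, t = 0.5, 2.0
exact = t ** (1 - a) / gamma(2 - a)
assert abs(caputo_of_identity(t, a) - exact) < 1e-6
```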
A related FLE, describing, however, an asymptotically normal diffusion, captures the hydrodynamic memory effects of Boussinesq and Basset \cite{LandauHydro}. It takes the form of Eq. (\ref{GLEA}) with the frictional term abbreviated as $\eta_0\dot x+\eta_{\alpha}\sideset{_{0}}{_t}{\mathop{\hat D}^{\alpha-1}} \dot x(t)$ and two corresponding FDR-related noise terms \cite{MainardiFLE}. These memory effects lead to the famous long-time tail in the velocity ACF of Brownian particles even in simple fluids \cite{AlderPRL,MainardiFLE,SiegleEPL11,GoychukACP12}. Experimental manifestations of such effects for Brownian particles were found quite recently \cite{HuangNatPhys,FranoschNature}. The GLE approach naturally provides a dynamical underpinning and justification of the mathematical model of fractional Brownian motion (fBm) by Kolmogorov \cite{Kolmogorov,KolmogorovTrans} and Mandelbrot and van Ness \cite{Mandelbrot68} within the FLE description upon neglecting the inertial effects. This dynamical origin and the consistency with thermodynamics and equilibrium statistical physics for undriven dynamics make this approach superior to many others in the field of anomalous diffusion \cite{GoychukACP12}. One of its special advantages is that it allows one to study nonlinear anomalous dynamics in external multistable potentials. For example, such bistable subdiffusive dynamics was studied in Ref. \cite{GoychukPRE09} with the prominent result that the residence time distributions (RTDs) in the potential wells are of the stretched exponential type. In fact, no genuine rate description is possible. This is due to the presence of slow fluctuations in the medium, such that \textit{relatively fast} escape events from one potential well to another can occur on the background of very sluggish, quasi-static fluctuations of the environment. Such fluctuations make the whole setup very different from that of classical rate theory. 
The latter assumes that the intrawell relaxation occurs much faster than the escape events. This basic assumption is fundamentally broken for viscoelastic subdiffusive escape, where the slow modes of the viscoelastic environment result in a time modulation of the escape rate formed by the \textit{relatively} faster relaxation modes. However, a characteristic time scale of transitions nevertheless exists. It is clearly seen in the distribution of the logarithmically transformed residence times, where the maximum of the distribution is well reproduced \cite{GoychukPRE09,GoychukACP12,GoychukPRE15} by a rate expression of the non-Markovian rate theory \cite{GroteHynes,HanggiRevModPhys}. It depends in the Arrhenius manner on the height of the potential barrier. Next, viscoelastic GLE subdiffusion in a washboard potential was shown to be asymptotically insensitive to the presence of the potential \cite{GoychukPRE09}. It gradually approaches the free-subdiffusion limit \cite{GoychukPRE09,GoychukACP12,GoychukFractDyn11} for any barrier height. This astounding feature is in sharp contrast with both intuition and well-known results on normal diffusion \cite{HanggiRevModPhys} and fractional Fokker-Planck (FFP) dynamics \cite{GoychukPRE06Rapid,HeinsaluPRE06} in washboard potentials. It was also demonstrated for diffusion and transport in other tilted periodic potentials, including ratchet potentials with broken space-inversion symmetry \cite{GoychukACP12,GoychukFractDyn11}. Within a quantum-mechanical setting, it has been proven exactly, however, for strictly sinusoidal potentials only \cite{WeissBook,ChenLebowitz}. In fact, it is not caused by quantum-mechanical effects at all, as one might possibly believe, and it is not restricted to sinusoidal potentials \cite{GoychukACP12}. 
In this respect, it is also important to mention that potential-free viscoelastic diffusion is ergodic: the ensemble and single-trajectory averages coincide \cite{GoychukPRE09}, in sharp contrast, e.g., with continuous time random walk (CTRW) semi-Markovian subdiffusion \cite{HePRL08,LubelskiPRL08,MetzlerPCCP}. However, imposing a periodic potential makes it transiently non-ergodic \cite{GoychukPRE09}. These earlier results are very important for understanding some of the key findings of this study. Another important approach to anomalous diffusion is based on normal diffusion in random potentials \cite{Bouchaud1990,BouchaudAnnPhys90,Hughes}. Such a description naturally emerges in inhomogeneous disordered materials, including the viscoelastic cytosol of living cells. This is also a very rich and versatile approach. For example, the model of exponentially distributed energy disorder with root-mean-square (rms) amplitude of fluctuations $\sigma$ leads in a mean-field approximation to CTRW subdiffusion with a power-law RTD, $\psi(\tau)\propto \tau^{-1-\alpha}$, $0<\alpha=k_BT/\sigma<1$, in local traps \cite{Hughes}. It features divergent mean residence times (MRTs) \cite{Scher75,Shlesinger74} and is (weakly) nonergodic \cite{Bouchaud92,Bel05,LubelskiPRL08,HePRL08,Barkai,Sokolov09}: the ensemble and trajectory averages are radically different. However, in random potentials presenting stationary Gaussian processes in space such diffusion is asymptotically normal for any decaying correlations in space \cite{GoychukPRL14}, i.e. $\alpha(t)\to 1$ for $t\to\infty$. Here, a prominent result by de Gennes, B\"assler, and Zwanzig holds for the renormalized normal diffusion coefficient, $D_{\rm ren}=D_0\exp[-\sigma^2/(k_BT)^2]$, where $D_0$ is the potential-free diffusion coefficient \cite{DeGennes75,BasslerPRL87,BasslerReview,ZwanzigPNAS}. 
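One way to make this renormalization plausible (a side check of ours, not taken from the cited works) is via the mean-field expression of Lifson-Jackson type, $D_{\rm ren}=D_0/\bigl(\langle e^{U/k_BT}\rangle\langle e^{-U/k_BT}\rangle\bigr)$, which for Gaussian $U$ with variance $\sigma^2$ reduces exactly to the de Gennes-B\"assler-Zwanzig result, since $\langle e^{\pm U/k_BT}\rangle=e^{\sigma^2/2(k_BT)^2}$:

```python
# Monte Carlo check (illustrative): for Gaussian disorder U with rms sigma, the
# Lifson-Jackson-type mean-field form D_ren = D0 / (<exp(U/kT)> * <exp(-U/kT)>)
# reduces to D0 * exp[-sigma^2/(kT)^2], because each Gaussian average equals
# exp(sigma^2 / (2 (kT)^2)). Parameter values are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
sigma, kT, D0 = 1.0, 0.5, 1.0
U = rng.normal(0.0, sigma, size=2_000_000)

D_ren_mc = D0 / (np.exp(U / kT).mean() * np.exp(-U / kT).mean())
D_ren_th = D0 * np.exp(-(sigma / kT) ** 2)
assert abs(D_ren_mc / D_ren_th - 1.0) < 0.05
```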
The same renormalization is valid for FFP dynamics in such potentials, $\langle \delta x^2(t)\rangle \propto D_{\rm ren}t^\alpha$, where $D_0$ must be treated as a fractional diffusion coefficient, see Appendix A. The corresponding temperature dependence is often measured in disordered materials \cite{HecksherNatPhys}, and the Gaussian model of energy disorder is physically well justified in many cases, e.g. for the diffusion of electrons and holes in organic photoconductors \cite{BasslerReview,DunlapPRL96}, colloidal particles in random laser fields \cite{EversReview,BewerungePRA,HanesPRE}, and regulatory proteins on DNA tracks \cite{GerlandPNAS,LassigReview,SlutskyPRE,BenichouPRL09,GoychukPRL14}. However, for a sufficiently strong disorder, $\sigma >2k_BT$, long subdiffusive transients occur on a mesoscale \cite{Romero98,KhouryPRL,SimonPRE13,GoychukPRL14} with a time-dependent subdiffusion exponent $\alpha(t)\propto \log(t)$ \cite{Goychuk2017}. For $\sigma > (4 - 5) k_BT$, this mesoscale subdiffusion can reach even the macroscale \cite{GoychukPRL14}, and $\alpha(t)$ can be nearly constant for a very long time \cite{GoychukPRL14,Goychuk2017}. Remarkably, in this regime it exhibits the same temperature dependence, $\alpha\propto k_BT/\sigma$, as in the case of exponential disorder, despite a very different physical mechanism \cite{Goychuk2017}. Such transient subdiffusion also exhibits a strong scatter in single-trajectory averages \cite{GoychukPRL14}, featuring a weak ergodicity breaking \cite{LubelskiPRL08,HePRL08,Barkai,MetzlerPCCP,Krapf18,Dean16}. Gaussian disorder characterized by a stationary random force $f(x)$ or a random drift coefficient \cite{Sinai82} is generally very different from stationary potential disorder. It is also very important in applications \cite{Bouchaud1990}. 
Here, the simplest model is given \cite{Bouchaud1990} by uncorrelated force disorder, $\langle f(x)f(x')\rangle \propto \delta(x-x')$, which leads to a logarithmically slow subdiffusion \cite{Sinai82,Bouchaud1990}, $\langle \delta x^2(t)\rangle \propto \log^a(t)$ with $a=4$ \cite{Bouchaud1990}. It is named Sinai diffusion. The corresponding Gaussian potential $U(x)$ is a non-stationary random process. It presents an unbounded Brownian motion (Wiener process) occurring in space rather than in time. If the potential presents an fBm in space, a generalized Sinai diffusion with $a\neq 4$ emerges \cite{OshaninPRL}. Astoundingly, a generalized Sinai diffusion also emerges transiently in stationary correlated Gaussian potentials for a sufficiently strong disorder, $\sigma > 5 k_BT$. This has been shown recently for four different models of disorder correlations in Ref. \cite{Goychuk2017}, where the genuine mechanism of the discussed transient subdiffusion has also been clarified using scaling-theory arguments. This paper is devoted to overdamped viscoelastic subdiffusion in random environments modeled by stationary random potentials with Gaussian statistics. The rest of the paper is organized as follows. In Sec. II, we formulate the model and the numerical approach. In Sec. III, we present the main results and their discussion. Finally, in Sec. IV, the main conclusions will be drawn.\\ \section{Model and numerical approach} We consider one-dimensional viscoelastic subdiffusion governed by the following overdamped subdiffusive FLE \cite{GoychukACP12,PRE13,GoychukPRE15} \begin{equation} \eta_0\frac{{\rm d} x}{{\rm d}t}+\eta_\alpha\frac{{\rm d}^\alpha x}{{\rm d}t^\alpha}=f(x)+ \xi_0(t)+\xi_{\alpha}(t), \label{GLE1} \end{equation} where $0<\alpha<1$. The particles move in a random potential $U(x)$ yielding a static random force $f(x)=-d U(x)/dx$. 
They are also subjected to thermal Gaussian forces $\xi_0(t)$ and $\xi_{\alpha}(t)$, as well as to a memoryless Stokes friction with the friction coefficient $\eta_0$ and a friction with memory (frequency-dependent friction), which is characterized by the fractional friction coefficient $\eta_{\alpha}$, as detailed in the Introduction. The thermal noises and the corresponding frictional parts are related by the FDT relations $\langle\xi_0(t)\xi_0(t')\rangle=2k_B T\eta_0\delta(t-t')$ and $\langle\xi_\alpha(t)\xi_\alpha(t')\rangle=k_B T\eta_\alpha | t-t'|^{-\alpha}/\Gamma(1-\alpha)$, respectively. This ensures a statistical equilibrium description in the absence of driving forces \cite{Kubo66}. Both noises, $\xi_\alpha(t)$ and $\xi_0(t)$, are singular stochastic processes with infinite variance. In the language of spectral bath functions, this description corresponds to $J(\omega)=\eta_0\omega+\eta_\alpha|\sin(\pi\alpha/2)|\omega^\alpha$, i.e. a mixture of Ohmic and sub-Ohmic thermal baths \cite{WeissBook}, in the singular limit $\omega_c\to\infty$. In the case of cytosol or complex polymeric liquids, the Stokes friction accounts for the water component of the solution, whereas the friction with memory is caused by various dissolved polymers forming, e.g., an actin meshwork. We neglect the inertial effects in the anomalous dynamics, which could also easily be included \cite{GoychukPRE09,GoychukACP12}, because we wish to reach the largest possible time scale accessible in numerical simulations. Moreover, very often such effects can indeed be neglected on physical grounds, see Appendix B for a justification. Hydrodynamic memory effects are also neglected, as usual. The solution of Eq. 
(\ref{GLE1}) for free subdiffusion, $f(x)=0$, yields \cite{PRE13} \begin{eqnarray}\label{exact} \langle\delta x^2(t)\rangle =2D_{0} t E_{1-\alpha,2}[-(t/\tau_0)^{1-\alpha}], \end{eqnarray} where $E_{a,b}(z):=\sum_{n=0}^{\infty}z^n/\Gamma(an+b)$ is the generalized Mittag-Leffler function \cite{Mathai17}, $D_0=k_BT/\eta_{0}$ is the normal diffusion coefficient, and $\tau_0=(\eta_0/\eta_{\alpha})^{1/(1-\alpha)}$ is a transient time constant. Initially, for $t\ll \tau_0$, diffusion is normal, $\langle\delta x^2(t)\rangle \approx 2D_{0} t$, and asymptotically, $t\gg \tau_0$, it is anomalously slow, $\langle\delta x^2(t)\rangle \sim 2D_{\alpha} t^\alpha/\Gamma(1+\alpha)$, with the anomalous diffusion coefficient $D_\alpha=k_BT/\eta_{\alpha}$. In the particular case of $\alpha=1/2$, Eq. (\ref{exact}) yields \begin{eqnarray}\label{exact2} \langle\delta x^2(t)\rangle &= & 2D_{1/2} \Bigg \{ 2\sqrt{\frac{t}{\pi}} \nonumber \\ &+ & \sqrt{\tau_0} \left [ e^{t/ \tau_0}{\rm erfc}\left (\sqrt{\frac{t}{\tau_0}}\right ) -1\right ] \Bigg \}, \end{eqnarray} which is used to test the numerical solutions below. \begin{figure}[h] \centering \includegraphics[height=6cm]{Fig1.eps} \caption{(Color online) Realizations of random potentials (energy in units of $\sigma$, and coordinate in units of $\lambda$) for exponential and power law correlations. The lattice grid size is $\Delta x=0.02$. In the case of power-law correlations, $\gamma=0.8$.} \label{Fig1} \end{figure} We consider stationary zero-mean random Gaussian potentials, which are completely characterized by the normalized stationary autocorrelation function, $g(x)=\langle U(x_0+x) U (x_0)\rangle /\langle U^2(x)\rangle$, and the rms of fluctuations, $\sigma=\langle U^2(x)\rangle^{1/2}$. Two models of correlations are considered: (i) exponential, $g(x)=\exp(-|x|/\lambda)$, and (ii) power-law decaying, $g(x)=1/(1+x^2/\lambda^2)^{\gamma/2}$. 
In the first case, $\lambda$ is the correlation length, $\lambda_{\rm corr}=\int_0^\infty g(x)dx$, while in the second case $\lambda_{\rm corr}\to \infty$ for $0<\gamma <1$, as e.g. for the diffusion of proteins on biological DNAs \cite{PengNature,AvrahamHavlin,Goychuk2017}. Then, $\lambda$ is just a scaling parameter, which is convenient for scaling distance in the numerics. Furthermore, time will be scaled in units of $\tau_r=(\lambda^2\eta_\alpha/\sigma)^{1/\alpha}$, $k_BT$ in units of $\sigma$, and the normal friction coefficient $\eta_0$ in units of $\eta_{\alpha}\tau_r^{1-\alpha}$. \subsection{Numerical approach \\ \vspace{0cm}} \subsubsection{Random potential generation} Random Gaussian potentials are generated on an evenly spaced lattice with a grid size $\Delta x\ll \lambda$, using a spectral method in accordance with the numerical algorithm detailed in Ref. \cite{SimonFNL}. It requires a periodic boundary condition imposed on $U(x)$ with a very large period $L$. The method is based on the fact that the power spectrum $S(k)$ of the random process $U(x)$ is obtained by the Fourier transform of its ACF (Wiener-Khinchine theorem). Moreover, $S(k)$ characterizes the absolute values of the amplitudes of the Fourier components $\hat U_k$ of $U(x)$ in wave number space, $\langle \hat U_k\hat U_{k'}^*\rangle=L S(k)\delta_{k,k'}$ \cite{Papoulis}. First, $S(k)$ is obtained from $g(x)$ by a fast Fourier transform (FFT). Next, the Fourier transform of a Gaussian process is a Gaussian process \cite{Papoulis}. This allows one to calculate the random wave amplitudes $\hat U_k$ from a set of independent Gaussian variables based on $S(k)$, using another FFT. Finally, the numerical inversion of the random wave components $\hat U_k$ to coordinate space is done with an inverse FFT. This yields the random realizations of $U(x)$. The method thus uses two forward and one inverse FFT. 
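A minimal sketch of this spectral procedure is given below for exponential correlations (illustrative only: the normalization conventions and parameter values are ours, not those of Ref. \cite{SimonFNL}); it periodizes the ACF on the ring of length $L$, draws Hermitian Gaussian amplitudes weighted by $\sqrt{S(k)}$, and inverts them:

```python
# Minimal sketch of the spectral generation of a stationary Gaussian random
# potential with a prescribed ACF (normalization conventions are ours; see the
# cited reference for the actual algorithm). Exponential correlations are used.
import numpy as np

def gaussian_potential(M, dx, lam, sigma, rng):
    x = np.arange(M) * dx
    # ACF periodized on the ring of length L = M*dx
    g = np.exp(-np.minimum(x, M * dx - x) / lam)
    S = np.clip(np.real(np.fft.fft(g)) * dx, 0.0, None)  # power spectrum >= 0
    W = np.fft.fft(rng.normal(size=M))                   # Hermitian white noise
    U = np.real(np.fft.ifft(np.sqrt(S) * W)) / np.sqrt(dx)
    return sigma * U

rng = np.random.default_rng(2)
M, dx, lam = 2**14, 0.02, 1.0
samples = [gaussian_potential(M, dx, lam, 1.0, rng) for _ in range(100)]
var = np.mean([U.var() for U in samples])
acf1 = np.mean([np.mean(U * np.roll(U, int(lam / dx))) for U in samples])
assert abs(var - 1.0) < 0.1            # rms close to the target sigma = 1
assert abs(acf1 - np.exp(-1.0)) < 0.1  # ACF at lag lambda close to e^{-1}
```

With this normalization the lattice covariance reproduces $\sigma^2 g(x)$ exactly in expectation, which is the consistency check described next.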
The quality of the algorithm is checked and controlled by calculating numerically the ACF of the generated $U(x)$ and comparing it with the original ACF. The agreement is impressively good. The readers are referred to Ref. \cite{SimonFNL} for further details. In our numerics we fix $\Delta x=0.02$ and $L=2^{19}\Delta x\approx 10^4$. Samples of random potentials with different correlations are presented in Fig. \ref{Fig1}. Notice the very rough character of the potential fluctuations for exponential correlations. There are many minima and maxima present within one correlation length, because this model of correlations is singular. As a matter of fact, the corresponding force fluctuation $\langle \delta f^2(x)\rangle^{1/2}=\sqrt{2/(\Delta x \lambda)}\sigma$ diverges in the limit $\Delta x\to 0$. This is a crucial point: $\Delta x$ must be kept finite, on physical grounds, in any such singular model \cite{Goychuk2017}. Otherwise, local \textit{static} forces acting on the particle can take very large values for a vanishing $\Delta x\to 0$. Any stochastic Langevin simulation in such a situation is doomed to fail if the time step $\Delta t$ is not chosen appropriately small: $\Delta t\to 0$ with $\Delta x\to 0$. The smaller $\Delta x$ is, the smaller $\Delta t$ must be in Langevin simulations of such singular models of disorder \cite{GoychukPRL14}. The model of a delta-correlated potential is physically a model with $U(x)$ values uncorrelated on the lattice sites. However, because of the continuity of the potential, it remains correlated between the lattice sites anyway \cite{Goychuk2017}. This is actually the case where the potential fluctuations have the wildest character and do not exhibit a \textit{local bias}, which otherwise is \textit{always present} because of correlations. In our numerics, we connect the lattice values of the potential by parabolic splines, i.e. the potential is locally parabolic and $f(x)$ is piecewise linear. 
Notice also that the power-law correlated potential is much smoother and does not display the discussed singularity in the limit $\Delta x \to 0$. \\ \subsubsection{Approximation of the memory kernel} Our numerical approach to integrating the FLE dynamics is based on approximating the power-law memory kernel by a sum of exponentials, \begin{eqnarray}\label{Prony} \eta(t)=\sum_{i=1}^N k_i\exp(-\nu_i t), \end{eqnarray} i.e. using a Prony series expansion \cite{GoychukPRE09,McKinley}. Eq. (\ref{Prony}) is a particular case of the more general Prony series \cite{Prony,Hauer,Park99,Schapery99}, $s(t)=\sum_{i=1}^N k_i\exp(-\nu_i t)\cos(\omega_i t+\delta_i)$, used to approximate an empirical signal $s(t)$ by $N$ decaying wave forms with decay rates $\nu_i$, frequencies $\omega_i$, and phase shifts $\delta_i$. It is a further generalization of Fourier series and was originally introduced by Prony in 1795 \cite{Prony}. Expansions of viscoelastic memory kernels like the one in Eq. (\ref{Prony}) naturally emerge in the theory of polymer dynamics and polymeric melts \cite{Doi}. For example, for the Rouse model of a polymer consisting of $N$ monomers \cite{Doi}, $\nu_i=i^p\nu_{l}$ with $p=2$ and $k_i=const$, in terms of some smallest $\nu_{l}$ in the hierarchy of relaxation rates $\nu_i$. This yields \cite{McKinley} $\eta(t)\propto 1/t^{1/p}$, e.g. $\alpha=1/2$ for $p=2$, in the range $\tau_l\ll t \ll \tau_{h}$, with $\tau_{h}=1/\nu_{l}$ and $\tau_{l}=1/(N^p\nu_{l})=1/\nu_0$. Notice that it is $\nu_0=N^p\nu_{l}$ which plays a fundamental role, being related to the overdamped dynamics of one monomer in the Rouse chain \cite{Doi}. It determines the lower time cutoff of the power-law dependence $\eta(t)\propto t^{-\alpha}$. Accordingly, $\nu_{l}=\nu_0/N^p$. The larger $N$ is, the larger is the upper time cutoff $\tau_h=N^p/\nu_0$, whereas $\tau_l$ remains unchanged. Notice that both time cutoffs naturally emerge in the dynamics of polymeric melts. They always exist. 
The polymeric scaling of the relaxation rates $\nu_i$ is not unique. Another way is to choose a fractal scaling, $\nu_i=\nu_0/b^{i-1}$, with the spring constants $k_i =C_\alpha(b)\nu_i^\alpha/\Gamma(1-\alpha)\propto \nu_i^\alpha$, where $C_\alpha(b)$ is a constant depending on $\alpha$ and $b$ \cite{PalmerPRL85,Hughes,GoychukPRE09,GoychukACP12}. It is used e.g. in a phenomenological temporary network model of polymeric melts \cite{Larson}. Already for a rather crude scaling with $b=10$, the accuracy of this approximation between the two memory cutoffs, $\tau_l=1/\nu_0$ and $\tau_h=\tau_l b^{N-1}$, reaches several percent for $\alpha=0.5$ \cite{GoychukPRE09}. The great advantage of the fractal scaling over the polymeric one is that it requires a much smaller number $N$ of viscoelastic modes in the memory kernel approximation. Indeed, to cover the same range $\tau_h/\tau_l$ of power-law scaling within the polymeric scaling as within the fractal scaling with $N$ terms, one needs \begin{eqnarray} M=b^{(N-1)/p}\; \end{eqnarray} terms in (\ref{Prony}). For example, for $\alpha=0.5$ and $p=2$ this would give $M=10^5$ (!) instead of $N=11$ within the fractal scaling with $b=10$, or $N=35$ with $b=2$. This clearly establishes the superiority of the fractal scaling in numerics \cite{GoychukACP12}. It allows for a numerically very efficient approach to integrating the FLE \cite{GoychukPRE09,GoychukACP12}. Notice that $\nu_0$ can be chosen somewhat smaller (to avoid numerical instability) than the inverse time step $1/\Delta t$ of the numerical simulation, and even $N\sim 10 - 20$ is typically sufficient in numerical simulations with $b=10$. For the scaling with $b=2$, the accuracy of the memory kernel approximation improves to $0.01\%$ \cite{MMNP13}. Then, however, one should also increase $N$ accordingly, which would impose an extra computational burden. An accuracy of several percent is normally sufficient. 
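The fractal scaling can be illustrated directly (a sketch of ours; the choice $C_\alpha(b)=\ln b/\Gamma(\alpha)$, which evaluates to $\approx 1.3$ for $\alpha=0.5$, $b=10$, is our assumption consistent with the value $C_{0.5}(10)=1.3$ quoted below):

```python
# Illustrative check that the fractal scaling nu_i = nu_0/b^(i-1) with
# k_i = C_alpha(b) * nu_i^alpha / Gamma(1-alpha) approximates the power-law
# kernel eta_alpha * t^(-alpha)/Gamma(1-alpha) (eta_alpha = 1 here) between
# the cutoffs tau_l = 1/nu_0 and tau_h = b^(N-1)/nu_0 to within a few percent.
import numpy as np
from scipy.special import gamma

alpha, b, nu0, N = 0.5, 10.0, 1e3, 12
C = np.log(b) / gamma(alpha)            # ~1.30 for alpha = 0.5, b = 10
nu = nu0 / b ** np.arange(N)            # nu_i = nu_0 / b^(i-1)
k = C * nu ** alpha / gamma(1 - alpha)

t = np.logspace(-1, 5, 200)             # well inside [1/nu0, b^(N-1)/nu0]
prony = (k[None, :] * np.exp(-np.outer(t, nu))).sum(axis=1)
power = t ** (-alpha) / gamma(1 - alpha)
assert np.max(np.abs(prony / power - 1.0)) < 0.1   # few-percent accuracy
```

The residual log-periodic oscillation of the ratio around unity is the "several percent" accuracy quoted above.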
\subsubsection{Markovian embedding} Next, we introduce a set of auxiliary variables $x_i$ \cite{GoychukPRE09} corresponding to the viscoelastic modes of the environment. Physically, they can be interpreted as the coordinates of auxiliary Brownian quasi-particles modeling viscoelastic Maxwellian modes of the environment, elastically coupled with spring constants $k_i$ to the central Brownian particle \cite{GoychukACP12}. The fractional Gaussian noise $\xi_{\alpha}(t)$ in this approach is approximated by a sum of Ornstein-Uhlenbeck processes with autocorrelation times $1/\nu_i$. Very important, and even crucial in applications, is that this Maxwell-Langevin approach to viscoelastic subdiffusive dynamics allows for a straightforward Markovian embedding \cite{GoychukACP12}: \begin{subequations} \begin{eqnarray}\label{embedding1} &&\eta_0\dot{x}=f(x)-\sum_{i=1}^N k_i(x-x_i)+\sqrt{2k_BT\eta_0}\zeta_0(t), \\ &&\eta_i\dot{x}_i=k_i(x-x_i)+\sqrt{2k_BT\eta_i}\zeta_i(t), \label{embedding2} \end{eqnarray} \end{subequations} where $\zeta_i(t)$ are $N+1$ uncorrelated white Gaussian noises, $\langle\zeta_i(t)\zeta_j(t')\rangle=\delta_{ij}\delta(t-t')$, and $\eta_i=k_i/\nu_i$ are the frictional coefficients of the auxiliary Brownian particles. This Markovian dynamics in $N+1$ dimensions can be propagated using well-established algorithms like the stochastic Euler or stochastic Heun methods \cite{GardBook} without principal difficulties, with well-controlled numerical accuracy. By excluding the auxiliary variables $x_i$ in Eqs. (\ref{embedding1}), (\ref{embedding2}), it is easy to show that the resulting GLE for the coordinate $x$ indeed has the memory kernel given by the sum of exponentials (\ref{Prony}) and a correlated noise term given by the sum of the corresponding Ornstein-Uhlenbeck processes. For this, one first rewrites (\ref{embedding2}) in terms of the viscoelastic force $u_i=k_i(x_i-x)$ and formally solves the resulting equation for $u_i$. 
This yields \begin{eqnarray}\label{u_exact} u_i(t)=-\int_0^t k_i e^{-\nu_i(t-t')}\dot x(t')dt' + \chi_i(t) \end{eqnarray} with \begin{eqnarray}\label{chi_exact} \chi_i(t)&=&u_i(0)e^{-\nu_i t}\\&+&\sqrt{2k_BTk_i\nu_i}\int_0^t e^{-\nu_i(t-t')}\zeta_i(t')dt'. \nonumber \end{eqnarray} Each noise component $\chi_i(t)$ depends on $u_i(0)$, and all the noise components are mutually independent. Indeed, considering $u_i(0)$ as independent random Gaussian variables with $\langle u_i(0)\rangle=0$ and $\langle u_i^2(0)\rangle=k_i k_BT$, one can show that the $\chi_i(t)$ are wide-sense stationary Gaussian stochastic processes with $\langle \chi_i(t)\chi_j(t')\rangle =k_BTk_i\delta_{ij}e^{-\nu_i|t-t'|}$. Substituting (\ref{u_exact}) in (\ref{embedding1}) establishes the stated equivalence \cite{GoychukPRE09,GoychukACP12}, provided that the initial $x_i(0)$ in (\ref{embedding2}) are random Gaussian variables such that $\langle x_i(0)\rangle =x(0)$, $\langle [x_i(0)-x(0)]^2\rangle=k_BT/k_i$. The considered Markovian embedding is exact when the memory kernel is exactly the sum of exponentials (\ref{Prony}). The whole idea of Markovian embedding is very natural and sound in view of the dynamical origin of GLE dynamics: instead of considering a huge number of thermal bath oscillators, one replaces them by a handful of overdamped stochastic Brownian oscillators, with a nice physical interpretation in terms of a generalized Maxwell-Langevin theory of viscoelasticity \cite{GoychukACP12}. The efficiency of the resulting numerical approach has a proven record \cite{GoychukPRE09,ChemPhys10,GoychukACP12,NJP12,MMNP13,PLoSONE14,PCCP14,PhysBio15}. Upon a modification, this method can also be used for the Markovian embedding of superdiffusive FLE dynamics \cite{SieglePRE10,SiegleEPL11}. Clearly, it can also be considered as an independent approach to anomalous dynamics without any relation to the FLE.
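The equivalence between the embedding and the exponential memory kernel can be checked deterministically: driving the noiseless auxiliary equation with a prescribed ramp $x(t)=vt$ and $u_i(0)=0$, the force $u_i=k_i(x_i-x)$ must follow the closed form of the convolution in Eq. (\ref{u_exact}), $u_i(t)=-(k_iv/\nu_i)(1-e^{-\nu_i t})$. A minimal sketch with illustrative parameters (not those of the simulations):

```python
import math

# Noiseless auxiliary equation eta_i dx_i/dt = k_i (x - x_i), driven by x(t) = v*t,
# with x_i(0) = x(0) = 0 so that u_i(0) = 0.  Eliminating x_i must reproduce
#   u_i(t) = -int_0^t k_i exp(-nu_i (t-t')) v dt' = -(k_i v / nu_i)(1 - exp(-nu_i t)).
k, nu, v, dt = 2.0, 0.5, 1.0, 1e-4   # illustrative values
eta = k / nu
xi, t = 0.0, 0.0
for _ in range(100000):              # explicit Euler up to t = 10
    x = v * t
    xi += k * (x - xi) * dt / eta
    t += dt
u_numeric = k * (xi - v * t)
u_exact = -(k * v / nu) * (1.0 - math.exp(-nu * t))
print(u_numeric, u_exact)            # the two should agree closely
```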
In any particular case, one has to choose the embedding parameters appropriately, considering a trade-off between the numerical accuracy and the feasibility of simulations on the required time scale. The accuracy is controlled by comparison with exact results like those in Eqs. (\ref{exact}), (\ref{exact2}). In our simulations below we use, for $\alpha=0.5$: $b=10$, $\eta_0=0.1$, $\nu_0=10^3$, $N=12$, $C_{0.5}(10)=1.3$, which warrants $3 - 5\%$ accuracy in numerics. The rms of the potential, $\sigma$, is fixed in the simulations, whereas temperature varies in units of $\sigma/k_B$. The integration time step was chosen as $\Delta t=5\times 10^{-5}$, and the maximal time was $t_{\rm max}=2\times 10^5$. The stochastic Heun method (see Appendix C) was used with double precision on high-performance Tesla K20 graphical processors. In the ensemble trajectory simulations, $n=10^4$ particles were initially uniformly distributed within the $[0,L]$ spatial interval, with 10 different potential realizations in each case ($10^5$ particles in the ensemble averaging). Random potentials were generated as described above, in accordance with \cite{SimonFNL}, and simulations were run with periodic boundary conditions. It took typically about 5 days of computational time for each ensemble-averaged curve presented below. The results were first tested against the exact analytical result in Eq. (\ref{exact2}) in the absence of a random potential. The numerical results coincide in this case with the analytical result within the width of the plotted curves, as in Fig. 2a of Ref. \cite{GoychukPRE09} and, especially, Figs. 5, 6 of Ref. \cite{GoychukACP12}, Fig. 2 of Ref. \cite{MMNP13}, and the inset of Fig. 1 in Ref. \cite{PRE13}.
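A bare-bones version of such a propagation can be sketched as follows (illustrative Python with the stochastic Euler scheme and arbitrary parameters, not the Heun GPU code used for the production runs). As a cheap deterministic sanity check, at $T=0$ and with a harmonic force $f(x)=-x$ the whole chain must relax toward the origin:

```python
import math, random

def step_euler(x, xs, dt, T, eta0, etas, ks, f, rng):
    """One stochastic-Euler step of the Markovian embedding (k_B = 1):
    eta0 dx   = [f(x) - sum_i k_i (x - x_i)] dt + sqrt(2 T eta0) dW_0,
    eta_i dx_i = k_i (x - x_i) dt + sqrt(2 T eta_i) dW_i."""
    force = f(x) - sum(k * (x - xi) for k, xi in zip(ks, xs))
    xn = x + force * dt / eta0 + math.sqrt(2 * T * dt / eta0) * rng.gauss(0, 1)
    xsn = [xi + k * (x - xi) * dt / ei + math.sqrt(2 * T * dt / ei) * rng.gauss(0, 1)
           for xi, k, ei in zip(xs, ks, etas)]
    return xn, xsn

# T = 0 sanity check: with f(x) = -x the chain must relax deterministically toward 0
rng = random.Random(1)
N, b, nu0, alpha = 8, 10.0, 10.0, 0.5            # illustrative embedding parameters
nus = [nu0 / b**i for i in range(N)]
ks = [math.log(b) / math.gamma(alpha) * nu**alpha / math.gamma(1 - alpha) for nu in nus]
etas = [k / nu for k, nu in zip(ks, nus)]
x, xs = 1.0, [1.0] * N                           # x_i(0) = x(0), per the embedding
for _ in range(20000):                           # integrate to t = 20 with dt = 1e-3
    x, xs = step_euler(x, xs, 1e-3, 0.0, 1.0, etas, ks, lambda y: -y, rng)
print(abs(x))                                    # far below the initial value 1.0
```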
\section{Results and Discussion} \subsection{Ensemble averaging} \begin{figure}[h] \centering \resizebox{0.8\columnwidth}{!}{\includegraphics{Fig2a.eps}}\\[1cm] \resizebox{0.8\columnwidth}{!}{\includegraphics{Fig2b.eps}} \caption{(Color online) Ensemble-averaged mean squared displacement versus time in units of $\tau_r=(\lambda^2\eta_\alpha/\sigma)^{1/\alpha}$ for different values of $k_BT$ in units of the disorder strength $\sigma$ for (a) exponential decay of correlations and (b) power-law decay with $\gamma=0.8$. The fit of the numerical results (full black lines) is performed for $T=0.2$ with the expression (\ref{central}) and for $T=0.1$ using (\ref{Sinai}). The fitting parameters are shown in the plot. Dashed red lines depict exact results for free subdiffusion in accordance with Eq. (\ref{exact2}): $\alpha=0.5$, $\eta_0=0.1$ and $\tau_0=0.01$. } \label{Fig2} \end{figure} \begin{figure}[h] \centering \resizebox{0.8\columnwidth}{!}{\includegraphics{Fig3a.eps}}\\[0.8cm] \resizebox{0.8\columnwidth}{!}{\includegraphics{Fig3b.eps}} \caption{(Color online) Time-dependent power-law exponent $\alpha(t)$ for an assumed subdiffusive law $\langle\delta x^2(t)\rangle\propto t^{\alpha(t)}$ obtained as the logarithmic derivative of the traces in Fig.~\ref{Fig2}, for different temperatures in the case of (a) exponential correlations, (b) power-law correlations with $\gamma=0.8$. } \label{Fig3} \end{figure} We first concentrate on the ensemble averaging. The results are shown in Fig. \ref{Fig2} for the exponentially decaying correlations in part (a) and for the power-law decaying correlations with $\gamma=0.8$ in part (b), for several different values of temperature, starting from $T=1$ and ending with $T=0.1$. The first striking feature for both types of correlations is that the random potential practically does not matter for $T=1$, and the results do not differ from the exact result of potential-free subdiffusion depicted by the dashed red line in accordance with Eq.
(\ref{exact2}). This is not a trivial feature at all, even if it could be expected from the earlier results on viscoelastic subdiffusion in periodic potentials \cite{GoychukPRE09,GoychukACP12}. Intuition says that a combination of the slowness caused by viscoelastic effects with the sluggishness caused by the random potential should result in an ultraslow behavior. This intuition is wrong. Very differently from memoryless diffusion in stationary Gaussian potentials, which is asymptotically normal and exponentially suppressed by disorder, viscoelastic subdiffusion is not suppressed asymptotically by disorder at all, on the ensemble level. This result is very surprising indeed, because another fractional dynamics in such random potentials, namely fractional Fokker-Planck dynamics, predicts a very different result (see Appendix A), $\langle \delta x^2(t)\rangle=2D_\alpha \exp[-\sigma^2/(k_BT)^2] t^\alpha/\Gamma(1+\alpha)$, i.e. the renormalization factor is the same as for normal diffusion. Especially in the case of exponential correlations, this result for viscoelastic fractional subdiffusion is very surprising even for $T=1$, as soon as one realizes that in this case the amplitude of potential fluctuations can largely exceed $k_BT$ well within a distance of the correlation length, see Fig. \ref{Fig1}. However, with lowering temperature the influence of the potential becomes visible already for $T=0.5$, although the potential-free asymptotics is almost reached at the end point of the simulations in Fig. \ref{Fig2}, and the influence is really small, barely detectable. For $T=0.1$, it becomes very distinct, and the corresponding transient regime lasts indeed very long: not the slightest signature of an asymptotic regime is present in Fig. \ref{Fig2} for $T=0.1$. The corresponding asymptotics is simply impossible to reach numerically.
Instead of a power-law subdiffusion, one clearly detects a nominally ultra-slow logarithmic diffusion of the Sinai type: \begin{eqnarray} \label{Sinai} \langle \delta x^2(t) \rangle \approx x_{\rm in}^2 \left [(k_B T/\sigma_{\rm eff})\ln(t/t_0) \right ]^4 \end{eqnarray} with three fitting parameters: $\sigma_{\rm eff}$, $ x_{\rm in}^2 $, and $t_0$, two of which can be combined into a single one, $x_{\rm in}^2/\sigma_{\rm eff}^4$, in this case. It describes the numerics nicely over about $7$ time decades for both types of correlations. For a larger $T=0.2$, the numerics are fitted well by a more complex, yet still three-parameter dependence \cite{Goychuk2017} \begin{eqnarray} \label{central} \langle \delta x^2(t) \rangle = x_{\rm in} ^2 \left\{e^{[(k_BT/\sigma_{\rm eff})\ln (t/t_0)]^2}-1 \right\}^2\;. \end{eqnarray} It follows from a scaling consideration assuming that a typical time to travel a certain distance $x$ is determined in an Arrhenius manner by the largest potential barrier met on the pathway, and from the fact that this largest barrier scales as \cite{ZhangPRL,HanesJPCM,Goychuk2017} \begin{eqnarray}\label{second} \delta U_{\rm max}\sim 2\sigma \sqrt{2 \ln(x/x_{\rm in})} \end{eqnarray} with the distance. Indeed, let us estimate a typical time $t$ it takes for a particle to travel the distance $x$ starting at $x_0$. It is reasonable to assume that on the intermediate time scales, where the presence of the potential is very essential, this time is determined, as in the case of normal diffusion, by the largest barrier met on the particle's way, $t=t_0\exp[|\delta U_{\rm max}(x)|/(k_BT)]$, where $t_0$ is a prefactor. From this scaling ansatz, upon taking (\ref{second}) into account, we obtain the estimate in (\ref{central}) with $\sigma_{\rm eff}=2\sqrt{2}\sigma$. Given the very crude character of this estimate, $\sigma_{\rm eff}$ should be considered as a fitting parameter.
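For completeness, the inversion behind this estimate can be spelled out explicitly (identifying the traveled distance with $x-x_{\rm in}$ is part of the rough scaling argument): \begin{eqnarray} \ln(t/t_0)=\frac{|\delta U_{\rm max}(x)|}{k_BT}=\frac{2\sqrt{2}\,\sigma}{k_BT}\sqrt{\ln(x/x_{\rm in})} \;\Rightarrow\; x-x_{\rm in}=x_{\rm in}\left\{e^{[(k_BT/\sigma_{\rm eff})\ln (t/t_0)]^2}-1\right\}, \nonumber \end{eqnarray} with $\sigma_{\rm eff}=2\sqrt{2}\sigma$, so that $\langle \delta x^2(t)\rangle \sim (x-x_{\rm in})^2$ reproduces Eq. (\ref{central}). Expanding for small $y=(k_BT/\sigma_{\rm eff})\ln(t/t_0)$, one has $(e^{y^2}-1)^2=y^4[1+y^2+O(y^4)]$, whose lowest-order term yields the Sinai-type law (\ref{Sinai}).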
As for memoryless diffusion, this result also holds for viscoelastic subdiffusion, because a typical mean time to overcome a potential barrier does scale in an Arrhenius manner with its height \cite{GoychukPRE09,GoychukACP12}. Precisely this kind of behavior dominates in the transient regime, where the influence of the potential on viscoelastic subdiffusion is very essential. Sinai diffusion in Eq. (\ref{Sinai}) simply follows from Eq. (\ref{central}) as the lowest-order expansion in $k_BT/\sigma_{\rm eff}$. The fitted values of $\sigma_{\rm eff}$ actually agree fairly well with the theoretical value $\sigma_{\rm eff}=2\sqrt{2}\sigma\approx 2.83\sigma$ \cite{Goychuk2017}. The agreement of the fitted values with the theoretical value $x_{\rm in}=\pi\lambda /\sqrt{\gamma}\approx 3.51 \lambda$ for $\gamma=0.8$ \cite{Goychuk2017} is also rather good for power-law correlated potentials, especially for $T=0.2$. For the singular model with exponential correlations, which predicts $x_{\rm in}=\pi\sqrt{\lambda \Delta x/2}\approx 0.315 \sqrt{\lambda}$ for $\Delta x=0.02$, the agreement worsens. Nevertheless, the scaling argumentation of Refs. \cite{Bouchaud1990,Goychuk2017} works surprisingly well, given its very rough character, also for viscoelastic subdiffusion in random potentials. Notice that in power-law correlated potentials, Sinai-like subdiffusion is essentially faster in absolute terms than in exponentially correlated potentials. The reason becomes immediately clear from Fig. \ref{Fig1}: power-law correlated disorder is much smoother, and the maximal barrier met over the same distance is essentially smaller than in the case of exponential correlations. Furthermore, an interesting transient effect on the ensemble level is that viscoelastic subdiffusion in a random potential can even be faster than potential-free subdiffusion, see the case of power-law correlations and $T=0.1$ in Fig. \ref{Fig2}, b.
This is because of an alternating local bias felt by each separate particle \cite{Goychuk2017}. Such a local bias and the resulting random drift are responsible, e.g., for the Golosov phenomenon in the case of genuine Sinai diffusion \cite{Golosov84,Bouchaud1990}. The Golosov phenomenon describes the at first glance paradoxical effect that, in an environment with random bias, two particles starting nearby do not diffuse strongly apart, being subjected to one and the same local bias of the environment \cite{Golosov84,Bouchaud1990}. The phenomenon observed here is different. However, it has precisely the same physical origin: a strong local bias which randomly alternates its direction. Single-trajectory averages, see below, do not show such a paradoxical feature. There, subdiffusion is always suppressed by the random potential. \begin{figure*}[] \centering \includegraphics[width=6cm]{Fig4a.eps} \hspace*{0.8cm} \includegraphics[width=6cm]{Fig4b.eps} \\ [0.8cm] \includegraphics[width=6cm]{Fig4c.eps} \hspace*{0.8cm} \includegraphics[width=6cm]{Fig4d.eps} \caption{(Color online) Single-trajectory, time-averaged mean squared displacement for two values of temperature, $T=0.5$ and $T=0.2$, and two types of correlations, shown in each panel. The trajectory time length was ${\cal T}_{\rm w}= 10^5$. 20 trajectory averages were made for particles starting from different locations; they are depicted with solid lines. The results of the ensemble average, the ensemble-averaged time average (EATA), and potential-free subdiffusion are also depicted for comparison. Insets show the distribution of the scaled subdiffusion coefficient $D$ and the power-law exponent $\alpha$ for single-trajectory fits with the dependence $Dt^\alpha$. The green cross therein corresponds to the averaged values of $D$ and $\alpha$, while the red star to the ensemble-averaged result. The result of free subdiffusion is depicted for comparison as a blue diamond in each inset.
Remarkably, the ensemble average is only slightly suppressed by the random potential even for $T=0.2$ in the case of power-law correlations at the end point of the simulations, see panel d, whereas transiently it is even faster. However, single-trajectory averages are suppressed essentially more strongly. Generally, the scatter in single-trajectory averages is visibly stronger for exponential correlations. } \label{Fig4} \end{figure*} An alternative way to Eq. (\ref{central}) of representing the results is to introduce a time-dependent exponent $\alpha(t)$ of power-law subdiffusion \begin{eqnarray} \label{intermed} \langle \delta x^2(t)\rangle\approx x_{\rm in}^2 [t/t_0]^{\alpha(t)}\;. \end{eqnarray} Its behavior is depicted in Fig. \ref{Fig3}. For Sinai-like diffusion at $T=0.1$ and $T=0.2$, $\alpha(t)$ declines in time. It should reach a minimum \cite{Goychuk2017} and then grow logarithmically slowly, $\alpha(t)\propto \log(t)$, as Eq. (\ref{central}) predicts \cite{Goychuk2017}, for a possibly very long period of intermediate times, as long as the assumptions which lead to (\ref{central}) remain valid. Indeed, in the course of time, when the unity becomes negligible in Eq. (\ref{central}), it reduces to Eq. (\ref{intermed}) with \begin{eqnarray}\label{intermed2} \alpha(t)=2(k_B T/\sigma_{\rm eff})^2\ln(t/t_0). \end{eqnarray} For $T=0.2$ and exponential correlations, the minimum is indeed reached at $\alpha_{\rm min }\approx 0.32$ in Fig. \ref{Fig3}, a, which is approximately the same value as for normal diffusion in this potential \cite{Goychuk2017}. However, it is still not reached in Fig. \ref{Fig3}, b, for $T=0.2$ in the case of power-law correlations. Furthermore, for $T=0.1$, it is not reached for either type of correlations. Unfortunately, the regime of logarithmically growing $\alpha(t)$ is numerically not achievable in our simulations, even for exponential correlations and $T=0.2$. To find it, one should probably propagate the dynamics by a factor of 100 longer.
This is clearly not computationally feasible at present. This behavior contrasts with one of the major features of memoryless diffusion in the studied random potentials \cite{Goychuk2017}, where such a long-lasting intermediate regime was clearly detected for both exponential and power-law potential correlations. The reason is that in the case of viscoelastic subdiffusion the various transient regimes last much longer; computationally, however, this is much more demanding, and the corresponding time scale is difficult to reach. \subsection{Single-trajectory averages} Single-trajectory averages \cite{HePRL08,LubelskiPRL08,Barkai,MetzlerPCCP} \begin{equation}\label{single} \overline{\delta x^2(t)}^{{\cal T}_w}=\frac{1}{{\cal T}_w-t}\int_0^{{\cal T}_w-t}\Big[ \delta x(t|t')\Big]^2dt' \end{equation} of the mean-squared displacement $\delta x(t|t')=x(t+t')-x(t')$ over the maximal time window ${\cal T}_w$ are of great interest, especially for experimentalists, who often simply do not have the possibility of dealing with macroscopically many particles. To avoid a trivial statistical scatter in Eq. (\ref{single}), the maximal time $t$ should be much smaller than ${\cal T}_w$. In the numerical results depicted in Fig. \ref{Fig4} it is just 1\%. Remarkably, even for $T=0.5$ the scatter in single-trajectory averages is strong. It is clearly more pronounced in the case of exponential correlations, for the reason which is already obvious. Interestingly, fitting the single-trajectory averages as $\overline{\delta x^2(t)}^{{\cal T}_w}=Dt^\alpha$, with a trajectory-specific scaled anomalous diffusion coefficient $D$ and corresponding power-law exponent $\alpha$, gives broadly scattered values, see the insets in Fig.~\ref{Fig4}. The mean value $\bar \alpha$ of this trajectory-specific $\alpha$ is depicted by a green cross in the corresponding insets.
For power-law correlations (part \textbf{b}), $\bar \alpha \approx 0.50$ is the same as for the ensemble-averaged curve (red star) and free subdiffusion (blue diamond), while for the exponential correlations in part \textbf{a} it is slightly different, $\bar \alpha \approx 0.56$. This is an interesting feature. For example, in Ref. \cite{Golding} single-trajectory averages for subdiffusing mRNA molecules are scattered around the same $\alpha=0.70$ (different from the $\alpha=0.5$ fixed in our numerics here). Moreover, in Ref. \cite{MagdziarzPRL} it has been shown that the data of Ref. \cite{Golding} are more consistent with fractional Brownian motion than with CTRW subdiffusion. Indeed, we see in the inset of our Fig. \ref{Fig4}, a that $D$ is scattered over a range of about 40 between the minimal and maximal values, whereas in the experiment the scattering range is about 100. Taking into account that the size of mRNA is also distributed \cite{Golding}, together with the results of Ref. \cite{MagdziarzPRL}, it indeed looks likely that viscoelastic subdiffusion in a random environment is better suited than CTRW to explain subdiffusion in bacterial cytoplasm. For a stronger disorder of $\sigma=5\;{k_BT}$ in Fig. \ref{Fig4}, c, d, $\alpha$ is scattered more strongly and its mean value is smaller, $\bar \alpha=0.33$ in part \textbf{c} and $\bar \alpha=0.303$ in part \textbf{d}, which correlate with the corresponding fitting values of the ensemble-averaged subdiffusion, $0.325$ and $0.36$, respectively, see also the corresponding end points in Fig. \ref{Fig3}. These values are not related to the $\alpha$ of free subdiffusion and have a very different origin, the same as for normal diffusion in such potentials \cite{Goychuk2017}, see also above. The scatter of $D$ also becomes more pronounced.
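In discrete form, the time average in Eq. (\ref{single}) over a sampled trajectory reads as in the following sketch (illustrative Python, not the analysis code used here); on a ballistic test trajectory $x=vt$ it must return $(vt)^2$ exactly, which provides a simple correctness check:

```python
def ta_msd(traj, dt, t_max_frac=0.01):
    """Single-trajectory time-averaged MSD, the discrete analogue of Eq. (single),
    for a trajectory sampled at spacing dt; lags are restricted to
    t <= t_max_frac * T_w to avoid the trivial statistical scatter at large lags."""
    n = len(traj)
    n_lag = max(1, int(t_max_frac * n))
    out = []
    for lag in range(1, n_lag + 1):
        s = sum((traj[j + lag] - traj[j]) ** 2 for j in range(n - lag))
        out.append(s / (n - lag))
    return out                       # out[k-1] ~ TA-MSD at lag time k*dt

# sanity check on a ballistic trajectory x = v*t: TA-MSD(t) = (v*t)^2 exactly
v, dt = 2.0, 0.1
traj = [v * dt * i for i in range(1000)]
msd = ta_msd(traj, dt)
print(msd[0], (v * dt) ** 2)         # both equal (v*dt)^2 up to float rounding
```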
Notice also that while the ensemble average of single-trajectory averages (EATA) in Fig. \ref{Fig4} gradually converges to the ensemble-averaged result in the case $T=0.5$, some of the single-trajectory averages can still look very different. In this respect, one should mention that many experimental data on subdiffusion in living cells seem to clearly point to a viscoelastic mechanism of this subdiffusion upon the use of several strict criteria \cite{Robert,SzymanskiPRL,WeissPRE13,MagdziarzPRL}. However, other researchers doubt it, because single-trajectory averages reveal essential non-ergodic features \cite{WeigelPNAS,TabeiPNAS}. A tentative resolution of this paradox is that the discussed biologically relevant anomalous diffusion is viscoelastic subdiffusion in a random, inhomogeneous and fluctuating environment. This seems almost obvious on physical grounds. Differently from Ref. \cite{TabeiPNAS}, we model this fact by imposing a random potential on viscoelastic subdiffusion, rather than by subordinating physical time to a random clock of CTRW. Indeed, a typical mesh size of the random actin meshwork in eukaryotic cytosol and model polymeric fluids is $0.1 - 1$ $\mu$m, depending on the actin concentration \cite{WongPRL}. Let us take it to be $\lambda \approx 0.308$ $\mu$m and associate it with the correlation length of the random potential. Furthermore, let us consider diffusion of globular proteins of radius $R=2.5$ nm (a typical value) in such a system. The actin meshwork is charged, and globular proteins are also typically charged \cite{WongPollack10,Grosberg02,Messina09}. This will cause a screened (by mobile ions) electrostatic interaction. Its strength can be variable, depending on the mesh size, the screening length, and the size of the particle. The whole problem is highly nontrivial and, given the complexity of electrostatic interactions in soft matter \cite{WongPollack10,Grosberg02,Messina09}, it does not seem to have even been properly approached at the moment.
Nevertheless, given a typical strength of electrostatic interactions in soft matter, it is not unreasonable to take $\sigma=(2 - 5)\;k_BT$ as a first reasonable guess in our estimate. Indeed, the distribution of binding energies of regulatory proteins to DNA tracks also has the same typical range \cite{LassigReview,GerlandPNAS,BenichouPRL09}. The subdiffusion coefficient of a particle of radius 2.5 nm in cytosol should be about the same as for a gold nanoparticle of the same radius in HeLa cells in Ref. \cite{Guigas}. It is estimated to be $D_\alpha\approx 0.644 \;{\rm \mu m^2/s^{1/2}} $ in our notations (see Table I in Ref. \cite{PRE12b}). The inertial effects are in such a case completely negligible, and the time scale parameter $\tau_r$ is estimated to be $\tau_r\approx 5.48 $ ms, see Appendix B. Hence, the maximal time in our Fig. \ref{Fig4} is about $5.48$ s for single-trajectory averages. In accordance with our results, for $\sigma=2\;k_BT$ they would be broadly scattered on this time scale, as in Fig. \ref{Fig4}, a, b, even if the averaging time window ${\cal T}_w$ were 548 s long. However, the ensemble average would be almost independent of the presence of the random potential, as in Fig. \ref{Fig2} for $T=0.5$. This is a first crude idea to resolve some current controversies. However, a further quantitative analysis of the available experimental data from the discussed perspective of viscoelastic subdiffusion in random potentials is required and welcome. \subsection{Escape time distribution} \begin{figure} \centering \resizebox{0.75\columnwidth}{!}{\includegraphics{Fig5a.eps}}\\[0.8cm] \resizebox{0.75\columnwidth}{!}{\includegraphics{Fig5b.eps}} \\[0.8cm] \resizebox{0.75\columnwidth}{!}{\includegraphics{Fig5c.eps}} \caption{(Color online) Probability density function of log-transformed first escape times, $z=\ln t$, from the interval $[-\lambda,\lambda]$ for two temperatures, $T=0.5$ (in blue) and $T=0.2$ (in black).
The cases in the different panels are: (a) exponential correlations, (b) power-law correlations with $\gamma=0.8$. The symbols represent simulation data, the lines correspond to a fit with the probability density (\ref{distro}). Parameters are shown in the panels. With decreasing temperature the distributions become broader and the parameter $b_2$ smaller. In (c), the probability densities of the original, non-transformed time variable are plotted, corresponding to part (a). In this case, $\psi(t)$ seems to show some parts with power-law dependencies. Especially confusing is the case of $T=0.2$, where the part of the distribution with negative exponent $-1.3$ (indicated in red), in conjunction with $\alpha(t)\approx \alpha_{\rm min}\approx 0.32 $ in Fig. \ref{Fig3}, a and the broad scatter of single-trajectory averages in Fig. \ref{Fig4}, c, can be erroneously interpreted within a CTRW theory with divergent mean residence time. } \label{Fig5} \end{figure} The success of the scaling argumentation extended from normal to subdiffusive viscoelastic dynamics in stationary Gaussian potentials suggests that the escape time distributions should also be similar. We consider the escape of particles, initially located at its center, out of the spatial interval $[-\lambda,\lambda]$. The distribution of logarithmically transformed escape times, $z=\ln t$, is plotted in Fig. \ref{Fig5}. Indeed, a generalized log-normal distribution of Ref. \cite{Goychuk2017} \begin{eqnarray} \nonumber \psi(t)&=&\frac{C}{t}\Big[ e^{-|\ln(t/t_m)/\kappa_1|^{b_1}}\theta(t_m-t)\\ &&+e^{-|\ln(t/t_m)/\kappa_2|^{b_2}}\theta(t-t_m)\Big], \label{distro} \end{eqnarray} where $C=b_1b_2/[b_2\kappa_1\Gamma(1/b_1)+b_1\kappa_2\Gamma(1/b_2)]$ is a normalization constant, $b_{1,2}>1$, and $\kappa_{1,2}>0$, fits the numerical data excellently for both considered models of correlations. The general features are similar to those of memoryless diffusion.
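The normalization constant $C$ in (\ref{distro}) is easy to verify numerically: in the variable $z=\ln(t/t_m)$ the density becomes a two-sided stretched Gaussian whose integral must equal one. A minimal sketch with arbitrary illustrative parameter values:

```python
import math

def check_norm(b1=2.0, b2=1.3, kap1=1.5, kap2=3.0, n=200000, zmax=60.0):
    """Numerically integrate the generalized log-normal density (distro)
    in the variable z = ln(t/t_m) by the trapezoid rule; the result must be 1."""
    C = b1 * b2 / (b2 * kap1 * math.gamma(1 / b1) + b1 * kap2 * math.gamma(1 / b2))
    dz = 2 * zmax / n
    total = 0.0
    for k in range(n + 1):
        z = -zmax + k * dz
        w = 0.5 if k in (0, n) else 1.0          # trapezoid end-point weights
        if z < 0:                                # t < t_m branch
            total += w * C * math.exp(-abs(z / kap1) ** b1) * dz
        else:                                    # t > t_m branch
            total += w * C * math.exp(-(z / kap2) ** b2) * dz
    return total

print(check_norm())   # should be very close to 1
```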
The escape density has a maximum at the value $\ln t_m$ of the logarithmically transformed time variable. Furthermore, escape in the power-law correlated potentials occurs much faster than in the exponentially correlated ones. The exponent $b_2$ is strongly temperature dependent: with lowering temperature it becomes smaller and closer to one. However, all the moments of the RTD remain finite. Notice that this generalized log-normal distribution can sometimes easily be mistaken for a power law if plotted in doubly logarithmic coordinates for the original, non-transformed time variable, as e.g. in Fig. \ref{Fig5}, c, in the case of exponential correlations. It is also reminiscent of a stretched exponential distribution \cite{GoychukPRE09}. This is why knowing the moments of experimental distributions is important, as well as using other representations of experimental data, upon a transformation of the random variable, as in our Fig. \ref{Fig5}, a, b. This would allow a new look at the existing experimental data, such as e.g. those in Ref. \cite{WongPRL}, in the light of our model. Indeed, it is very tempting to interpret $\psi(t)\propto 1/t^{1.3}$ in Fig. \ref{Fig5}, c for $T=0.2$, in conjunction with $\alpha(t)\approx \alpha_{\rm min}\approx 0.32 $ in Fig. \ref{Fig3}, a, and the strong scatter of single-trajectory averages in Fig. \ref{Fig4}, c, within the traditional CTRW theory with divergent mean residence time, as done in Ref. \cite{WongPRL}. However, the real physics in our particular case is very different: these results are produced by viscoelastic subdiffusion in a random Gaussian potential. Unlike the case of CTRW subdiffusion \cite{MetzlerPCCP}, the single-trajectory averages in the studied case of viscoelastic subdiffusion have an averaged $\bar \alpha$ which correlates well with the power-law exponent of the ensemble average, see Fig. \ref{Fig4}.
This can provide an important experimental criterion to distinguish among the various possible theoretical explanations. \subsubsection{Digression on dimensionality} The question of whether or not one can directly apply the results obtained within a one-dimensional model to two- or three-dimensional diffusion in living cells, or in dense heterogeneous polymeric liquids, is not trivial. First of all, in the one-dimensional case the particle cannot avoid a trap or a barrier on its way. However, in 2d and 3d it can find ways around. This remark is especially important for viscoelastic subdiffusion, which can thoroughly explore the space. Indeed, the fractal Hausdorff dimension of fBm trajectories in 3d space is \cite{Feder} $d_H = 2/\alpha$, for $2/3 < \alpha < 1$, and $d_H = 3$, for $0 < \alpha < 2/3$. Hence, for $\alpha<2/3$ the fBm fills the three-dimensional Euclidean space densely and can find all possible ways around. Thus, our first expectation is that in 2d and 3d viscoelastic subdiffusion will generally overcome the medium's disorder more easily than in 1d. At the same time, it will show a significant scatter in single-trajectory averages. This is precisely the kind of behavior which is often observed, and which our 1d theory predicts. Revealing the regime of Sinai-like diffusion in higher dimensions seems less likely, unless the disorder is very irregular, of a singular type, as in the case of exponential correlations. In the extreme case of disorder uncorrelated on the sites of a lattice, it is easy to grasp that avoiding the traps is hardly possible beyond linear sizes of several lattice constants. These preliminary considerations require a lot of further research, which is computationally very demanding. Nevertheless, the insights obtained from simplified 1d models are very important.
They can drive and strongly impact the follow-up research, as has been proven many times in the historical development of the theory of both normal and anomalous diffusion. \section{Summary and conclusions} In this work, we studied numerically viscoelastic subdiffusion governed by a fractional Langevin equation in stationary Gaussian random potentials for several models of decaying correlations. Such theoretical models are of special interest in the context of biologically relevant viscoelastic subdiffusion in random environments. Our study revealed several surprises. First of all, on the ensemble level the influence of the random potential is almost completely negligible for $\sigma = k_BT$. Viscoelastic subdiffusion easily wins over the potential randomness, even if the potential is wildly fluctuating, as in the case of exponentially decaying correlations, see Fig. \ref{Fig1}. This is a very unexpected result because (i) normal diffusion in such potentials is suppressed by the factor $\exp[-(\sigma/k_BT)^2]$, which is approximately $0.368$ in this case, and (ii) slowness combined with sluggishness should intuitively result in super-slowness. However, this intuition fails completely. Nevertheless, this surprising result was already partially anticipated in view of the similar influence of periodic potentials on viscoelastic subdiffusion \cite{GoychukPRE09,GoychukACP12}. It has precisely the same explanation: the distributions of escape times out of metastable minima have finite moments, and the asymptotic behavior is determined by viscoelastic long-time correlations in the medium, which yield unobstructed subdiffusion. With the increase of the disorder strength to $\sigma =2 k_BT$, the presence of a transient behavior becomes slightly visible. However, on the ensemble level the effect is really weak, and one can clearly deduce from the numerics that the asymptotic regime is almost reached at the end of our simulations.
At odds with $\exp[-(\sigma/k_BT)^2]\approx 0.018$ for normal diffusion in this case, viscoelastic subdiffusion is practically not suppressed at all. Nevertheless, in spite of a barely noticeable effect on the ensemble level, single-trajectory averages exhibit a substantial scatter. This can provide a key insight for understanding some experiments on subdiffusion in biological cells. With a further increase of the randomness strength to $\sigma\sim (5 - 10)k_BT$, a very distinct behavior emerges. For $\sigma =10\; k_BT$, it is clearly Sinai subdiffusion, $\langle \delta x^2(t)\rangle \propto \ln^4 (t/t_0)$, for both exponential and power-law correlations. Its origin can be explained in a very similar manner as in the case of normal diffusion in such potentials. It is caused by extreme-value fluctuations of the potential, $\delta U_{\rm max}(x)\propto \sqrt{\ln x}$, and clearly has a transient character. Ultimately, the regime of potential-free viscoelastic subdiffusion will be reached. However, the transients can last so long that it will never be reached in reality. For an intermediate $\sigma=5\; k_BT$, the more complex behavior in Eq. (\ref{central}) substantiates, in agreement with the numerics, the reasoning based on a scaling argumentation \cite{Goychuk2017} and the Arrhenius character of viscoelastic diffusion over potential barriers \cite{GoychukPRE09}. It can, however, also be described with some effective power-law exponent $\alpha(t)$ that can temporarily be nearly constant, which can be confused with CTRW subdiffusion \cite{Goychuk2017}. The author is confident that these highly surprising results will attract the attention of both theorists and experimentalists, leading to further research in this currently least explored area of anomalous diffusion. \\ \section*{Conflicts of interest} There are various conflicts of interest with competitors in the field of anomalous diffusion and its application to biological systems.
\section*{Acknowledgment} Funding of this research by the Deutsche Forschungsgemeinschaft (German Research Foundation), Grant GO 2052/3-1 is gratefully acknowledged.
\section{Introduction} The general theory of relativity is widely regarded as the best theory of gravity, as over the years it has passed many observational tests with flying colors. Significant observational aspects such as the perihelion precession of Mercury's orbit, gravitational lensing, and the redshift in the light spectrum from extragalactic objects are well documented in the framework of general relativity. However, general relativity can pose considerable intrigue in certain aspects, such as the possibility of an extremely strong gravitational field imploding without limit and ending up in a region where the density of matter and the strength of the gravitational field can, in principle, become infinite. Such a region is called a spacetime singularity. Aspects of gravitational collapse and the formation of a spacetime singularity form an integral part of gravitational physics today. \\ In general, a continual gravitational collapse occurs whenever a massive astronomical body, upon exhausting its nuclear fuel supply, fails to support itself against the force of gravity. For a simple enough configuration of collapsing matter, a horizon generally develops prior to the formation of the singularity, thereby enveloping the singularity and producing a black hole end-state. The first analytic model of such an unhindered contraction of an idealized star ending up in a black hole was given by Oppenheimer and Snyder \cite{os} and independently by Dutt \cite{dutt}, and these serve as the paradigm of gravitational collapse today. However, whether or not every sufficiently massive star undergoing gravitational collapse ultimately ends up in a black hole remains an intriguing question even in the current context. In this regard, Penrose proposed \cite{rp} the cosmic censorship hypothesis (CCH), which roughly states that gravitational collapse of physically reasonable matter with generic regular initial data will always end up in a covered singularity.
\\ The CCH, however, remains one of the most thought-provoking problems in gravitational physics today. There is no general proof of the CCH as yet which applies under all conditions. Moreover, there are many counterexamples in general relativity, where it is shown that the singularities formed in the collapse of a reasonable distribution of matter can stay exposed, giving rise to the concept of a naked, i.e., an observable singularity (for a brief summary of such examples, we refer to \cite{thesis} and references therein). What is realized is that the nature of the final outcome of a collapse depends on the initial configuration from which the collapse evolves, and on the allowed dynamical evolutions in the spacetime, as permitted by the non-linear field equations of gravity. For recent works and different aspects regarding the dynamics of a continued gravitational collapse we refer to the summaries by Joshi \cite{joshi1, joshi2}. \\ The singularity, or a spacetime region with infinitely high curvature, can be realized only during the final stage of gravitational collapse, where the collapsing body almost reaches a zero proper volume and the known laws of physics are expected to break down. The possibility of a naked singularity implies that one may have a chance to observe indications of quantum gravity effects. Amongst the many quantum theories of gravity proposed, superstring/M-theory is a promising candidate, which motivates the presence of a higher-dimensional spacetime \cite{M}. During the final stages of stellar evolution, where the curvature of the central high-density region is very high, the effects of extra dimensions can perhaps play a crucial role. From such a perspective, higher-dimensional gravitational collapse models have been studied in general relativity (we refer to the works of Banerjee, Debnath and Chakraborty \cite{bdc}, Patil \cite{patil}, and Goswami and Joshi \cite{gosjos} in this regard).
Given that the entire aspects of superstring/M-theory are not yet completely understood, taking their effects perturbatively into classical gravity is one possible approach to studying higher curvature effects. The Gauss-Bonnet (GB) term, $G = R^2 - 4R_{\mu\nu}R^{\mu\nu} + R_{\mu\nu\alpha\beta}R^{\mu\nu\alpha\beta}$, added to the standard Einstein-Hilbert Lagrangian is the higher curvature correction to general relativity which finds its motivation in heterotic superstring theory \cite{superstring}. Such a theory is called Einstein-Gauss-Bonnet gravity. \\ The additional elegance of including the GB term is that it is a Lovelock scalar: if included linearly in the action, the field equations contain no higher than second order partial derivatives of the metric tensor (unlike $f(R)$ gravity, whose field equations are fourth order in the metric components). In a four dimensional spacetime the Gauss-Bonnet term does not modify the field equations. However, if a non-minimal coupling of a scalar field with the GB term is considered, the dynamical equations are quite different from the standard field equations, and the influence of the GB term in a four dimensional universe becomes effective. \\ Recently, there has been increasing interest in Gauss-Bonnet theory with a non-minimally coupled scalar field as a possible candidate for explaining the late-time acceleration of the universe \cite{nojiri1, nojiri2, cognola, koiv1, koiv2}. From such a perspective, spherically symmetric solutions have been studied by Boulware and Deser \cite{boul1, boul2} and Gurses \cite{gurses}. It has also been discussed by Zwiebach \cite{zwi}, Zumino \cite{zu}, and Boulware and Deser \cite{boul1, boul2} that the effective action including correction terms of higher order in the curvature can perhaps play a significant role in the dynamics of the early universe.
Questions regarding gravitational instability and cosmological perturbations were also considered by Kawai, Sakagami and Soda \cite{kawa1, soda} and Kawai and Soda \cite{kawa2}. Observational restrictions on different cosmological aspects of scalar field coupled Einstein-Gauss-Bonnet gravity were investigated by Guo and Schwarz \cite{guo} and Koh et al. \cite{koh}. Spherically symmetric collapsing solutions of this theory have also gained some interest quite recently. For instance, Maeda presented a model of $n \geq 5$ dimensional spherically symmetric gravitational collapse of a null dust fluid in Einstein-Gauss-Bonnet gravity \cite{maeda1} and illustrated the possibility of the formation of a massive naked singularity in higher dimensions. He comparatively analyzed the results with the general relativistic cases as well \cite{maeda2}, which serves as a higher order generalization of the Misner-Sharp formalism of the four-dimensional spherically symmetric spacetime with a perfect fluid in general relativity. The Hamiltonian formulation of spherically symmetric scalar field collapse in Gauss-Bonnet gravity was studied in detail by Taves, Leonard, Kunstatter and Mann \cite{mann1}. They also proved that such a formulation can readily be generalized to other matter fields minimally coupled to gravity. Apart from their role in cosmology, the role of scalar fields in a gravitational collapse is worthy of attention. It is indeed important to investigate whether the CCH necessarily holds or is violated in the collapse of fundamental matter fields. Moreover, a scalar field along with an interaction potential is known to mimic different kinds of reasonable matter distributions, as discussed by Goncalves and Moss \cite{gonca}.
Numerical simulations of the spacetime evolution of a massless scalar field minimally coupled to the gravitational field, studied by Choptuik \cite{chop}, Brady \cite{brady} and Gundlach \cite{gund}, hint at interesting possibilities, such as the critical behavior observed around the threshold of black hole formation. There is a self-similar solution that sits at the black hole threshold, dubbed a critical solution, and also a very interesting mass scaling law for the formed black hole end-state. Motivated by this, the self-similar collapsing scenario of a massive scalar field was analytically studied by Banerjee and Chakrabarti \cite{scnb1} very recently. Deppe, Taves, Leonard, Kunstatter and Mann presented a numerical analysis, in generalized flat slice coordinates, of self-gravitating massless scalar field collapse in five and six dimensional Einstein-Gauss-Bonnet gravity near the threshold of black hole formation \cite{mann2}. Effects of higher order curvature corrections to Einstein's gravity on the critical phenomenon near the black hole threshold were investigated by Golod and Piran \cite{golod}. The possibilities and scope of scalar field collapse have also been studied extensively and analytically in general relativity \cite{scalarcollapse}. Finally, the fact that the distribution of the dark energy component or the fluid still remains unknown naturally inspires a continuing study of scalar field collapse in an increasingly generalized setup (preferably along with a fluid distribution) towards a better understanding of the possible clustering of dark energy. \\ In this work, we study aspects of a scalar field collapse in Scalar-Einstein-Gauss-Bonnet gravity, where the self-interacting scalar field $\phi$ is non-minimally coupled to the GB term. Very recently Banerjee and Paul \cite{nbtp} studied such a scalar field collapse where the coupling term was proportional to $e^{2\phi}$.
In the present case the coupling term is proportional to $\phi^2$, i.e., the coupling is quadratic in $\phi$. Quite recently, Doneva and Yazadjiev showed that in a very similar setup of Scalar-Einstein-Gauss-Bonnet theory (the conditions they imposed on the coupling function $f(\phi)$ are $f'(\phi = 0) = 0$ and $b^2 = f''(\phi = 0) > 0$), there exist new black hole solutions which are formed by spontaneous scalarization of the Schwarzschild black holes in the extreme curvature regime: below a certain mass, the Schwarzschild solution becomes unstable and a new branch of solutions with a nontrivial scalar field bifurcates from the Schwarzschild one \cite{doneva1}. They also proved the existence of neutron stars in a class of extended scalar-tensor Gauss-Bonnet theories, for which the neutron star solutions are formed via spontaneous scalarization of the general relativistic neutron stars \cite{doneva2}. Very recently, the spontaneous scalarization of black holes and compact stars from such a Gauss-Bonnet coupling has been investigated and dubbed Quadratic Scalar-Gauss-Bonnet gravity by Silva et al. \cite{silva}. In this context, the existence of regular black-hole solutions with scalar hair in the Einstein-scalar-Gauss-Bonnet theory was investigated by Antoniou, Bakopoulos and Kanti \cite{kanti0, kanti00}, with a general coupling function between the scalar field and the quadratic Gauss-Bonnet term, which highlighted the limitations of the existing no-hair theorems. In a recent study Kanti, Gannouji and Dadhich have addressed the importance of such a coupling from a cosmological purview and showed by a simple analytical calculation that a quadratic coupling function, although a special choice, allows for inflationary, de Sitter-type solutions to emerge \cite{kanti}. \\ The inclusion of the Gauss-Bonnet term makes the dynamical field equations even more non-linear.
We study a spatially homogeneous model where the energy momentum tensor is contributed by the self-interacting scalar field as well as a perfect fluid. To track down the system of equations, we use the method of invertible point transformations and the integrability of anharmonic oscillator equations, an approach which has recently been quite useful in the study of minimally coupled massive scalar field collapse by Chakrabarti and Banerjee \cite{scnb1, scnb2}. \\ The paper is organized as follows. In section $II$, we introduce the action and basic field equations. Section $III$ contains the method of finding the exact solution, and in section $IV$ we study the evolution of the scale factor for different initial data. The time evolution of the scalar field is studied in section $V$. The evolution of the curvature scalars and the strong energy condition is studied in section $VI$. The physical nature of the singularity is addressed in section $VII$. We complete the model by matching the solution with an exterior Vaidya solution in section $VIII$ and conclude in section $IX$. \section{Action and Basic Equations} The relevant four dimensional action contains the Einstein-Hilbert part, a massive scalar field, and the Gauss-Bonnet term coupled with the scalar field. It is given by \begin{eqnarray} && S = \int d^4x \sqrt{-g} \bigg[R/(2\kappa^2)-(1/2)g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi-V(\phi) \nonumber\\&& - \xi(\phi)G \bigg], \label{action} \end{eqnarray} where $R$ is the Ricci scalar, $1/(2\kappa^2) = M_{p}^2$ is the four dimensional squared Planck scale and $G = R^2 - 4R_{\mu\nu}R^{\mu\nu} + R_{\mu\nu\alpha\beta}R^{\mu\nu\alpha\beta}$ is the GB term. $\phi$ and $V(\phi)$ denote the scalar field and the self-interaction potential, respectively. $\xi(\phi)$ defines the coupling between the scalar field and the GB term.
Variation of the action with respect to the metric and the scalar field leads to the following field equations, \begin{eqnarray} &\frac{1}{\kappa^2}&[-R^{\mu\nu}+(1/2)g^{\mu\nu}R] + (1/2)\partial^{\mu}\phi\partial^{\nu}\phi \nonumber\\ &-& (1/4)g^{\mu\nu}\partial_{\rho}\phi\partial^{\rho}\phi + (1/2)g^{\mu\nu}\big[-V(\phi)+\xi(\phi)G\big] \nonumber\\ &-& 2\xi(\phi)RR^{\mu\nu} - 4\xi(\phi)R^{\mu}_{\rho}R^{\nu\rho} - 2\xi(\phi)R^{\mu\rho\sigma\tau}R^{\nu}_{\rho\sigma\tau} \nonumber\\ &+& 4\xi(\phi)R^{\mu\rho\nu\sigma}R_{\rho\sigma} + 2[\nabla^{\mu}\nabla^{\nu}\xi(\phi)]R-2g^{\mu\nu}[\nabla^2\xi(\phi)]R \nonumber\\ &-& 4[\nabla_{\rho}\nabla^{\mu}\xi(\phi)]R^{\nu\rho} - 4[\nabla_{\rho}\nabla^{\nu}\xi(\phi)]R^{\mu\rho} + 4[\nabla^2\xi(\phi)]R^{\mu\nu}\nonumber\\ &+&4g^{\mu\nu}[\nabla_{\rho}\nabla_{\sigma}\xi(\phi)]R^{\rho\sigma} + 4[\nabla_{\rho}\nabla_{\sigma}\xi(\phi)]R^{\mu\rho\nu\sigma} = 0 \label{mainfieldequation} \end{eqnarray} and \begin{equation} g^{\mu\nu}[\nabla_{\mu}\nabla_{\nu}\phi] - V'(\phi) - \xi'(\phi)G = 0, \label{scalarfieldequation} \end{equation} where a prime denotes the derivative with respect to $\phi$. \\ The metric for the interior is assumed to be a spatially flat Friedmann-Robertson-Walker metric, given by \begin{equation} ds^2 = -dt^2 + a^2(t)\big[dr^2 + r^2d\theta^2 + r^2\sin^2{\theta}d\varphi^2\big] \label{metric} \end{equation} where the scale factor $a(t)$ governs the time evolution of the interior spacetime. For such a metric, the expressions of the Ricci scalar $R$ and the GB term $G$ take the following form, \begin{eqnarray}\label{scalar} R = 6[2H^2 + \dot{H}]\nonumber\\ G = 24H^2[H^2 + \dot{H}] \nonumber \end{eqnarray} where $H = \dot{a}/a$ and a dot denotes the derivative with respect to time $t$. \\ Here, we have assumed the interior of the collapsing star to consist of the scalar field as well as a perfect fluid. Therefore, using the metric in eqn.
(\ref{metric}), the field equations can be written as \begin{equation} -(3/\kappa^2)H^2 + (1/2)\dot{\phi}^2 + V(\phi) + 24H^3\dot{\xi} + \rho = 0, \label{G00} \end{equation} \begin{eqnarray}\nonumber && \frac{1}{\kappa^2}\big[2\dot{H}+3H^2\big] + (1/2)\dot{\phi}^2 - V(\phi) - 8H^2\ddot{\xi} \\&& - 16H\dot{H}\dot{\xi} - 16H^3\dot{\xi} + p = 0, \label{G11} \end{eqnarray} and \begin{equation} \ddot{\phi} +3H\dot{\phi} + V'(\phi) + 24\xi'(\phi)(H^4+H^2\dot{H}) = 0. \label{scalarKG} \end{equation} Here $\rho$ and $p$ signify the density and pressure of the constituent fluid inside the collapsing star. \section{Exact Solution} It can easily be noted that, due to the inclusion of the additional fluid component (two additional unknown functions $\rho$ and $p$), the system of equations becomes even more difficult to deal with analytically, all the more so without assuming any particular equation of state. However, we try to do away with this difficulty by adopting a strategy in which the scalar field evolution equation (\ref{scalarKG}) is identified as an anharmonic oscillator equation and integrated straightaway, without any a priori assumptions regarding the equation of state or the scale factor. In this way, the other field equations can be used to study the evolution of the fluid/scalar field distribution in a general manner. The only assumption being implemented here is that equation (\ref{scalarKG}) is integrable. The criterion for such integrability can be defined in terms of an invertible point transformation, first worked out by Duarte et al. \cite{duarte} and Euler, Steeb and Cyrus \cite{euler}. Although the main motivation behind this assumption of integrability is to extract information from the field equations without resorting to any assumption on the equation of state, the assumption is by no means unphysical.
In the recent past, this approach has proved very useful in producing interesting solutions depicting the dynamics of a minimally coupled massive scalar field \cite{scnb2} and self-similar solutions \cite{scnb1}, and has also shown promise as a tool for reconstructing modified gravity Lagrangians. Using this approach, Chakrabarti, Said and Farrugia have quite recently studied a reconstruction method for teleparallel gravity \cite{scjs}. This work, therefore, carries a subtle motivation of testing the scope of this approach in the domain of Scalar-Einstein-Gauss-Bonnet gravity. \\ While a simple linear harmonic oscillator has a straightforward sinusoidal solution, an anharmonic oscillator has more contributing terms and can represent more physical features of a dynamical system. It takes the form of a nonlinear second order differential equation with variable coefficients, \begin{equation} \label{gen} \ddot{\phi}+f_1(t)\dot{\phi}+ f_2(t)\phi+f_3(t)\phi^n=0, \end{equation} where the $f_i$ are functions of $t$ and $n \in {\cal Q}$ is a constant. An overhead dot represents differentiation with respect to cosmic time $t$. Using the work of Duarte and Euler \cite{duarte, euler, euler1} on the integrability of the general anharmonic oscillator equation and the more readily applicable formulation by Harko, Lobo and Mak \cite{harko}, this equation can be integrated under certain conditions. The essence of the integrability criterion is that an equation of the form Eq.(\ref{gen}) can be point transformed into an integrable form iff $n\notin \left\{-3,-1,0,1\right\}$, provided the coefficients of Eq.
(\ref{gen}) satisfy the differential condition \begin{eqnarray} \label{int-gen} && \frac{1}{n+3}\frac{1}{f_{3}(t)}\frac{d^{2}f_{3}}{dt^{2}} - \frac{n+4}{\left( n+3\right) ^{2}}\left[ \frac{1}{f_{3}(t)}\frac{df_{3}}{dt}\right] ^{2} \nonumber\\&& + \frac{n-1}{\left( n+3\right) ^{2}}\left[ \frac{1}{f_{3}(t)}\frac{df_{3}}{dt}\right] f_{1}\left( t\right) \nonumber\\&& + \frac{2}{n+3}\frac{df_{1}}{dt}+\frac{2\left( n+1\right) }{\left( n+3\right) ^{2}}f_{1}^{2}\left( t\right)=f_{2}(t). \end{eqnarray} Introducing a pair of new variables $\Phi$ and $T$ given by \begin{eqnarray} \label{Phi} \Phi\left( T\right) &=&C\phi\left( t\right) f_{3}^{\frac{1}{n+3}}\left( t\right) e^{\frac{2}{n+3}\int^{t}f_{1}\left( x \right) dx },\\ \label{T} T\left( \phi,t\right) &=&C^{\frac{1-n}{2}}\int^{t}f_{3}^{\frac{2}{n+3}}\left( \xi \right) e^{\left( \frac{1-n}{n+3}\right) \int^{\xi }f_{1}\left( x \right) dx }d\xi , \end{eqnarray} where $C$ is a constant, Eq.(\ref{gen}) can then be written in the integrable form \begin{equation} \label{Phi1} \frac{d^{2}\Phi}{dT^{2}}+\Phi^{n}\left( T\right) = 0. \end{equation} We focus our investigation on a particular case of polynomial coupling, i.e., $\xi(\phi) = \xi_{0} \frac{\phi^2}{2}$. We also take the self-interaction potential $V(\phi) = V_{0} \frac{\phi^{(n+1)}}{(n+1)}$. Both positive and inverse powers of $\phi$ are very useful in a cosmological setting; in particular, inverse power law models are extremely useful as quintessence fields, among other interesting properties. With this assumption, the scalar field evolution equation becomes \begin{equation}\label{scalarKG1} \ddot{\phi} +3H\dot{\phi} + 24 \xi_{0} \phi(H^4+H^2\dot{H}) + V_{0} \phi^n = 0. \end{equation} One can easily identify $f_{1}(t) = 3H$, $f_{2}(t) = 24 \xi_{0} (H^4 + H^2 \dot{H})$ and $f_{3} = V_{0}$.
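Since $f_3 = V_0$ is constant, every $df_3/dt$ term in the criterion vanishes, and the condition collapses to $f_2 = \frac{2}{n+3}\dot{f}_1 + \frac{2(n+1)}{(n+3)^2}f_1^2$. The following short symbolic check (our own sketch in sympy, with our own variable names) verifies that inserting $f_1 = 3H$ and $f_2 = 24\xi_0(H^4 + H^2\dot{H})$ reproduces exactly the differential equation for $H$ quoted in the text:

```python
import sympy as sp

t = sp.symbols('t')
n, xi0 = sp.symbols('n xi_0', positive=True)
H = sp.Function('H')(t)
Hdot = sp.diff(H, t)

# Coefficient read off from the scalar-field equation with xi = xi_0 phi^2/2
f1 = 3*H

# With f3 = V_0 constant, the integrability criterion reduces to
#   f2 = (2/(n+3)) df1/dt + 2(n+1)/(n+3)^2 f1^2
f2_required = 2*sp.diff(f1, t)/(n + 3) + 2*(n + 1)*f1**2/(n + 3)**2

# The actual f2 appearing in the scalar-field equation
f2_actual = 24*xi0*(H**4 + H**2*Hdot)

# Their difference is precisely the H-equation quoted in the text:
# 24 xi0 H^4 - 18(n+1)/(n+3)^2 H^2 + Hdot (24 xi0 H^2 - 6/(n+3)) = 0
ode_text = 24*xi0*H**4 - 18*(n + 1)*H**2/(n + 3)**2 + Hdot*(24*xi0*H**2 - 6/(n + 3))
print(sp.simplify((f2_actual - f2_required) - ode_text))
```

The check is purely algebraic and confirms that no term is lost in the reduction.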
\\ Provided $n\notin \left\{-3,-1,0,1\right\}$, the integrability criterion produces a differential equation for $H = \frac{\dot{a}}{a}$ given by \begin{equation} 24 \xi_{0} H^4 - 18 \frac{(n+1)}{(n+3)^2} H^2 + \dot{H} \left(24 \xi_{0} H^2 - \frac{6}{(n+3)}\right) = 0, \end{equation} which we rewrite in the form \begin{equation} \dot{H} = \frac{18\frac{(n+1)}{(n+3)^2} - 24 \xi_{0} H^2}{24 \xi_{0} - \frac{6}{(n+3) H^2}}. \end{equation} During the final stages of the evolution of the collapsing scalar field, it is expected that the proper volume is very small and sharply decreasing. Moreover, the rate of collapse must also be increasing rapidly. With that in mind, one can say that $H = \frac{\dot{a}}{a} \gg 0$ is a sharply increasing function of time. Therefore, $\frac{1}{H^2}$ can be neglected compared to the other term in the denominator on the RHS. With that simplification, and the fact that $\dot{a} < 0$ for a collapsing model, we can write a solution for the scale factor, defining the time evolution of the collapsing scalar field, as \begin{eqnarray} && a(t) = \frac{1}{6(1+n)} e^{\frac{\sqrt{3(1+n)}}{2(3+n)\sqrt{\xi_{0}}}(t_{0}-t)} \nonumber\\&& - 2 \xi_{0} a_{0} (3+n)^2 e^{-\frac{\sqrt{3(1+n)}}{2(3+n)\sqrt{\xi_{0}}}(t_{0}-t)}. \label{solution} \end{eqnarray} Here both $t_{0}$ and $a_{0}$ are constants of integration and, along with the choice of the self-interaction potential (the value of $n$), serve as the initial conditions of the model. In order to have a real time evolution one must enforce the restrictions $n > -1$ and $\xi_{0} > 0$. \\ The time of reaching the zero proper volume can be calculated from equation (\ref{solution}) as \begin{equation}\label{ts} t_{s} = t_{0} - \frac{2(3+n)\sqrt{\xi_{0}}}{\sqrt{3(1+n)}} \ln \left[2\sqrt{3 a_{0} \xi_{0} (1+n)} (3+n)\right].
\end{equation} \section{Evolution of the Scale Factor for different initial conditions} \subsection{Evolution of the scale factor for $a_{0} > 0$} We present different possible outcomes of the collapse graphically and study the evolution by varying the initial conditions. \begin{figure}[h] \begin{center} \includegraphics[width=0.40\textwidth]{scalefactor.eps} \caption{Evolution of the scale factor with time for $V(\phi) = \frac{\phi^4}{4}$ and for different positive values of $a_{0}$. (Colour Code : $Blue \rightarrow a_{0} = 0.6$, $Yellow \rightarrow a_{0} = 0.5$, $Green \rightarrow a_{0} = 0.4$ and $Red \rightarrow a_{0} = 0.3$). For all values of the initial parameter $a_{0}$, the collapse reaches a zero proper volume; however, the rate of collapse depends on the choice of the parameter.} \end{center} \label{fig:ltb1} \end{figure} In figure $1$, the scale factor is plotted as a function of time for a fixed value of $t_{0}$, for different positive values of $a_{0}$ and $n = 3$, i.e., $V(\phi) = \frac{\phi^4}{4}$. The evolution shows a rapid collapsing behavior for all values of $a_{0}$, and an ultimate zero proper volume end-state; however, the rapidity of the collapse and the time of formation of the zero proper volume depend on the choice of the parameter. \\ An important note to make here is that, for the particular case of $n = 3$, i.e., $V(\phi) = \frac{\phi^4}{4}$ and $t_{0} = 20$, as long as a positive choice of $a_{0}$ is made, $\xi_{0}$ must be taken between $0$ and $1$. For $\xi_{0} < 0$, there is no real evolution, and for $\xi_{0} > 1$, the evolution becomes negative. Therefore a condition $0 < \xi_{0} \leq 1$ must be enforced upon the strength of the coupling. However, this range can differ for a different set of initial conditions, i.e., for a different set of $n$ and $t_{0}$.
We have presented a particular case with the note that for any such model, the strength of the coupling is very important and therefore the restriction on the allowed domain of $\xi_{0}$ must be accounted for. \begin{figure}[h] \begin{center} \includegraphics[width=0.35\textwidth]{scalefactordifferentn.eps} \includegraphics[width=0.35\textwidth]{scalefactordifferentn2.eps} \caption{Evolution of the scale factor with time for different self-interaction potentials defined by $V(\phi) = \frac{\phi^{(n+1)}}{(n+1)}$ and for fixed $a_{0}$ and $\xi_{0}$. $(Colour Code : Blue \rightarrow n = 3$, $Yellow \rightarrow n = 1.5$, $Green \rightarrow n = 0.5$ and $Red \rightarrow n = 0.001)$. The two plots are for different ranges.} \end{center} \label{fig:ltb1} \end{figure} In figure $2$, we plot the collapsing behavior for $\xi_{0} = 0.4$ and for different choices of $n$, i.e., for different choices of the self-interaction potential defined by $n = 3, 1.5, 0.5$ and $0.001$. For all the cases, the evolution starts at a finite value of the scale factor (as shown in the first figure) and a zero proper volume is reached in finite future time. The time of formation of the singularity changes depending on the choice of $n$ (as shown by the figure below). \begin{figure}[h] \begin{center} \includegraphics[width=0.35\textwidth]{scalefactordifferentxi1.eps} \includegraphics[width=0.35\textwidth]{scalefactordifferentxi2.eps} \caption{Evolution of the scale factor with time for $V(\phi) = \frac{\phi^4}{4}$, fixed $a_{0}$ and different values of $\xi_{0}$.} \end{center} \label{fig:ltb1} \end{figure} In figure $3$, we show the evolution of the scale factor with time for different choices of the coupling $\xi_{0}$, fixing the other initial parameters. For different values of $\xi_{0}$ the collapse still ends up in a zero proper volume, but at different $t_{s}$, which is also evident from equation (\ref{ts}).
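As a quick numerical sanity check (our own script; the parameter values mirror those used in the figures), one can confirm that the scale factor (\ref{solution}) vanishes exactly at the time $t_s$ given by equation (\ref{ts}), for any admissible choice of $\xi_0$:

```python
import numpy as np

def scale_factor(t, n=3.0, xi0=0.4, a0=0.5, t0=20.0):
    """Scale factor a(t) of Eq. (solution)."""
    lam = np.sqrt(3.0*(1.0 + n))/(2.0*(3.0 + n)*np.sqrt(xi0))
    return (np.exp(lam*(t0 - t))/(6.0*(1.0 + n))
            - 2.0*xi0*a0*(3.0 + n)**2*np.exp(-lam*(t0 - t)))

def t_singularity(n=3.0, xi0=0.4, a0=0.5, t0=20.0):
    """Zero-proper-volume time t_s of Eq. (ts)."""
    return t0 - (2.0*(3.0 + n)*np.sqrt(xi0)/np.sqrt(3.0*(1.0 + n))) \
        * np.log(2.0*np.sqrt(3.0*a0*xi0*(1.0 + n))*(3.0 + n))

for xi0 in (0.2, 0.4, 0.8):
    ts = t_singularity(xi0=xi0)
    print(f"xi0={xi0}: t_s={ts:.4f}, a(t_s)={scale_factor(ts, xi0=xi0):.2e}")
```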
\subsection{Evolution of the scale factor for $a_{0} < 0$} \begin{figure}[h] \begin{center} \includegraphics[width=0.35\textwidth]{scalefactorbounce1.eps} \includegraphics[width=0.35\textwidth]{scalefactorbounce2.eps} \caption{Evolution of the scale factor with time for $V(\phi) = \frac{\phi^4}{4}$, $\xi_{0} = 0.4$ and for different negative values of $a_{0}$. $(Colour Code : Blue \rightarrow a_{0} = -10$, $Yellow \rightarrow a_{0} = -1$, $Green \rightarrow a_{0} = -0.1$ and $Red \rightarrow a_{0} = -0.00001)$. The evolution suggests that the scalar field experiences a collapse initially, only until a critical point, after which the collapse changes into an expanding phase.} \end{center} \label{fig:ltb2} \end{figure} However, depending on the initial conditions, the collapse may not always lead to a zero proper volume. As shown in figure $4$, for all negative values of $a_{0}$, the system experiences a collapse initially, but only until a critical point (a non-zero minimum radius), after which it cannot shrink further and experiences rapid expansion. The plot here is for $V(\phi) = \frac{\phi^4}{4}$ and $\xi_{0} = 0.4$. This behavior is valid for all negative values of $a_{0}$. \begin{figure}[h] \begin{center} \includegraphics[width=0.35\textwidth]{scalefactorbouncedifferentn1.eps} \includegraphics[width=0.35\textwidth]{scalefactorbouncedifferentn2.eps} \caption{Evolution of the scale factor with time for different $n$, i.e., for different choices of $V(\phi) = \frac{\phi^{(n+1)}}{n+1}$, $\xi_{0} = 0.4$ and for a particular negative value of $a_{0} = -1$. $(Colour Code : Blue \rightarrow n = 3$, $Yellow \rightarrow n = 1.5$, $Green \rightarrow n = 0.5$ and $Red \rightarrow n = 0.001)$.} \end{center} \label{fig:ltb2} \end{figure} We present the evolution graphically for different choices of the self-interaction potential (depending on the choice of $n$) and for a particular negative value of $a_{0}$, with fixed $\xi_{0}$.
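The non-zero minimum radius for $a_0 < 0$ can be made quantitative: both terms of (\ref{solution}) are then positive, so by the AM-GM inequality $a(t) \geq 2\sqrt{A|B|}$, where $A = \frac{1}{6(1+n)}$ and $B = 2\xi_0 a_0 (3+n)^2$. A brief numerical check (our own script, with the parameters of figure 4):

```python
import numpy as np

n, xi0, a0, t0 = 3.0, 0.4, -1.0, 20.0
lam = np.sqrt(3.0*(1.0 + n))/(2.0*(3.0 + n)*np.sqrt(xi0))
A = 1.0/(6.0*(1.0 + n))
B = 2.0*xi0*a0*(3.0 + n)**2          # negative for a0 < 0

# Evaluate a(t) on a dense grid spanning the bounce
t = np.linspace(0.0, 40.0, 400001)
a = A*np.exp(lam*(t0 - t)) - B*np.exp(-lam*(t0 - t))

a_min = a.min()
print(f"numerical minimum radius:        {a_min:.6f}")
print(f"AM-GM prediction 2*sqrt(A*|B|): {2.0*np.sqrt(A*abs(B)):.6f}")
```

The minimum radius is approached smoothly, which is why the contracting phase turns into an expansion instead of reaching a zero proper volume.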
The evolution shown in figure $5$ suggests that the scalar field experiences a collapse initially, only until a critical point, after which it experiences a bounce. The overall qualitative behavior remains the same for different $n$; however, there is an indication that, depending on $n$, the bouncing behaviour may be scaled. \begin{figure}[h] \begin{center} \includegraphics[width=0.40\textwidth]{scalefactorbouncedifferentxi.eps} \caption{Evolution of the scale factor with time for $V(\phi) = \frac{\phi^4}{4}$, $a_{0} = -1$ and for different values of $\xi_{0}$. $(Colour Code : Blue \rightarrow \xi = 0.4$, $Yellow \rightarrow \xi = 1$, $Green \rightarrow \xi = 2$ and $Red \rightarrow \xi = 3)$.} \end{center} \label{fig:ltb2} \end{figure} In figure $6$, we plot the scale factor as a function of time for $V(\phi) = \frac{\phi^4}{4}$, $a_{0} = -1$ and for different values of the coupling parameter $\xi_{0}$. The evolution suggests that the overall qualitative behavior may change depending on the value of $\xi_{0}$, in the sense that the phase of initial contraction of the scale factor depends on the choice of $\xi_{0}$. For instance, for $\xi_{0} = 0.4$ (shown by the blue curve), there is an initial collapsing phase before reaching an eventual non-zero minimum cutoff, and the expanding phase begins thereafter. However, if one increases the value of $\xi_{0}$ gradually, it can be seen (Yellow for $\xi = 1$, Green for $\xi = 2$ and Red for $\xi = 3$) that the initial collapsing phase becomes negligible and the entire solution turns into an expanding one. \section{Evolution of the Scalar Field} Using the defined point transformations (\ref{Phi}) and (\ref{T}), a general solution for the scalar field can be constructed from the transformed integrable form of the scalar field evolution equation (\ref{Phi1}). The first integral is given in quadrature form by \begin{equation} \frac{dT}{d\Phi} = \frac{1}{\sqrt{2 \Big(C_{0} - \frac{\Phi^{(n+1)}}{(n+1)} \Big)}}.
\end{equation} Calculating the coefficients $f_{1}(t) = 3 H$, $f_{2}(t) = 24 \xi_{0} (H^4 + H^2 \dot{H})$ and $f_{3} = V_{0}$ using the solution for the scale factor (\ref{solution}), one arrives at a differential equation governing the behaviour of the scalar field itself, given by \begin{equation} \Big(\dot{\phi} + \frac{2\phi}{(n+3)} \Big)^2 = \frac{C^{-(n+1)} a^{-6\frac{(n+1)}{(n+3)}}}{2C_{0} + \frac{2}{(n+1)}\Big[C \phi a^{\frac{6}{(n+3)}}\Big]^{(n+1)}}. \end{equation} It is extremely non-trivial to solve this equation analytically without any choice of $n$. We present a particular case here. Using $C = 1$, $C_{0} = 0$ and setting $n = 3$ (i.e., $V(\phi) = \frac{\phi^4}{4}$), we solve the equation numerically, and the solution can be written as \begin{equation}\label{intescalar} \phi(t) = e^{-\frac{t}{3}} \Bigg[C_{1} + 3 \int {\frac{g_{1}(t)}{(g_{2}(t) - \delta)^{4}}\, dt} \Bigg]^{\frac{1}{3}}. \end{equation} Here, $g_{1}(t)$ and $g_{2}(t)$ are functions of time, written in this form for the sake of brevity. \\ Although it would have been better if a neat and closed form of the evolution could be written, the numerical integration of equation (\ref{intescalar}) produces a very large expression for the scalar field (about $145$ terms). We present the functional form in brief, so as to give an idea of how the scalar field can, in principle, evolve.
\begin{eqnarray} \nonumber && \phi(t) = \psi(t)^{\frac{1}{3}}, \\&& \nonumber \psi(t) = s_{0} e^{-t} + \frac{g(t)}{j(t)}, \\&& \nonumber j(t) = s_{1} \left(1728 b e^{\frac{\sqrt{\frac{1}{m}}t}{\sqrt{3}}} m-e^{\frac{d \sqrt{\frac{1}{m}}}{\sqrt{3}}}\right)^{3}, \\&& \nonumber g(t) \sim \Bigg[a_{1}e^{a_{2}-t}\Bigg(a_{i} + a_{j}e^{a_{k}+a_{l}t}\Bigg)\Bigg] + b_{i}e^{b_{j} + b_{k}t}\\&& \nonumber {_2}F_1\left(1,2+\sqrt{3\xi_{0}};3+\sqrt{3\xi_{0}};1728 \xi_{0} a_{0} e^{\frac{1-t_{0}}{\sqrt{3\xi_{0}}}}\right) \\&& \nonumber + c_{i}e^{c_{j} + c_{k}t} {_2}F_1\left(1,2+\sqrt{3\xi_{0}};3+\sqrt{3\xi_{0}};1728 \xi_{0} a_{0} e^{\frac{t-t_{0}}{\sqrt{3\xi_{0}}}}\right). \end{eqnarray} All the parameters $a_{1}$, $a_{2}$, the $a_{i}$-s, $a_{j}$-s, etc. are in fact defined in terms of the parameters of the theory, i.e., $\xi_{0}$, $n$ and $a_{0}$. The function $g(t)$ consists of a total of $(80+33+31)$ terms: $80$ terms of the form $\Bigg(a_{i} + a_{j}e^{a_{k}+a_{l}t}\Bigg)$, $33$ terms of the form $b_{i}e^{b_{j} + b_{k}t}$ and $31$ terms of the form $c_{i}e^{c_{j} + c_{k}t}$. It is quite obvious that a numerical examination of the time evolution of the scalar field is absolutely necessary. We note that the parameters $\xi_{0}$ (i.e., the strength of the coupling of the scalar field with the Gauss-Bonnet term) and the constant of integration $C_{1}$ play an important part in determining the behavior of the scalar field after all. In the next subsection we present the numerical results for the evolution of the scalar field as a function of time for different parameters.
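As an independent illustration, the first-order equation for $\phi$ can also be integrated numerically in a few lines. The sketch below is our own script, not the authors' code: it takes $n = 3$, $C = 1$, $C_0 = 0$, chooses (as an assumption) the positive branch of the square root and an arbitrary initial value $\phi(0) = 1$, and uses the scale factor (\ref{solution}) with $a_0 = 0.5$, $\xi_0 = 0.4$, $t_0 = 20$:

```python
import numpy as np
from scipy.integrate import solve_ivp

n, xi0, a0, t0 = 3.0, 0.4, 0.5, 20.0
lam = np.sqrt(3.0*(1.0 + n))/(2.0*(3.0 + n)*np.sqrt(xi0))

def a(t):
    # Scale factor of Eq. (solution)
    return (np.exp(lam*(t0 - t))/(6.0*(1.0 + n))
            - 2.0*xi0*a0*(3.0 + n)**2*np.exp(-lam*(t0 - t)))

# For n = 3, C = 1, C0 = 0 the first-order equation reduces to
#   (phi' + phi/3)^2 = 2 a^{-8} phi^{-4}
# so, on the positive branch (our assumption),
#   phi' = -phi/3 + sqrt(2) a^{-4} phi^{-2}
def rhs(t, y):
    phi = y[0]
    return [-phi/3.0 + np.sqrt(2.0)*a(t)**(-4)*phi**(-2)]

t_end = 13.4  # shortly before the zero-proper-volume time t_s (about 13.6 here)
sol = solve_ivp(rhs, (0.0, t_end), [1.0], rtol=1e-8, atol=1e-10)
phi = sol.y[0]
print(f"phi(0) = {phi[0]:.3f}, min phi = {phi.min():.3f}, phi(t_end) = {phi[-1]:.3f}")
```

With these (illustrative) choices, the field first decays, passes through a minimum, and then grows rapidly as $a(t)$ approaches zero, consistent with the qualitative behavior described in the next subsection.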
\subsection{Evolution of the scalar field for $a_{0} > 0$, i.e., for a collapsing scale factor} \begin{figure}[h] \begin{center} \includegraphics[width=0.40\textwidth]{scalarfield1.eps} \caption{Scalar field as a function of time for $V(\phi) = \frac{\phi^4}{4}$, $\xi_{0} = 0.4$ and for different values of $C_{1}$ (color code: blue $\Rightarrow C_{1} = 0.001$, yellow $\Rightarrow C_{1} = 0.01$, green $\Rightarrow C_{1} = 0.1$ and red $\Rightarrow C_{1} = 1.0$).} \end{center} \label{fig:ltb4} \end{figure} In figure $7$, the scalar field is plotted as a function of $t$ for $V(\phi) = \frac{\phi^4}{4}$, $\xi_{0} = 0.4$ and for different values of $C_{1}$ ($C_{1} = 0.001$, $0.01$, $0.1$ and $1.0$). The scalar field diverges around the time of formation of the singularity. However, depending on the value of $C_{1}$, the nature of the evolution before reaching the singularity may differ slightly, but the qualitative behavior is the same: the scalar field starts at a finite value and decreases more-or-less steadily until it reaches a minimum critical value. Thereafter, the scalar field increases rapidly with time and diverges. \begin{figure}[h] \begin{center} \includegraphics[width=0.40\textwidth]{scalarfield2.eps} \caption{Evolution of the scalar field as a function of time for $V(\phi) = \frac{\phi^4}{4}$, $C_{1} = 0.01$ and for different choices of the coupling parameter: $\xi_{0} = 0.4$ (blue) and $\xi_{0} = 0.5$ (yellow). } \end{center} \label{fig:ltb4} \end{figure} The evolution of the scalar field as a function of time is plotted in figure $8$ for $V(\phi) = \frac{\phi^4}{4}$, $C_{1} = 0.01$ and for different choices of $\xi_{0} = 0.4$ and $0.5$. For different values of the coupling parameter $\xi_{0}$, the scalar field diverges at different times. This is quite consistent, because the time of formation of the singularity depends on $\xi_{0}$. The same can be verified from equation (\ref{ts}) as well. 
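The dip-then-divergence behaviour described above can also be reproduced without the long closed form, by integrating the positive branch of the scalar-field equation directly along the scale-factor solution (\ref{solution}). A minimal Python sketch; the parameter values ($n = 3$, $C = 1$, $C_{0} = 0$, $\xi_{0} = 0.4$, $a_{0} = 10$, $t_{0} = 20$) and the initial value $\phi(0) = 1$ are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumptions): n = 3, i.e. V(phi) = phi^4/4
n, xi0, a0, t0 = 3, 0.4, 10.0, 20.0
B = np.sqrt(3 * (n + 1)) / (2 * (n + 3) * np.sqrt(xi0))

def a(t):
    # Scale factor from the closed-form collapsing solution (a_0 > 0)
    tau = t0 - t
    return np.exp(B * tau) / (6 * (n + 1)) - 2 * a0 * (n + 3)**2 * np.exp(-B * tau)

def rhs(t, y):
    # Positive branch: phi' = -2 phi/(n+3) + sqrt(2)/(phi^2 a^4), for n = 3, C = 1, C_0 = 0
    phi = y[0]
    return [-2 * phi / (n + 3) + np.sqrt(2.0) / (phi**2 * a(t)**4)]

# Integrate up to t = 9, shortly before the singular epoch t_s ~ 9.3
sol = solve_ivp(rhs, (0.0, 9.0), [1.0], rtol=1e-8, atol=1e-10)
phi = sol.y[0]
# phi first decays (roughly as e^{-t/3} while the source term is negligible),
# reaches a minimum, then grows as a(t) -> 0
```

The turnaround occurs once the source term $\sqrt{2}/(\phi^2 a^4)$ overtakes the damping, i.e., as the proper volume shrinks, consistent with the qualitative behaviour seen in figure $7$.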
\subsection{Evolution of the scalar field for $a_{0} < 0$} As discussed in figures $3$, $4$ and $5$, the evolution does not always collapse forever; this depends on the choice of the parameter $a_{0}$. For $a_{0} < 0$, the scale factor experiences a transition from a state of contraction into a rapid expansion. Here we show the behavior of the scalar field with time for such cases, i.e., for $a_{0} < 0$. \begin{figure}[h] \begin{center} \includegraphics[width=0.40\textwidth]{scalarfieldbounce1.eps} \caption{Evolution of the scalar field as a function of time for $V(\phi) = \frac{\phi^4}{4}$, $C_{1} = 10$, $\xi_{0} = 0.4$ and $a_{0} = -1$.} \end{center} \label{fig:ltb4} \end{figure} In figure $9$, we have plotted $\phi(t)$ vs $t$ for $V(\phi) = \frac{\phi^4}{4}$, $C_{1} = 10$, $\xi_{0} = 0.4$ and $a_{0} = -1$. The scalar field starts at some finite value and gradually decreases, exhibiting some periodic behavior with time. Eventually it decays to a negligibly small positive value, as shown in the figure. \begin{figure}[h] \begin{center} \includegraphics[width=0.40\textwidth]{scalarfieldbounce2.eps} \caption{Evolution of the scalar field as a function of time for $V(\phi) = \frac{\phi^4}{4}$, $C_{1} = 0.01$, $a_{0} = -1$ and for different choices of $\xi_{0}$: $\xi_{0} = 0.4$ (blue), $\xi_{0} = 1$ (yellow) and $\xi_{0} = 2$ (green).} \end{center} \label{fig:ltb4} \end{figure} However, the periodic time evolution of the scalar field is sensitive to the choice of $\xi_{0}$, as shown in figure $10$. The periodic nature seems to be absent as one gradually increases the value of $\xi_{0}$ (here, plotted for $\xi_{0} = 0.4$, $1$ and $2$). 
\section{Evolution of the strong energy condition} A collapsing perfect fluid is physically reasonable if it obeys the strong energy condition, which requires that for any timelike unit vector $w^{\alpha}$ the following inequality holds: \begin{equation} 2 T_{\alpha\beta} w^{\alpha} w^{\beta} + T \geq 0, \end{equation} where $T$ is the trace of the energy momentum tensor. The energy conditions were investigated in detail for imperfect fluids by Kolassis, Santos and Tsoubelis \cite{kola}. Following their work, we investigate the validity of the strong energy condition ($(\rho+3p) > 0$) for our model. The strong energy condition can be violated only if the total energy density is negative or if there exists a large negative principal pressure of $T^{\alpha\beta}$. \\ Since we have solved for the scale factor directly from the scalar field evolution equation (\ref{scalarKG}), the field equations (\ref{G00}) and (\ref{G11}) can be used to study the evolution of the constituent fluid density and pressure. We study the strong energy condition numerically and present its nature in the following plots. \begin{figure}[h] \begin{center} \includegraphics[width=0.40\textwidth]{energycond1.eps} \caption{Evolution of $(\rho+3p)$ with time for $V(\phi) = \frac{\phi^4}{4}$, $t_{0} = 20$, $a_{0} = 10$, $\xi_{0} = 0.4$ and for different choices of $C_{1}$: $C_{1} = 0.1$ (blue), $C_{1} = 10$ (yellow), $C_{1} = 100$ (green) and $C_{1} = 10000$ (red).} \end{center} \label{fig:ltb4} \end{figure} In figure $11$, we plot the evolution of $(\rho+3p)$ as a function of time for a particular choice of potential $V(\phi) = \frac{\phi^4}{4}$, positive $a_{0}$ (ensuring the collapsing nature of the solution), $\xi_{0} = 0.4$ and for different choices of the initial condition $C_{1}$ ($C_{1} = 0.1$ (blue), $10$ (yellow), $100$ (green) and $10000$ (red)). 
The evolution suggests that $(\rho+3p) \geq 0$ is satisfied throughout the evolution of the collapsing body before reaching the curvature singularity, where $(\rho+3p)$ increases sharply, as both pressure and density eventually diverge. \begin{figure}[h] \begin{center} \includegraphics[width=0.40\textwidth]{energycond2.eps} \caption{Evolution of $(\rho+3p)$ with time for $V(\phi) = \frac{\phi^4}{4}$, $t_{0} = 20$, $a_{0} = 10$, $\xi_{0} = 0.4$ and for very small values of $C_{1}$: $C_{1} = 0.001$ (blue) and $C_{1} = 0.0001$ (yellow).} \end{center} \label{fig:ltb4} \end{figure} However, depending on the value of $C_{1}$, we also find some cases where the strong energy condition may be violated during the collapsing evolution, before $(\rho+3p)$ eventually diverges at the singular epoch. In figure $12$, we plot the evolution of $(\rho+3p)$ as a function of time for $V(\phi) = \frac{\phi^4}{4}$, positive $a_{0}$, $\xi_{0} = 0.4$ and for very small positive values of $C_{1}$ ($C_{1} = 0.001$ (blue) and $C_{1} = 0.0001$ (yellow)). \\ That the strong energy condition is often violated during the evolution is not really an unexpected outcome in theories of gravity that contain strong-curvature terms. In the present case one can argue that the Gauss-Bonnet term can, in principle, create an effective energy-momentum tensor whose contribution to the total energy-momentum tensor can lead to the violation of the strong energy condition. Therefore, it may not be the nature of the perfect fluid after all that violates the strong energy condition. We also note here that the energy conditions of general relativity are a mathematical way of making precise the notion of locally positive energy density, by stating that various linear combinations of the components of the energy momentum tensor must stay non-negative; it is sometimes argued that subtle quantum effects can violate all of the energy conditions. 
Moreover, there are examples of classical systems that violate all the energy conditions as well (for instance, Lorentzian-signature traversable wormholes \cite{barcelo}). The simplest possible source of classical energy condition violations is the contribution of scalar fields, in particular non-minimally coupled scalar fields, as worked out in detail by Visser and Barcelo \cite{visser, barcelo} and by Flanagan and Wald \cite{flanagan}. \\ \begin{figure}[h] \begin{center} \includegraphics[width=0.40\textwidth]{energycond3.eps} \caption{Evolution of $(\rho+3p)$ with time for $V(\phi) = \frac{\phi^4}{4}$, $t_{0} = 20$, $a_{0} = 10$ and for different values of $\xi_{0}$: $\xi_{0} = 0.04$ (blue) and $\xi_{0} = 0.4$ (yellow).} \end{center} \label{fig:ltb4} \end{figure} In figure $13$, the evolution of $(\rho+3p)$ with time is plotted for $V(\phi) = \frac{\phi^4}{4}$, positive $a_{0}$, $C_{1} = 10000$ and for different values of $\xi_{0}$: $\xi_{0} = 0.04$ (blue) and $\xi_{0} = 0.4$ (yellow). As can be seen from the graph, $(\rho+3p)$ maintains a positive signature throughout the collapse; however, for different values of $\xi_{0}$ the energy condition diverges to positive infinity at different times. This is quite consistent, since we have already discussed that the strength of the coupling with the Gauss-Bonnet term, $\xi_{0}$, plays a crucial role in determining the time of formation of the singularity. \section{Nature and Visibility of the Singularity} In order to investigate whether the singularity is a curvature singularity or just an artifact of the coordinate choice, one must look into the behavior of the Kretschmann curvature scalar ($K$) at $t\rightarrow t_{s} = t_{0} - \frac{2(3+n)\sqrt{\xi_{0}}}{\sqrt{3(1+n)}} \ln \Big[2\sqrt{3 a_{0} \xi_{0} (1+n)} (3+n)\Big]$, i.e., when the scale factor $a(t)$ goes to zero. 
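The blow-up of $K$ at $t \rightarrow t_{s}$ can be previewed numerically using the expression for $K$ given just below together with the closed-form scale factor. A minimal Python sketch under illustrative parameter values (assumptions, matching the energy-condition figures); for this exponential scale factor $\ddot{a} = B^2 a$ with $B = \frac{\sqrt{3(1+n)}}{2(3+n)\sqrt{\xi_{0}}}$, so the divergence is driven by $(\dot{a}/a)^4$:

```python
import numpy as np

# Illustrative parameters (assumptions, matching the energy-condition figures)
n, xi0, a0, t0 = 3, 0.4, 10.0, 20.0
B = np.sqrt(3 * (n + 1)) / (2 * (n + 3) * np.sqrt(xi0))
A, D = 1.0 / (6 * (n + 1)), 2 * a0 * (n + 3)**2

a  = lambda t: A * np.exp(B * (t0 - t)) - D * np.exp(-B * (t0 - t))
da = lambda t: -B * (A * np.exp(B * (t0 - t)) + D * np.exp(-B * (t0 - t)))
# For this scale factor a'' = B^2 a, so K = 6[(a''/a)^2 + (a'/a)^4] = 6[B^4 + (a'/a)^4]
K  = lambda t: 6.0 * (B**4 + (da(t) / a(t))**4)

# Bisect for the zero of a(t), i.e. the singular epoch t_s
lo, hi = 0.0, t0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if a(mid) > 0 else (lo, mid)
ts = 0.5 * (lo + hi)

for delta in (1e-1, 1e-3, 1e-5):
    print(delta, K(ts - delta))  # K grows without bound as t -> t_s
```

Since $\dot{a}(t_s) \neq 0$ while $a(t_s) = 0$, the term $(\dot{a}/a)^4$, and hence $K$, diverges at the zero of the proper volume.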
For the metric presented in equation (\ref{metric}), $K$ has the expression \begin{equation} K = 6\bigg[\frac{\ddot{a}(t)^2}{a(t)^2} + \frac{\dot{a}(t)^4}{a(t)^4}\bigg]. \label{curvature_scalar1} \end{equation} Using the solution for $a(t)$ (equation (\ref{solution})), it is straightforward to deduce that the Kretschmann scalar diverges at zero proper volume, and thus the collapsing body discussed here ends up in a curvature singularity. \subsection{Formation of an Apparent Horizon} Whether the curvature singularity is visible to an exterior observer or not depends on the formation of an apparent horizon. The condition for such a surface is given by \begin{equation} \label{app-hor} g^{\mu\nu}R,_{\mu}R,_{\nu}=0, \end{equation} where $R$ is the proper radius of the two-sphere, given by $r a(t)$ in the present case. Using the exact collapsing solution, equations (\ref{solution}) and (\ref{app-hor}) yield the condition for the formation of an apparent horizon, \begin{equation} \frac{\sqrt{3}x}{12(3+n)\sqrt{(1+n)\xi_{0}}} + \frac{\sqrt{3 \xi_{0} (1+n)} a_{0} (3+n)}{x} - \lambda = 0, \end{equation} where $\lambda$ is a constant of separation and $x = e^{\frac{\sqrt{3(1+n)}}{2(3+n)\sqrt{\xi_{0}}}(t_{0}-t)}$. One can solve the above equation to deduce exactly the time of formation of an apparent horizon, \begin{eqnarray} \label{tah} && t_{ap} = t_{0} \\&& \nonumber - \frac{2(3+n) \sqrt{\xi_{0}}}{\sqrt{3(1+n)}} \ln \Big[2(3+n) \sqrt{3(1+n)\xi_{0}} (\lambda \pm \sqrt{{\lambda}^2 - a_{0}})\Big]. \end{eqnarray} Comparing equation (\ref{tah}) with the time of formation of the singularity (\ref{ts}), we arrive at the very important expression \begin{equation}\label{nakedcondition} (t_{s} - t_{ap}) = \frac{2 (3+n) \sqrt{\xi_{0}}}{\sqrt{3(1+n)}} \ln \Big[\frac{(\lambda \pm \sqrt{{\lambda}^2 - a_{0}})}{\sqrt{a_{0}}} \Big]. 
\end{equation} To comment on the ultimate visibility of the collapse outcome, it is necessary to see whether nonspacelike trajectories emanating from the singularity can reach a faraway observer. The singularity is at least locally naked if there are future-directed nonspacelike curves that reach faraway observers. This is possible if the formation of the apparent horizon is delayed, or if no horizon forms at all. In the present case the time of formation of the singularity is independent of $r$ (given by equation (\ref{ts})), and therefore it is only natural that the entire collapsing system (scalar field and perfect fluid) reaches the singularity simultaneously at $t = t_{s}$. This kind of singularity is always expected to be covered by the formation of an apparent horizon, as discussed by Joshi, Goswami and Dadhich \cite{naresh}. Here, from equation (\ref{nakedcondition}), it can be said that the formation of an ultimately covered singularity depends on initial conditions such as $\xi_{0}$, $\lambda$ and $a_{0}$, such that $(t_{s} - t_{ap}) > 0$. However, in principle, one can also ask about the possible end state if the initial conditions conspire to make the situation end up otherwise, i.e., $(t_{s} - t_{ap}) < 0$. In such a case, there is no formation of an apparent horizon at all, since all the collapsing shells, labelled by different values of $r$, shrink to zero proper volume and the physical quantities diverge at $t = t_{s}$, which is reached before $t_{ap}$. Therefore the singularity remains naked, and the condition for such an end state can be written from equation (\ref{nakedcondition}) as \begin{equation}\label{nakedcondition2} a_{0} = \frac{4 \lambda^2 \delta^2}{(1 + \delta)^2}, \end{equation} where $0 < \delta < 1$. \section{Matching with an exterior Vaidya Spacetime} For the sake of completeness, proper junction conditions are to be examined carefully, allowing a smooth matching of an exterior geometry with the collapsing interior. 
First of all, it was extensively shown by Goncalves and Moss \cite{gonca} that any sufficiently massive collapsing scalar field can be formally treated as collapsing inhomogeneous dust in general relativity. Moreover, astrophysical objects undergoing gravitational collapse can be expected to be in an almost vacuum spacetime, and therefore the exterior spacetime around a spherically symmetric dying star is well described by the Schwarzschild geometry. From the continuity of the first and second fundamental forms, the matching of the sphere to a Schwarzschild spacetime on the boundary surface $\Sigma$ is extensively worked out in the literature \cite{santos, chan, kola2, maharaj}. \\ However, conceptually this leads to an inconsistency, since the Schwarzschild solution has zero scalar field. Such a matching would lead to a discontinuity in the scalar field, and a delta function in the gradient of the scalar field. As a consequence, the square of a delta function would appear in the stress-energy, which is definitely an inconsistency. Since we have a scalar field distribution inside the collapsing sphere, it is more physical to match the interior with an exterior Vaidya solution (for a detailed analysis of Vaidya matching in scalar field collapse we refer to \cite{pankajritu1, pankajritu2}) across a boundary hypersurface defined by $\Sigma$. The metric just inside $\Sigma$ is \begin{equation}\label{interior} d{s_-}^2=dt^2-a(t)^2dr^2-r^2 a(t)^2d{\Omega}^2, \end{equation} and the metric in the exterior of $\Sigma$ is given by \begin{equation}\label{exterior} d{s_+}^2=(1-\frac{2M(r_v,v)}{r_v})dv^2+2dvdr_v-{r_v}^2d{\Omega}^2. 
\end{equation} Matching the first fundamental form on the hypersurface we get \begin{equation}\label{cond1} \Big(\frac{dv}{dt} \Big)_{\Sigma}=\frac{1}{\sqrt{1-\frac{2M(r_v,v)}{r_v}+\frac{2dr_v}{dv}}} \end{equation} and \begin{eqnarray}\nonumber \label{cond2} && (r_v)_{\Sigma} = r a(t) \\&& \nonumber = r \Bigg[\frac{1}{6(1+n)} e^{\frac{\sqrt{3(1+n)}}{2(3+n)\sqrt{\xi_{0}}}(t_{0}-t)} \\&& \nonumber - 2a_{0}(3+n)^2 e^{-\frac{\sqrt{3(1+n)}}{2(3+n)\sqrt{\xi_{0}}}(t_{0}-t)}\Bigg]. \end{eqnarray} Matching the second fundamental form yields, \begin{equation}\label{cond3} \Big(r a(t) \Big)_{\Sigma} = r_v\left(\frac{1-\frac{2M(r_v,v)}{r_v}+\frac{dr_v}{dv}}{\sqrt{1-\frac{2M(r_v,v)}{r_v}+\frac{2dr_v}{dv}}}\right) \end{equation} Using equations (\ref{cond1}), (\ref{cond2}) and (\ref{cond3}) one can write \begin{equation}\label{dvdt2} \left(\frac{dv}{dt} \right)_{\Sigma} = \frac{3r a(t)^2 - r^2}{3(ra(t)^2 - 2Ma(t))}, \end{equation} where $a(t) = \frac{1}{6(1+n)} e^{\frac{\sqrt{3(1+n)}}{2(3+n)\sqrt{\xi_{0}}}(t_{0}-t)} - 2a_{0}(3+n)^2 e^{-\frac{\sqrt{3(1+n)}}{2(3+n)\sqrt{\xi_{0}}}(t_{0}-t)}$. From equation (\ref{cond3}) one obtains \begin{equation}\label{M} M_{\Sigma} = \frac{1}{4} \Bigg[ra(t) + \frac{r^3}{9 a(t)^3} + \sqrt{\frac{1}{r a(t)} + \frac{r^3}{81 a(t)^9} - \frac{2r}{9 a(t)^5}} \Bigg]. \end{equation} Matching the second fundamental form we can also write the derivative of $M(v,r_v)$ as \begin{equation}\label{dM} M{(r_v,v)}_{,r_v}=\frac{M}{r a(t)} - \frac{2r^2}{9 a(t)^{4}}. \end{equation} Equations (\ref{cond2}), (\ref{dvdt2}), (\ref{M}) and (\ref{dM}) completely specify the matching conditions at the boundary of the collapsing scalar field. \section{Conclusion} The present work has been entirely dedicated towards a deeper understanding of a self-interacting scalar field collapse in Scalar-Einstein-Gauss-Bonnet gravity. The scalar field couples non-minimally with the Gauss-Bonnet term by a term quadratic in the scalar field ($\xi_{0} \frac{\phi^2}{2}$). 
The possibility of the collapse reaching a zero proper volume singularity is seen to depend on the coupling parameter $\xi_{0}$, the choice of self-interaction potential and, most importantly, the initial condition $a_{0}$, which can be related to the initial radius/volume of the collapsing star. We also comment on the allowed domain of the coupling parameter $\xi_{0}$ for which the collapse has a real evolution. It is observed that the collapse ends up in a curvature singularity, where the Kretschmann curvature scalar blows up, along with the density and pressure of the constituent fluid and the scalar field. The strong energy condition is shown to be valid throughout the collapse. However, depending on certain initial conditions defining the initial distribution of the scalar field, the energy conditions may be violated; this can be attributed to the non-minimal coupling of a scalar field in the Lagrangian, which is known to provide a possible classical system that can violate almost all of the energy conditions \cite{visser, barcelo}. \\ For the sake of completeness, we match the interior collapsing solution with an exterior Vaidya geometry on a boundary hypersurface, since the presence of a Gauss-Bonnet term non-minimally coupled to a scalar field generates a non-zero effective energy-momentum tensor arising from spacetime curvature. Therefore, matching with an exterior vacuum solution in the presence of the Gauss-Bonnet term may lead to an inconsistency. Very recently this has been investigated by Banerjee and Paul \cite{nbtp}. \\ The time of formation of the singularity is independent of $r$, which suggests that all the collapsing shells, labelled by different values of $r$, collapse simultaneously when zero proper volume is reached. Such a singularity is always expected to be covered by a horizon, as far as similar studies within the domain of GR are concerned. 
We note here that in the present case of Scalar-Einstein-Gauss-Bonnet gravity with a polynomial coupling, the ultimate end state of a covered singularity is conditionally consistent with the corresponding results in GR. For certain initial conditions (defined by equations (\ref{nakedcondition}) and (\ref{nakedcondition2})) there is the possibility of an end state where no horizon forms at all. \\ We also observe some interesting results; for instance, depending on the signature of the aforementioned parameter $a_{0}$, the evolution of the scale factor suggests that not all of the possible collapsing scenarios lead to a curvature singularity. Rather, for $a_{0} < 0$, the fluid undergoes contraction only up to a minimum non-zero cut-off radius, after which it enters a rapidly expanding phase. It is also seen that in such cases the scalar field itself decreases monotonically with time, exhibits certain periodic behavior, and eventually becomes negligibly small. This can be somewhat compared with the phenomenon of collapse and dispersal for a scalar field in general relativity, which has drawn considerable interest in recent years, mainly from a numerical perspective, subtly pointing towards the existence of critical phenomena in the whole collapsing picture (for details, we refer to the monograph by Gundlach \cite{gund}). Recently Bhattacharya, Goswami and Joshi worked out a sufficient condition for dispersal to take place for a collapsing scalar field in GR that initially begins with a contraction, and showed that the transition of the collapsing body into an expanding phase is crucially connected with the change of the gradient of the scalar field \cite{bgj}. 
In the present case we can note that the signature of the parameter ($a_{0} > 0$ or $a_{0} < 0$) is the defining factor of the transition, i.e., of whether the collapse will lead to a zero proper volume or a dispersal of the scalar field shall take place after the scale factor reaches a minimum cut-off volume. Since $a_{0}$ appears in the expression defining the scale factor ($a(t) = \frac{1}{6(1+n)} e^{\frac{\sqrt{3(1+n)}}{2(3+n)\sqrt{\xi_{0}}}(t_{0}-t)} - 2a_{0}(3+n)^2 e^{-\frac{\sqrt{3(1+n)}}{2(3+n)\sqrt{\xi_{0}}}(t_{0}-t)}$), the initial volume of the collapsing system can be viewed as a defining factor connected to the value of $a_{0}$. \\ We conclude with the note that this is indeed a simple case, in the sense that we have considered spatial homogeneity in the metric components as well as in the scalar field. However, the solution found is simple enough to encourage further allied investigations in this direction; for example, the possibility of a collapse even when the energy conditions are violated may subtly direct one's attention towards a possible clustering of a dark energy distribution. Apart from that, this work also helps further expand the usefulness and scope of the particular method of integrability of the anharmonic oscillator used here. The theorem which inspires this method is self-sufficient, as discussed by Euler \cite{euler1}. The same has been proved in the context of a massive scalar field minimally coupled to gravity by Chakrabarti and Banerjee \cite{scnb2}, where the solutions found by virtue of this theorem indeed solve the Klein-Gordon-type evolution equation once they are put back into the equation. In the current context, the solutions are far more complicated, to say the least, as is evident from the expression for the scalar field worked out in detail in section $V$. 
Since the criterion for integrability was derived under the expectation that the proper volume would be very small and sharply decreasing, so that one can neglect the $\frac{1}{H^2}$ term, one can justify this argument by putting the solutions (\ref{solution}) and (\ref{intescalar}) together into equation (\ref{scalarKG1}) and studying it for different initial conditions defining the scalar field. It can be confirmed that for the relevant cases, the solutions put into the Klein-Gordon-type equation (\ref{scalarKG1}) yield a number very close to zero ($\sim 10^{-8}$ or lower). This approach of point-transforming the Klein-Gordon-type equation for the scalar field, to extract the solution out of a seemingly impossible non-linear system, has worked well in the past while investigating different setups of massive scalar field collapse, and has also inspired a handy reconstruction technique which helps one assess the particular form of the Lagrangian of a modified theory of gravity \cite{scjs}. We note here that, although the assumption of integrability of the scalar field evolution equation is inspired only from a mathematical perspective, solutions found by means of this assumption are by no means unphysical. Used properly, this method can potentially be useful for allied investigations as well, for instance, a detailed determination of the possible bounds on the choice of the coupling function $\xi$ (somewhat similar to a possible Higgs-Kretschmann invariant coupling in white dwarfs and neutron stars, as shown by Wegner and Onofrio \cite{ono1, ono2}). Further investigation of the setup of Scalar-Einstein-Gauss-Bonnet gravity in a more generalized scenario (including factors such as spatial inhomogeneity, pressure anisotropy, heat flux, etc.) can be carried out using this method more rigorously, and will be reported elsewhere in the future. 
\section{Acknowledgements} The author would like to thank Professor Sayan Kar and Professor Narayan Banerjee for useful comments and suggestions. The author was supported by the National Post-Doctoral Fellowship (file number: PDF/2017/000750) from the Science and Engineering Research Board (SERB), Government of India.
\section{Introduction} Quantum simulators~\cite{feynman82} provide the opportunity for a controlled quantum system to emulate the behavior of another system whose properties we would like to better understand. Progress towards building quantum simulators is occurring rapidly, with demonstrations up to and beyond 50 qubits~\cite{houck12,ma14,omalley16,hensgens17,loredo16,monroe17,lukin17}. However, a key question remains: how do we test the behavior of such a system without fault tolerance while at the same time dealing with the exponential growth of classical simulation costs~\cite{vazirani14,nori14}? A variety of approaches are being considered in this domain, including comparison of classical versus quantum behavior~\cite{albash2015,kafri2015,hangleiter2017} or demonstrating so-called~\cite{wiesner17} `quantum supremacy'~\cite{calude17,neill17,miller2017,bermejo2017,aaronson2016,bremner2016,boixo2016}. Here we consider an approach for testing the performance of a spin-based quantum simulator that can be easily implemented in quantum dot computing systems~\cite{Hanson17,hensgens17,gray16}, as well as other systems that have nearest neighbor Heisenberg interactions~\cite{porras2004,salath15,grass2014,ma2011}. Starting with an intrinsically antiferromagnetic system, we show how time-dependent control of the exchange interaction enables us to make a Suzuki-Trotter-type simulation of a ferromagnetic system. This in turn allows us to propose two different tests for a linear chain of spins. In the first, one does a Loschmidt echo, propagating a single up spin through a chain of down spins with the ferromagnetic interaction, then back with the antiferromagnetic interaction. In the second, one transfers a spin singlet through a chain of spins following the protocol outlined in Ref.~\cite{StateTransfer}. Successful recovery of the singlet on the far end provides a test of the quantum channel capacity of the underlying quantum simulator. 
These techniques are ideal for quantum dot-based computers, where preparation and measurement of singlet states~\cite{petta05,taylor07} and antiferromagnetic Heisenberg interactions~\cite{loss98} are natural elements of the system. \section{Simulating a ferromagnet with an antiferromagnet} In this section, we describe how to simulate time evolution under a ferromagnetic interaction using an antiferromagnetic interaction. In our scenario, we assume that one can prepare $n$ spins and let them evolve for some time $t$ under antiferromagnetic nearest-neighbor interactions: \begin{align} H_{a,n} = \sum_{i=1}^{n-1} J_{i,i+1}(t) \bm S_i\cdot \bm S_{i+1}, \label{EQ_Ha} \end{align} where $\bm S_{i}$ is the spin operator vector of the $i$th qubit and the arbitrary positive $J_{i,i+1}(t)$ are tunable parameters. In what follows, we take $\hbar =1$. Using the above Hamiltonian, we would like to simulate a ferromagnetic Hamiltonian, \begin{align} H_{f,n} = -\sum_{i=1}^{n-1} \tilde J_{i,i+1}(t) \bm S_i\cdot \bm S_{i+1}, \label{EQ_Hf} \end{align} for arbitrary positive parameters $\tilde J_{i,i+1}(t)$. Let us first consider a two-spin system, i.e. $n = 2$. \subsection{Basic element: a two-spin system} \label{2a} In this case, the antiferromagnetic and ferromagnetic Hamiltonians in Eq.~\eqref{EQ_Ha} and Eq.~\eqref{EQ_Hf} reduce to \begin{align} H_{a,2} &= J_{12}(t) \bm S_1\cdot \bm S_2, \\ H_{f,2} &= -\tilde J_{12}(t) \bm S_1\cdot \bm S_2.\label{2} \end{align} Let us first consider time-independent Hamiltonians, i.e. $J_{12}(t) = J$ and $\tilde J_{12}(t) = \tilde J$, for all $t$. To investigate the simulation of time evolution under the ferromagnetic and the antiferromagnetic Hamiltonians (Eq.~\eqref{EQ_Ha},~\eqref{EQ_Hf}), we first prepare an arbitrary two-spin initial state $\ket{\psi\left(0\right)}$ at $t=0$ and let it evolve under the ferromagnetic interaction $H_{f,2}$ in Eq.~\eqref{2} for time $t$; the corresponding time evolution operator is $U_f(t)=e^{-iH_{f,2}t}$. 
Since we can always represent the state $\ket{\psi(0)}$ in terms of the eigenstates of $H_{f,2}$, i.e. \begin{align} \ket{\psi(0)}=c_0\ket{s}+\sum_{m=1}^{3} c_m\ket{t_m}, \end{align} where $c_0$ and $c_m$ are coefficients, the state of the system at time $t$ will be \begin{align} \ket{\tilde\psi\left(t\right)}&=U_f(t)\ket{\psi\left(0\right)}\nonumber\\ \label{e1} &=c_0e^{i\tilde J\epsilon_st}\ket{s}+e^{i\tilde J\epsilon_tt}\sum_{m=1}^{3} c_m\ket{t_m}\\ \label{e2} &=e^{i\tilde J\epsilon_tt}\left(c_0e^{i\tilde J\Delta\epsilon t}\ket{s}+\sum_{m=1}^{3} c_m\ket{t_m}\right), \end{align} with \begin{align} \Delta\epsilon=\epsilon_s-\epsilon_t, \end{align} where $\epsilon_t=\frac{1}{4}$ and $\epsilon_s=-\frac{3}{4}$ are the eigenvalues of $\bm S_1\cdot \bm S_2$ on the triplets $\ket{t_m}$ and the singlet $\ket{s}$, respectively. The degeneracy of the triplets allows us to simplify Eq.~\eqref{e1} to~\eqref{e2}. Similarly, for the antiferromagnetic interaction, $U_{af}(t')=e^{-iH_{a,2}t'}$, we have \begin{align} \label{e3} \ket{{\psi}(t')}=e^{-iJ\epsilon_tt'}\left(c_0e^{-iJ\Delta\epsilon t'}\ket{s}+\sum_{m=1}^{3} c_m\ket{t_m}\right).\end{align} In this two-spin case, we can describe the evolution by two phase terms. The first factors in Eq.~\eqref{e2} and Eq.~\eqref{e3} are global phases, while the phase added to the singlet, on which we lay more focus, will cause the twist of the spin chain. Moreover, the evolutions stated in Eq.~\eqref{e1} and Eq.~\eqref{e3} are periodic, with periods determined by $J$ and $\tilde{J}$. We now utilize this periodicity to realize our simulation of the ferromagnetic interaction by the antiferromagnetic one. The goal is to find $t$ and $t'$ for the ferromagnetic and the antiferromagnetic time evolutions, respectively, which bring about identical final states: \begin{align} \ket{\tilde\psi(t)}=\ket{{\psi}(t')},\label{psi} \end{align} up to possibly a global phase. 
Here we show how an appropriate choice of evolution times can make our desired simulation possible. Since $e^{i\theta(t)}=e^{-i(2k\pi-\theta(t))}=e^{-i\theta'(t')}$, where $\theta(t)$ and $\theta'(t')$ refer to the phases in $\ket{\tilde\psi(t)}$ and $\ket{\psi(t')}$, we wish to find a proper relation between $t$ and $t'$ which makes Eq.~\eqref{psi} possible (Fig.~\ref{trick}). The rotation angle $\theta(t)=\tilde J\Delta\epsilon t$ generated by the ferromagnetic interaction is anti-clockwise, while the angle $\theta'(t')=J\Delta\epsilon t'$ generated by the antiferromagnetic interaction is clockwise. A restriction on $t$ and $t'$ is needed to enforce $\theta=2\pi-\theta'$. Translated into equations, the times $t$ and $t'$ must satisfy \begin{align} \tilde J\Delta\epsilon t&=2k\pi-J\Delta\epsilon t', \end{align} for some integer $k$. Solving the equation, we have the relation between $t$ and $t'$: \begin{align} \label{t'} t'=\frac{\tilde J}{J}\left(\frac{2\pi}{\tilde J|\Delta \epsilon|}-t\right)\ \ \ (k=-1) .\end{align} The value of $k$ is chosen to give a minimal experimental time $t'$. According to this restriction, when we start with the same initial state, an evolution for $t'$ under the antiferromagnetic interaction is equivalent to a time evolution for $t$ under the ferromagnetic interaction. That is, \begin{align} \label{T} U_f\left(t\right)\ket{\psi}=U_{af}\left(t'\right)\ket{\psi}, \end{align} up to a global phase, for any two-spin state $\ket{\psi}$. Note that while our discussion is for a time-independent interaction, we can also use this technique to simulate a time-dependent $\tilde J$ by splitting the evolution into smaller time periods, in each of which $\tilde J$ is assumed constant. \begin{figure}[t] \includegraphics[width=0.4\textwidth]{trick} \caption{The rotation angles $\theta$ and $\theta'$ generated by the ferromagnetic (red) and antiferromagnetic (blue) interactions are of opposite direction. 
For these two rotations to produce the same effect, we require $ \theta=2\pi-\theta' $. We show how this allows us to simulate a ferromagnet with an antiferromagnet in section \ref{2a}.} \label{trick} \end{figure} \subsection{Trotterization for a larger spin chain} In order to simulate the Hamiltonian in Eq.~\eqref{EQ_Hf}, we start with the three-spin case, where the contributing Hamiltonians are \begin{align} &H_{12}=-J_{1,2}\bm S_1\cdot\bm S_2,\\ &H_{23}=-J_{2,3}\bm S_2\cdot\bm S_3. \end{align} Using the protocol from the last subsection, we can simulate $U_{12}(t)=e^{-iH_{12}t}$ and $U_{23}(t)=e^{-iH_{23}t}$. However, since $H_{12}$ and $H_{23}$ do not commute, i.e. $ \left[H_{12},H_{23}\right]\neq0 $, a direct combination of these two time evolutions is not equivalent to the system we intend to simulate: \begin{align} e^{-i(H_{12}+H_{23})t}\neq e^{-iH_{12}t}e^{-iH_{23}t}. \end{align} Instead, we use a Trotterization technique~\cite{trotter59}. The Trotter formula approximates the time evolution under the Hamiltonian $H_{f,3}=H_{12}+H_{23}$ using the two-body interaction Hamiltonians. Here we use the second-order Trotter expansion, which gives a small error term: \begin{align} e^{-iHt}=\left(e^{-i\frac{H_{12}}{2}\frac{t}{N}}e^{-iH_{23}\frac{t}{N}}e^{-i\frac{H_{12}}{2}\frac{t}{N}}\right)^N+O\left(\frac{t^3}{N^2}\right), \end{align} where $N$ is the number of Trotter steps, which can be increased to reduce the approximation error for a given time $t$. Therefore, the time evolution $e^{-iH_{f,3}t}$ can be approximated by a series of alternating time evolutions under $H_{12}$ and $H_{23}$, each of which can be simulated using the technique described in section \ref{2a}. Furthermore, we can use the same approach to simulate time evolution under the general Hamiltonian in Eq.~\eqref{EQ_Hf} with $n$ spins.
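The $O(t^3/N^2)$ scaling of the second-order formula can be checked numerically. The sketch below, with illustrative values $J_{1,2}=J_{2,3}=1$ and spin operators $S=\sigma/2$, compares the exact three-spin evolution with the Trotterized product:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)
SS = sum(np.kron(s, s) for s in (sx, sy, sz)) / 4   # S_i . S_{i+1} for two spins

J, t = 1.0, 1.0
H12 = -J * np.kron(SS, I2)       # ferromagnetic pair (1,2) of a three-spin chain
H23 = -J * np.kron(I2, SS)       # ferromagnetic pair (2,3)
U_exact = expm(-1j * (H12 + H23) * t)

def trotter2(N):
    """Second-order Trotter product with N steps."""
    dt = t / N
    step = expm(-1j * H12 * dt / 2) @ expm(-1j * H23 * dt) @ expm(-1j * H12 * dt / 2)
    return np.linalg.matrix_power(step, N)

# The operator-norm error should fall roughly as 1/N^2.
errs = [np.linalg.norm(trotter2(N) - U_exact, 2) for N in (4, 8, 16)]
```

Doubling $N$ twice should reduce the error by roughly a factor of sixteen, consistent with the quoted error term.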
To do that, we group the terms in Eq.~\eqref{EQ_Hf} into $H_o$ and $H_e$ (Fig.~\ref{eo}), \begin{align} \label{ho} H_{o}&=J_{1,2}\bm{S}_1\cdot\bm{S}_2+J_{3,4}\bm{S}_3\cdot\bm{S}_4+\dots, \\ \label{he} H_{e}&=J_{2,3}\bm{S}_2\cdot\bm{S}_3+J_{4,5}\bm{S}_4\cdot\bm{S}_5+\dots, \end{align} where $H_{o}+H_{e}=-H_{f,n}$. The terms within Eq.~\eqref{ho} (and likewise within Eq.~\eqref{he}) mutually commute. Therefore, we can further expand the time evolution under $H_{o}$ ($H_{e}$) in terms of time evolutions under each interaction pair and simulate those within the same group simultaneously, i.e., \begin{align} e^{-iH_{o}t'}=e^{-iH_{12}t'}e^{-iH_{34}t'}\dots. \end{align} With our technique of section \ref{2a}, each factor on the right-hand side can then be simulated by the antiferromagnetic interaction with an appropriate choice of $t'$ from Eq.~\eqref{t'}, which yields the ferromagnetic evolution $e^{i H_{o}t}$. \begin{figure}[t] \centering \includegraphics[scale=0.4]{oe} \caption{Illustration of the interactions in $ H_o $ and $ H_e $ in Eq.~\eqref{ho} and Eq.~\eqref{he}. The dots represent spins and the links between them refer to the included interactions.} \label{eo} \end{figure} Next, using Trotterization, we can approximate the time evolution under the general Hamiltonian in Eq.~\eqref{EQ_Hf} in terms of the time evolutions under $H_{o}$ and $H_e$ as \begin{align} \label{uft} U_{f,n}(t)&\equiv\left(e^{-i\frac{H_{o}}{2}\tau'}e^{-iH_{e}\tau'}e^{-i\frac{H_{o}}{2}\tau'}\right)^N, \end{align} where $\tau'=\frac{\tilde{J}}{J} \left( \frac{2 \pi}{\tilde{J} |\Delta \epsilon|} - \frac{t}{N}\right)$, as per Eq.~\eqref{t'}. Note that here we assume the $J_{i,i+1}$'s all take the same value $J = \tilde{J}$ when turned on; if their amplitudes differ, one can adjust individual gate timings to compensate. With these choices, we have $U_{f,n} \approx \exp(-i H_{f,n} t)$ up to a small Trotter error.
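The building block of this construction, Eq.~\eqref{T}, can be verified directly for a single pair. A minimal sketch, with illustrative values $\tilde J=1$, $J=1.5$, and $|\Delta\epsilon|=1$:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
SS = sum(np.kron(s, s) for s in (sx, sy, sz)) / 4   # S_1 . S_2

Jt, J, t = 1.0, 1.5, 0.7      # Jt is the ferromagnetic coupling \tilde{J}
d_eps = 1.0                   # |Delta epsilon| = |eps_s - eps_t| = 1

# Eq. (t') with k = -1:
t_prime = (Jt / J) * (2 * np.pi / (Jt * d_eps) - t)

U_f = expm(1j * Jt * SS * t)            # e^{-i H_{f,2} t},  H_{f,2} = -Jt S1.S2
U_af = expm(-1j * J * SS * t_prime)     # e^{-i H_{a,2} t'}, H_{a,2} = +J  S1.S2

# Agreement up to a global phase: |tr(U_f^dag U_af)| / dim = 1.
overlap = abs(np.trace(U_f.conj().T @ U_af)) / 4
```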
Therefore, we have managed to simulate Eq.~\eqref{EQ_Hf} for $n$ spins using the underlying antiferromagnetic interaction. In order to demonstrate and verify our technique of simulating the ferromagnetic interaction, we propose two protocols in the following sections, namely the Loschmidt echo protocol and the perfect state transfer protocol. \section{Loschmidt echo} \subsection{Loschmidt echo} Here we use our simulation technique to perform a Loschmidt echo~\cite{peres84,jala01}. Given an initial state $\ket{\psi(0)}$ and Hamiltonians $H_1$ and $H_2$, a Loschmidt echo process is defined as \begin{align} \ket{\psi(2t)}=e^{-iH_2t}e^{-iH_1t}\ket{\psi(0)}, \end{align} where the time evolutions under $H_1$ and $H_2$ are applied successively to $\ket{\psi(0)}$ for the same time period $t$. When $H_1=-H_2=H$, the two processes $e^{-iH_1t}$ and $e^{-iH_2t}$ correspond to forward and backward evolutions under the same Hamiltonian. This time-reversal process results in a revival of the initial state, $\ket{\psi(2t)}=\ket{\psi(0)}$. This is the Loschmidt echo. We notice that the relation between $H_2$ and $H_1$ corresponds well with the systems we are working on: \begin{align} H_{a,n}=-H_{f,n}\label{eH}. \end{align} Therefore, if we manage to simulate ferromagnetic and antiferromagnetic interactions that satisfy Eq.~\eqref{eH}, we can obtain a revival of the initial state within the Loschmidt echo protocol. Our Loschmidt echo protocol is as follows. We first prepare $n$ spins in the initial state \begin{align} \ket{\psi(0)}=\ket{s}\ket{000...0}, \end{align} where $ \ket{s} $ is a singlet for the first two spins and $ \ket{000...0} $ represents spin-ups for the remaining $n-2$ spins. Experimentally, this state is easy to prepare, and the choice of a singlet for the first two spins helps us not only to confirm the ferromagnetic interaction but also to rule out a classical simulator.
We choose to turn off the interaction between the first two spins in the following processes, leaving the first spin as a reference. Next, we let the initial state $ \ket{\psi(0)} $ evolve for the same time $ t $ under $ H_{f,n} $ and $ H_{a,n} $ successively, \begin{align} \ket{\psi(t)}&=U_{f,n}(t)\ket{\psi(0)},\\ \ket{\psi(2t)}&=U_{af,n}(t)\ket{\psi{(t)}}\\ &\label{ele}=U_{af,n}(t)U_{f,n}(t)\ket{\psi(0)}, \end{align} with \begin{align} \label{uuu} U_{af,n}(t)&=\left(e^{-i\frac{H_{o}}{2}\frac{t}{N}}e^{-iH_{e}\frac{t}{N}}e^{-i\frac{H_{o}}{2}\frac{t}{N}}\right)^N, \end{align} where we take all $J_{i,i+1},\tilde J_{i,i+1}$ $(1<i<n)$ equal to $J$, and $J_{1,2},\tilde J_{1,2}=0$ during the relevant steps. Here we note that although such a $U_{af,n}(t)$ can in principle be implemented continuously by the simulator, using Trotterization for this unitary exactly cancels out the Trotterization errors introduced during the simulation of the ferromagnetic unitary $U_{f,n}(t)$. We later confirm this observation in our numerical results. Note also that the interaction between the first two spins is turned off. Therefore, in our Loschmidt echo protocol, $ H_o $ does not contain the first term in Eq.~\eqref{ho}, \begin{align} \label{hto} H_o=J\sum_{i=2}^{\lfloor\frac{n}{2}\rfloor} \bm{S}_{2i-1}\cdot\bm{S}_{2i}, \end{align} where $\lfloor\frac{n}{2}\rfloor$ is the floor function of $\frac{n}{2}$. $H_e$ is still the same as in Eq.~\eqref{he}. In Eq.~\eqref{ele}, $U_{af,n}(t)$, the time evolution under the antiferromagnetic interaction, can be applied directly by assumption, and from the last section we know that we can simulate $U_{f,n}(t)$ with an antiferromagnet. Therefore, the whole Loschmidt echo process in Eq.~\eqref{ele} can be realized experimentally.
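As a sanity check of the revival implied by Eq.~\eqref{eH} and Eq.~\eqref{ele}, the following sketch builds the exact (un-Trotterized) unitaries for a small chain with the first pair switched off and confirms that the singlet on the first two spins revives; $n = 4$ and uniform $J = 1$ are illustrative choices:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def pair(n, i):
    """S_i . S_{i+1} on an n-spin chain (1-based site index i)."""
    total = 0
    for s in (sx, sy, sz):
        ops = [np.eye(2)] * n
        ops[i - 1] = ops[i] = s / 2
        m = ops[0]
        for o in ops[1:]:
            m = np.kron(m, o)
        total = total + m
    return total

n, J, t = 4, 1.0, 0.8
# Pair (1,2) is switched off; the remaining couplings equal J.
H_f = -J * sum(pair(n, i) for i in range(2, n))   # ferromagnetic
H_a = -H_f                                        # antiferromagnetic, Eq. (eH)

singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rest = np.zeros(2 ** (n - 2)); rest[0] = 1        # |00...0> on the other spins
psi0 = np.kron(singlet, rest)

psi2t = expm(-1j * H_a * t) @ expm(-1j * H_f * t) @ psi0

# Revival: project the first two spins back onto the singlet.
P = np.kron(np.outer(singlet, singlet), np.eye(2 ** (n - 2)))
f_ec = np.real(psi2t.conj() @ P @ psi2t)
```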
\begin{figure}[t] \includegraphics[width=0.4\textwidth]{Echo.pdf} \caption{The echo fidelity of the quantum (blue) and classical (orange) simulations for a 10-spin chain, as a function of evolution time $ t $ ($J$ was taken to be 1).} \label{echo} \end{figure} To quantify the success of a Loschmidt echo evolution in our protocol, we define the (effective) fidelity of an echo process to be the projection of the first two spins on the singlet state at the final time $2t$: \begin{align} \label{fec} f_{ec}(t)\equiv\bra{\psi(2t)}P_{ec}\ket{\psi(2t)}, \end{align} where $P_{ec}=\ket{s}\bra{s}\otimes\mathbbm{1}$ is the projection operator. If Eq.~\eqref{eH} is satisfied by our simulation method, the revival of the singlet state will be achieved with $f_{ec}(t) = 1$ for all evolution times $t$. \subsection{Classical model} As we have mentioned, the choice of initializing the first two spins in a singlet state allows us to confirm certain quantum behaviors of the simulator. Indeed, since the singlet is entangled, revival is not guaranteed in a classical mean-field approximation where the two-body interactions in Eq.~\eqref{EQ_Hf} are approximated by local Hamiltonians on individual spins, i.e. \begin{align} H_i(t) = \bm h_i(t)\cdot\bm S_{i}, \end{align} with $\bm{h}_i(t)$ being the mean field experienced by the $i$th spin. For the Hamiltonian in Eq.~\eqref{EQ_Hf}, the mean fields are: \begin{align} \label{hm1} \bm h_2(t) &=J_{2,3}\langle\bm S_3(t)\rangle, \\ \label{hm3} \bm h_i(t) &=J_{i-1,i}\langle\bm S_{i-1}(t)\rangle+J_{i,i+1}\langle\bm S_{i+1}(t)\rangle, \\ \label{hm2} \bm h_n(t)&=J_{n-1,n}\langle\bm S_{n-1}(t)\rangle, \end{align} where Eq.~\eqref{hm3} is for $i\in[3,n-1]$. Since the initial state is a product state of a singlet and $n-2$ spin-ups, the Hamiltonians in Eq.~\eqref{hm1}-\eqref{hm2} result in a system of $n-1$ time-dependent coupled differential equations.
One of the equations is \begin{align} i\frac{\partial}{\partial t}\ket{\psi_{12}(t)} = H_{2}(t)\ket{\psi_{12}(t)}, \end{align} where $\ket{\psi_{12}(t)}$ is the state of the first two spins at time $t$ and $\ket{\psi_{12}(0)}=\ket{s}$. The other $n-2$ equations are \begin{align} i\frac{\partial}{\partial t}\ket{\psi_{i}(t)} = H_{i}(t)\ket{\psi_{i}(t)}, \end{align} with $\ket{\psi_{i}(t)}$ being the state of the $i$th qubit at time $t$ and $\ket{\psi_{i}(0)}=\ket{0}$. We numerically solve the coupled equations using the Runge-Kutta method to find the state of the $n$ spins at time $t$. We then measure the same projection of the first two spins and obtain the fidelity as in Eq.~\eqref{fec}. \begin{figure}[t] \includegraphics[width=0.45\textwidth]{rb_e_1.pdf} \caption{Loschmidt echo infidelity $I_{ec}$ at $n = 25, t = \frac{\pi}{2}$ as a function of the standard deviation $v$ of the gate error. The mean values (dots) and standard deviations of $I_{ec}$ are obtained by repeating the simulation 100 times for each value of $v$. Using the fit function $I_{ec}=\exp(a) v^b$, we can find the slope $ b(n) $ for each $n$.} \label{re1} \end{figure} \begin{figure}[t] \includegraphics[width=0.45\textwidth]{rb_e_2.pdf} \caption{The slope $ b(n) $ from the fit function in Fig.~\ref{re1} as a function of $n$.} \label{re2} \end{figure} \subsection{Numerical result} The numerical results for both the quantum and classical cases within the Loschmidt echo protocol are shown in Fig.~\ref{echo}. The fidelity in the quantum case remains 1 for all evolution times $t$. This indicates that our simulation process satisfies Eq.~\eqref{eH} and that our simulation of the ferromagnet with the antiferromagnet is successful. In comparison, when we apply the mean-field approximation described in the last section, there is a deviation from perfect fidelity. This clear difference in how the fidelities vary with evolution time provides a possible verification of a claimed quantum simulator.
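The coupled mean-field equations of the classical model above can be integrated directly. A minimal sketch, using scipy's adaptive Runge-Kutta integrator, an illustrative $n = 4$, uniform $J$, and the sign convention of Eqs.~\eqref{hm1}-\eqref{hm2} as written:

```python
import numpy as np
from scipy.integrate import solve_ivp

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
S = [sx / 2, sy / 2, sz / 2]                    # spin operators, hbar = 1
I2 = np.eye(2)

n, J = 4, 1.0                                   # small chain for illustration

def rhs(t, y):
    psi12 = y[:4]                               # two-spin state of spins (1,2)
    singles = y[4:].reshape(n - 2, 2)           # psi_3 ... psi_n
    m = [np.real([p.conj() @ s @ p for s in S]) for p in singles]  # <S_i>, i >= 3
    M = psi12.reshape(2, 2)                     # M[a, b] = <ab|psi12>
    rho2 = M.T @ M.conj()                       # reduced state of spin 2
    m2 = np.real([np.trace(rho2 @ s) for s in S])

    dy = np.empty_like(y)
    # Eq. (hm1): h_2 = J <S_3>, acting on spin 2 inside psi12.
    H2 = sum(J * m[0][k] * np.kron(I2, S[k]) for k in range(3))
    dy[:4] = -1j * H2 @ psi12
    for j in range(n - 2):                      # spin index i = j + 3
        left = m2 if j == 0 else m[j - 1]
        right = m[j + 1] if j + 1 < n - 2 else np.zeros(3)  # Eq. (hm2) at the end
        h = J * (np.asarray(left) + np.asarray(right))
        Hi = sum(h[k] * S[k] for k in range(3))
        dy[4 + 2 * j: 6 + 2 * j] = -1j * Hi @ singles[j]
    return dy

singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
y0 = np.concatenate([singlet] + [np.array([1, 0], dtype=complex)] * (n - 2))
sol = solve_ivp(rhs, (0, 2.0), y0, rtol=1e-9, atol=1e-9)   # RK45 by default
yT = sol.y[:, -1]
# Each subsystem evolves unitarily under its mean-field Hamiltonian,
# so the subsystem norms are conserved.
norms = [np.linalg.norm(yT[:4])] + \
        [np.linalg.norm(yT[4 + 2 * j: 6 + 2 * j]) for j in range(n - 2)]
```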
We also note that the reason there is no Trotterization error in the numerical fidelity of the quantum model is that Trotterization is applied to simulate both the ferromagnetic and the antiferromagnetic time evolution unitaries, as discussed earlier. If we instead apply Trotterization only to the ferromagnetic unitary $U_f(t)$ and simulate the antiferromagnetic one continuously without Trotterization, the fidelity is expected to deviate from 1 as time grows due to the Trotterization error. This deviation therefore serves as a possible measure of Trotterization error in the quantum simulator. \subsection{Robustness} In experiment, gate errors will inevitably be involved and affect the simulation. Here we model the gate error as an additional term in the exchange energy $ J_{i,i+1} $ (Eq.~\eqref{EQ_Ha}), expressed by $ J_{i,i+1} (1+\eta_i) $ ($\eta_i\ll1$), where $ \eta_i $ is a random error sampled from a normal distribution \begin{align} \label{nn} p(\eta_i)=\frac{e^{-\frac{\eta_i^2}{2v^2}}}{\sqrt{2\pi v^2}}, \end{align} with standard deviation $ v $. For each step of the time evolution in this series, a new $\eta_i$ is sampled from the distribution above. We ran the numerical experiments 100 times for each $n$. Recall that in our simulation technique, the time evolution $U_{f,n}(t)$ (Eq.~\eqref{uft}) is approximated by a series of time evolutions under two-spin ferromagnetic Hamiltonians which are relatively long (nearly a $2\pi$ phase evolution) for each Trotter step. In contrast, the return under $U_{af,n}$ uses short steps. This discrepancy opens the potential for large errors under small variations of $J_{i,i+1}$. Here we define the infidelity $I_{ec}=1-f_{ec}$ as a measure of imperfect revival. For each $v$ and $n$, we repeat the numerical simulation 100 times to get an averaged infidelity. In Fig.~\ref{re1}, we plot this averaged infidelity for several choices of $v$ at fixed $n=25$.
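A single-pair sketch of this noise model is given below; the values are illustrative, and the initial state $|01\rangle$ is chosen because it is a singlet-triplet superposition, so a net relative phase error shows up as infidelity:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
SS = sum(np.kron(s, s) for s in (sx, sy, sz)) / 4   # S_1 . S_2

J, t = 1.0, 1.0
t_prime = 2 * np.pi / J - t                   # Eq. (t') for Jtilde = J, |d eps| = 1
psi0 = np.array([0, 1, 0, 0], dtype=complex)  # |01>: singlet-triplet superposition

def mean_infidelity(v, rng, samples=100):
    out = []
    for _ in range(samples):
        e1, e2 = rng.normal(0.0, v, 2)        # eta_i ~ N(0, v^2), Eq. (nn)
        # Simulated ferromagnetic step, then antiferromagnetic return,
        # each with its own perturbed coupling J (1 + eta_i):
        U_f = expm(-1j * J * (1 + e1) * SS * t_prime)
        U_af = expm(-1j * J * (1 + e2) * SS * t)
        psi = U_af @ U_f @ psi0
        out.append(1 - abs(psi0.conj() @ psi) ** 2)
    return float(np.mean(out))

rng = np.random.default_rng(0)
I0 = mean_infidelity(0.0, rng)     # no gate error: perfect revival
Iv = mean_infidelity(0.05, rng)    # finite v: small but nonzero infidelity
```

The long simulated ferromagnetic step (duration $t'$, nearly a $2\pi$ phase) amplifies the coupling error relative to the short antiferromagnetic return, as described above.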
The plot shows that the infidelity grows only polynomially with $v$ for a fixed $n$, i.e. $ I_{ec}\propto v^b $ for some order $b$. Taking a log-log plot and fitting it linearly as $\log (I_{ec})=a+b(n) \log(v) $, we obtain the slope $b(n)$ of Fig.~\ref{re1}, and we take this order $b(n)$ as a measure of the robustness of the system. In Fig.~\ref{re2}, the slope $b(n)$ is plotted as a function of $n$. We see that $b$ is largely independent of $n$, and thus we can conclude that this protocol is 'robust' in the sense that the fidelity does not decrease exponentially with an increasing number of spins. Regarding the source of this robustness, we note that our protocol does not necessarily send the spin information through arbitrary distances in an infinite chain, possibly due to Anderson localization in our one-dimensional system. \section{Perfect state transfer} \subsection{State transfer} The Loschmidt echo protocol provides a verification of the existence of the ferromagnetic interaction in our simulation. However, since the Loschmidt echo only measures the first two spins, there is no guarantee that the information is transferred through the whole spin chain, especially from one end to the other. A perfect state transfer from one end to the other can be achieved under the Hamiltonian \cite{StateTransfer} \begin{align} \label{ht} H_{\text{tr}}=-2 \sum_{i=1}^{n-1}J_{i,i+1}\bm{S}_i\cdot\bm{S}_{i+1} + \sum_{i=1}^{n} B_i\sigma_i^z. \end{align} Compared with the Heisenberg model for the ferromagnetic interaction used above, the differences are a non-uniform exchange interaction between the $ i $th and the $(i+1)$th spins, \begin{align} \label{jj} J_{i,i+1}=\sqrt{i (n-i)}, \end{align} and a non-uniform magnetic field \begin{align} B_i=\frac{1}{2}(J_{i,i+1}+J_{i-1,i}), \end{align} on the $i$th spin (with $J_{0,1}=J_{n,n+1}=0$). Such a non-uniform magnetic field can be engineered using the architecture illustrated in Fig.~\ref{B}.
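The engineered profile is easy to tabulate, and mirror symmetry of the couplings and fields about the chain centre is a hallmark of perfect-state-transfer chains. A sketch with an illustrative $n = 6$ and the boundary convention $J_{0,1}=J_{n,n+1}=0$:

```python
import numpy as np

n = 6
bonds = np.arange(1, n)                   # bonds (i, i+1), i = 1 .. n-1
Jc = np.sqrt(bonds * (n - bonds))         # Eq. (jj)

# B_i = (J_{i,i+1} + J_{i-1,i}) / 2, with J_{0,1} = J_{n,n+1} = 0:
Jpad = np.concatenate(([0.0], Jc, [0.0]))
B = (Jpad[1:] + Jpad[:-1]) / 2
```

For $n = 6$ this gives couplings $(\sqrt{5}, \sqrt{8}, 3, \sqrt{8}, \sqrt{5})$: strongest at the centre and symmetric under reflection of the chain.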
The spins can be realized as electrons in quantum dots placed in a magnetic field gradient. The magnetic field strength on each spin can be adjusted using an electrode, which may pull or push the electron to a region of different magnetic field strength. With this extra magnetic-field term in the state transfer Hamiltonian of Eq.~\eqref{ht}, we split the Hamiltonian into three Trotter elements, i.e. $H_o$, $H_e$ and the magnetic field term $H_B=\sum_{i=1}^{n} B_i\sigma_i^z$. \begin{figure}[t] \centering \includegraphics[scale=0.35]{B.pdf} \caption{Illustration of the magnetic-field-gradient architecture in experiment. Spins are aligned along a line of equal field strength of a magnetic dipole, and each spin is manipulated by a separate electrode so that it can be moved in the field; the spins then line up across a field gradient.} \label{B} \end{figure} Under the Hamiltonian in Eq.~\eqref{ht}, the initial state $ \ket{\psi(0)}=\ket{s}\ket{000...0} $ evolves to $\ket{ \psi(\frac{\pi}{2})}=\ket{000...0}\ket{s} $ after a time $t=\frac{\pi}{2}$. \begin{figure}[t] \centering \includegraphics[scale=0.5]{StateTransfer.pdf} \caption{The fidelity in our state transfer protocol as a function of the evolution time $ t $. The state is perfectly transferred through the chain at $ t=\frac{\pi}{2} $.} \label{st} \end{figure} \begin{figure}[t] \includegraphics[width=0.45\textwidth]{rb_st_1.pdf} \caption{Perfect state transfer infidelity $I_{tr}$ at $n = 25, t = \frac{\pi}{2}$ as a function of the standard deviation $v$ of the gate error. The mean values (dots) and standard deviations of $I_{tr}$ are obtained by repeating the simulation 100 times for each value of $v$. Using the fit function $I_{tr}=\exp(a)v^b $, we can find the slope $ b(n) $ for each $n$.} \label{rs1} \end{figure} \begin{figure} \includegraphics[width=0.43\textwidth]{rb_st_2.pdf} \caption{The slope $ b(n) $ from the fit function in Fig.~\ref{rs1} as a function of $n$.
$b(n)$ for even $n$ (orange) and odd $n$ (blue) show a different dependence on $n$.} \label{rs2} \end{figure} Note that when we apply our method of simulating a ferromagnet with an antiferromagnet, the different couplings $J_{i,i+1}$ between spins lead to different simulation time intervals $t'$ under the antiferromagnetic interactions. We therefore simulate each pair separately. These simulations can be done in parallel and hence do not change our Trotterization choice. After evolving for the time $ t=\frac{\pi}{2} $ under $ H_{\text{tr}} $, we expect the final state $\ket{ \psi(\frac{\pi}{2})}=\ket{000...0}\ket{s} $. We measure the projection of the last two spins onto $ \ket{s} $ using \begin{align} P_{tr}=\mathbbm{1}\otimes\ket{s}\bra{s}. \end{align} We further define the fidelity $f_{tr}$ of this protocol to be \begin{align} f_{tr}\equiv\bra{\psi\left(\frac{\pi}{2}\right)}P_{tr}\ket{\psi\left(\frac{\pi}{2}\right)}. \end{align} The numerical result in Fig.~\ref{st} shows that the state is transferred through the whole chain perfectly at $ t=\frac{\pi}{2} $. This gives us confidence that the interaction realized in the simulation is indeed the nearest-neighbor two-body interaction we intend to simulate. \subsection{Robustness} Similar to the robustness analysis for the Loschmidt echo discussed above, we add the gate error term to Eq.~\eqref{jj} and obtain $ J_{i,i+1}(1+\eta_i) $ ($\eta_i\ll1$), where $ \eta_i $ is a random error sampled from the normal distribution in Eq.~\eqref{nn}. The time evolution under $H_{tr}$ (Eq.~\eqref{ht}) is approximated by a series of time evolutions under two-spin ferromagnetic Hamiltonians. For each time evolution in this series, a new $\eta_i$ is sampled from the distribution above. We define the infidelity $I_{tr}=1-f_{tr}$ as the deviation of the fidelity from the perfect value 1. For each $v$ and $n$, we repeated the numerical simulation 100 times to get an averaged infidelity.
Fig.~\ref{rs1} shows how the infidelity changes with the standard deviation $v$ at $n=25$. The linear fit $\log I_{tr}=a+b \log v$ of a log-log plot gives the slope $b(n)$, which tells us how fast the infidelity grows with the error strength. In Fig.~\ref{rs2}, we plot how $b(n)$ changes with $n$. It indicates that the robustness of the state transfer protocol depends on the parity of $n$. For odd $n$, the system appears to be robust, i.e., $b(n)$ does not degrade exponentially as a function of $n$. However, for even $n$ this is no longer the case. We attribute this to the role a single, bad link plays in the even case right in the center of the chain --- it could be this worst-case scenario that dominates the success or failure of the transfer protocol. In contrast, for odd $n$, two links are equally strong in the center, leading to multiple failure pathways and (possibly) removing this source of exponentially decreasing fidelity. \section{Outlook} In this paper we consider two tests of a quantum simulator with Heisenberg interactions and the ability to prepare and measure singlet states of spins. Starting with a technique to simulate ferromagnetic interactions using antiferromagnetic interactions, our protocols are surprisingly robust to parametric errors in the operation of the simulator. However, properties of the system in the middle of these protocols have not been investigated, nor have scenarios in which depolarizing noise or state preparation error play a key role. We also do not yet have a way to estimate how much and how fast entanglement entropy grows in the system as a function of time. Answering these questions is an intriguing direction for future research. On the other hand, our state transfer protocol has demonstrated the ability to transport quantum entanglement between subsystems.
We suspect that more complicated quantum computation tasks can also be implemented, which might allow one to lower bound the computational power of our proposed simulator. \acknowledgements We thank X.~Wu and A.~M.~Childs for helpful discussions. This research was supported in part by the NSF funded Physics Frontier Center at the Joint Quantum Institute. \bibliographystyle{apsrev4-1}
\section{INTRODUCTION} The study of small bodies of the solar system was changed forever in 1977, with the discovery of a large icy object moving on an orbit between those of Saturn and Uranus \citep{1979IAUS...81..245K}. That object was subsequently named Chiron. It was soon realised that its orbit was dynamically unstable, with a mean half-life of 0.2 Myr, which is far shorter than the age of the solar system \citep[e.g.][]{1979AJ.....84..134O,1990Natur.348..132H}. For over a decade, 2060 Chiron was an oddity - but following the discovery of 5145 Pholus in 1992, a growing population of such objects in the outer solar system has been discovered - a population now known as the Centaurs. \par Over the years, a number of different schemes have been proposed to define Centaurs \citep[e.g.][]{2003MNRAS.343.1057H,2005AJ....129.1117E,2007prpl.conf..895C,2008ssbn.book...43G}. Across all these schemes, it can be generally said that Centaurs have orbits between the giant planets Jupiter and Neptune. For this work, we follow the definition used by the Minor Planet Center, which considers objects to be Centaurs if they move on orbits with perihelia beyond the orbit of Jupiter and with semi-major axes within the orbit of Neptune\footnote{http://www.minorplanetcenter.net/iau/lists/Unusual.html (accessed 17th December 2016)}. Those objects in this region that are trapped in 1:1 resonance with one of the giant planets (the Trojans) are excluded from the list, and are not considered to be Centaurs. Using this definition, over 220 objects can presently be classified as Centaurs\footnote{http://www.minorplanetcenter.net/iau/lists/t\_centaurs.html (Accessed October 8, 2017)}. \par The Centaurs move on highly chaotic orbits which are frequently perturbed by the gravitational influence of the four giant planets. The strongest perturbations typically occur as a result of close approaches between the Centaurs and those planets \citep[e.g.][]{1962ASPL....8..375M,2004MNRAS.355..321H}. 
The instability of the Centaur region is exemplified by the fact that Centaurs have dynamical lifetimes and half-lives much less than the age of the solar system, with values typically $\ll$100 Myr \citep{1996ASPC..107..233D,1997Icar..127...13L,2003AJ....126.3122T, 2004MNRAS.354..798H,2007Icar..190..224D,2009Icar..203..155B,2015A&A...583A..93P}. \par It is therefore clear that these objects are ephemeral in nature, and that their ranks must be replenished over time from other sources. Proposed source populations for the Centaurs include the Oort Cloud \citep{2005MNRAS.361.1345E,2012MNRAS.420.3396B,2014Ap&SS.352..409D,2014Icar..231...99F}, the Jupiter Trojans \citep{2004MNRAS.354..798H,2006MNRAS.367L..20H,2012MNRAS.423.2587H}, the Neptune Trojans \citep{2010MNRAS.402...13H,2010MNRAS.405.1375L,2012MNRAS.422.2145H}, the Scattered Disk \citep{2007Icar..190..224D,2008ApJ...687..714V} and other populations in the Edgeworth-Kuiper Belt \citep{1997Icar..127...13L,2008ApJ...687..714V}. Of these many source regions, it is thought that the majority of Centaurs originate within the Scattered Disk \citep{2007Icar..190..224D,2008ApJ...687..714V}. \par After these small bodies escape from one of the more stable source populations into the Centaur region, they will typically spend on the order of $\sim10^6$ years as a Centaur before diffusing out of that region \citep{2003AJ....126.3122T}. The final fates of Centaurs are varied - some will collide with the Sun or one of the planets, or will be torn apart by tidal forces during a planetary close encounter, whilst others will be thrown onto orbits beyond Neptune or be ejected from the solar system entirely \citep{1994PlR....14f...8N,2004MNRAS.354..798H,2008ApJ...687..714V,2016DPS....4812023W}. 
\par During the course of their evolution, studies have shown that at least one-third of the Centaurs will evolve onto cometary orbits with perihelia in the inner solar system \citep{2004MNRAS.355..321H,2004come.book..659J,2009Icar..203..155B}. As such, the Centaurs are generally regarded as the principal parent population for the short-period comets \citep{2003AJ....126.3122T,2004A&A...413.1163G,2004MNRAS.354..798H,2008ApJ...687..714V,2009Icar..203..155B,2009AJ....137.4296J,2011AstSR...7..230K}. \par Indeed, several Centaurs (including Chiron) have been observed exhibiting cometary activity \citep[e.g.][]{2009AJ....137.4296J,2015MNRAS.454.3635S,2017AJ....153..230W}. Given the extreme dynamical instability exhibited by the Centaurs, coupled with the frequent close encounters they experience with the giant planets, the discovery in 2013 of a system of rings orbiting the Centaur 10199 Chariklo came as a huge surprise \citep{2014Natur.508...72B}. Those rings, revealed by unexpected dimmings of the occulted star immediately before and after the occultation by Chariklo itself, are narrow and dense, and lie at radii of $\sim$ 391 and $\sim$ 405 km.\par It is still unknown whether the rings formed recently, or pre-date Chariklo's injection into the Centaur region, though rings have also recently been discovered around the dwarf planet Haumea \citep{2017Natur.550..219O}, which orbits beyond Neptune. This suggests that rings around small bodies could form in the Trans-Neptunian region.\par Furthermore, a recent dynamical study has shown that such rings could readily survive with Chariklo through its entire evolution in the Centaur region, since close encounters sufficiently deep to disrupt the rings are rare \citep{2017AJ....153..245W}.\par The chance discovery of Chariklo's ring system prompted a reanalysis of stellar occultation data obtained for 2060 Chiron in 1993, 1994 and 2011 by \citet{2015A&A...576A..18O}.
The original analysis of those occultation data found dips in the light curve that were thought to correspond to regions outside the nucleus, and these were interpreted as comet-like dust jets \citep{1995Natur.373...46E,1996Icar..123..478B} or symmetrical jet-like features \citep{2015Icar..252..271R}. The recent reanalysis of these data suggests that they might also be interpreted as evidence for a ring system similar to that of Chariklo, with a mean radius of 324 $\pm$ 10 km \citep{2015A&A...576A..18O}.\par The origin of this proposed ring structure could be the result of a tidal disruption of Chiron due to a close encounter with a planet \citep{HyodoR:2016}, a collision between Chiron and another body \citep{2017A&A...602A..27M}, a collision between an orbiting satellite and another body \citep{2017A&A...602A..27M}, the tidal disruption of an orbiting satellite \citep{ElMoutamidM:2014}, or debris ejected from Chiron itself due to cometary activity \citep{PanM:2016}.\par Over time, rings can widen due to viscous spreading \citep{2017ApJ...837L..13M}. This process can occur on timescales as short as hundreds of years. However, the extent of the rings can be constrained, keeping them far narrower, if shepherd satellites are present \citep{FrenchRG:2003, JacobsonRA:2004,ElMoutamidM:2014,2017ApJ...837L..13M}. At present, no shepherd satellites are known to orbit any Centaur, and hence their possible dynamical role is not considered in this study.\par Given the extreme dynamical instability exhibited by Chiron, it is interesting to consider whether its ring system could survive through the entirety of its life as a Centaur.
If deep close encounters with the giant planets are sufficiently frequent, then it might be possible to place a constraint on the age of any rings around Chiron on the basis of its past dynamical history.\par As a result, in this work, we follow \citet{2017AJ....153..245W} and examine the dynamical history of Chiron and its proposed ring system. In doing so, we explore the likelihood that its rings could be 'primordial' (i.e. could date back to before the object was captured as a Centaur), barring ring dispersal by viscous spreading. Our results also allow us to explore the likely source population of Chiron, and to confirm its status as one of the most dynamically unstable Centaurs.\par In section 2, we present the physical and orbital properties of 2060 Chiron. In section 3, we discuss the means by which we can measure the severity of close encounters between ringed small bodies and planets, and in section 4, we discuss the two dynamical classes that have been proposed for the Centaurs. We present our methodology in section 5, then present and discuss the results of our numerical integrations of Chiron in section 6. Finally, in section 7, we present our conclusions, and discuss possible directions for future work. \section{THE PROPERTIES OF 2060 CHIRON} \subsection{ORBITAL PROPERTIES} After Chiron was discovered, pre-discovery images dating back as early as the late 19th century allowed its orbit to be well constrained \citep{1977IAUC.3151....2L,1979IAUS...81..245K}. It was soon found that the orbit of Chiron was unlike that of any known small body at the time. Its aphelion lay between Saturn and Uranus, while its perihelion lay just interior to Saturn's orbit.\par Since its discovery, further observations of Chiron have allowed its orbit to be refined still further.
The current best-fit orbital properties of Chiron are shown in Table~\ref{chiron_orbital}, and were taken from the Asteroids Dynamic Site \citep{2012IAUJD...7P..18K}.\par Using the semi-major axis, $a$, and eccentricity, $e$, from Table~\ref{chiron_orbital}, the perihelion and aphelion distances are found to be 8.4 au and 18.86 au, respectively. The semi-major axis lies about 0.01 au from the interior 5:3 mean-motion resonance with Uranus, located at about 13.66 au. The eccentricity of Chiron's orbit lies in the middle of the eccentricity range for the orbits of the known Centaurs, 0.01 - 0.73\footnote{http://www.minorplanetcenter.net/iau/lists/Centaurs.html (accessed 9 August, 2017)}, and is high enough to cause Chiron to cross the orbits of both Saturn and Uranus. The resulting giant-planet perturbations and close approaches have a significant effect on the dynamical evolution of Chiron's orbit \citep{1979AJ.....84..134O,1979Icar...40..345S,2002EM&P...90..489K}, which is reflected in its relatively short dynamical lifetime of $\sim$ 1 Myr \citep{1990Natur.348..132H,2004MNRAS.354..798H}.\par Furthermore, the half-life of its orbit is 1.03 Myr in the forward direction and 1.07 Myr in the backward direction \citep{2004MNRAS.354..798H}. Both times are much less than the age of the solar system.\par The instability of Chiron's current orbit makes it highly unlikely that this orbit is primordial. Instead, the general consensus is that Chiron follows a chaotic orbit and originated in the Kuiper Belt \citep{1979AJ.....84..134O,1990Natur.348..132H,1996P&SS...44.1547L,2001P&SS...49.1325S,2002Icar..160...44D,2002EM&P...90..489K}. \par Using the taxonomy of \citet{2003MNRAS.343.1057H}, Chiron is classified as an object in the SU$_{\textnormal{IV}}$ class. This means that its dynamics are controlled by Saturn at perihelion and by Uranus at aphelion. The subscript IV means that the Tisserand parameter with respect to Saturn is $>$ 2.8 \citep{2003MNRAS.343.1057H}.
The Tisserand parameter, $T_{p}$, is a quantity calculated from the orbital parameters of a small body and those of a planet it could encounter. It is defined by: \begin{equation} T_p = \frac{a_p}{a} + 2\cos(i-i_p) \sqrt{\frac{a}{a_p}(1-e^2)} \end{equation} \noindent{\citep[e.g.][]{MurrayCD:1999}}. Here, $a$, $e$ and $i$ are the semi-major axis, eccentricity and inclination of the small body's orbit, whilst $a_p$ and $i_p$ are the semi-major axis and inclination of the planetary orbit.\par To first order, the Tisserand parameter of an orbit with respect to a given planet is expected to be conserved through an encounter with that planet, with its precise value giving an indication of the maximum strength of the encounters that are possible with that planet.\par Broadly, if $T_{p} > 3$, then particularly close encounters between the two objects are not possible, whilst for $2.8 \le T_{p} \le 3$, extremely close encounters can occur that might lead to the object being ejected from the solar system in a single pass \citep{2003MNRAS.343.1057H}. \begin{table} \caption{The orbital elements of Chiron for epoch 2457600.5 JD, based on an observational arc length of 44,305.9 days, taken from the Asteroids Dynamic Site (accessed 31st December, 2015). Here, $a$ is the semi-major axis, $e$ the eccentricity and $i$ the inclination of the orbit. $\Omega$, $\omega$ and $M$ are the longitude of ascending node, argument of perihelion and mean anomaly respectively.
Each uncertainty is the standard deviation around the best-fit solution.}\label{chiron_orbital} \begin{tabular} {|c|c|c|} \hline Property&Value&Units\\ \hline $a$&13.639500 $\pm$ $(1.48\times 10^{-6})$&au\\ $e$&0.38272700 $\pm$ $(9.62\times 10^{-8})$&\\ $i$&6.947000 $\pm$ $(6.67\times 10^{-6})$&deg\\ $\Omega$&209.21600 $\pm$ $(6.05\times 10^{-5})$&deg\\ $\omega$&339.53700 $\pm$ $(6.19\times 10^{-5})$&deg\\ $M$&145.97800 $\pm$ $(2.97\times 10^{-5})$&deg\\ \hline \end{tabular} \end{table} \subsection{DENSITY, SIZE AND MASS} Unlike the relatively high precision with which the orbital parameters of Chiron are known, its physical properties remain much more poorly constrained. The diameter of Chiron has had to be estimated based on an assumed albedo. Though a strong effort to determine the size of Chiron has been made over the past two decades, those efforts have been hampered by interference from possible material located outside the nucleus, by cometary activity, and by Chiron's elongated shape \citep{2013A&A...555A..15F,2015A&A...576A..18O}.\par Radius measurements ranging from 71 km \citep{2004A&A...413.1163G} to a constraint of $<$ 186 km \citep{1991Sci...251..777S} have been reported. \citet{2015A&A...576A..18O} report an overall average effective spherical radius of 90 km, which we adopt for this work. \par Because of the large uncertainty in the size and mass of Chiron, its bulk density is also poorly known. \citet{1997AJ....113..844M}, in their study of a coma around Chiron, report a bulk density in the range 500--1,000 kg m$^{-3}$. For a spherical radius of 90 km, this corresponds to a mass range of 1.53$\times 10^{18}$--3.05$\times 10^{18}$ kg.\par \section{Measuring the Severity of Close Encounters with Planets} Currently, it is unknown what role, if any, the sporadic activity of Chiron played in the formation of any ring structure around the body. Rings could have formed either before or after Chiron entered the Centaur region.
But given that Chiron presently lies on a chaotic and unstable orbit prone to planetary close encounters, it is of interest to determine the likelihood that such encounters could severely damage or destroy any orbiting ring structure.\par To accomplish this, a method to gauge the severity of such an encounter is needed. Primarily, the severity of a close encounter between a ringed small body and a planet is determined by the minimum approach distance between the small body and the planet, $d_{min}$.\par If the small body is on a parabolic or hyperbolic orbit relative to the planet (i.e. it has not been captured as a satellite), then the velocity at infinity of the small body relative to the planet also plays a role in determining the encounter severity, albeit to a lesser extent than the depth of the encounter.\par \citet{2017AJ....153..245W} ignored velocity effects and developed a severity scale based on $d_{min}$ relative to the Hill radius, $R_H$, the tidal disruption distance, $R_{td}$, the ring limit, $R=10R_{td}$, and the Roche limit, $R_{roche}$. This scale is shown in Table~\ref{CE_severity}. \begin{table} [h] \caption{{A scale ranking the severity of a close encounter between a ringed small body and a planet, based on the minimum distance obtained between the small body and the planet, $d_{min}$, during the close encounter. $R_H$, $R_{td}$, $R=10R_{td}$ and $R_{roche}$ are the Hill radius of the planet with respect to the Sun, the tidal disruption distance, the ring limit and the Roche limit respectively.}}\label{CE_severity} \begin{tabular} {|c|c|} \hline $d_{min}$ Range&Severity\\ \hline $d_{min} \ge R_H$&Very Low\\ $R\le d_{min}< R_H$&Low\\ $R_{td}\le d_{min}< R$&Moderate\\ $R_{roche}\le d_{min}< R_{td}$&Severe\\ $d_{min}<R_{roche}$&Extreme\\ \hline \end{tabular} \end{table} The Hill radius defines a sphere of influence centered on a secondary body of mass $m_{s}$ moving on an orbit of radius $R_{radial}$ around a primary body of mass $M_p$ in the planar problem.
The Hill radius is approximately given by: \begin{equation} R_{H}\approx R_{radial}\left(\frac{m_{s}}{3M_p}\right)^{\frac{1}{3}} \label{hilleqn} \end{equation} \noindent{\citep[e.g.][]{MurrayCD:1999}}. For non-circular orbits, $R_{radial}$ is approximated using the semi-major axis of the orbit. Loosely defined, the Hill radius is the distance around a secondary body (relative to a primary body) within which satellites can orbit without their orbits being completely disrupted by tidal forces due to the primary body. In the case where the secondary body is a planet and the primary body the Sun, it is found that all known planetary satellites follow this rule, being contained well within the Hill spheres of their host planets. For other objects moving in the system, the Hill radius of a planet can be used to indicate the region of space around its orbit into which other objects move at their peril.\par Typically, encounters at a distance greater than $\sim$ 3 Hill radii will have only a limited effect on the long-term stability of an object, whilst orbits that approach within this distance are typically dynamically unstable, unless close approaches are prevented by mutual mean-motion resonances between the objects concerned \citep[e.g.][]{1971AJ.....76..167W,1995AJ....110..420M,HUAqr,2012ApJ...754...50R,2012ApJ...761..165W}.\par The ring limit is a relatively new critical distance introduced by \citet{AraujoRAN:2016} and used by \citet{2017AJ....153..245W} to examine the stability of Chariklo's ring system against close encounters. It is loosely defined as lying at ten tidal disruption distances from a given planet, and represents an upper limit on the minimum approach distance for close encounters whose effect on the ring of a minor body is just noticeable (i.e. the maximum change in the orbital eccentricity of any ring particle is 0.01).
Here, we apply the ring limit to study the influence of close encounters between Chiron and the giant planets.\par For a typical solar system small body, the tidal disruption distance, $R_{td}$, lies well within the Hill radius of a given planet. When the separation between a small body and a planet is smaller than $R_{td}$, a secondary body--satellite binary pair of total mass $m_{s}+m_{sat}$ and semi-major axis $a_B$ can be permanently disrupted by tidal forces in a single pass. It should be noted, in passing, that the ring limit and tidal disruption distance defined in this manner have no meaning for close encounters between planets and small bodies with no rings or satellites.\par $R_{td}$ can be approximated as the secondary--primary body separation at which a satellite orbiting the secondary body would lie at the outer edge of the secondary body's Hill sphere. $R_{radial}$ in Equation~\ref{hilleqn} is then by definition $R_{td}$, and $R_{H}$ is approximated by $a_B$. Solving for $R_{td}$ yields: \begin{equation} R_{td}\approx a_B\left(\frac{3M_p}{m_{s}+m_{sat}}\right)^{\frac{1}{3}}\label{tidal_disrupteqn} \end{equation} \noindent{\citep[e.g.][]{PhilpottCM:2010}}. Closer still to the primary body, the Roche limit is the distance from the primary within which a secondary body held together only by its own gravity would be torn apart by tidal forces. For a rigid secondary body, the Roche limit with respect to a primary body is approximately: \begin{equation} R_{roche} = 2.44R_{p}\left(\frac{\rho_p}{\rho_{s}}\right)^{\frac{1}{3}}\label{rocheeqn} \end{equation} \noindent{\citep{1849Roche...1..243,MurrayCD:1999}}. Here, $R_p$ is the physical radius of the primary body, $\rho_p$ the density of the primary body and $\rho_s$ the density of the secondary body.\par Now that a severity scale for close encounters has been established, it can be used to study simulated close encounters between ringed Centaurs and the giant planets.
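To make the scale concrete, the sketch below evaluates the four critical distances and classifies a hypothetical encounter. All physical values used (Saturn's mass, radius, bulk density and semi-major axis, and a Chiron-like secondary with an assumed ring semi-major axis $a_B = 324$ km) are rough, assumed numbers for illustration, not taken from this work:

```python
# Worked example of the critical distances of Equations 2-4 and the
# severity scale of Table 2.  All physical values are rough assumptions.
au    = 1.496e11          # m
M_sun = 1.989e30          # kg
M_p   = 5.683e26          # kg, Saturn (assumed)
a_p   = 9.58 * au         # m, Saturn's semi-major axis (assumed)
R_p   = 5.82e7            # m, Saturn's radius (assumed)
rho_p = 687.0             # kg m^-3, Saturn's bulk density (assumed)
m_s   = 3.05e18           # kg, Chiron-like secondary (m_sat ~ 0)
rho_s = 1000.0            # kg m^-3
a_B   = 3.24e5            # m, assumed ring semi-major axis (324 km)

R_H     = a_p * (M_p / (3.0 * M_sun)) ** (1.0 / 3.0)    # Hill radius, Eq. 2
R_td    = a_B * (3.0 * M_p / m_s) ** (1.0 / 3.0)        # tidal disruption, Eq. 3
R_ring  = 10.0 * R_td                                   # ring limit
R_roche = 2.44 * R_p * (rho_p / rho_s) ** (1.0 / 3.0)   # Roche limit, Eq. 4

def severity(d_min):
    """Rank an encounter with minimum distance d_min (in m) per Table 2."""
    if d_min >= R_H:
        return "Very Low"
    if d_min >= R_ring:
        return "Low"
    if d_min >= R_td:
        return "Moderate"
    if d_min >= R_roche:
        return "Severe"
    return "Extreme"

print(f"R_H = {R_H / au:.2f} au, R_td = {R_td / au:.4f} au")
print(severity(0.1 * au))  # a 0.1 au pass by Saturn -> "Low" for these values
```

Note the strong ordering $R_{roche} < R_{td} < R < R_H$ that the severity scale relies on: for these assumed values the Hill radius is of order 0.4 au, while the ring limit is only of order $10^{-2}$ au, so "Moderate" or worse encounters require very deep passes.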
\section{The Two Dynamical Classes of Centaurs} Throughout its lifetime as a Centaur, the frequency and severity of close encounters between Chiron and the giant planets will affect the stability of any ring structure around Chiron. The frequency of close encounters can be affected by a Centaur's so-called dynamical class.\par Previously it was shown that small bodies, including Centaurs, can be classified based on their perihelion, aphelion and Tisserand parameter \citep[as detailed in][]{2003MNRAS.343.1057H}.\par However, as \citet{2009Icar..203..155B} showed, Centaurs may also be classified into one of two classes based on their long-term dynamical behavior. The first type consists of those Centaurs that randomly wander from orbit to orbit. The semi-major axes of these Centaurs' orbits increase and decrease in time with no particular pattern. These Centaurs are known as random-walk Centaurs.\par Centaurs of the other type spend most of their time temporarily trapped in mean motion resonances of the giant planets, typically jumping from one resonance to another. A small body is in a mean motion resonance with a planet if the ratio of the orbital period of the planet to that of the small body equals a ratio of two small integers \citep{MurrayCD:1999}.\par Becoming temporarily trapped in a resonance is a behavior known as resonance sticking \citep{2007Icar..192..238L}. While trapped in a resonance, the semi-major axes of these Centaurs' orbits oscillate about a constant value corresponding to the resonance location. These Centaurs are known as resonance hopping Centaurs. Since it is possible that resonance sticking can protect small bodies from close encounters with planets \citep{1995AJ....110..420M}, the dynamical class of a Centaur can have consequences for any ring structure around it.\par The two types can also be defined more rigorously in mathematical terms.
As the semi-major axes of random-walk Centaurs wander aimlessly while those of resonance hopping Centaurs remain more nearly constant, we would expect that, on average, the standard deviation of the semi-major axis values of random-walk Centaurs would grow more steadily in time than that of resonance hopping Centaurs.\par The mean standard deviation, then, can be used as a tool to distinguish between the two dynamical types. Random-walk Centaurs are those Centaurs whose mean square standard deviation of semi-major axis, $\langle \sigma^2 \rangle$, varies as a power law in time. These Centaurs are said to display generalized diffusion. This can be expressed mathematically as: \begin{equation} \langle \sigma^2 \rangle = Dt^{2H} \label{gen_diff} \end{equation} Here, $t$ is time, $D$ is the generalized diffusion coefficient and $H$ is the Hurst exponent, with $0 < H < 1$. Random-walk Centaurs can then be defined as those Centaurs whose semi-major axis behavior is well described by generalized diffusion. Conversely, the behavior of the semi-major axis of resonance hopping Centaurs is not well described by generalized diffusion.\par Centaurs of both types may display both random walking and resonance sticking during their lifetimes. To determine whether a Centaur is in fact trapped in a particular mean motion resonance, care must be taken.\par Resonances do not exist at a single point but have finite widths in phase space: for any particular resonance, a Centaur can be trapped over a range of semi-major axis values.\par To positively determine whether a small body is trapped in a resonance, two behaviors must be displayed.
First, the semi-major axis of the small body's orbit must oscillate about the resonance location, and second, the primary resonance angle must librate in time \citep{2013Icar..222..220S}.\par The primary resonance angle is defined by $p\lambda -q\lambda_p - (p-q)\bar{\omega}$, where $p$ and $q$ are integers, $\lambda_p$ is the mean longitude of the planet's orbit, $\lambda$ is the mean longitude of the small body's orbit, and $\bar{\omega}$ is the longitude of perihelion of the small body's orbit \citep{MurrayCD:1999,2002MNRAS.335..417R,2009Icar..203..155B,2013Icar..222..220S}.\par This angle is related to the perturbation of the orbit of a small body around a central body (like the Sun) by a third body (like a planet) in the planar 3-body problem. The reader is referred to \citet{MurrayCD:1999} for details. \section{METHOD} To study the dynamical history of Chiron and its ring system, a suite of numerical integrations was performed using the $n$-body dynamics package {\sc Mercury} \citep{ChambersJE:1999}.\par 35,937 massless clones of Chiron were integrated backwards in time for 100 Myr in the six-body problem (Sun, four giant planets, and clone). This integration time is justified as it is at least 100 times longer than the approximate half-life of Chiron \citep{1990Natur.348..132H,2004MNRAS.354..798H}.\par The orbital elements of the individual clones were chosen from a range of three standard deviations below to three standard deviations above the accepted value of each orbital parameter of Chiron for epoch 2457600.0 JD, taken from the Asteroids Dynamic Site \citep{2012IAUJD...7P..18K}.\par To create our cloud of clones for Chiron, we varied each of the orbital elements as follows. First, we sampled the $\pm 3 \sigma$ uncertainty range in semi-major axis, $a$. We tested eleven unique values of semi-major axis, ranging from $a - 3 \sigma$ to $a + 3 \sigma$, in even steps.
At each of these unique semi-major axes, we tested eleven orbital eccentricities, again evenly distributed across the $\pm 3 \sigma$ uncertainty in that variable. At each of these 121 $a$-$e$ pairs, we tested eleven unique inclinations, also evenly spaced in the range $\pm 3 \sigma$. This gave a grand total of 1,331 potential $a$-$e$-$i$ combinations for Chiron. At each of these values, we tested 27 unique combinations of $\Omega$, $\omega$ and $M$, creating a 3$\times$3$\times$3 grid in these three elements. The three values chosen for each of these three variables were the best-fit solution and the two values separated by $3 \sigma$ from that value. In total, this gave us a sample of 35,937 unique orbital solutions for Chiron.\par The time step was chosen to be 40 days, which is approximately one-hundredth of the orbital period of Jupiter - the innermost planet included in this study. Similar time steps have been used before in integrations of both Centaurs and Main Belt asteroids \citep{2000Icar..146..240T,2003AJ....126.3122T}.\par Clones were removed from the simulation upon colliding with a planet, colliding with the Sun, achieving an orbital eccentricity $\ge 1$, or reaching a barycentric distance $>$ 1,000 au.\par The masses and initial orbital elements of the four giant planets were found using the NASA JPL HORIZONS ephemeris\footnote{http://ssd.jpl.nasa.gov/horizons.cgi?s\_body=1$\#$top (accessed 31st December 2015)} for epoch 2451544.5 JD. Inclinations and longitudes for both Chiron and the planets were measured relative to the ecliptic plane.\par In order to set their starting orbital parameters for the simulation, the planets were integrated (within the heliocentric frame) to the epoch 2457600.0 JD - the epoch of the Chiron clones - using the \textit{Hybrid} integrator within the \textsc{Mercury} $n$-body dynamics package \citep{ChambersJE:1999}. The accuracy parameter was set to $10^{-12}$, and the hybrid handover radius was set to three Hill radii.
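The clone grid described above can be sketched in a few lines of Python: eleven evenly spaced values in each of $(a, e, i)$ across $\pm 3\sigma$, and the best fit plus the two $\pm 3\sigma$ extremes in each of $(\Omega, \omega, M)$, using the central values and uncertainties of Table~\ref{chiron_orbital}:

```python
import numpy as np
from itertools import product

# Build the 11 x 11 x 11 x 3 x 3 x 3 clone grid described in the text,
# using the best-fit elements and 1-sigma uncertainties of Table 1.
best = {"a": 13.6395, "e": 0.382727, "i": 6.947,
        "Om": 209.216, "om": 339.537, "M": 145.978}
sig  = {"a": 1.48e-6, "e": 9.62e-8, "i": 6.67e-6,
        "Om": 6.05e-5, "om": 6.19e-5, "M": 2.97e-5}

# 11 evenly spaced values across +/- 3 sigma in a, e and i ...
fine   = {k: np.linspace(best[k] - 3 * sig[k], best[k] + 3 * sig[k], 11)
          for k in ("a", "e", "i")}
# ... and best fit plus the +/- 3 sigma extremes in Omega, omega and M.
coarse = {k: (best[k] - 3 * sig[k], best[k], best[k] + 3 * sig[k])
          for k in ("Om", "om", "M")}

clones = list(product(fine["a"], fine["e"], fine["i"],
                      coarse["Om"], coarse["om"], coarse["M"]))
print(len(clones))  # 11**3 * 3**3 = 35937
```

Each element of `clones` is one initial orbital solution $(a, e, i, \Omega, \omega, M)$ to be handed to the integrator.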
Statistics on the close encounters were then compiled by encounter severity, and by the solar system small body population to which the clone belonged at the time of the encounter. The different small body populations of the solar system used are defined in Table~\ref{ss_populations}. \begin{table} [h] \caption{Some different small body populations of the solar system. Here, $a$ is the semi-major axis of the clone during the close encounter. The semi-major axis and other orbital values of the clone's orbit just before the close encounter were not recorded. $a_J$ and $a_N$ are the semi-major axes of Jupiter and Neptune respectively; and $q$ is the perihelion distance of the clone. Inner SS means inner solar system, SP Comet means short period comet, TNO means Trans-Neptunian Object and Ejection means the clone was being ejected from the solar system at the time of the encounter.} \begin{tabular} {|c|c|} \hline Name&Definition\\ \hline Inner SS&$a\le a_J$\\ SP Comet&$a>a_J$ and $q<a_J$\\ Centaur&$a_J<a<a_N$ and $q>a_J$\\ TNO&$a\ge a_N$\\ Ejection&$e\ge 1$\\ \hline \end{tabular} \label{ss_populations} \end{table} Physical properties of the planets were taken from NASA\footnote{https://ssd.jpl.nasa.gov/?planet\_phys\_par (accessed June 16, 2017)}. The mass of the Sun was also taken from NASA\footnote{https://nssdc.gsfc.nasa.gov/planetary/factsheet/sunfact.html (accessed June 17, 2017)}. For Chiron, we selected a bulk density of 1,000 kg m$^{-3}$, which, along with our selected radius of 90 km, yielded a mass of 3.05$\times 10^{18}$ kg. This mass was used in equation~\ref{tidal_disrupteqn} to determine the tidal disruption distance between Chiron and each planet. The density was used in equation~\ref{rocheeqn} to determine the Roche limit between Chiron and each planet.
\subsection{Determining the Half-Life and Origin of Chiron} To determine the likely origin of Chiron, the chronologically earliest close encounter with a giant planet was analyzed for each clone, and the small body population of which the clone was a member at the time of that close encounter was found using the orbital parameters of the clone's orbit at the time of the encounter.\par This then allowed the fraction of injection events from the various small body populations shown in Table~\ref{ss_populations} to be determined (in other words, it allowed us to determine the likely source population of Chiron). Note that Trojans could overlap with the Centaur small body population as we have defined it. However, in order to have a close encounter, a small body must already have exited the Trojan region. \par Furthermore, though the Jupiter and Neptune Trojans are possible feeder populations to the Centaurs \citep[e.g.][]{2006MNRAS.367L..20H,2010MNRAS.402...13H}, our study is unable to yield any information on the likelihood of either of these being the source of Chiron. Trojans were therefore omitted as separate populations in Table~\ref{ss_populations}.\par To determine the half-life of Chiron against removal from the simulation moving backwards in time, the number of clones remaining at a time $t$ was recorded as a function of time throughout the entire integration. Given $N_o$ as the initial number of clones at time $t = 0$, the half-life can be determined by fitting the data to the standard radioactive decay equation: \begin{equation} N=N_oe^{-0.693t/\tau} \label{half_life_eqn} \end{equation} \noindent{where $\tau$ is the half-life}. The time interval over which the decay of clones was exponential was obtained from the fit of the data to equation~\ref{half_life_eqn}.
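In practice, this fit amounts to a linear regression of $\ln(N/N_o)$ against $t$, with the half-life recovered from the slope. The following minimal sketch illustrates the procedure on synthetic decay data (the actual clone counts are not reproduced here; the 0.7 Myr half-life and the fitted interval are merely used as inputs for the synthetic example):

```python
import numpy as np

# Recover a half-life from decay data by linear regression of ln(N/N0)
# vs t, as per Equation 6.  The synthetic data are generated with an
# assumed half-life, then that value is recovered from the fitted slope.
tau_true = 0.7                    # Myr, assumed for the synthetic data
N0 = 35937                        # initial number of clones
t = np.linspace(0.12, 0.5, 50)    # Myr, the interval fitted in this work
N = N0 * np.exp(-0.693 * t / tau_true)

# ln(N/N0) = -(0.693/tau) t, so tau = -0.693 / slope.
slope, intercept = np.polyfit(t, np.log(N / N0), 1)
tau_fit = -0.693 / slope
print(f"fitted half-life: {tau_fit:.2f} Myr")  # -> 0.70 Myr, by construction
```

With real clone counts, the quality of the regression over a trial interval is what identifies the region of purely exponential decay.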
The fit was then used to calculate the half-life.\par Once the half-life was determined, it was used in equation~\ref{half_life_eqn} to determine the time at which 99.99\% of clones would be removed from the simulation, assuming a constant half-life. This time was then set as the upper limit on the time at which Chiron entered the Centaur region.\par \subsection{Finding the Dynamical Class} A separate set of integrations was made using the IAS15 integrator in the \textsc{Rebound} $n$-body simulation package \citep{2012A&A...537A.128R,2015MNRAS.446.1424R}, using the orbital values of a set of 1,246 Chiron clones from the previous integrations. \par Three samples of $\sim$400 clones each were used - the first sample was taken from the first 1,000 clones, the second from the middle 1,000 clones and the third from the last 1,000 clones in the entire data set. The middle sample included the currently accepted orbital values of Chiron.\par It was not necessary to find the dynamical class of every clone, since the objective of these integrations is to compare and contrast the two dynamical classes and to explore specific examples of the behavior of clones in each class. A sampling of clones is sufficient for these purposes.\par The output time was set to 300 years, and the time step to 0.1 year. In these integrations, clones were removed from the simulation upon colliding with the Sun, colliding with a planet, achieving an eccentricity $\ge 1$, or leaving the Centaur region. Any clone which did not remain in the Centaur region for at least 100,000 years was not used. The dynamical class of each remaining clone was found using the method of \citet{2009Icar..203..155B}: \begin{enumerate} \item{Determine the time at which the clone was injected into the Centaur region, $T_{Centaur}$.
Determine the number of data points in the time interval [0, $T_{Centaur}$].} \item{Create a logarithmic interval of data points using [log(10), log(Data Points)].} \item{Divide the interval into 16 equal logarithmic increments. Call the length of one of these increments $j_s$.} \item{Create a window length of ten data points in units of time. Set this equal to the smallest window length.} \item{Create each $z$th additional window length in units of data points, $w(z)_{datapts}$, by converting a logarithmic window into a window of data points using $w(z)_{datapts}=10^{1+z j_s}+1$, where $z\ge 1$.} \item{Convert each window length from units of data points into units of time using $w(z)_{time}=w(z)_{datapts} \times $(output time). The interval each window covers is closed on one end and open on the other. For example, the first window time interval would be [0, $w(z)_{time})$.} \item{Discard any window lengths longer than 25\% of the data set.} \item{Using the smallest window length, partition the time interval [0, $T_{Centaur}$] into equal windows of time and allow each window to overlap adjacent windows by half a window length.} \item{Within each window, determine the standard deviation, $\sigma$, of the semi-major axis, $a$.} \item{Calculate the mean standard deviation, $\bar{\sigma}$, over all windows.} \item{Repeat the process for all the window lengths.} \item{Perform a linear regression on log($\bar{\sigma}$) vs. log($w(z)_{time}$).} \item{The slope obtained from this regression is an approximation of the Hurst exponent.} \item{A residual is the difference between an actual value and the value expected from the best-fit line; here, the residual of a particular value of log($\bar{\sigma}$) is the absolute value of its vertical distance from the best-fit line. A Centaur is classified as resonance hopping if the maximum value of any one residual is $\ge$ 0.08.
Otherwise, the Centaur is classified as random-walk. This method is based on the results of \citet{2009Icar..203..155B}, and the reader is referred to that work for more details.} \end{enumerate} Selected resonance hopping clones were studied in more detail by examining intervals of time in which the semi-major axis oscillated about a nearly constant value.\par The semi-major axis values for these intervals of time were then smoothed using the technique of \citet{2010MNRAS.404..837H} to determine if the clone was trapped in a mean motion resonance of a giant planet. The method is as follows:\par \begin{enumerate} \item{Qualitatively inspect graphs of semi-major axis vs. time for resonance hopping Centaurs and identify intervals of time, $\Delta T_{res}$, in which the semi-major axis seems to oscillate about a nearly constant value.} \item{Select one of these intervals of time for study. Create a set of all semi-major axis data points during this time interval.} \item{Initially, set the smoothed data set equal to the original data set.} \item{By inspection, decide on a time window in units of data points. Set the window length to an odd number of data points and call this $w_N$.} \item{Apply the window to the original data set at the first data point.} \item{Evaluate the mean value of the semi-major axis over all data points within the window.} \item{Set the value of the middle data point in this window (the $j_{(w_N-1) \times 0.5}$ data point) in the smoothed data set to this mean value.} \item{Slide the window ahead by one data point in the original data set and set the value of the middle data point in this window in the smoothed data set equal to the mean semi-major axis over the entire window in the original data set.} \item{Continue this process until the window ends on the last data point. 
If $j_{last}$ is the last data point, then in the smoothed data set the $j_{last} - j_{(w_N-1) \times 0.5}$ data point is set to the mean value of the semi-major axis in the window in the original data set. Any data points before the $j_{(w_N-1) \times 0.5}$ data point and after the $j_{last} - j_{(w_N-1) \times 0.5}$ data point in the smoothed data set remain unchanged.} \item{Try various window lengths until the smoothed data is as close to a cosine or sine wave in time as can be obtained by inspection.} \item{Set the nominal location of the mean motion resonance equal to the mean value of the semi-major axis over the time interval $\Delta T_{res}$ in the smoothed data set.} \item{Compare this location to the known locations of mean motion resonances of the giant planets for identification. If the mean value is within 0.1 au of a resonance location, then consider that resonance a possible candidate.} \item{Examine the primary resonance angle associated with each candidate resonance for librating behavior over the time interval. If the angle librates, then consider the clone to be trapped in the resonance over the time of libration.} \end{enumerate} The locations of mean motion resonances of the giant planets, $a_{res}$, were found using: \begin{equation} a_{res} = \left(\frac{j_1}{j_2}\right)^{\frac{2}{3}}a_p \label{mmr_loc} \end{equation} \noindent{\citep{MurrayCD:1999}}. Here, $a_p$ is the semi-major axis of a planet; and $j_1$ and $j_2$ are integers. In this work, $j_1$ and $j_2$ were limited to values between 1 and 20. \subsection{MEGNO and Lifetime Maps} The chaoticity and chaotic lifetime of Chiron's orbital evolution were studied by means of global MEGNO and lifetime maps calculated over a given parameter region.
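As a quick consistency check on the resonance-location formula above (Equation~\ref{mmr_loc}), the snippet below recovers the location of the interior 5:3 mean motion resonance of Uranus quoted in Section 2; the Uranus semi-major axis used (19.19 au) is an assumed approximate value, not taken from this work:

```python
# Location of the j1:j2 mean motion resonance of a planet at a_p (au),
# per Equation 8.  a_p = 19.19 au for Uranus is an assumed approximation.
a_p = 19.19  # au

def a_res(j1, j2, a_p):
    """Semi-major axis of the j1:j2 resonance of a planet at a_p."""
    return (j1 / j2) ** (2.0 / 3.0) * a_p

# Interior 5:3 resonance: the small body completes 5 orbits per 3 of Uranus.
print(f"5:3 interior resonance of Uranus: {a_res(3, 5, a_p):.2f} au")  # -> 13.65 au
```

The result lies within 0.01 au of the $\sim$13.66 au quoted earlier, and well inside the 0.1 au matching tolerance used in the candidate-identification step above.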
The MEGNO (Mean Exponential Growth of Nearby Orbits) factor \citep{2000A&AS..147..205C,2001A&A...378..569G,2003PhyD..182..151C,2004A&A...423..745G,2010MNRAS.404..837H} is a quantitative measure of the degree of chaos and has found widespread application within problems of dynamical astronomy. The time-averaged MEGNO parameter, $\langle Y \rangle$, is related to the maximum Lyapunov Characteristic Exponent, $\gamma$, by: \begin{equation} \langle Y \rangle = t\frac{\gamma}{2} \end{equation} \noindent{as $t \rightarrow \infty$}. For more on Lyapunov characteristic exponents, we direct the interested reader to \citet{1995Icar..115..347W}. \par The detection of chaotic dynamics is always limited to the integration time period: quasi-periodic or regular motion could in principle develop into chaotic motion over longer time scales. The calculation of $\langle Y \rangle$ involves the numerical solution of the associated variational equations of motion.\par Following the definition of MEGNO, the quantity $\langle Y \rangle$ asymptotically approaches 2.0 as $t \rightarrow \infty$ if the orbit is quasi-periodic. For chaotic orbits, $\langle Y \rangle$ rapidly diverges from 2.0. In practice, the limit $t\rightarrow \infty$ is not feasible, and $\langle Y \rangle$ is only computed up to the integration time (possibly ended early by a termination criterion such as an escape or collision).\par A MEGNO map is created by numerically integrating a number of massless test particles starting on initial orbits which cover a rectangular grid in $a$-$e$ space, with the other orbital parameters held constant. In this work, the Gragg-Bulirsch-Stoer method \citep{HairerE:1993} was used to integrate 300,000 test particles for 1 Myr in the region of $a$-$e$ space bounded by $13$ au $\le a \le 14$ au and $0 \le e \le 0.5$. The other orbital parameters were set to those of Chiron.\par The resolution of the map was $600 \times 500$ ($a$-$e$).
One test particle was integrated for each $a$-$e$ pair, for a total of 300,000 $a$-$e$ pairs.\par The time step varied and was determined using relative and absolute tolerance parameters, both of which were set close to the machine precision. A test particle was removed from the simulation if it collided with a planet or the Sun, was ejected from the solar system, or if $\langle Y \rangle > 12$ (indicating a strong degree of chaos).\par When a test particle was removed, the time of removal and the $\langle Y \rangle$ value were recorded. If a test particle survived the entire simulation, then its removal time was recorded as 1 Myr. We will call the removal time the ``chaotic lifetime'', which is not the same as the dynamical lifetime; it can, however, be said that the dynamical lifetime is equal to or greater than the chaotic lifetime.\par A chaotic lifetime map was then generated in conjunction with the MEGNO map by color coding the lifetimes on the same $a$-$e$ grid used to create the MEGNO map. In the lifetime map, the shortest removal times were color coded black and the longest yellow. The resulting lifetime and MEGNO maps can be seen in Figure~\ref{lifetime_map} and Figure~\ref{megno_map} respectively. \section{RESULTS} \subsection{Half-Life and Origin of Chiron} The percentage of first close encounters by clone small body population membership is shown in Table~\ref{chiron_origin}. The TNO population has the highest percentage of first close encounters, making it the most likely source population of Chiron.\par 34\% of clones were on a hyperbolic or parabolic orbit during their first close encounter, which indicates a potential origin within the Oort cloud. The Centaur and inner solar system populations combined contributed just 3\% of the first close encounters. \par The short period comet population claims 2\% of first close encounters.
These three populations combined likely illustrate potential final destinations for Chiron in the future, since dynamical evolution that takes no account of the influence of non-gravitational forces is entirely time-reversible. \begin{table} [h] \caption{The percentage of first close encounters by clone small body population membership. The TNO population has the highest percentage of first close encounters, making it the most likely source population of Chiron.} \begin{tabular} {|c|c|} \hline Region&\% CE\\ \hline Inner SS&1\\ SP Comet&2\\ Centaur&2\\ TNO&60\\ Ejection &34\\ \hline \end{tabular} \label{chiron_origin} \end{table} Figure~\ref{half_life} shows the natural log of the fraction of remaining clones vs. time over the last 2.5 Myr. The decay is exponential over the time interval [0.12 Myr, 0.5 Myr]. By 1 Myr ago, the decay curve departs markedly from this initial exponential decay. \par This is typical, and results from clones having evolved onto more stable orbits; such clones no longer sample the original phase space present at the start of the decay.\par To obtain the best fit, the half-life during the exponential decay was determined on the interval [0.12 Myr, 0.367 Myr] and found to be about 0.7 Myr. Other, larger intervals were tried and yielded the same result. This value is comparable to, but somewhat shorter than, the value of 1.07 Myr reported by \citet{2004MNRAS.355..321H} for this quantity.\par Our smaller value is not surprising, because \citet{2004MNRAS.355..321H} found their half-life using the longer time interval of 3 Myr, which included a long tail over which the half-life was markedly different from its initial value. \begin{figure*} [h] \begin{center} \includegraphics{half_life.pdf} \caption{The natural log of the fraction of remaining clones vs. time over the last 2.5 Myr. The decay is exponential through the interval [0.12 Myr, 0.50 Myr].
The half-life during the interval [0.12 Myr, 0.367 Myr] was found to be about 0.7 Myr. The solid line is the best-fit line for this interval and fits the data with a linear regression coefficient of 0.9999. By 1 Myr ago it can be seen that the decay is no longer exponential.} \end{center} \label{half_life} \end{figure*} Only 786 clones, just 2$\%$ of the total population, survived the entire integration time. Of the clones, 96\% were ejected from the solar system on hyperbolic or parabolic orbits, which again points to an origin for Chiron beyond Neptune. Approximately 1\% hit Jupiter, and the remaining 1\% hit the Sun, Saturn, Uranus or Neptune. \par Using the best-fit line we find that if the decay had remained exponential then 99.99\% of the clones would have been gone by 8.5 Myr ago. We use this time as the upper limit to the time at which Chiron first entered the Centaur region. \subsection{Close Encounters} The total number of close encounters between Chiron clones and the giant planets was 24,196,477, of which 15,130,506 occurred while clones were in the Centaur region.\par During their time in the Centaur region, clones experienced a close encounter on average every 5 kyr. Table~\ref{CE_centaur} shows the number of these close encounters by planet.\par As expected, clones had the highest numbers of close encounters with Saturn and Uranus, followed by Neptune and then Jupiter. \par Table~\ref{CE_severity2} lists the percentage of close encounters which occurred in the Centaur region by severity. It can be seen that the lower the severity, the greater the number of close encounters. There were only 48 severe and exactly zero extreme close encounters. 
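The half-life fit and the 99.99\% extrapolation described above can be sketched numerically. The following is a hypothetical illustration (not the authors' code) on synthetic data with an assumed 0.7 Myr half-life; the actual fit used the recorded clone removal times.

```python
import math

# Hypothetical sketch: estimate a decay half-life from the natural log of
# the surviving fraction vs. time on the early interval, then extrapolate
# the time at which 99.99% of clones would be gone if the decay stayed
# exponential.  Times are in Myr.

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Synthetic exponential decay with an assumed half-life of 0.7 Myr,
# sampled on the fitting interval [0.12, 0.36] Myr.
true_half_life = 0.7
lam = math.log(2) / true_half_life
times = [0.12 + 0.01 * i for i in range(25)]
ln_frac = [-lam * t for t in times]          # ln N(t)/N(0)

a, b = fit_line(times, ln_frac)
half_life = math.log(2) / abs(b)             # recovered half-life (Myr)

# Time for the surviving fraction to fall to 1e-4 (99.99% removed),
# assuming the exponential decay continues unbroken.
t_9999 = (math.log(1e-4) - a) / b
```

With the real fit, whose intercept differs slightly from zero, the same extrapolation yields the 8.5 Myr upper limit quoted in the text.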
These results show that encounters close enough to tidally disrupt Chiron or any ring system around Chiron are extremely rare events.\par Thus, it is unlikely that any ring structure around Chiron was created by tidal disruption due to a planetary close encounter, and barring ring dispersal by viscous spreading, it is possible that any ring structure around Chiron has survived its journey through the Centaur region and is in fact primordial. \begin{table} [h] \caption{Close encounters of Chiron clones with each giant planet while clones were in the Centaur region.} \begin{tabular} {|c|c|} \hline Planet&Number\\ \hline Jupiter&553,182\\ Saturn&6,978,716\\ Uranus&4,567,440\\ Neptune&3,031,168\\ \hline \end{tabular} \label{CE_centaur} \end{table} \begin{table} [h] \caption{The percentage of close encounters of Chiron clones with the giant planets by severity while clones were in the Centaur region.} \begin{tabular} {|c|c|} \hline Severity&Percent\\ \hline Very Low&89\\ Low&11\\ Moderate&0.03\\ Severe&0\\ Extreme&0\\ \hline \end{tabular} \label{CE_severity2} \end{table} \subsection{Dynamical Class of Chiron} The dynamical classes of 1,246 clones were determined. Table~\ref{dynamical} shows the percentage of clones in each dynamical class, and the mean Centaur lifetime of clones in each class. 95\% of the sampled clones were classified as random-walk Centaurs, with the remaining 5\% being classified as resonance hopping Centaurs.\par The difference in mean Centaur lifetime between the two classes is stark. The mean Centaur lifetime for the resonance hopping clones was approximately twice as long as that of random-walk clones.\par We hypothesise that the large difference arises because resonance sticking in mean motion resonances prolongs the dynamical lifetimes of resonance hopping clones. This is supported by the work of \citet{2009Icar..203..155B}. 
The top of Figure~\ref{res_hopper_clone_1} shows the behavior of the semi-major axis of the orbit of one of the longest-lived resonance hopping clones. In the figure, the clone's semi-major axis spends about 5 Myr oscillating about the 2:3 mean motion resonance of Saturn centered at 12.5 au. Notice the horizontal band feature which covers this period of time. A shorter band centered at 15.1 au is caused by the exterior 1:2 mean motion resonance of Saturn.\par Examination of other resonance hopping clones also showed relatively long periods of time for which each clone was trapped in one or more mean motion resonances. We conclude that resonance sticking acts to significantly prolong the lives of resonance hopping clones. Other notable resonances entered into by clones include the exterior 3:4, 4:7, and 1:3 resonances of Saturn; the Trojan or 1:1 resonance of Saturn; the interior 3:2 resonance of Uranus; and the interior 3:2 and 4:3 resonances of Neptune.\par The bottom diagram in Figure~\ref{res_hopper_clone_1} shows the log-log plot used to classify the clone. It can be seen that it only takes one data point with a relatively large residual to cause a clone to be classified as resonance hopping.\par The top diagram in Figure~\ref{res_hopper_clone_579} shows another example of a resonance hopping clone. In contrast to the clone in Figure~\ref{res_hopper_clone_1}, which spends most of its time in one resonance, this clone spends most of its time hopping between mean motion resonances of the giant planets. Two of these resonances were positively identified as the 4:3 and 3:2 mean motion resonances of Neptune by observing the libration of their primary resonance angles.\par The bottom diagram shows a close-up of the time spent in the 4:3 mean motion resonance of Neptune before and after data smoothing. 
The smoothed data set has a mean semi-major axis of 24.89 au, only 0.07 au away from the nominal location of the 4:3 mean motion resonance of Neptune.\par Figure~\ref{res_angle_4_3} shows the primary resonance angle associated with the 4:3 mean motion resonance of Neptune for the clone in Figure~\ref{res_hopper_clone_579} over the same time interval. The angle is defined by $4\lambda_N - 3\lambda - \bar{\omega}$, where $\lambda_N$ is the mean longitude of Neptune, and $\lambda$ and $\bar{\omega}$ are the mean longitude and longitude of perihelion of the clone. It can be seen that this angle librates. \begin{table} [h] \caption{The percentage of clones and mean Centaur lifetime by dynamical class. Random-walk clones dominate in quantity, but resonance hopping clones have about twice the mean Centaur lifetime of random-walk clones due to resonance sticking.} \begin{tabular} {|c|c|c|} \hline Class&Percent&Avg. Centaur Life (Myr)\\ \hline Resonance Hopping&5&1.1\\ Random-Walk&95&0.52\\ \hline \end{tabular} \label{dynamical} \end{table} \begin{figure*} [h] \begin{center} \includegraphics[width = \columnwidth]{res_hopper_clone_1.pdf} \includegraphics[width = \columnwidth]{dynamical_class1.pdf} \caption{Top - an example of a resonance hopping clone. Note the long horizontal band feature. This clone spends about 5 Myr oscillating about the 2:3 mean motion resonance of Saturn located at 12.5 au. A shorter band centered at 15.1 au is caused by the exterior 1:2 mean motion resonance of Saturn. Bottom - the log-log plot used to identify the dynamical class of the clone in the top diagram. Notice the one data point at a larger distance from the trendline than the others. This is characteristic behavior for resonance hopping Centaurs. The Hurst exponent for this clone was 0.193, and its linear regression coefficient was 0.971. 
The maximum residual was 0.08.} \end{center} \label{res_hopper_clone_1} \end{figure*} \begin{figure*} [h] \begin{center} \includegraphics[]{tp_579_4_3_Nep_coarse.pdf} \includegraphics[]{tp_579_4_3_Nep_smoothed.pdf} \caption{Top - another example of a resonance hopping clone. This clone spends most of its time trapped in various mean motion resonances of the giant planets. Two resonances were positively identified as the 4:3 and 3:2 mean motion resonances of Neptune. These are labeled in the figure. The Hurst exponent was 0.534, the linear regression coefficient 0.9937, and the maximum residual was 0.08. Bottom - a close-up of the time spent in the 4:3 mean motion resonance of Neptune before and after data smoothing. The mean value of the smoothed data set was 24.89 au, which is about 0.07 au away from the 4:3 mean motion resonance of Neptune.} \end{center} \label{res_hopper_clone_579} \end{figure*} \begin{figure*} [h] \begin{center} \includegraphics[]{tp_579_res_angle_4_3_MMR_Neptune.pdf} \caption{The primary resonance angle of the 4:3 mean motion resonance of Neptune defined by $4\lambda_N - 3\lambda - \bar{\omega}$ librates in time.} \end{center} \label{res_angle_4_3} \end{figure*} Figure~\ref{tp_18018_a_v_t_random_walk} shows an example of a random-walk clone. This clone does not spend the majority of its life trapped in mean motion resonances, as can be seen by the lack of long horizontal bands in the figure. \begin{figure*} [h] \begin{center} \includegraphics[width=\columnwidth]{tp_18018_a_v_t_random_walk.pdf} \includegraphics[width=\columnwidth]{Dynamical_Class_18018.pdf} \caption{Left - an example of a random-walk clone. Notice how the long horizontal bands are absent. Right - the log-log plot used to identify the dynamical class of the clone in the left diagram. Notice the good fit. The linear regression coefficient was 0.9998, and the Hurst exponent for this clone was 0.4514. 
The maximum residual was 0.008.} \end{center} \label{tp_18018_a_v_t_random_walk} \end{figure*} The mean Hurst exponent of the random-walk clones is 0.4664 $\pm$ 0.0782 and that of the resonance hopping clones is 0.3572 $\pm$ 0.1530. Here, the quoted error is the standard deviation of the sample. The Hurst exponents of random-walk clones are better defined than those of resonance hopping clones, as their standard deviation is about half that of the resonance hopping clones.\par Hurst exponents ranged from -0.1764 to 0.6416 for resonance hopping clones and from 0.1446 to 0.7462 for random-walk clones. The lowest regression coefficient for a random-walk clone was 0.85, while resonance hopping clones had regression coefficients ranging from -0.33 to 0.99.\par \citet{2009Icar..203..155B} reported that random-walk Centaurs display Hurst exponents in the range 0.22 - 0.95. We found that only five of our random-walk clones had Hurst exponents outside this range - all of them $<0.22$.\par Qualitative inspection showed that four of these five could be classified as resonance hopping Centaurs as they spent the majority of their lives in mean motion resonances. The fifth clone displayed both random-walk and resonance hopping behavior, but spent most of its time experiencing random-walk evolution. The fit of that clone's log-log plot had a regression coefficient of only 0.85, which is more than three standard deviations away from the mean value of 0.9947 $\pm$ 0.0089 for random-walk clones.\par Furthermore, the outliers had another thing in common - of the total time spent in resonances, each spent the majority of that time in only one strong resonance and did not jump into any other strong resonance. 
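The Hurst exponents and log-log plots discussed above can be illustrated with a minimal sketch. The estimator below is a common one, not necessarily the exact pipeline used here: the standard deviation of increments of a time series over a lag $\tau$ scales as $\tau^H$, so $H$ is the slope of the log-log plot of increment spread against lag.

```python
import math
import random

# Hypothetical sketch of a Hurst-exponent estimator: for each lag, compute
# the standard deviation of the lagged increments of the series; H is the
# least-squares slope of log(std) vs. log(lag).

def hurst(series, lags):
    xs, ys = [], []
    for lag in lags:
        diffs = [series[i + lag] - series[i] for i in range(len(series) - lag)]
        m = sum(diffs) / len(diffs)
        sd = math.sqrt(sum((d - m) ** 2 for d in diffs) / len(diffs))
        xs.append(math.log(lag))
        ys.append(math.log(sd))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

# Sanity check on an uncorrelated Gaussian random walk, for which the
# expected Hurst exponent is 0.5 -- squarely inside the 0.22 - 0.95
# random-walk range quoted from Bailey & Malhotra.
random.seed(1)
walk = [0.0]
for _ in range(20000):
    walk.append(walk[-1] + random.gauss(0.0, 1.0))
H = hurst(walk, lags=[2, 4, 8, 16, 32, 64])
```

In the classification scheme of the text, the residuals of this fit (not only the slope) decide between random-walk and resonance hopping behavior.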
An example of one of these five outliers is shown in Figure~\ref{tp_18372_a_v_t}.\par This particular clone spends 66$\%$ of its life in the 2:3 mean motion resonance of Saturn and never jumps to another strong resonance. It was classified as a random-walk clone because its residuals never exceeded 0.0601, but since it spent more time in a resonance than random walking, one could argue that this clone is resonance hopping even though our method classifies it as random-walk. The linear regression coefficient of its log-log plot was 0.88, and its Hurst exponent 0.19.\par We conclude that our results are in good agreement with those of \citet{2009Icar..203..155B}, but that our technique occasionally misclassifies a clone. A refinement of this technique may be to consider the regression coefficients as well as the residuals as part of the classification procedure.\par For example, if the regression coefficient of a random-walk clone falls below some critical value, then the clone should be classified manually. That is, classify it using qualitative inspection of the clone's semi-major axis behavior over time. We leave the choice of the exact critical value open for now. \par Another factor to consider is the distance of the Hurst exponent from the mean. All five of the outlying random-walk clones had Hurst exponents more than three standard deviations away from the mean. A refinement of the technique may be to manually classify any clones with outlying Hurst exponents. It remains to be seen if all outliers spend most of their lives in just one strong resonance or if this is just coincidental.\par \begin{figure*} [h] \begin{center} \includegraphics[]{tp_18372_a_v_t.pdf} \caption{A random-walk clone that spent most of its life in the 2:3 mean motion resonance of Saturn located at 12.5 au. 
Though its residuals were $\le 0.0601$, one could argue that it is a resonance hopping clone.} \end{center} \label{tp_18372_a_v_t} \end{figure*} \subsection{MEGNO and Lifetime Maps} Figure~\ref{lifetime_map} shows the chaotic lifetimes of orbits in the region bound by $13 \textnormal{ au} \le a \le 14 \textnormal{ au}$ and $0 \le e \le 0.5$. It can be seen that most orbits with $e\ge 0.23$ have lifetimes typically $\le 0.01$ Myr, which are noticeably shorter than the lifetimes of orbits of much lower eccentricity.\par Chiron, located at the point (13.64 au, 0.38), lies in this region of relatively short lifetimes. Orbits with $a=13\textnormal{ au}$ and eccentricity of 0.23 just begin to cross the orbit of Saturn. All orbits with eccentricities above about 0.28 are Saturn crossing. This allows strong close encounters between objects on those orbits and the giant planet to occur immediately, which explains why most orbits with $e \ge 0.28$ have lifetimes $\le 0.01$ Myr - the lowest in the map.\par One exception to this is the bump-like feature centered at 13.4 au, with a width of about 0.2 au. Orbits within the bump with eccentricities as high as $0.35$ have lifetimes noticeably greater than 0.01 Myr.\par For example, there are orbits in the bump with $e\ge 0.28$ with lifetimes of 0.1 Myr, which is an order of magnitude longer than most other orbits in the map with $e\ge 0.28$. Also of note is a cluster of orbits within the bump near $e=0.1$ for which lifetimes can reach as high as 1 Myr - the longest in the map. We hypothesise that the bump feature is caused by resonance sticking in the 3:5 mean motion resonance of Saturn located at 13.4 au.\par Small objects which get stuck in this resonance could have their chaotic lifetimes extended in the same way that the Centaur lifetime was extended for a clone stuck in the 2:3 mean motion resonance of Saturn, as seen in Figure~\ref{res_hopper_clone_1}. 
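The nominal resonance locations quoted in this section follow directly from Kepler's third law, $a_{\rm res} = a_{\rm pl}\,(q/p)^{2/3}$ for a $p{:}q$ resonance in the labeling convention used here. A quick check, taking the planetary semi-major axes below as assumed inputs:

```python
# Check of the quoted mean motion resonance locations using Kepler's third
# law.  In the convention of the text, a particle in the p:q resonance of a
# planet completes p orbits for every q orbits of the planet, so its period
# is (q/p) times the planet's and its semi-major axis is a_pl * (q/p)^(2/3).

def resonance_sma(a_planet_au, p, q):
    """Semi-major axis (au) of the p:q mean motion resonance of a planet."""
    return a_planet_au * (q / p) ** (2.0 / 3.0)

# Assumed mean semi-major axes of Saturn and Neptune (au).
A_SATURN, A_NEPTUNE = 9.58, 30.07

a_23 = resonance_sma(A_SATURN, 2, 3)    # 2:3 of Saturn, ~12.55 au (text: 12.5 au)
a_12 = resonance_sma(A_SATURN, 1, 2)    # exterior 1:2 of Saturn, ~15.2 au (text: 15.1 au)
a_35 = resonance_sma(A_SATURN, 3, 5)    # 3:5 of Saturn, ~13.5 au (text: 13.4 au)
a_43 = resonance_sma(A_NEPTUNE, 4, 3)   # interior 4:3 of Neptune, ~24.8 au
```

The small offsets from the values quoted in the text reflect rounding and the choice of planetary mean elements.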
It should be noted, however, that most orbits located at 13.4 au with eccentricities below 0.06 have lifetimes noticeably shorter than 1 Myr.\par This implies that small objects in this region of phase space are either not being captured in the resonance or are staying in the resonance for shorter times, which results in shorter lifetimes. This may be caused by the decreasing width of the resonance for smaller eccentricities.\par Such behavior of resonances has been seen before. For example, \citet{MurrayCD:1999} observed the same behavior for the 3:1 and 5:3 interior mean motion resonances of Jupiter located in the main asteroid belt.\par Another bump of longer lifetimes, which reach as high as 1 Myr, is found between 13.9 au and 14 au with $e\le 0.05$. The low eccentricity of orbits in this bump helps insulate them from destabilising close encounters with Saturn and Uranus. Though their lifetimes of 1 Myr are relatively long compared to other orbits in the figure, this is still much shorter than the age of the solar system, and so these orbits should be viewed as being only relatively stable.\par Figure~\ref{megno_map} is the MEGNO map of the same region of phase space. Almost the entire region, including the current orbit of Chiron, is highly chaotic. Two features of relatively lower chaos stand out: one island centered around 13.4 au with $0.1 \le e \le 0.15$, and a pair of islands between 13.9 au and 14 au with $e<0.04$. Here, the MEGNO parameter reaches as low as 2.5. Two tinier islands can be seen between 13.7 au and 13.9 au.\par Comparing the two maps shows that these islands are embedded within regions of relatively long lifetimes, reaching as high as 1 Myr, making them regions of both lower chaos and longer lifetimes.\par It can also be seen that the two bumps of relatively long lifetimes found in the lifetime map contain some orbits with lifetimes of 1 Myr which are nevertheless highly chaotic. 
Orbits which are chaotic but have a relatively long lifetime are said to display stable chaos.\par Chiron, however, cannot be shown to display stable chaos, as it has a highly chaotic orbit and a relatively short lifetime. \newpage \begin{figure*} [h] \begin{center} \includegraphics[width=10cm,angle=270]{ChironMap001_LifeTime.pdf} \caption{The chaotic lifetime map in $a-e$ space. The chaotic lifetime is the time for a test particle to be removed from the simulation, not its dynamical lifetime. However, the dynamical lifetime is greater than or equal to the chaotic lifetime. Chiron is shown as the star at the point (13.64 au, 0.38). A feature which stands out is the bump centered at 13.4 au, which has a width of about 0.2 au and a height of about 0.35 in eccentricity. We hypothesise that the cause of the bump is resonance sticking in the 3:5 mean motion resonance of Saturn, which prolongs the lifetimes of test particles which get trapped in the resonance. A smaller bump can be seen between 13.9 au and 14 au with $e\le 0.05$. There is also a tiny bump in lifetimes up to 1 Myr between 13.7 au and 13.75 au.} \end{center} \label{lifetime_map} \end{figure*} \newpage \begin{figure*} [h] \begin{center} \includegraphics[width=10cm,angle=270]{ChironMap001_MEGNO_BW.pdf} \caption{The MEGNO map in $a-e$ space. Chiron is shown as the star at the point (13.64 au, 0.38). Nearly the entire region is highly chaotic. There are a few small islands of orbits with relatively low chaos. One is centered near 13.4 au with $0.1 \le e \le 0.15$, where the MEGNO parameter can reach as low as 3.5. Two others can be seen between 13.9 au and 14 au in which the MEGNO parameter reaches as low as 2.5. 
Two tinier islands can be seen between 13.7 au and 13.9 au.} \end{center} \label{megno_map} \end{figure*} \newpage \section{CONCLUSIONS} Using the technique of numerical integration of nearly 36,000 clones of the Centaur Chiron, we found the backwards half-life of Chiron's orbit to be 0.7 Myr and showed that Chiron likely entered the Centaur region from somewhere beyond Neptune within the last 8.5 Myr.\par Close encounters between Chiron and the giant planets severe enough to tidally disrupt Chiron or any ring system in a single pass were found to be extremely rare, and thus the origin of any ring structure is unlikely to be the result of tidal disruption of Chiron due to a planetary close encounter. \par This led us to conclude that any supposed ring system around Chiron could be primordial, barring ring dispersal by viscous spreading. Our results are similar to those of \citet{2017AJ....153..245W} and \citet{AraujoRAN:2016} for the ringed Centaur Chariklo. In those studies, close encounters strong enough to severely damage or destroy the ring structure around Chariklo were also found to be very rare.\par We also showed that the orbit of Chiron lies in a region of phase space that is both unstable and highly chaotic and that the chaotic lifetime of Chiron is likely to be $\le 0.01$ Myr. Resonance sticking was shown to have the ability to prolong the Centaur lifetime of Chiron clones by up to two orders of magnitude beyond their chaotic lifetimes. Resonance sticking in the 2:3 exterior mean motion resonance of Saturn was cited as a strong example of this.\par The dynamical classes of a sample of 1,246 clones were determined while these clones were in the Centaur region. It was found that 95\% of clones in the sample were categorized as random-walk Centaurs, and the remaining 5\% as resonance hopping Centaurs. 
Because of resonance sticking, the mean Centaur lifetime of resonance hopping clones was about twice that of random-walk clones.\par MEGNO and lifetime maps were made of the region in phase space bound by $13 \textnormal{ au}\le a \le 14\textnormal{ au}$ and $e\le 0.5$, which includes the orbit of Chiron. It was found that nearly the entire region is highly chaotic, with relatively small islands of lower chaos. Other small islands of stable chaos (high chaos and relatively long lifetime) were also found.\par Most orbits with eccentricities $\ge 0.28$ had the lowest chaotic lifetimes in the map, $\le 0.01$ Myr, because they cross Saturn's orbit. However, some test particles with semi-major axes within about 0.1 au of the exterior 3:5 mean motion resonance of Saturn at 13.4 au were shown to have lifetimes up to 0.1 Myr even for eccentricities up to about 0.35.\par More research is needed to determine conclusively if the structure around Chiron is a ring system. It is not known if rings around small bodies are rare or commonplace. If future discoveries reveal that ringed Centaurs are common, it would suggest a common mechanism for the creation of the rings. \par If on the other hand ringed Centaurs are found to be rare, then this would suggest a more serendipitous origin for rings. The authors encourage more searches for rings around other small bodies to help answer this question. \newpage
\section{Introduction}\label{sec:1} It has been noted for almost two decades \citep{1998Natur.395..670G} that many long-duration GRBs show the presence of an associated unusually energetic supernova (SN) of type Ic (hypernova, HN) as well as of a long-lasting X-ray afterglow \citep{Costa1997}. Such HNe are unique in their spectral characteristics; they show no hydrogen and helium lines, suggesting that their progenitors were stripped of their envelopes, as expected for members of binary systems \citep{2009ARA&A..47...63S}. Moreover, these are broad-lined HNe, suggesting the occurrence of energy injection beyond that of a normal type Ic SN \citep{2016MNRAS.457..328L}. This has led to our suggestion \citep[e.g.][]{2001ApJ...555L.117R,2012A&A...548L...5I} of a model for long GRBs associated with SNe Ic. In this paradigm, the progenitor is a carbon-oxygen star (CO$_{\rm core}$) in a tight binary system with a neutron star (NS). As the CO$_{\rm core}$ explodes in a type Ic SN, it produces a new NS (hereafter $\nu$NS) and ejects a remnant of a few solar masses, some of which is accreted onto the companion NS \citep{2012ApJ...758L...7R}. The accretion onto the companion NS is hypercritical, i.e. highly super-Eddington, reaching accretion rates of up to a tenth of a solar mass per second for the most compact binaries with orbital periods of a few minutes \citep{2014ApJ...793L..36F}. The NS gains mass rapidly, reaching the critical mass within a few seconds. The NS then collapses to a black hole (BH) with the consequent emission of the GRB \citep{2015PhRvL.115w1102F}. In this picture the BH formation and the associated GRB occur some seconds {\it after} the initiation of the SN. The high temperature and density reached during the hypercritical accretion and the NS collapse lead to a copious emission of $\nu\bar\nu$ pairs which form an $e^+e^-$ pair plasma that drives the GRB \citep[see e.g.][]{2015ApJ...812..100B,2016ApJ...833..107B,2016ApJ...832..136R}. 
The expanding SN remnant is reheated and shocked by the injection of the $e^+e^-$ pair plasma from the GRB explosion \citep{2018ApJ...852...53R}. The shock-heated SN, originally expanding at $0.2 c$, is transformed into an HN reaching expansion velocities up to $0.94 c$ (see Sec.~\ref{sec:3}). A vast number of totally new physical processes are introduced that must be treated within a correct classical and quantum general relativistic approach \citep[see e.g.][and references therein]{2018ApJ...852...53R}. The ensemble of these processes, addressing causally disconnected phenomena, each characterized by specific world lines, ultimately leads to a specific Lorentz $\Gamma$ factor. This ensemble comprises the binary-driven hypernova (BdHN) paradigm \citep{2016ApJ...832..136R}. In this article we extend this novel approach to the analysis of the BdHN afterglows. The existence of regularities in the X-ray luminosity of BdHNe, expressed in the observer cosmological rest-frame, has been previously noted, leading to the Muccino-Pisani power-law behavior \citep{Pisani2013,2014A&A...565L..10R}. The aim of this article is to explain these power-law relations and to understand their physical origin and their energy sources. The kinetic energy of the mildly relativistic HN, expanding at $0.94 c$ following the $\gamma$-ray flares and the X-ray flares as well as the overall plateau phase, appears to have a crucial role \citep{2014A&A...565L..10R}. Equally crucial appears to be the contribution of the rotational energy electromagnetically radiated by the $\nu$NS. As we show in this article, the power-law luminosity in the X-rays and in the optical wavelengths, expressed as a function of time in the GRB source rest-frame, could not be explained without their fundamental contribution. 
We here assume that the afterglow originates from the synchrotron emission of relativistic electrons injected into the magnetized plasma of the HN, powered both by the kinetic energy of expansion and by the electromagnetic energy supplied by the rotational energy loss of the $\nu$NS (see Sec.~\ref{sec:4}). As an example, we apply this new approach to the afterglow of GRB 130427A, associated with SN 2013cq, in view of the excellent data available in X-rays, optical and radio wavelengths. We fit the spectral evolution of the GRB from 604 to $5.18 \times 10^6$~s and over the observed frequency bands from 10$^9$~Hz to 10$^{19}$~Hz. We present our simulations of the afterglow of GRB 130427A suggesting that a total energy of order $\simeq 10^{53}$~erg has been injected into the electrons confined within the expanding magnetized HN. This energy derives from the kinetic energy of the HN and the rotational energy of the $\nu$NS with a rotation period of 2~ms, containing a dipole or quadrupole magnetic field of $(5$--$7)\times 10^{12}$~G or $10^{14}$~G, respectively. The article is organized as follows. In Sec.~\ref{sec:2} we summarize how the BdHN treatment compares and contrasts with the traditional collapsar-fireball model of the GRB afterglow, which is based on a single ultra-relativistic jet. In Sec.~\ref{sec:3} we present the data reduction of GRB 130427A. In Sec.~\ref{sec:4} we examine the basic parameters of the $\nu$NS relevant for this analysis, such as the rotation period, the mass, the rotational energy, and the magnetic field structure. We introduce in Sec.~\ref{sec:5} the main ingredients and equations relevant for the computation of the synchrotron emission of the relativistic electrons injected in the magnetized HN. In Sec.~\ref{sec:6} we set up the initial/boundary conditions to solve the model equations of Sec.~\ref{sec:5}. 
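The $\nu$NS parameters quoted above can be put in order-of-magnitude context with the standard magnetic-dipole spin-down formula, $L = B^2 R^6 \Omega^4/(6c^3)$. The sketch below uses assumed canonical NS values (radius 10 km, moment of inertia $10^{45}$ g cm$^2$) together with the 2 ms period and $5\times10^{12}$ G dipole field from the text; it is an illustrative check, not the full treatment of Sec.~\ref{sec:4}.

```python
import math

# Order-of-magnitude check (assumed canonical NS values, not the paper's
# full nuNS treatment).  CGS units throughout.

C = 2.998e10   # speed of light (cm/s)
I = 1.0e45     # assumed moment of inertia (g cm^2)
R = 1.0e6      # assumed NS radius (cm)
P = 2.0e-3     # rotation period (s), from the text
B = 5.0e12     # dipole magnetic field (G), from the text

omega = 2.0 * math.pi / P
E_rot = 0.5 * I * omega**2                    # rotational energy, ~5e51 erg
L_dip = B**2 * R**6 * omega**4 / (6 * C**3)   # spin-down luminosity, ~1.5e43 erg/s
```

The rotational energy alone is a few $10^{51}$ erg; the $\simeq 10^{53}$ erg total quoted above is reached together with the kinetic energy of the HN.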
In Sec.~\ref{sec:7} we compare and contrast the results of the numerical solution of our synchrotron model, the theoretical spectrum and light-curve, with the afterglow data of GRB 130427A at early times $10^2~\mathrm{s} \lesssim t \lesssim 10^6$~s. We also show the role of the $\nu$NS in powering the late, $t \gtrsim 10^6$~s, X-ray afterglow. Finally, we present our conclusions in Sec.~\ref{sec:8}, outlining some possible further observational predictions of our model. \section{On BdHNe versus the traditional collapsar-fireball approach}\label{sec:2} In \citet{2016ApJ...832..136R} it was established that there exist seven different GRB subclasses, all with binary systems as progenitors composed of various combinations of white dwarfs (WDs), CO$_{\rm cores}$, NSs and BHs, and that only in three of these subclasses are BHs formed. Far from being just a morphological classification, the identification of these systems and their properties has been made possible by the unprecedented quality and extent of the data, ranging from the X-ray, to the $\gamma$-ray, to the GeV emission, as well as the optical and the radio. A comparable effort has been progressing in the theoretical field by introducing new paradigms and developing consistently the theoretical framework. The main insight gained from the BdHN paradigm, one of the most numerous of the above seven subclasses, has been the successful identification, guided by the observational evidence, of a vast number of independent processes of the GRB. For each process the corresponding field equations have been integrated, obtaining their Lorentz $\Gamma$ factors as well as their space-time evolution. 
This is precisely what has been done in recent publications for the ultrarelativistic prompt emission (UPE) in the first 10 seconds with Lorentz factor $\Gamma \sim 500$--$1000$, the hard X-ray flares (HXF) with $\Gamma \sim 10$, and the mildly relativistic soft X-ray flares (SXF) with $\Gamma \sim 2$--$3$ \citep{2018ApJ...852...53R}, as well as for the extended thermal X-ray emission (ETE) signaling the transformation of the SN into a HN \citep{transition}. Here we extend the BdHN model to the study of the afterglow. As a prototype we utilize the data of GRB 130427A. We point out for the first time: \begin{enumerate} \item The role of the hypernova ejecta and of the rotation of the binary system in creating the conditions for the occurrence of synchrotron emission, rooted in the pulsar magnetic field (see Sec.~\ref{sec:4}). \item The fundamental role played by the pulsar-like behavior of the $\nu$NS (see Fig.~\ref{fig:Lpulsar}) and its magnetic field in the fit of a synchrotron model to the optical and X-ray data (see Fig.~\ref{fig:movinglimitsmodelb05e2min1e3max5e5}). \item The development of a model of the afterglow consistent with the mildly relativistic expansion velocity measured in the afterglows following a model-independent procedure (see Eq.(\ref{betaNew}) and Fig.~\ref{pobb} in Sec.~\ref{sec:3}). \end{enumerate} In the current afterglow model \citep[see, e.g.,][and references therein]{1999PhR...314..575P,Meszaros2002,Meszaros2006,2015PhR...561....1K} it is tacitly assumed that a {\it single} ultra-relativistic regime extends all the way from the prompt emission, through the plateau phase, to the GeV emission and to the latest power-law decay of the afterglow. This approach is clearly in contrast with point 3 above. \section{GRB 130427A data}\label{sec:3} GRB 130427A is well-known for its high isotropic energy $E_{iso} \simeq 10^{54}$~erg, SN association and multi-wavelength observations \citep{2015ApJ...798...10R}. 
It triggered \textit{Fermi}-GBM at 07:47:06.42 UT on April 27 2013 \citep{2013GCN.14473....1V}, when it was within the field of view of \textit{Fermi}-LAT. A long-lasting ($\sim 10^4$~s) burst of ultra-high energy ($100$ MeV--$100$ GeV) radiation was observed \citep{2014Sci...343...42A}. \textit{Swift} started to follow from 07:47:57.51 UT, $51.1$~s after the GBM trigger, observing a soft X-ray ($0.3$--$10$~keV) afterglow for more than $100$~days \citep{2014Sci...343...48M}. \textit{NuStar} joined the observation during three epochs, approximately $\sim 1.2$, $4.8$ and $5.4$~days after the \textit{Fermi}-GBM trigger, providing rare hard X-ray ($3$--$79$~keV) afterglow observations \citep{2013ApJ...779L...1K}. Ultraviolet, optical, infrared, and radio observations were also performed by more than $40$ satellites and ground-based telescopes, among which \textit{Gemini-North}, NOT, \textit{William Herschel}, and VLT confirmed the redshift of $0.34$ \citep{2013GCN.14455....1L,2013GCN.14478....1X,2013GCN.14617....1W,2013GCN.14491....1F}, and NOT found the associated supernova SN 2013cq \citep{2013ApJ...776...98X}. We adopt the radio, optical and GeV data from various published articles and GCNs \citep{2014ApJ...781...37P,2014Sci...343...48M,2013GCN.14473....1V,2013GCN.14475....1S,2013ApJ...776...98X,2015ApJ...798...10R}. The soft and hard X-rays, which are one of the main subjects of this paper, were analyzed from the original data downloaded from the \textit{Swift} repository\footnote{\noindent \url{http://www.swift.ac.uk}} and the \textit{NuStar} archive\footnote{\noindent \url{https://heasarc.gsfc.nasa.gov/docs/nustar/nustar_archive.html}}. We followed the standard data reduction procedure of Heasoft 6.22 with the relevant calibration files\footnote{\noindent \url{http://heasarc.gsfc.nasa.gov/lheasoft/}}, and the spectra were generated by XSPEC 12.9 \citep{Evans:2007iz,Evans:2009kx}. 
During the data reduction, the pile-up effect in the \textit{Swift}-XRT was corrected for the first $5$ time bins (see Fig.~\ref{fig:Lx}) before $10^5$~s \citep{Romano:2006kt}. The \textit{NuStar} spectrum at $388800$~s is inferred from the closest first $10000$~s of the \textit{NuStar} third epoch at $\sim 5.4$~days, by assuming that the spectra at these two times have the same cutoff power-law shape but different amplitudes. The amplitude at $388800$~s was computed by fitting the \textit{NuStar} light-curve. A K-correction was implemented to transfer the observational data to the cosmological rest frame \citep{2001AJ....121.2879B}. {The GRB afterglow emission in the BdHN model originates from a mildly relativistic expanding supernova ejecta. This has been confirmed by measuring the expansion velocity $\beta \sim 0.6$--$0.9$ (corresponding to a Lorentz factor $\Gamma < 5$) within the early hundreds of seconds after the trigger from the observed thermal emission in the soft X-rays. For instance, \citet{2014A&A...565L..10R} find a velocity of $\beta \sim 0.8$ for GRB 090618, and in \citet{2018ApJ...852...53R} GRB 081008 is found to have a velocity $\beta \sim 0.9$.
The optical signal at tens of days also implies a mildly relativistic velocity $\beta \sim 0.1$ \citep{1998Natur.395..670G,2006ARA&A..44..507W,2017AdAst2017E...5C}.} The expansion velocity can be directly inferred from the observed X-ray thermal emission and is summarized from \citet{2018ApJ...852...53R}:
\begin{multline}
\frac{\beta^5}{4 [ \ln (1+ \beta) - (1-\beta) \beta]^2} \left(\frac{1+\beta}{1-\beta}\right)^{1/2}= \\ \frac{D_L(z)}{1+z} \frac{1}{t_2-t_1} \left(\sqrt{\frac{F_\mathrm{bb,obs} (t_2)}{\sigma T_\mathrm{obs}^4(t_2)}} - \sqrt{\frac{F_\mathrm{bb,obs}(t_1)}{\sigma T_\mathrm{obs}^4(t_1)}}\right) ,
\label{betaNew}
\end{multline}
\begin{figure}
\centering
\includegraphics[width=\hsize]{pobb}
\caption{Spectral fitting \citep{2015ApJ...798...10R} of three time intervals ($196$--$246$~s, $246$--$326$~s, $326$--$461$~s) in the \textit{Swift}-XRT band ($0.3$--$10$~keV). Black points present the spectral data with hydrogen absorption, the green dashed line is the fitted thermal component, the blue long-dashed line is the power-law component, and the red line is the sum of the two components. Clearly the temperature and the thermal flux drop with time.}
\label{pobb}
\end{figure}
The left-hand side is a function of the velocity $\beta$ alone; the right-hand side contains only observables, where $D_L(z)$ is the luminosity distance at redshift $z$. From the observed thermal flux $F_\mathrm{bb,obs}$ and temperature $T_\mathrm{obs}$ at times $t_1$ and $t_2$, the velocity $\beta$ can be inferred. This equation is model independent and valid in both the Newtonian and the relativistic regimes. The results inferred do not agree with those of the fireball model \citep{2002MNRAS.336.1271D,2007ApJ...664L...1P}, which assumes an ultra-relativistic shockwave. { Indeed, GRB 130427A is a well-known example of a GRB associated with a SN \citep{2013ApJ...776...98X}. For this GRB an X-ray thermal emission has been found between $196$--$461$~s \citep{2015ApJ...798...10R}.
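As an illustration of how Eq.~(\ref{betaNew}) is used, the following Python sketch solves it for $\beta$ by bisection. The function names are ours, and the test inputs are synthetic, not the GRB data; in practice one would feed in the observed blackbody fluxes and temperatures of two time intervals.

```python
import math

SIGMA_SB = 5.6704e-5  # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4


def lhs(beta):
    """Left-hand side of Eq. (betaNew); monotonically increasing on (0, 1)."""
    return (beta**5 / (4.0 * (math.log(1.0 + beta) - (1.0 - beta) * beta) ** 2)
            * math.sqrt((1.0 + beta) / (1.0 - beta)))


def infer_beta(F1, T1, F2, T2, t1, t2, D_L, z):
    """Solve Eq. (betaNew) for beta from blackbody fluxes/temperatures at t1, t2."""
    rhs = (D_L / (1.0 + z) / (t2 - t1)
           * (math.sqrt(F2 / (SIGMA_SB * T2**4))
              - math.sqrt(F1 / (SIGMA_SB * T1**4))))
    lo, hi = 1e-6, 1.0 - 1e-9  # bracket: lhs(lo) < rhs < lhs(hi)
    for _ in range(200):       # bisection on the monotonic lhs
        mid = 0.5 * (lo + hi)
        if lhs(mid) < rhs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```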
The spectral evolution of this source is presented in Figure~\ref{pobb}. From the best fit, we obtain a temperature in the observer's frame that drops in time from $0.46$~keV to $0.13$~keV. The thermal flux also diminishes in time.} From Eq.~(\ref{betaNew}) we obtain a radius in the laboratory frame that increases from $1.67^{+0.43}_{-0.28} \times 10^{13}$~cm to $1.12^{+0.49}_{-0.33} \times 10^{14}$~cm. The velocity inferred from the first and second spectra is $\beta = 0.85^{+0.06}_{-0.10}$, and from the second and third spectra it increases to $\beta = 0.96^{+0.02}_{-0.03}$. The average velocity over the entire duration of the thermal emission is $\beta = 0.94^{+0.03}_{-0.05}$, corresponding to a Lorentz factor $\Gamma = 2.98^{+1.20}_{-0.79}$, at an average radius of $3.50^{+1.46}_{-0.97} \times 10^{13}$~cm. At a later observer's time, around $16.7$~days after the GRB trigger, a mildly relativistic velocity of $\sim 32{,}000~\mathrm{km \, s^{-1}}$ ($\beta \sim 0.1$) of the afterglow is measured from the line of Fe II 5169 \citep{2013ApJ...776...98X}. Both the mildly relativistic velocities and the small radii are inferred directly from the observations and agree with the required properties of the BdHN model. The above data are in contrast with the traditional fireball model \citep[e.g.,][]{1999PhR...314..575P}, which involves a shockwave with a high Lorentz factor $\Gamma \sim 500$ continuously expanding and generating the prompt emission at a radius of $\sim 10^{15}$~cm, and then the afterglow at a lab-frame radius of $>10^{16}$~cm. Therefore, any model of the afterglow with an ultra-relativistic velocity following the UPE does not conform to these stringent observational constraints. { One is left, therefore, with the task of developing a consistent afterglow model with a mildly relativistic expansion that is compatible with this clear observational evidence that the afterglow arises from mildly relativistic ejecta.
That is the purpose of the present work.}

\section{Role of the new fast-rotating NS in the energetics and properties of the GRB afterglow}\label{sec:4}

Angular momentum conservation implies that the $\nu$NS should be rapidly rotating. For example, the gravitational collapse of an iron core of radius $R_{\rm Fe}\sim 5\times 10^8$~cm of a carbon-oxygen progenitor star leading to a SN Ic, rotating with an initial period $P_{\rm CO}\sim 5$~min, implies a rotation period $P = (R_{\rm NS}/R_{\rm Fe})^2 P_{\rm CO} \sim 1$~ms for the newly formed neutron star. Thus, one expects the $\nu$NS to have a large amount of rotational energy available to power the SN remnant. In order to evaluate such a rotational energy we need to know the structure of fast-rotating NSs, which we adopt from \citet{2015PhRvD..92b3007C}. The structure of NSs in uniform rotation is obtained by numerical integration of the Einstein equations in axial symmetry, and the stability sequences are described by two parameters, e.g.\ the baryonic mass (or the gravitational mass/central density) and the angular momentum (or the angular velocity/polar-to-equatorial radius ratio). The stability of the star is bounded by (at least) two limiting conditions \citep[see e.g.][for a review]{2003LRR.....6....3S}. The first is the mass-shedding or Keplerian limit: for a given mass (or central density) there is a configuration whose angular velocity equals that of a test particle in circular orbit at the stellar equator. Thus, the matter at the stellar surface is marginally bound, so that any small perturbation causes mass loss, bringing the star back to stability or to a point of dynamical instability. The second is the secular axisymmetric instability: at this limit the star becomes unstable against axially symmetric perturbations and is expected to evolve first quasi-stationarily toward a dynamical instability point where gravitational collapse ensues.
This instability sequence thus leads to the NS critical mass, which can be obtained via the turning-point method of \citet{1988ApJ...325..722F}. In \citet{2015PhRvD..92b3007C} the values of the critical mass were obtained for the NL3, GM1 and TM1 equations of state (EOS), and the following fitting formula was found to describe them with a maximum error of 0.45\%:
\begin{equation}\label{eq:Mcrit}
M_{\rm NS}^{\rm crit}=M_{\rm crit}^{J=0}(1 + C j_{\rm NS}^a),
\end{equation}
where $j_{\rm NS}\equiv c J_{\rm NS}/(G M_\odot^2)$ is a dimensionless angular momentum parameter, $J_{\rm NS}$ is the NS angular momentum, $C$ and $a$ are parameters that depend on the nuclear EOS, and $M_{\rm crit}^{J=0}$ is the critical mass in the non-rotating case (see Table~\ref{tb:StaticRotatingNS}).
\begin{table*}
\centering
\caption{Critical mass (and corresponding radius) obtained in \citet{2015PhRvD..92b3007C} for selected parameterizations of the nuclear EOS.}\label{tb:StaticRotatingNS}
\begin{tabular}{cccccccc}
\hline
\hline
EOS & $M_{\rm crit}^{J=0}$~$(M_{\odot})$ & $R_{\rm crit}^{J=0}$~(km) & $M_{\rm max}^{J\neq 0}$~$(M_{\odot})$ & $R_{\rm max}^{J\neq 0}$~(km) &$a$&$C$ & $P_{\rm min}$~(ms) \\
\hline
NL3 & $2.81$&$13.49$ &$3.38$ & 17.35 & $1.68$&$0.006$ & $0.75$\\
GM1 & $2.39$&$12.56$ &$2.84$& 16.12&$1.69$&$0.011$ & $0.67$\\
TM1 & $2.20$ &$12.07$ &$2.62$ & 15.98 &$1.61$&$0.017$ & $0.71$\\
\hline
\end{tabular}
\tablecomments{In the last column we list the rotation period of the fastest possible configuration, which corresponds to that of the critical mass configuration (i.e., secularly unstable) that intersects the Keplerian mass-shedding sequence.}
\end{table*}
The configurations lying along the Keplerian sequence are also the maximally rotating ones (for a given mass or central density). The fastest rotating NS is the configuration at the crossing point between the Keplerian and the secular axisymmetric instability sequences.
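The fitting formula of Eq.~(\ref{eq:Mcrit}) with the coefficients of Table~\ref{tb:StaticRotatingNS} is straightforward to evaluate; a minimal sketch (the dictionary layout and function name are ours):

```python
# Critical NS mass vs. dimensionless angular momentum, Eq. (eq:Mcrit),
# using the per-EOS fit parameters (M_crit^{J=0}, a, C) from the table.
EOS_PARAMS = {
    "NL3": {"M0": 2.81, "a": 1.68, "C": 0.006},
    "GM1": {"M0": 2.39, "a": 1.69, "C": 0.011},
    "TM1": {"M0": 2.20, "a": 1.61, "C": 0.017},
}


def critical_mass(eos, j_ns):
    """M_crit = M0 (1 + C j^a), in solar masses; j_ns = c J / (G Msun^2)."""
    p = EOS_PARAMS[eos]
    return p["M0"] * (1.0 + p["C"] * j_ns ** p["a"])
```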
Fig.~\ref{fig:ErotvsM} shows the minimum rotation period and the rotational energy as a function of the NS gravitational mass for the NL3 EOS.
\begin{figure}
\centering
\includegraphics[width=\hsize,clip]{ErotvsM}
\caption{Rotational energy and period of NSs along the Keplerian sequence for the NL3 EOS.}\label{fig:ErotvsM}
\end{figure}
We turn now to the magnetosphere properties. Within the traditional model of pulsars \citep{1969ApJ...157..869G}, in a rotating, highly magnetized NS, a corotating magnetosphere is enforced up to a maximum distance $R_{\rm lc}=c/\Omega=c P/(2\pi)$, where $c$ is the speed of light and $\Omega$ is the angular velocity of the star. This defines the so-called light cylinder, since corotation at larger distances would imply superluminal velocities of the magnetospheric particles. The last $B$-field line closing within the corotating magnetosphere is located at an angle $\theta_{\rm pc} = \arcsin(\sqrt{R_{\rm NS}/R_{\rm lc}})\approx \sqrt{R_{\rm NS}/R_{\rm lc}}=\sqrt{R_{\rm NS} \Omega/c}=\sqrt{2\pi R_{\rm NS}/(c P)}$ from the star's pole. The $B$-field lines that originate in the region between $\theta=0$ and $\theta=\theta_{\rm pc}$ (referred to as the \emph{magnetic polar caps}) cross the light cylinder and are called ``open'' field lines. Charged particles leave the star moving along the open field lines and escape from the magnetosphere passing through the light cylinder. At large distances from the light cylinder the magnetic field lines become radial. Thus, the magnetic field geometry is dominated by the toroidal component, which decreases with the inverse of the distance. For typical pulsar magnetospheres it is expected to be related to the poloidal component of the field at the surface, $B_s$, as \citep[see][for details]{1969ApJ...157..869G}
\begin{equation}\label{eq:BtBs}
B_t\sim \left(\frac{2\pi R_{\rm NS}}{c P}\right)^2 \left(\frac{R_{\rm NS}}{r}\right)B_s,
\end{equation}
up to a factor of order unity.
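The light-cylinder radius, the polar-cap angle, and the toroidal field of Eq.~(\ref{eq:BtBs}) can be sketched as follows (cgs units; function names are ours, and the order-unity prefactor of Eq.~(\ref{eq:BtBs}) is set to one):

```python
import math

C_LIGHT = 2.99792458e10  # speed of light, cm/s


def light_cylinder(P):
    """Light-cylinder radius R_lc = c P / (2 pi), in cm, for period P in s."""
    return C_LIGHT * P / (2.0 * math.pi)


def polar_cap_angle(R_ns, P):
    """Polar-cap half-opening angle theta_pc = arcsin(sqrt(R_NS / R_lc)), rad."""
    return math.asin(math.sqrt(R_ns / light_cylinder(P)))


def toroidal_field(B_s, R_ns, P, r):
    """Eq. (eq:BtBs): B_t ~ (2 pi R_NS / (c P))^2 (R_NS / r) B_s, in G."""
    return (2.0 * math.pi * R_ns / (C_LIGHT * P)) ** 2 * (R_ns / r) * B_s
```

Note the $1/r$ decay of the toroidal component, which is the origin of the $m=1$ scaling adopted below.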
Thus, as the SN remnant expands it finds a magnetized medium with a different value of the $B$-field. We adopt a magnetic field of the form
\begin{equation}\label{eq:Bt}
B(t)= B_0 \left(\frac{r}{R_0}\right)^{-m},
\end{equation}
with $1\leq m \leq2$. We then seek the value of $m$ which best fits the data (see Secs.~\ref{sec:5}--\ref{sec:7}). Following the previous agreement we have found between our model and GRB data \citep[see e.g.][]{2016ApJ...833..107B,2018ApJ...852...53R}, we shall adopt values for $R_0$ and the expansion velocity $\dot{R}$ (see Secs.~\ref{sec:5}--\ref{sec:7} below) and leave the parameter $B_0$ to be set by the fit of the afterglow data. We then compare and contrast the results with those expected from NS theory.

\section{Model for the Optical and X-ray Spectrum of the Afterglow}\label{sec:5}

The origin of the observed afterglow emission is interpreted here as due to the synchrotron emission of electrons accelerated in an expanding magnetized HN ejecta.\footnote{We note that synchrotron emission of electrons in the fast-cooling regime has been previously applied in GRBs, but to explain the prompt emission \citep[see e.g.][]{2014NatPh..10..351U}.} A fraction of the kinetic energy of the ejecta is converted, through a shockwave, to accelerated particles (electrons) above GeV and TeV energies --- enough to emit photons up to the X-ray band by synchrotron emission. Depending on the shock speed, number density, magnetic field, etc., different initial energy spectra of particles can be formed. In the most common cases, the accelerated particle distribution function can be described by a power law of the form
\begin{equation}
Q(\gamma,t)=Q_0(t)\gamma^{-p}\theta(\gamma_\mathrm{max}-\gamma)\theta(\gamma-\gamma_\mathrm{min}) \, ,
\label{PL}
\end{equation}
where $\gamma = E/{m c^2}$ is the electron Lorentz factor, and $\gamma_\mathrm{min}$ and $\gamma_\mathrm{max}$ are the minimum and maximum Lorentz factors, respectively.
$Q_0(t)$ is the number of injected particles per second per unit energy, originating from the remnant impacted by the $e^+e^-$ pair plasma of the GRB. After the electrons are injected with the spectrum given by Eq.~(\ref{PL}), the evolution of the particle distribution at a given time can be determined from the solution of the kinetic equation of the electrons, taking into account the particle energy losses \citep{1962SvA.....6..317K}:
\begin{equation}
\frac{\partial N(\gamma,t)}{\partial t} = \frac{\partial}{\partial \gamma} (\dot{\gamma}(\gamma,t) \, N(\gamma,t))-\frac{N(\gamma,t)}{\tau}+Q(\gamma,t) \, ,
\label{Neq}
\end{equation}
where $\tau$ is the characteristic escape time and $\dot{\gamma}(\gamma,t)$ is the cooling rate. In the present case the escape time for electrons is much longer than the characteristic cooling timescale (fast-cooling regime). The term $\dot{\gamma}(\gamma,t)$ includes various electron energy-loss processes, such as synchrotron and inverse-Compton cooling, as well as adiabatic losses due to the expansion of the emitting region. For the magnetic field considered here, the dominant cooling process for the higher-energy electrons is synchrotron emission (the electron cooling timescale due to inverse-Compton scattering is significantly longer), while adiabatic cooling can dominate for the low-energy electrons at later phases. Introducing the expansion velocity of the remnant, $\dot{R}(t)$, and its radius, $R(t)$, the energy-loss rate of the electrons can be written as
\begin{equation}
\dot{\gamma}(\gamma,t)=\frac{\dot{R}(t)}{R(t)}\gamma+\frac{4}{3} \frac{\sigma_\mathrm{T}}{m_\mathrm{e}c}\frac{B(t)^2}{8\pi}\gamma^2 \, ,
\end{equation}
where $\sigma_\mathrm{T}$ is the Thomson cross section and $B(t)$ is the magnetic field strength.
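The two terms of the energy-loss rate, and the synchrotron cooling timescale implied by the second one, can be sketched as follows (cgs constants; these are simplified helpers of our own, not the tridiagonal solver used for Eq.~(\ref{Neq})):

```python
import math

SIGMA_T = 6.6524e-25   # Thomson cross section, cm^2
M_E = 9.1094e-28       # electron mass, g
C_LIGHT = 2.99792458e10  # speed of light, cm/s


def gamma_dot(gamma, R, R_dot, B):
    """Energy-loss rate: adiabatic term (Rdot/R) gamma + synchrotron term."""
    adiabatic = (R_dot / R) * gamma
    synchrotron = (4.0 / 3.0) * SIGMA_T / (M_E * C_LIGHT) * B**2 / (8.0 * math.pi) * gamma**2
    return adiabatic + synchrotron


def t_sync(gamma, B):
    """Synchrotron cooling timescale gamma / gamma_dot_syn, in s."""
    return 1.0 / ((4.0 / 3.0) * SIGMA_T / (M_E * C_LIGHT) * B**2 / (8.0 * math.pi) * gamma)
```

Since `t_sync` shortens with both $\gamma$ and $B$, the high-$\gamma$ electrons cool first, which is the fast-cooling behavior invoked above.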
From the early X-ray data we find that the initial expansion velocity of GRB 130427A at times $\sim 10^2$~s is $0.8c$ \citep{2015ApJ...798...10R}, which then decelerates to $0.1c$ at $10^6$~s, as inferred from the SN optical data \citep{2013ApJ...776...98X}. { Supernova or hypernova remnants like the one considered here generally evolve through three stages \citep[see][]{1997ApJ...490..619S}: the free expansion phase, the Sedov phase, and the radiative cooling phase. The free expansion phase roughly ends when the total mass of gas swept up by the shock equals the initial supernova ejecta mass. During this phase, the shock velocity remains nearly constant at its initial value $v_0$ and the outer radius $R$ of the ejecta evolves linearly in time after the explosion. This phase ends \citep{1997ApJ...490..619S} when
\begin{equation}
t \approx 50{\rm~ yr} \times \biggl[ \biggl (\frac{M_{\rm ej}}{5 M_\odot}\biggr) \times \biggl(\frac{1~{\rm cm^{-3}}}{ n_{\rm ISM}}\biggr) \times \biggl(\frac{v_0}{0.1~c}\biggr)^3\biggr]^{1/3}~~,
\end{equation}
where $M_{\rm ej}$ is the HN ejected mass and $n_{\rm ISM}$ is the hydrogen density in the local interstellar medium. For a mildly relativistic ejecta ($v/c \sim 0.9$, $\Gamma \sim 3$) in a typical ISM of $n_{\rm ISM} \approx 1$~cm$^{-3}$ this phase lasts for $450$ years. Even if the ISM is $1000$ times denser due to past mass loss of the progenitor star, this phase still lasts for $45$ years. Since we only consider times much shorter than a year (out to $10^7$~s), we are completely justified in treating the expansion as ``ballistic'', with constant velocity, rather than as a Sedov expansion. Nevertheless, we allow for an initially linearly decelerating ejecta, as observed in the thermal component (cf.\ Sec.~\ref{sec:3}), until $10^6$~s, after which it is allowed to expand with a constant velocity of $0.1c$.
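For reference, the free-expansion timescale above can be evaluated directly; a one-function sketch (our naming) reproducing the 450- and 45-year estimates:

```python
def free_expansion_time_yr(M_ej_msun=5.0, n_ism=1.0, v0_over_c=0.1):
    """End of the free-expansion phase, in years:
    t ~ 50 yr * [ (M_ej/5 Msun) * (1 cm^-3 / n_ISM) * (v0/0.1c)^3 ]^(1/3)."""
    return 50.0 * (M_ej_msun / 5.0 * (1.0 / n_ism)
                   * (v0_over_c / 0.1) ** 3) ** (1.0 / 3.0)
```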
Thus, the expansion velocity of the ejecta is written as}
\begin{eqnarray}\label{eqn:expansion}
\dot{R}(t) & = & \begin{cases} v_0 - a_0 \, t & t\leq 10^6 \mathrm{s} \\ v_f & t > 10^6 \mathrm{s} \end{cases} \, , \\
R(t) & = & \begin{cases} v_0 \, t - a_0 \, t^2/2 & t\leq 10^6 \mathrm{s} \\ 1.05 \times 10^{16}\,\mathrm{cm}+ v_f\, t & t > 10^6 \mathrm{s} \end{cases} \, ,
\end{eqnarray}
where $v_0 = 2.4 \times 10^{10}$~cm~s$^{-1}$, $a_0=2.1 \times 10^{4}$~cm~s$^{-2}$, and $v_f = 3 \times 10^{9}$~cm~s$^{-1}$. Due to the above decelerating expansion of the emitting region, the magnetic field decreases. Therefore we adopt a magnetic field that scales as $B(t)=B_0 \left(R(t)/R_0\right)^{-m}$ with $1\leq m \leq2$. We shall show below (see Sec.~\ref{sec:7}) that the data are best fit with $m=1$, which corresponds to conservation of magnetic flux for the longitudinal component. The initial injection rate of particles, $Q_0(t)$, depends on the energy budget of the ejecta and on the efficiency of the conversion from kinetic to non-thermal energy. The corresponding injection power can be defined as
\begin{equation}
L(t)=Q_0(t) m_e c^2\int_{\gamma_\mathrm{min}}^{\gamma_\mathrm{max}}\gamma^{1-p} d\gamma \, ,
\end{equation}
where it is assumed that $L(t)$ varies in time, based on the recent analyses of BdHNe which show that the X-ray light curve of GRB 130427A decays in time following a power law of index $\sim -1.3$ (\citealp{2015ApJ...798...10R}; see Fig.~\ref{fig:slope}). In our interpretation, the emission in the optical and X-ray bands is produced by the synchrotron emission of electrons: if one assumes that the electrons are injected at a constant rate ($L(t)=L$), this will produce a constant synchrotron flux.
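The piecewise kinematics of Eq.~(\ref{eqn:expansion}) and the $m=1$ field scaling can be sketched and checked for continuity at the $10^6$~s break (parameter values from the text; function names ours):

```python
V0, A0, VF = 2.4e10, 2.1e4, 3.0e9  # cm/s, cm/s^2, cm/s
T_BREAK = 1.0e6                    # s
# Radius at the break; equals 1.05e16 cm + VF * T_BREAK = 1.35e16 cm
R_BREAK = V0 * T_BREAK - 0.5 * A0 * T_BREAK**2


def velocity(t):
    """Rdot(t) of Eq. (eqn:expansion): linear deceleration, then constant."""
    return V0 - A0 * t if t <= T_BREAK else VF


def radius(t):
    """R(t) of Eq. (eqn:expansion); the two branches join at T_BREAK."""
    if t <= T_BREAK:
        return V0 * t - 0.5 * A0 * t**2
    return R_BREAK + VF * (t - T_BREAK)  # same as 1.05e16 cm + VF * t


def b_field(t, B0=5.0e5, R0=2.4e12, m=1.0):
    """B(t) = B0 (R(t)/R0)^(-m); use t > 0 so that R(t) > 0."""
    return B0 * (radius(t) / R0) ** (-m)
```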
Thus, we assume that the luminosity of the injected electrons decreases from an initial value $L_0$ as follows:
\begin{equation}\label{eq:Lt}
L(t)=L_0 \times \left(1+\frac{t}{\tau_0}\right)^{-k},
\end{equation}
where $L_0$ and $k$ are fixed by the observed afterglow light-curve (see Eq.~(\ref{eq:Lsyn}) and Secs.~\ref{sec:6} and \ref{sec:7} for details). The kinetic equation given in Eq.~(\ref{Neq}) has been solved numerically. The discretized electron continuity equation (\ref{Neq}) is re-written in the form of a tridiagonal matrix, which is solved using the implementation of the ``tridiag'' routine of \citet{1992nrca.book.....P}. We have carefully tested our code by comparing the numerical results with the analytic solutions given in \citet{1962SvA.....6..317K}. The temporal evolution of the synchrotron luminosity is calculated using $N(\gamma,t)$ with
\begin{equation}\label{eq:Lsyn}
L_{syn}(\nu,t)=\int_{1}^{\gamma_\mathrm{max}}{N(\gamma,t)P_{syn}(\nu,\gamma,B(t))d\gamma},
\end{equation}
where $P_{syn}(\nu,\gamma,B(t))$ is the synchrotron spectrum of a single electron, calculated using the parameterization of the emissivity function of synchrotron radiation presented in \citet{2010PhRvD..82d3002A}.

\section{Initial Conditions for GRB 130427A}\label{sec:6}

In \citet{2018ApJ...852...53R} an analysis was completed for seven subclasses of GRBs including 345 identified BdHNe candidates, one of which is GRB 130427A, which was seen in the {\it Swift}-XRT data and analyzed in detail in \citet{2015ApJ...798...10R}. From the host-galaxy identification it is known that this burst occurred at a redshift $z = 0.334$. After transforming to the cosmological rest-frame of the burst and properly correcting for the effects of cosmological redshift and Lorentz time dilation, one can infer a time duration $t_{90} = 162.8$~s for 90\% of the GRB emission.
The isotropic energy emitted in the range $1$--$10^4$~keV in the cosmological rest-frame of the burst is also deduced to be $E_{iso} = (9.3 \pm 1.3) \times 10^{53}$~erg, and the total emission in the power-law afterglow can be inferred \citep{2015ApJ...798...10R}. This fixes $L_0$ in Eq.~(\ref{eq:Lt}). Fig.~\ref{fig:slope} shows the slope of the light-curve, defined by the logarithmic time derivative of the luminosity: slope = $d \log_{10}(L)/ d \log_{10}(t)$. This slope is obtained by fitting the luminosity light-curve in the cosmological rest-frame, using a machine-learning, locally weighted regression (LWR) algorithm. We have made publicly available the corresponding technical details and codes to perform this calculation at: \url{https://github.com/YWangScience/AstroNeuron}. The green line is the slope of the soft X-ray emission, in the $0.3$--$10$~keV range, and the blue line corresponds to the optical R-band, centered at $658$~nm. The solid lines cover the times when the data are well observed, while the dashed lines correspond to epochs in which observational data are missing. The rapid change of the slope implies variations of the energy injection, different emission mechanisms, or different emission phases. The slope of the soft X-ray emission varies dramatically at early times, when various complicated GRB components (prompt emission, gamma-ray flare, X-ray flare) are present. Hence, we do not attempt to explain this early part with the synchrotron emission model defined above; we only consider times later than $10^3$~s. We also note that, at times later than $10^5$~s, the slopes of the X-ray and R bands reach a common value of $-1.33$, indicated by the red line.
\begin{figure}
\centering
\includegraphics[width=1.1\hsize,clip]{XRT_R.pdf}
\caption{The slope of the afterglow light-curve of BdHN 130427A, defined by the logarithmic time derivative of the luminosity: slope = $d \log_{10}(L)/ d \log_{10}(t)$.
This slope is obtained by fitting the luminosity light-curve in the cosmological rest-frame, using a machine-learning, locally weighted regression (LWR) algorithm. For the corresponding technical details and codes we refer the reader to: \url{https://github.com/YWangScience/AstroNeuron}. The green line is the slope of the soft X-ray emission, in the $0.3$--$10$~keV range, and the blue line corresponds to the optical R-band, centered at $658$~nm.}
\label{fig:slope}
\end{figure}
Furthermore, we are not interested in explaining the GeV emission observed in most BdHNe (when LAT data are available) with the synchrotron radiation model proposed here. Such emission has been explained in \citet{2015ApJ...798...10R} as originating from the further accretion of matter onto the newly-formed BH. This explanation is further reinforced by the fact that a similar GeV emission, following the same power-law decay with time, is also observed in the authentic short GRBs (S-GRBs; short bursts with $E_{iso} \gtrsim 10^{52}$~erg; see \citealp{2016ApJ...832..136R}), which are expected to be produced in NS--NS mergers leading to BH formation (\citealp{2016ApJ...832..136R}; Aimuratov et al., in preparation). Regarding the model parameters, the initial velocity of the expanding ejecta is expected to be $v_0=2.4 \times 10^{10}$~cm~s$^{-1}$ \citep{2015ApJ...798...10R} from the thermal blackbody emission. Similarly, the radius at the beginning of the X-ray afterglow should be $R_0 \approx 2.4 \times 10^{12}$~cm. This corresponds to an expansion timescale of $t_0 = \tau_0 = 100$~s. These values are consistent with our previous theoretical simulations of BdHNe \citep{2016ApJ...833..107B}. For our simulation of this burst we include all the expected energy losses (synchrotron and adiabatic). However, the escape timescale was assumed to be large, so that its effect could be neglected.
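The light-curve slope used above, $d\log_{10}L/d\log_{10}t$, can be approximated without the LWR machinery by simple finite differences in log-log space (an illustrative stand-in of our own, not the AstroNeuron code):

```python
import math


def log_slope(t, L):
    """Finite-difference estimate of d log10(L) / d log10(t) at each sample.

    Central differences in the interior, one-sided at the endpoints;
    exact for a pure power law L ~ t^s.
    """
    lt = [math.log10(x) for x in t]
    lL = [math.log10(y) for y in L]
    out = []
    for i in range(len(t)):
        j0 = max(i - 1, 0)
        j1 = min(i + 1, len(t) - 1)
        out.append((lL[j1] - lL[j0]) / (lt[j1] - lt[j0]))
    return out
```

For a decay $L \propto t^{-1.33}$, the estimator returns $-1.33$ at every point.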
\section{Results}\label{sec:7}

Our modeling of the broadband spectral energy distribution (SED) of GRB 130427A for different periods is shown in Fig.~\ref{fig:movinglimitsmodelb05e2min1e3max5e5}. The corresponding parameters are given in Table~\ref{tab:parameters}. However, as noted above, the 8 parameters in Table~\ref{tab:parameters} are not all ``free'' and independent. For example, $R_0$ and $t_0 = \tau_0$ are fixed by the observed thermal component. Also, $\gamma_{\rm min}$ and $\gamma_{\rm max}$ are fixed once $B$ is given. $L_0$ is fixed by the normalization of the observed source luminosity. The synchrotron index $p$ is not varied, but kept fixed at $1.5$, as typical of synchrotron emission. The parameter $k$ is fixed by the slope of the late-time X-ray afterglow. Hence, the only ``free parameter'' is $B_0$, and this single plausible value provides an excellent fit to the observed spectra and light-curves over a broad range of wavelengths and time scales. The radio emission is due to low-energy electrons that accumulate over longer periods; for this reason the radio data are not included in the model. Only the optical and X-ray emissions are interpreted as due to the synchrotron emission of electrons. Such emission, for instance at $604$~s, is produced in a region with a radius of $1.4\times10^{13}$~cm and a magnetic field of $B=8.3 \times 10^4$~G. For this field strength synchrotron self-absorption can be significant, as estimated following \citet{1979rpa..book.....R}. At the initial phases, when the system is compact and the magnetic field is large, synchrotron self-absorption can be neglected for photons with frequencies above $10^{14}$~Hz; below this frequency it is important. Thus, it is effective in reducing the radio flux predicted by the model, but not the optical and X-ray emission.
The optical and X-ray data can be well fit by a single power-law injection of electrons with $Q\propto \gamma^{-1.5}$ and with initial minimum and maximum energies of $\gamma_{\rm min}=4\times10^3$ ($E_{\rm min}=2.0$~GeV) and $\gamma_{\rm max}=5\times10^5$ ($E_{\rm max}=255.5$~GeV), respectively. Due to the fast synchrotron cooling, the electrons cool rapidly, forming a spectrum $N(\gamma,t)\sim\gamma^{-2}$ for $\gamma \leq \gamma_{\rm min}$ and $N(\gamma,t)\sim\gamma^{-2.5}$ for $\gamma \geq \gamma_{\rm min}$. The slope of the synchrotron emission ($\nu F_{\nu}\propto \nu^{1-s}$) below the frequency defined by $\gamma_{\rm min}$ (i.e., $h\:\nu_{\rm min}\simeq3\:e\:h\:B(t)\:\gamma_{\rm min}^2/(4\:\pi\:m_{e}\:c)$) is $s=(2-1)/2=0.5$. This explains well both the optical and the X-ray data. For frequencies above $\nu_{\rm min}$, the slope is $\nu F_{\nu}\propto \nu^{0.25}$, which continues up to $h\:\nu_{\rm max}\simeq3\:e\:h\:B(t)\:\gamma_{\rm max}^2/(4 \pi m_e c)$. Since $\nu_{\rm min}$ and $\nu_{\rm max}$ depend on the magnetic field, they decrease with time; e.g., at $t=5.2\times10^6$~s, $\nu_{\rm min}\simeq6.5\times10^{14}$~Hz and $\nu_{\rm max}\simeq1.0\times10^{19}$~Hz. Due to the changes in the initial particle injection rate and in the magnetic field, the synchrotron luminosity also decreases. This is evident from Fig.~\ref{fig:Lx}, where the observed optical and X-ray light-curves of GRB 130427A are compared with the theoretical synchrotron light-curve obtained from Eq.~(\ref{eq:Lsyn}). In this figure we also show the electron injection power $L(t)$ given by Eq.~(\ref{eq:Lt}). Here it can be seen how the synchrotron luminosity fits the observed decay of the afterglow luminosity with the correct power-law index $\sim -1.3$ (see also Fig.~\ref{fig:slope}).
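The characteristic frequencies $\nu_{\rm min}$ and $\nu_{\rm max}$ follow from $\nu \simeq 3\,e\,B\,\gamma^2/(4\pi m_e c)$; a minimal sketch in cgs units (function name is ours):

```python
import math

E_CHARGE = 4.8032e-10    # electron charge, esu
M_E = 9.1094e-28         # electron mass, g
C_LIGHT = 2.99792458e10  # speed of light, cm/s


def nu_sync(gamma, B):
    """Characteristic synchrotron frequency nu = 3 e B gamma^2 / (4 pi m_e c), Hz."""
    return 3.0 * E_CHARGE * B * gamma**2 / (4.0 * math.pi * M_E * C_LIGHT)
```

The $\propto B\gamma^2$ scaling makes both break frequencies drift downward as the magnetic field decays with the expansion.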
\begin{deluxetable}{c|c}
\tablecaption{Parameters used for the simulation of GRB 130427A.\label{tab:parameters}}
\tablehead{\colhead{Parameter} & \colhead{Value}}
\startdata
$B_0$ & $5.0 (\pm 1) \times 10^5 \; \mathrm{G}$ \\
$R_0$ & $2.4\times 10^{12}\; \mathrm{cm}$ \\
$L_{0}$ & $2.0 \times10^{51} \; \mathrm{erg/s}$ \\
$k$ & $1.58$ \\
$\tau_0$ & $1.0 \times 10^2 \; \mathrm{s}$\\
$p$ & $1.5$\\
$\gamma_\mathrm{min}$ & $4.0 \times 10^3$\\
$\gamma_\mathrm{max}$ & $5.0 \times 10^5$\\
\enddata
\end{deluxetable}
\begin{figure}
\centering
\includegraphics[width=\hsize,clip]{GRB_fit_linacc}
\caption{Model evolution (lines) of the synchrotron spectral luminosity at various times, compared with measurements (points with error bars) in various spectral bands for GRB 130427A.}
\label{fig:movinglimitsmodelb05e2min1e3max5e5}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\hsize,clip]{Luminosity}
\caption{X-ray light-curve of GRB 130427A (points with error bars) together with the optical and X-ray theoretical synchrotron light-curves (lines) from Eq.~(\ref{eq:Lsyn}). We also show the electron injection power $L(t)$ given by Eq.~(\ref{eq:Lt}).}
\label{fig:Lx}
\end{figure}
The SN ejecta is expected to become transparent to the $\nu$NS radiation at around $10^5$~s. Thus, we now discuss the pulsar emission that might power the late ($t\gg 10^5$~s) X-ray afterglow light-curve. The late X-ray afterglow also shows a power-law decay of index $\sim -1.3$ which, as we show below, if powered by the pulsar, implies the presence of a quadrupole magnetic field in addition to the traditional dipole one. Thus, we adopt a dipole+quadrupole magnetic field model \citep[see][for details]{2015MNRAS.450..714P}. The luminosity from a pure dipole ($l=1$) is
\begin{equation}
L_{dip} = \frac{2}{3 c^3} \Omega^4 B_{dip}^2 R_{\rm NS}^6 \sin^2\chi_1,
\end{equation}
where $\chi_1 = 0$ degrees gives the axisymmetric mode $m = 0$ alone, whereas $\chi_1 = 90$ degrees gives the $m = 1$ mode alone.
The braking index, following the traditional definition $n \equiv \Omega \ddot{\Omega}/\dot{\Omega}^2$, is in this case $n = 3$. On the other hand, the luminosity from a pure quadrupole field ($l=2$) is
\begin{equation}
L_{quad} = \frac{32}{135 c^5} \Omega^6 B_{quad}^2 R_{\rm NS}^8 \sin^2\chi_1(\cos^2\chi_2+10\sin^2\chi_2),
\end{equation}
where the different modes are easily separated by taking $\chi_1 = 0$ and any value of $\chi_2$ for $m = 0$, ($\chi_1$, $\chi_2$) = (90, 0) degrees for $m = 1$, and ($\chi_1$, $\chi_2$) = (90, 90) degrees for $m = 2$. The braking index in this case is $n=5$. Thus, the quadrupole to dipole luminosity ratio is:
\begin{equation}
R^{quad}_{dip} = \eta^2 \frac{16}{45} \frac{R_{\rm NS}^2 \Omega^2}{c^2},
\label{eq:ratio}
\end{equation}
where
\begin{equation}
\eta^2 = (\cos^2\chi_2+10\sin^2\chi_2) \frac{B_{quad}^2}{B_{dip}^2}.
\label{eq:eta}
\end{equation}
It can be seen that $\eta = B_{quad}/B_{dip}$ for the $m=1$ mode, and $\eta = 3.16 \times B_{quad}/B_{dip}$ for the $m=2$ mode. For a $1$~ms period $\nu$NS, if $B_{quad} = B_{dip}$, the quadrupole emission is $\sim 10\%$ of the dipole emission, while if $B_{quad} = 100 \times B_{dip}$, the quadrupole emission increases to $1000$ times the dipole emission; for a $100$~ms pulsar, the quadrupole emission is negligible when $B_{quad} = B_{dip}$, and only $\sim 10\%$ of the dipole emission even when $B_{quad} = 100 \times B_{dip}$. From this result one infers that the quadrupole emission dominates in the early, fast-rotation phase; then the $\nu$NS spins down, the quadrupole emission drops faster than the dipole emission and, after tens of years, the dipole emission becomes the dominant component.
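Equations (\ref{eq:ratio}) and (\ref{eq:eta}) can be evaluated directly; a minimal sketch (our naming; angles in degrees):

```python
import math

C_LIGHT = 2.99792458e10  # speed of light, cm/s


def eta(B_quad_over_dip, chi2_deg):
    """Eq. (eq:eta): eta = sqrt(cos^2(chi2) + 10 sin^2(chi2)) * B_quad/B_dip."""
    c2 = math.radians(chi2_deg)
    return math.sqrt(math.cos(c2) ** 2 + 10.0 * math.sin(c2) ** 2) * B_quad_over_dip


def quad_to_dip_ratio(eta_val, R_ns, P):
    """Eq. (eq:ratio): L_quad / L_dip = eta^2 (16/45) (R_NS Omega / c)^2."""
    omega = 2.0 * math.pi / P
    return eta_val ** 2 * (16.0 / 45.0) * (R_ns * omega / C_LIGHT) ** 2
```

Because the ratio scales as $\Omega^2$, it drops rapidly as the $\nu$NS spins down, matching the dominance ordering described above.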
The evolution of the $\nu$NS rotation and luminosity is given by
\begin{eqnarray}
\frac{dE}{dt} &=& I \Omega \dot{\Omega } = - (L_{dip} + L_{quad}) \nonumber \\
&=& - \frac{2}{3 c^3} \Omega^4 B_{dip}^2 R_{\rm NS}^6 \sin^2\chi_1 \left(1+\eta^2 \frac{16}{45} \frac{R_{\rm NS}^2 \Omega^2}{c^2}\right),
\end{eqnarray}
where $I$ is the moment of inertia. The solution is
\begin{equation}
t = f(\Omega) - f(\Omega_0),
\label{eq:tOmega}
\end{equation}
where
\begin{equation}
f(\Omega) = \frac{3 I c \{\frac{16}{45} \eta^2 R_{\rm NS}^2 \Omega ^2 [2 \ln \Omega-\ln (c^2+ \frac{16}{45}\eta^2 R_{\rm NS}^2 \Omega ^2)]+c^2\}}{4 B_{dip}^2 \sin^2\chi_1 R_{\rm NS}^6 \Omega ^2}
\end{equation}
and
\begin{equation}
f(\Omega_0) = \frac{3 I c \{\frac{16}{45} \eta^2 R_{\rm NS}^2 \Omega_0 ^2 [2 \ln \Omega_0-\ln (c^2+ \frac{16}{45}\eta^2 R_{\rm NS}^2 \Omega_0 ^2)]+c^2\}}{4 B_{dip}^2 \sin^2\chi_1 R_{\rm NS}^6 \Omega_0 ^2}.
\end{equation}
The first and second derivatives of the angular velocity are
\begin{equation}
\dot{\Omega } = -\frac{2 B_{dip}^2 \sin^2\chi_1 R_{\rm NS}^6 \Omega^3}{3 I c^3} \left(1+\eta^2 \frac{16}{45 c^2} R_{\rm NS}^2 \Omega^2\right),
\label{eq:dotOmega}
\end{equation}
\begin{equation}
\ddot{\Omega } = - \frac{2 B_{dip}^2 \sin^2\chi_1 R_{\rm NS}^6 \Omega^2 \dot{\Omega}}{I c^3}\left(1+\eta^2 \frac{16}{27 c^2} R_{\rm NS}^2 \Omega^2 \right).
\end{equation}
Therefore the braking index is
\begin{equation}
n = \frac{\Omega \ddot{\Omega }}{\dot{\Omega }^2} = \frac{135c^2+80\eta^2 R_{\rm NS}^2 \Omega^2}{45c^2+16\eta^2 R_{\rm NS}^2 \Omega^2},
\end{equation}
which in the present case ranges from $3$ to $5$. From Eqs.~(\ref{eq:tOmega})--(\ref{eq:dotOmega}) we can compute the evolution of the total pulsar luminosity as
\begin{equation}
L_{tot}(t) = -I \Omega \dot{\Omega }.
\end{equation} Figure~\ref{fig:Lpulsar} shows the luminosity obtained from the above model for a $1.5~M_\odot$ pulsar with a radius of $1.5\times10^6$~cm, $B_{dip} = 5\times10^{12}$~G, an initial rotation period $P_0 = 2$~ms, and selected values of the parameter $\eta$. The figure shows that the theoretical luminosity of the pulsar is close to the soft X-ray luminosity observed in GRB 130427A when $\eta$ is around $100$. This means that, choosing the harmonic mode $m=2$, the quadrupole magnetic field is about $30$ times stronger than the dipole magnetic field. The luminosity of the pulsar before $10^6$~s is mainly powered by the quadrupole emission, which is tens of times higher than the dipole emission. At about $10$ years the dipole emission starts to surpass the quadrupole emission and dominates thereafter. \begin{figure} \centering \includegraphics[width=1.1\hsize,clip]{B5E12_2ms_luminosity} \caption{The observed luminosity of GRB 130427A in the $0.3$--$50$~keV band (grey points), and the theoretical luminosity from a pulsar for selected quadrupole to dipole magnetic field ratios and quadrupole angles (colored lines). The other parameters of the pulsar are fixed: initial spin period $P_0 = 2$~ms, dipole magnetic field $B_{dip} = 5 \times 10^{12}$~G, inclination angle $\chi_1 = \pi/2$, mass $M = 1.5~M_\odot$, radius $R_{\rm NS} = 1.5 \times 10^6$~cm.}\label{fig:Lpulsar} \end{figure} It is important to check the self-consistency of the estimated $\nu$NS parameters, obtained first from the early afterglow via synchrotron emission and then from the late X-ray afterglow via the pulsar luminosity. From Eqs.~(\ref{eq:Bt}) and (\ref{eq:BtBs}), using the values of $B_0$ and $R_0$ from Table~\ref{tab:parameters} and $P_0 = 2$~ms, we obtain an estimate of the dipole field at the $\nu$NS surface from the synchrotron emission powering the early X-ray afterglow, $B_s \approx 6.7\times 10^{12}$~G.
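The spin-down evolution governed by Eq.~(\ref{eq:dotOmega}) can also be integrated numerically to reproduce light curves like those of Figure~\ref{fig:Lpulsar}. The minimal forward-Euler sketch below (Python) uses the fitted parameters quoted above; the moment of inertia, taken as that of a uniform sphere with $M=1.5\,M_\odot$, is our assumption rather than a value quoted in the text.

```python
import math

C = 2.998e10            # speed of light [cm/s]
R = 1.5e6               # nuNS radius [cm]
B_DIP = 5e12            # dipole field [G]
M = 1.5 * 1.989e33      # mass [g]
I_MOM = 0.4 * M * R**2  # moment of inertia (assumed uniform sphere, ~2.7e45 g cm^2)
ETA = 100.0             # quadrupole-to-dipole parameter eta

def omega_dot(om):
    """Spin-down rate of Eq. (dotOmega), for chi_1 = pi/2."""
    k = (16.0 / 45.0) * ETA**2 * (R / C)**2
    return -(2.0 * B_DIP**2 * R**6 * om**3) / (3.0 * I_MOM * C**3) * (1.0 + k * om**2)

def evolve(p0=2e-3, t_end=1e6, dt=1e3):
    """Euler-integrate Omega(t); return final Omega and L_tot = -I*Omega*dOmega/dt."""
    om = 2.0 * math.pi / p0
    t = 0.0
    while t < t_end:
        om += omega_dot(om) * dt
        t += dt
    return om, -I_MOM * om * omega_dot(om)

def braking_index(om):
    """n = (135 c^2 + 80 eta^2 R^2 Omega^2) / (45 c^2 + 16 eta^2 R^2 Omega^2)."""
    x = ETA**2 * R**2 * om**2
    return (135.0 * C**2 + 80.0 * x) / (45.0 * C**2 + 16.0 * x)
```

As in the text, the braking index interpolates between $5$ (quadrupole-dominated, fast rotation) and $3$ (dipole-dominated, slow rotation).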
This value is to be compared with the one obtained from the pulsar luminosity powering the late afterglow, $B_{dip} = 5\times 10^{12}$~G. The agreement between the two estimates is remarkable. In addition, the initial rotation period $P_0 = 2$~ms of the $\nu$NS is consistent with our estimate in Sec.~\ref{sec:4} based upon angular momentum conservation during the gravitational collapse of the iron core leading to the $\nu$NS. It can also be checked from Fig.~\ref{fig:ErotvsM} that $P_0$ is longer than the minimum period of a $1.5~M_\odot$ NS, which guarantees the gravitational and rotational stability of the $\nu$NS. \section{Conclusions}\label{sec:8} We have constructed a model for a broad frequency range of the observed spectrum in the afterglow of BdHNe, with a specific fit to BdHN 130427A as a representative example. We find that the parameters of the fit are consistent with the BdHN interpretation of this class of GRBs. We have shown that the optical and X-ray emission of the early ($10^2$~s$\lesssim t\lesssim 10^6$~s) afterglow is explained by the synchrotron emission from electrons expanding in the HN and threading the magnetic field of the $\nu$NS. At later times the HN becomes transparent and the electromagnetic radiation from the $\nu$NS dominates the X-ray emission. We have inferred that the $\nu$NS possesses an initial rotation period of $2$~ms and a dipole magnetic field of (5--7)$\times 10^{12}$~G. It is worth mentioning that we have derived the strength of the magnetic dipole independently from the synchrotron emission model at early times ($t\lesssim 10^6$~s) and from the magnetic braking model powering the late ($t\gtrsim 10^6$~s) X-ray afterglow, and have shown that the two are in full agreement. In this paper we proposed a direct connection between the afterglow of a BdHN and the physics of a newly born fast-rotating NS.
This establishes a new, mutually reinforcing understanding of both GRBs and young SNe, which could be of fundamental relevance for the understanding of ultra-energetic cosmic rays and neutrinos, as well as of new ultra-high-energy phenomena. It now appears essential to extend our comprehension in three different directions: 1) understanding the latest phase of the afterglow; 2) exploring the possible connection with historical supernovae; and 3) extending space observations of the GRB afterglow to the GeV and TeV energy bands. These last observations are clearly additional to the current observations of GRBs and of the GRB GeV radiation, which originates from a Kerr--Newman BH and is totally unrelated to the astrophysics of afterglows. One of the major verifications of our model can come from observing, in still active afterglows of historical GRBs, the pulsar-like emission from the $\nu$NS we here predict; the possible direct relation of the Crab Nebula to a BdHN is now open to further examination. \acknowledgments We acknowledge the continuous support of the MAECI. This work made use of data supplied by the UK Swift Science Data Centre at the University of Leicester. M.K. is supported by the Erasmus Mundus Joint Doctorate Program Grant N.~2014--0707 from EACEA of the European Commission. J.A.R. acknowledges the partial support of the project N.~3101/GF4 IPC-11 and the target program F.0679 0073-6/PTsF of the Ministry of Education and Science of the Republic of Kazakhstan. Work at the University of Notre Dame (G.J.M.) is supported by the U.S. Department of Energy under Nuclear Theory Grant DE-FG02-95-ER40934. N.S. acknowledges the support of the RA MES State Committee of Science, in the frames of research project No.~15T-1C375.
\section{Introduction} In Quantum Field Theory (QFT), the existence of a given theory means that we can control its behavior at some scales (short or large distances) by renormalization theory \cite{Collins}. If the theory exists, then we want to solve it, which means determining what happens on other scales. This is the problem (and content) of {\bf Renormdynamics} \cite{Makhaldiani17}. The result of Renormdynamics, the solution of its discrete or continual motion equations, is the effective QFT on a given scale (different from the initial one). We call Renormdynamic Functions (RDF) the functions $g_n=f_n(t)$ which are solutions of the RD motion equations \ba \dot{g}_n=\beta_n(g),\ 1\leq n\leq N. \ea In the simplest case of one coupling constant (e.g.\ in Quantum Electrodynamics (QED) or Quantum Chromodynamics (QCD)) the function $g=f(t)$ is either constant, $g=g_c$ with $\beta(g_c)=0,$ or invertible (monotone). Indeed, \ba \dot{g}=f'(t)=f'(f^{-1}(g))=\beta(g). \ea Each monotone interval ends at ultraviolet (UV) and infrared (IR) fixed points and describes a corresponding phase of the system. Note that the simplest case of classical dynamics, the Hamiltonian system with one degree of freedom, is already two dimensional, so it has no analog of one-charge renormdynamics. There are different parameterizations $a=f(A).$ The values of the critical points differ between parameterizations, but the scale is the same: \ba \dot{a}=b(a)=f'B(A),\ b(a)=0\Leftrightarrow B(A)=0,\ f'\neq0. \ea In the general case of $N$ coupling constants we have a similar result, \ba &&a_n=f_n(A_1,A_2,...A_N),\ 1\leq n\leq N,\cr &&\dot{a}_n=b_n(a)=f'_{nm}B_m(A),\ b_n(a)=0\Leftrightarrow B_n(A)=0,\ \det f'\neq0,\ f'_{nm}=\frac{\partial f_n}{\partial A_m}. \ea This is an important observation because it helps us not only to identify such important quantities as the phase transition temperature, the hadronization scale, the valence quark scale, etc., but also to control the quality of a parametrization and the systematic errors of approximations.
In string theory, the connection between conformal invariance of the effective theory on the parametric world sheet and the motion equations of the fields on the embedding space is well known \cite{GreenString}, \cite{Ketov}. A more recent topic in this direction is the AdS/CFT duality \cite{Maldacena}. In this approach the following expression for the QCD coupling constant was obtained \cite{Brodsky} \ba \alpha_{AdS}(Q^2)=\alpha(0)e^{-Q^2/4k^2}. \ea The corresponding $\beta$-function is \ba \beta(\alpha_{AdS})=\frac{d\alpha_{AdS}}{d\ln Q^2}=-\frac{Q^2}{4k^2}\alpha_{AdS}(Q^2) =\alpha_{AdS}(Q^2)\ln\frac{\alpha_{AdS}(Q^2)}{\alpha(0)}. \ea So, this renormdynamics of QCD interpolates between the IR fixed point $\alpha(0),$ which we take as $\alpha(0)=2,$ and the UV fixed point $\alpha(\infty)=0.$ For the QCD running coupling considered in \cite{Diakonov}, \ba \alpha(q^2)=\frac{4\pi}{9\ln(\frac{q^2+m_g^2}{\Lambda^2})}, \ea where $m_g=0.88\ {\rm GeV}$ and $\Lambda=0.28\ {\rm GeV},$ the $\beta$-function of renormdynamics is \ba\label{mgb} && \beta(\alpha)=-\frac{\alpha^2}{k}\left(1-c\exp\left(-\frac{k}{\alpha}\right)\right), \cr && k=\frac{4\pi}{9}=1.40,\ c=\frac{m_g^2}{\Lambda^2}=(3.143)^2=9.88; \ea for the nontrivial (IR) fixed point we have \ba \alpha_{IR}=k/\ln c =0.61. \ea For $\alpha(m)=2$ at the valence quark scale $m$ we predict the gluon (or valence quark) mass as \ba m_g=\Lambda e^{\frac{k}{2\alpha(m)}}=1.42\,\Lambda= m_N/3,\ \Lambda=220\ {\rm MeV}. \ea From the nonperturbative $\beta$-functions we see that besides the perturbative phase, with asymptotic freedom, there is also a nonperturbative phase with an infrared fixed point and a coupling constant that rises at higher energies. At small scales in QCD we have a perturbative, small-coupling phase and a nonperturbative, strong-coupling phase. The phases unify at the IR fixed point, beyond which we have the hadronic phase.
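The numerical values quoted above follow directly from the $\beta$-function (\ref{mgb}); the short Python check below is a sketch using only the constants given in the text.

```python
import math

k = 4.0 * math.pi / 9.0    # ~1.40
c = (0.88 / 0.28)**2       # (m_g / Lambda)^2 ~ 9.88

def beta(alpha):
    """Nonperturbative beta-function of Eq. (mgb)."""
    return -(alpha**2 / k) * (1.0 - c * math.exp(-k / alpha))

alpha_ir = k / math.log(c)  # IR fixed point, ~0.61

# Gluon-mass prediction: m_g = Lambda * exp(k / (2*alpha(m))) with alpha(m) = 2
m_g = 0.220 * math.exp(k / 4.0)  # [GeV], Lambda = 220 MeV
```

The fixed point satisfies $\beta(\alpha_{IR})=0$ exactly, since $c\,e^{-k/\alpha_{IR}}=1$, and the predicted gluon mass reproduces $m_N/3\approx 0.31$~GeV.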
It is nice to have a nonperturbative $\beta$-function like (\ref{mgb}), but it is more important to see which kind of nonperturbative corrections are needed to obtain a phenomenological coupling-constant dynamics. It was noted in \cite{Voloshin} that, in the valence quark parametrization, $\alpha_s(m)=2$ at the valence quark scale $m.$ The theory of analytic functions of a complex variable occupies a central place in analysis. Riemann considered the unique continuation property to be the most characteristic feature of analytic functions. Generalized analytic functions (GPF) also possess the unique continuation property, and each class of GPF has almost as much structure as the class of analytic functions. In particular, the operations of complex differentiation and complex integration have meaningful counterparts in the theory of GPF, and this theory generalizes not only the Cauchy--Riemann approach to function theory but also that of Weierstrass. Such functions were considered by Picard and by Beltrami, but the first significant result was obtained by Carleman in 1933, and a systematic theory was formulated by Lipman Bers \cite{Bers} and Ilia Vekua (1907--1977) \cite{Vekua}. For more recent results see \cite{Giorgadze}.
An analytic function $f=u+iv$ satisfies the partial differential equation $\partial_{\overline{z}}f=0,$ where the complex differential operators are defined as \ba \partial_{\overline{z}}=\frac{\partial}{\partial \overline{z}}:=\frac{1}{2}(\partial_x+i\partial_y),\ \partial_{z}=\frac{\partial}{\partial z}:=\frac{1}{2}(\partial_x-i\partial_y). \ea Generalized analytic functions $f=u+iv$ satisfy the following generalized Cauchy--Riemann equation \cite{Vekua} \ba \partial_{\overline{z}}f=Af+B\bar{f}+J,\ A=A_0+iA_1,\ B=B_0+iB_1,\ J=j_1+ij_2, \ea or, in terms of the real component $u$ and the imaginary component $v$, the canonical form of an elliptic system of partial differential equations of the first order \ba &&u_x-v_y=au+bv+j_1,\ a=A_0+B_0,\ b=-A_1+B_1,\cr &&u_y+v_x=cu+dv+j_2,\ c=A_1+B_1,\ d=A_0-B_0, \ea or, in matrix form, \ba\label{EGPF} && D\psi=E\psi+J,\ D=\left( \begin{array}{cc} \partial_x &-\partial_y \\ \partial_y & \partial_x \\ \end{array} \right)=\partial_x-i\sigma_2\partial_y,\cr && E=\left( \begin{array}{cc} a & b \\ c & d \\ \end{array} \right),\ \psi=\left( \begin{array}{c} u \\ v \\ \end{array} \right),\ J=\left( \begin{array}{c} j_1 \\ j_2 \\ \end{array} \right). \ea In the classical sense, by a solution of the system of equations (\ref{EGPF}) we understand a pair of real continuously differentiable functions $u(x,y),\ v(x,y)$ of the real variables $x$ and $y$ which satisfy this system everywhere in a domain $G$. Such solutions, however, exist only for a comparatively narrow class of equations. The formal solution of the canonical equation (\ref{EGPF}) for GPF is \ba \psi=\psi_0+RJ,\ R=(D-E)^{-1},\ (D-E)\psi_0=0.
\ea Let us introduce a length parameter $l=h^{-1}$, of the order of the size of the source $J$, and rescale $x_n\rightarrow lx_n.$ Then, for the resolvent $R,$ we have the long-wave and short-wave expansions \ba &&R_{LW}:=(lD-E)^{-1}=-E^{-1}\sum_{n\geq0}l^n(DE^{-1})^n,\cr &&R_{ShW}:=(lD-E)^{-1}=hD^{-1}\sum_{n\geq0}h^n(ED^{-1})^n,\cr &&E^{-1}=\left( \begin{array}{cc} d & -b\\ -c & a \\ \end{array} \right)/\Delta_E,\ \Delta_E=ad-bc,\cr &&D^{-1}=\Delta_D^{-1}\left(\begin{array}{cc} \partial_x & \partial_y \\ -\partial_y & \partial_x \end{array} \right)=\Delta_D^{-1}(\partial_x+i\sigma_2\partial_y),\ \Delta_D=\partial_x^2+\partial_y^2. \ea There is a fairly complete theory of generalized analytic functions; it represents an essential extension of the classical theory while preserving its principal features \cite{Vekua}.
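In the special case $A=B=J=0$ the system (\ref{EGPF}) reduces to the classical Cauchy--Riemann equations $u_x-v_y=0,\ u_y+v_x=0$. The short Python sketch below verifies them numerically, with central finite differences, for the analytic example $f(z)=z^2$, i.e.\ $u=x^2-y^2$, $v=2xy$.

```python
# Cauchy-Riemann residuals u_x - v_y and u_y + v_x for f(z) = z^2.
def u(x, y): return x * x - y * y
def v(x, y): return 2.0 * x * y

def cr_residuals(x, y, h=1e-5):
    """Central-difference approximations to the two Cauchy-Riemann residuals."""
    ux = (u(x + h, y) - u(x - h, y)) / (2 * h)
    uy = (u(x, y + h) - u(x, y - h)) / (2 * h)
    vx = (v(x + h, y) - v(x - h, y)) / (2 * h)
    vy = (v(x, y + h) - v(x, y - h)) / (2 * h)
    return ux - vy, uy + vx
```

Both residuals vanish up to floating-point error, as they must for any analytic $f$.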
\section{Introduction} College sports organized by the National Collegiate Athletic Association (NCAA) are very popular in the United States, and the most popular sports include football and men's basketball. According to a survey conducted by Harris Interactive, NCAA inter-college competition attracts around 47 percent of Americans. In 2016, about 31 million of these followers attended a college sports event. This enormous number of followers indicates a big business behind the sport events, which generated 740 million U.S. dollars in revenue from television and marketing rights in 2016. Fans and media have paid a lot of attention to head coaches' salaries and to the coaching changes each year. However, the analysis has been limited to unofficial news comments and informal discussion; a quantitative analysis of head coach hiring is lacking. Most head coaches used to be excellent players on their college teams. Therefore, the defense/offense techniques they learned and became familiar with during their college years have an important impact on their later coaching careers. When a school $u$ hires a graduate of school $v$ as its head coach, $u$ implicitly makes a positive assessment of the quality of $v$'s sports program. By collecting these pairwise assessments, we built the coach hiring networks, and we use network-based methods to analyze them. The contributions of our work are as follows: 1. We find high inequality in the coach hiring networks, which means that most head coaches graduated from a small proportion of the schools. 2. Based on optimal modularity, we find geographic communities of schools; a graduate from one community is more likely to be hired as a head coach by a school in the same community.
3. Our coach production rankings show a general correlation with the authoritative Associated Press (AP) rankings, although some disparities do exist. 4. We find a common within-division flow pattern in the division-level movements of coaches. \section{Background} The research article of A. Clauset, S. Arbesman, and D.B. Larremore, Systematic inequality and hierarchy in faculty hiring networks~\cite{clauset2015systematic}, analyzed academic faculty hiring networks across three disciplines: computer science, history, and business. We are curious whether this kind of inequality and hierarchy also exists in sports coach hiring networks. The work of Andrew Fast and David Jensen~\cite{fast2006nfl} uses the NFL coaching network to identify notable coaches and to learn a model of which teams will make the playoffs in a given year. To identify notable coaches, their networks focused on the worked-under relationship between coaches. Although some papers have ranked sports teams based on their game results~\cite{park2005network,callaghan2007random}, none of them utilized the coach hiring network, which is an assessment network built from the views of professional sports experts. \section{Data Description} From the official site of the NCAA~\cite{CoachData}, we collected a list of the head coaches (including retired ones) of the NCAA men's basketball and football teams, together with their coaching career data, which include each coach's alma mater, graduation year, and the schools he worked for during his head coaching career. We consider only head coaches, because data for other positions, such as assistant coaches, are mostly incomplete, in the sense that the alma mater and graduation year are difficult to identify. We have also removed head coaches with a missing alma mater or graduation time.
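The construction of the hiring network described above can be sketched in a few lines of Python; the records below are invented placeholders, not actual NCAA data.

```python
from collections import Counter

# Each head-coach record contributes a directed edge alma_mater -> employer.
records = [
    ("School A", "School B"),
    ("School A", "School A"),  # a coach hired by his alma mater (self-loop)
    ("School C", "School B"),
]

edges = Counter(records)                     # multigraph: edge multiplicities
out_degree = Counter(v for v, _ in records)  # coach production
in_degree = Counter(u for _, u in records)   # coach hiring
self_loops = sum(1 for v, u in records if v == u)
```

Out-degree then measures coach production, in-degree coach hiring, and self-loops count coaches serving at their alma maters.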
\begin{table}[h] \centering \begin{tabular}{|p{2.3cm}|p{1.4cm}|p{1.4cm}|} \hline & Football & Basketball \\ \hline schools & 857 & 1214\\ \hline head coach & 5744 & 6906\\ \hline mean degree & 6.70 & 5.69\\ \hline self-loops & 18.35\% & 18.98\%\\ \hline mean hiring years & 6.33 & 7.03\\ \hline data period & 1880-2012 & 1888-2013\\ \hline \end{tabular} \caption{Coach hiring networks data summary} \label{table:NetData} \end{table} For each school $u$ that a coach has worked for, a directed edge is generated from the coach's alma mater $v$ to the school $u$. We then extract the schools connected by these edges. A brief summary of the network data is given in Table~\ref{table:NetData}. Almost one-fifth of the head coaches eventually get a chance to serve at their alma maters; this fraction is much higher than the one found in faculty hiring networks~\cite{clauset2015systematic}. Besides, we collected the division attribute of the schools~\cite{DivData} and the authoritative Associated Press (AP) rankings data for the two sports~\cite{APRankingData}. \section{Experimental Methods and Results} \subsection{Head Coach Production Inequality} We measure the inequality of the coach hiring networks since the two sports were introduced to colleges. Table~\ref{table:InequlData} summarizes the basic inequality measurements of our experiment.
\begin{table}[h] \centering \begin{tabular}{|p{2.3cm}|p{1.4cm}|p{1.4cm}|} \hline & Football & Basketball \\ \hline vertices & 857 & 1214\\ \hline edges & 5744 & 6906\\ \hline 50\% coach from & 14.24\% & 15.16\%\\ \hline Gini, $G(k_o)$ & 0.59 & 0.58\\ \hline Gini, $G(k_i)$ & 0.39 & 0.35\\ \hline $k_o/k_i>1$ & 33.96\%(291) & 33.20\%(403)\\ \hline \end{tabular} \caption{Measures of inequality in the coach hiring networks: percentage of schools required to cover 50\% of the head coaches; Gini coefficients of production (out-degree) and hiring (in-degree); percentage of schools that produced more coaches than they hired.} \label{table:InequlData} \end{table} The Gini coefficient is the most commonly used measure of inequality. A Gini coefficient of zero means perfect equality; at the opposite extreme, maximal inequality gives a Gini coefficient of one. Here we calculate the Gini coefficients of coach production (out-degree) and of coach ``consumption'' (in-degree) separately. We find that the Gini coefficients of coach production are close to $0.60$, which indicates a strong inequality; for comparison, the income-distribution Gini index of South Africa estimated by the World Bank in 2011 is $0.63$. Figure~\ref{fig:Lorenz} shows the Lorenz curve of coach production. From the curve, only around 15\% of the schools are needed to cover 50\% of the head coaches, which means that a small proportion of the schools have produced a large share of the head coaches of all NCAA members. \begin{figure}[htbp] \centering \includegraphics[height = 7cm, width = 8cm]{Lorenz.eps} \caption{Coach Production Lorenz Curve} \label{fig:Lorenz} \end{figure} \subsection{Community Structure of the Coach Hiring Networks} Community structure is an important property of complex networks. Here we use the modularity optimization algorithm~\citep{blondel2008fast} to detect communities in the coach hiring networks.
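The Gini coefficients reported in Table~\ref{table:InequlData} above can be computed from a degree sequence with the standard formula for sorted data; a minimal Python sketch:

```python
def gini(xs):
    """Gini coefficient of a sequence of non-negative values (e.g. out-degrees)."""
    xs = sorted(xs)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # G = 2 * sum_i i*x_i / (n * sum x) - (n + 1) / n, with i = 1..n over sorted x
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n
```

A uniform degree sequence gives $G=0$, while concentrating all degree on one node gives $G=(n-1)/n\to 1$.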
Both in the football and in the men's basketball coach hiring networks, the average modularity of the whole network is above $0.40$, which indicates a significant community structure~\cite{newman2004fast}. Both networks consist of 6 big communities, which include 97.8\% of the schools in the football network and 98.6\% of the schools in the men's basketball network. We visualize the networks in Figure~\ref{fig:FootballGeoGraph} and Figure~\ref{fig:BasketballGeoGraph}, in which we place each school according to its longitude and latitude and set the size of each node proportional to its coach production (out-degree). In the figures, the top 6 biggest communities are assigned specific colors (to present the figures at a proper size, we removed several schools located in Hawaii, Alaska, and Puerto Rico). From the figures, we find that the community structure is indeed influenced by geography. Moreover, the biggest community in each of the two networks is located in the northeastern part of the United States (in purple). \begin{figure*}[htbp] \centering \includegraphics[height = 9.6cm, width = 16.8cm]{1.pdf} \caption{American Football Coach Hiring Network} \label{fig:FootballGeoGraph} \end{figure*} \begin{figure*}[htbp] \centering \includegraphics[height = 9.6cm, width = 16.8cm]{2.pdf} \caption{American Men's Basketball Coach Hiring Network} \label{fig:BasketballGeoGraph} \end{figure*} \subsection{Correlation with the Authoritative Rankings} \subsubsection{Temporal characteristics of the coach hiring networks} Our data set includes head coach hiring data roughly from 1880 to 2010. Figure~\ref{fig:GraDensity} shows the counts of coaches by graduation year. The distribution of coach records over time is not even. In fact, the national headquarters of the NCAA was established in Kansas City, Missouri in 1952, so it is not surprising that there are many more head coach records after 1950.
\begin{figure}[htb] \centering \includegraphics[height = 6cm, width =8.6cm]{gradensity.eps} \caption{Counts of Coach in Graduating Year} \label{fig:GraDensity} \end{figure} \begin{figure}[htb] \centering \includegraphics[height = 6cm, width =8.6cm]{growtime.eps} \caption{Histogram of the time needed to become head coach} \label{fig:GrowTime} \end{figure} Figure~\ref{fig:GrowTime} shows the counts of coaches by the ``growing'' time needed from graduation to a head coach position. The average time needed is 11.5 years in basketball and 14.6 years in football, which means that it takes longer for a graduate to become a football head coach than a basketball head coach. It also illustrates the scarcity of head coaches who graduated after 2000, since most of the potential head coaches have not yet grown into the position. \subsubsection{Coach production rankings and authoritative rankings} We want to find out whether strong teams foster potential future head coaches. We therefore generate coach production rankings from our dataset and calculate the correlation coefficient between these rankings and the authoritative rankings over the years. Due to the temporal characteristics of the coach hiring records, we extract subnetworks from the whole coach hiring network which only include the coaches who graduated in a period $[t_s,t_e]$. Because the number of records per year is not evenly distributed, we enumerate $t_e$ from the latest graduation time to older ones and calculate the corresponding $t_s$ of each interval, ensuring that each interval contains exactly 30\% of the coaches. In this way we obtain 55 subnetworks for men's basketball and 62 subnetworks for football. Based on these subnetworks, we try 4 different network-based methods to rank the schools: out-degree, MVRs~\cite{clauset2015systematic}, PageRank~\cite{brin1998anatomy}, and LeaderRank~\cite{lu2011leaders}. The simplest one is based on the out-degree (coach production) of each school.
A minimum violation ranking (MVR) is a permutation $\pi$ that induces a minimum number of edges that point ``up'' the ranking~\cite{clauset2015systematic}. This method tries to produce a ranking that minimizes the number of coaches who move down the hierarchy of schools. PageRank is a widely used node ranking method. To apply this algorithm to our dataset, we reverse the direction of the edges; the reversed edges represent ``votes'' from the schools on which school's graduates are more welcome. More importantly, the algorithm also takes into account schools that produce very few coaches but send them to powerful schools with a high in-degree. LeaderRank is a more recent improvement of PageRank. The Pearson correlation coefficients between the four rankings are all above 0.84, which indicates that these rankings are highly similar to each other. Considering that PageRank can also find important schools with a low degree, we simply choose the traditional PageRank (PR) rankings as the coach production rankings. We choose the authoritative Associated Press (AP) Poll rankings to compare with our coach production rankings, because the poll voting system is likewise based on the subjective opinions of experts. In addition, the AP Poll has a long history, which makes it suitable for comparison with our temporal dataset. We aggregate the AP rankings in every 20-year window using median rank aggregation~\citep{fagin2003efficient}. First, we build a list of the schools that received votes during the 20 years. Then, for a given year of the 20-year period, an average rank $(m + 1 + n)/2$ is assigned to the schools that did not receive votes, where $m$ is the number of schools that received votes in that year and $n$ is the length of the school list. Finally, we aggregate the 20 yearly rankings into one ranking using median rank aggregation.
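The median rank aggregation step described above can be sketched as follows (Python; the toy yearly rankings are illustrative, not real AP data):

```python
from statistics import median

def aggregate(yearly_rankings, schools):
    """Aggregate yearly rankings (lists of ranked schools) by median rank.

    Schools that received no votes in a year get the average rank (m + 1 + n)/2,
    where m is the number of ranked schools that year and n = len(schools).
    """
    n = len(schools)
    scores = {}
    for s in schools:
        ranks = []
        for ranking in yearly_rankings:
            if s in ranking:
                ranks.append(ranking.index(s) + 1)
            else:
                m = len(ranking)
                ranks.append((m + 1 + n) / 2)
        scores[s] = median(ranks)
    return sorted(schools, key=lambda s: scores[s])
```

Each school's aggregated position is the median of its yearly ranks, which makes the aggregation robust to a few outlier years.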
\subsubsection{Results} We use Kendall's $\tau$~\citep{kendall1938new} to measure the correlation between the coach production rankings and the aggregated AP rankings. The Kendall correlation between two variables is high when observations have similar ranks. We calculate Kendall's $\tau$ between an aggregated AP ranking $X$ and $Y$, the corresponding coach production rank of each school in $X$. Figure~\ref{fig:FB_Corr} and Figure~\ref{fig:BK_Corr} show the correlation results. The color of each point in the grid represents the value of Kendall's $\tau$ between an aggregated AP ranking (the $x$ coordinate) and a coach production ranking (the $y$ coordinate). \begin{figure}[htb] \centering \includegraphics[height = 6.5cm, width =8.3cm]{FB_APPR_Corr.eps} \caption{Correlation Graph Between Aggregated Football AP and PR Rankings} \label{fig:FB_Corr} \end{figure} \begin{figure}[htb] \centering \includegraphics[height = 6.5cm, width =8.3cm]{BK_APPR_Corr.eps} \caption{Correlation Graph Between Aggregated Men's Basketball AP and PR Rankings} \label{fig:BK_Corr} \end{figure} We find that: 1. The aggregated AP rankings before 1960 and those around 1970--1980 are more correlated with the contemporary coach production rankings (with $\tau>0.5$). 2. In Figure~\ref{fig:FB_Corr}, before 1955, the aggregated AP rankings also show some correlation with the coach production rankings of more recent times, but this correlation gradually decreases over the years. This is probably because some strong teams of the old days, like Yale and the Ivy League, gradually became insulated from the national spotlight and finally moved down into I-AA (now the Football Championship Subdivision) starting with the 1982 season. 3. In 1973, the NCAA was divided into three legislative and competitive divisions: I, II, and III. Correspondingly, both in football and in men's basketball, there is an increased correlation between the contemporary aggregated AP rankings and the coach production rankings roughly after 1970.
\subsection{Division-level Movements} To show the movements between divisions, we use the data of coaches who graduated after 1973, when the NCAA was first divided into three divisions. Table~\ref{table:BK_Div} and Table~\ref{table:FB_Div} show the movements from the division of the coach's graduating school (the row) to that of the school he works for (the column). \begin{table}[htb] \centering \begin{tabular}{p{1cm}|p{1cm}p{1cm}p{1cm}|p{1cm}} & Div I & Div II & Div III & All \\ \hline Div I & \textbf{0.263} & 0.110 & 0.104 & 0.477\\ Div II & 0.062 & \textbf{0.105} & 0.038 & 0.205\\ Div III & 0.062 & 0.044 & \textbf{0.213} & 0.318\\ \hline All & 0.386 & 0.259 & 0.355 & \\ \end{tabular} \caption{In NCAA men's basketball, fraction of coaches who graduated from a school in one division (row) and were hired as head coach by a school in another division (column). Movements inside one division are highlighted in bold.} \label{table:BK_Div} \end{table} \begin{table}[htb] \centering \begin{tabular}{p{1cm}|p{0.6cm}p{0.6cm}p{0.8cm}p{0.9cm}|p{0.6cm}} & FBS & FCS & Div II & Div III & All \\ \hline FBS & \textbf{0.153} & 0.063 & 0.054 & 0.043 & 0.314\\ FCS & 0.040 & \textbf{0.078} & 0.046 & 0.033 & 0.198\\ Div II & 0.033 & 0.025 & \textbf{0.104} & 0.038 & 0.201\\ Div III & 0.019 & 0.026 & 0.041 & \textbf{0.200} & 0.286 \\ \hline All & 0.246 & 0.193 & 0.246 & 0.315 & \\ \end{tabular} \caption{In NCAA football, fraction of coaches who graduated from a school in one division (row) and were hired as head coach by a school in another division (column). Movements inside one division are highlighted in bold. Football Division I has two subdivisions, I-A and I-AA (renamed the Football Bowl Subdivision (FBS) and the Football Championship Subdivision (FCS) in 2006).} \label{table:FB_Div} \end{table} Here we also take into account the movements between the two subdivisions of Division I in football and the other divisions.
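The flow tables above are simply normalized contingency counts over (alma-mater division, employer division) pairs; a minimal Python sketch with an invented toy move list, not the actual NCAA records:

```python
from collections import Counter

# Each entry: (division of alma mater, division of hiring school).
moves = [("I", "I"), ("I", "II"), ("II", "II"), ("III", "III"), ("I", "I")]

counts = Counter(moves)
total = len(moves)
flow_frac = {pair: c / total for pair, c in counts.items()}       # table cells
within = sum(c for (a, b), c in counts.items() if a == b) / total  # diagonal mass
```

The `flow_frac` entries correspond to the table cells, and `within` is the total diagonal (within-division) fraction.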
Generally speaking, the FBS has more funding, more scholarships, and better sports facilities than the FCS. From the tables we find that: 1. The diagonal entries represent the fraction of coaches who graduated and were hired in the same division, and the fraction of within-division movements is greater than any other. 2. More coaches move downwards from Division I to Divisions II and III than move upwards. 3. Excluding the coaches working within their own division, more coaches move upwards to Division I than move to Division II or III. \section{Conclusion} In this paper, we collect a dataset containing the career and hiring data of NCAA men's basketball and football head coaches. Based on the dataset, we build coach hiring networks and use network-based methods to analyze them in four aspects: inequality, community structure, coach production rankings, and movements between divisions. The results reveal that: (1) the coach hiring market exhibits great inequality, meaning that most head coaches come from a small proportion of the NCAA members, which indicates an unequal distribution of U.S. sports education resources; (2) coaches prefer to stay in the same division and geographic region as their alma maters; (3) the coach production rankings are generally correlated with the authoritative rankings, which indicates that good teams are likely to foster future head coaches, although in specific time periods this correlation weakens, probably because of contemporary NCAA policies and social events. Our future directions include: 1. We have found hierarchical organization properties in our dataset similar to those in~\cite{ravasz2003hierarchical}; we could develop a proper temporal evolving network model to predict the coach hiring market.
2. To better explain our findings and uncover the mechanisms behind them, such as the inequality, a deeper understanding of the NCAA's history~\cite{smith2000brief} and of the contemporary related policies will probably be of help. \begin{raggedright}
\section{Introduction and Background} The classical braid groups $B_n$ on the plane were introduced by Artin \cite{name}. Geometrically, elements of such braid groups appear as a collection of $n$ paths emanating from a set of $n$ distinct points on the plane which wind around each other and return to some permutation of the original set of points. Braid groups play an important role in various areas of mathematics, including knot theory, representation theory, and the study of monodromy invariants in algebraic geometry. It is a well-known result of Artin \cite{name} that the group $B_n$ has the presentation \begin{equation} B_n\simeq\langle\sigma_1, \sigma_2, \ldots, \sigma_{n-1}|\sigma_i\sigma_{i+1}\sigma_i=\sigma_{i+1}\sigma_i\sigma_{i+1}, \sigma_i\sigma_j=\sigma_j\sigma_i\rangle \end{equation} where $1\leq i\leq n-2$ in the first group of relations and $|i-j|\geq 2$ in the second group of relations. The generators $\sigma_i$ correspond to ``transposition" braids which swap adjacent points.\\ \indent Zariski \cite{zar} later provided a natural generalization of these notions by considering braid groups on more general surfaces. Let us restrict our attention to $\Sigma_g$, the closed orientable surface of genus $g$. Let $\Sigma_g^n$ denote the $n$-fold cartesian product of $\Sigma_g$ with itself and let $F_n(\Sigma_g)$ denote the \textit{$n^{th}$ ordered configuration space} of $\Sigma_g$, i.e.\ the space $$ F_n(\Sigma_g)=\{(x_1, \ldots, x_n)\in\Sigma_g^n|x_i\neq x_j, \forall i\neq j\}. $$ \indent Note that the symmetric group $S_n$ acts freely on $F_n(\Sigma_g)$ by permuting coordinates. We define the \textit{$n^{th}$ configuration space} $C_n(\Sigma_g)$ of $\Sigma_g$ as the orbit space $C_n(\Sigma_g)=F_n(\Sigma_g)/S_n$. Note that $C_n(\Sigma_g)$ is a $2n$-manifold since $F_n(\Sigma_g)\subset\Sigma_g^n$ is an open subset and the permutation action of $S_n$ on $F_n(\Sigma_g)$ is free.
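The two defining relations of Artin's presentation can be sanity-checked in the symmetric group, where each generator $\sigma_i$ maps to the adjacent transposition $s_i=(i,\,i+1)$ under the permutation homomorphism $f:B_n\rightarrow S_n$ discussed below. The following minimal Python sketch (the helper names `transposition` and `compose` are ours, not from the text) verifies both families of relations for $n=5$; note that this only tests the quotient in $S_n$, not the braid group itself.

```python
# Sanity check of the Artin relations in the permutation image of B_n:
# sigma_i maps to the adjacent transposition s_i = (i, i+1), and both
# defining relations survive in S_n.

def transposition(n, i):
    """Return s_i = (i, i+1) in S_n, as a tuple acting on {0, ..., n-1}."""
    p = list(range(n))
    p[i], p[i + 1] = p[i + 1], p[i]
    return tuple(p)

def compose(p, q):
    """(p . q)(x) = p(q(x))."""
    return tuple(p[q[x]] for x in range(len(p)))

n = 5
s = [transposition(n, i) for i in range(n - 1)]

# braid relation: s_i s_{i+1} s_i = s_{i+1} s_i s_{i+1}
for i in range(n - 2):
    lhs = compose(s[i], compose(s[i + 1], s[i]))
    rhs = compose(s[i + 1], compose(s[i], s[i + 1]))
    assert lhs == rhs

# far commutation: s_i s_j = s_j s_i for |i - j| >= 2
for i in range(n - 1):
    for j in range(n - 1):
        if abs(i - j) >= 2:
            assert compose(s[i], s[j]) == compose(s[j], s[i])
```

Both assertions pass for every admissible pair of indices; in $B_n$ itself the $\sigma_i$ have infinite order, so the check above captures only the induced permutations.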
We define the \textit{$n^{th}$ braid group} $B_n(\Sigma_g)$ of $\Sigma_g$ as the fundamental group$$B_n(\Sigma_g)=\pi_1 (C_n(\Sigma_g), [x_1, \ldots, x_n])$$ where $[x_1, \ldots, x_n]\in C_n(\Sigma_g)$ is the unordered point determined by $(x_1, \ldots, x_n)\in F_n(\Sigma_g)$. Since we are working with connected surfaces, we usually leave the basepoint implicit in our notation. \begin{exmp} The classical braid group $B_n$ is the fundamental group of the $n$-fold configuration space $C_n(\mathbb{R}^2)$ of the plane. In fact, $C_n(\mathbb{R}^2)$ is an Eilenberg--MacLane space $K(B_n,1)$. \end{exmp} \begin{rem} If $n=1$, then $F_n(\Sigma_g)=\Sigma_g$, so $B_n(\Sigma_g)=\pi_1(\Sigma_g)$. Thus, braid groups on surfaces can be viewed as generalizations of their fundamental groups. \end{rem} Every element $[\sigma]\in B_n(\Sigma_g)$ induces a permutation of the elements of its basepoint. Thus, we obtain a surjective group homomorphism $f:B_n(\Sigma_g)\rightarrow S_n$ (which implies that $B_n(\Sigma_g)$ is nonabelian for $n\geq 3$). The subgroup $\mathrm{ker}(f)\subset B_n(\Sigma_g)$ is called the \textit{pure braid group} on $\Sigma_g$ and is denoted $P_n(\Sigma_g)$. By definition, $P_n(\Sigma_g)$ is a normal subgroup of index $n!$. Note also that $P_n(\Sigma_g)\simeq\pi_1 F_n(\Sigma_g)$. These groups fit into a canonical short exact sequence$$ 1\longrightarrow P_n(\Sigma_g)\longrightarrow B_n(\Sigma_g)\longrightarrow S_n\longrightarrow 1. $$ Let $p^i:F_n(\Sigma_g)\rightarrow\Sigma_g$ denote the projection onto the $i^{th}$ coordinate for $1\leq i\leq n$. Then we have induced maps $$p^i_*:P_n(\Sigma_g,(x_1, \ldots, x_n))\longrightarrow\pi_1(\Sigma_g, x_i) $$ for each $1\leq i\leq n$. The \textit{vertex loop} of $x_i$ induced by $[\sigma]\in P_n(\Sigma_g)$ is $p^i_*([\sigma])$. We denote this by $[\sigma]_{x_i}$. We study interactions between surface braid groups $B_n(\Sigma_g)$ and the singular homology groups of $\Sigma_g$.
In particular, we study a natural group homomorphism$$ \omega:B_n(\Sigma_g)\longrightarrow H_1(\Sigma_g; \mathbb{Z}) $$ which maps a braid on $\Sigma_g$ to the integral homology class of the formal sum of the individual paths (viewed as singular 1-simplices) it induces on each element of the basepoint. We show that $\mathrm{ker}(\omega)$ is generated by simple braids which arise from triangulations of $\Sigma_g$. Generally, $B_n(\Sigma_g)$ is a complicated object for arbitrary $g$, while the homology groups $H_1(\Sigma_g; \mathbb{Z})\simeq\mathbb{Z}^{2g}$ are well-understood. Thus, our results describe a useful and well-behaved relationship between braid groups on surfaces and homology groups. \section{Acknowledgments} I'd like to thank my mentor Gus Lonergan for his guidance in this project and Prof. Roman Bezrukavnikov for suggesting the project. I would also like to thank Dr. John Rickert and Dr. Tanya Khovanova for their advice on mathematical writing. Additionally, I thank Daniel Vitek for numerous helpful comments and revisions. Also, I thank the Research Science Institute, the Center for Excellence in Education, and the Massachusetts Institute of Technology for supporting this research. \section{Preliminary Constructions} \subsection{Construction of the Homomorphism $\omega$}\label{pre} Let $B_n(\Sigma_g)$\footnote{$\omega$ can be constructed in the same way for braid groups on general topological spaces.} be based at $[x_1, \ldots, x_n]\in C_n(\Sigma_g)$. We fix this basepoint throughout the paper. Let $[\phi]\in B_n(\Sigma_g)$ be a braid and let $\pi:F_n(\Sigma_g)\rightarrow C_n(\Sigma_g)$ denote the quotient map. Let $(x_1, \ldots, x_n)$ be an element in the fiber of $[x_1, \ldots, x_n]$ under $\pi$. The map $\pi$ is an $S_n$-cover, so we can lift $\phi$ to a unique path $\tilde{\phi}:[0,1]\rightarrow F_n(\Sigma_g)$ with $\tilde{\phi}(0)=(x_1, \ldots, x_n)$. \begin{prop}\label{zero} Let $[\phi]\in B_n(\Sigma_g)$.
Then $\sum_{i=1}^n(p_i\circ\tilde{\phi})\in\mathrm{ker}(\partial_1)$, where $\partial_1$ denotes the boundary operator. \end{prop} \begin{proof} We have that \begin{equation} \partial_1(\sum_{i=1}^n(p_i\circ\tilde{\phi}))=\sum_{i=1}^n\partial_1(p_i\circ\tilde{\phi}) \end{equation} \begin{equation} =\sum_{i=1}^n(p_i\circ\tilde{\phi})|_{\{1\}}-\sum_{i=1}^n(p_i\circ\tilde{\phi})|_{\{0\}} \end{equation} By definition of $C_n(\Sigma_g)$, the $n$-tuple $(p_1\circ\tilde{\phi}|_{\{1\}}, \ldots, p_n\circ\tilde{\phi}|_{\{1\}})$ is a permutation of $(p_1\circ\tilde{\phi}|_{\{0\}}, \ldots,p_n\circ\tilde{\phi}|_{\{0\}})$, so $\sum_{i=1}^n(p_i\circ\tilde{\phi})|_{\{1\}}=\sum_{i=1}^n(p_i\circ\tilde{\phi})|_{\{0\}}$ and thus $\partial_1(\sum_{i=1}^n(p_i\circ\tilde{\phi}))$ vanishes. \end{proof} Thus, $\sum_{i=1}^n(p_i\circ\tilde{\phi})$ is a singular 1-cycle and hence yields a homology class, so that we can define a function $\omega:B_n(\Sigma_g) \rightarrow H_1(\Sigma_g; \mathbb{Z})$ sending $[\phi]\mapsto [\sum_{i=1}^n(p_i\circ\tilde{\phi})]$. \begin{prop} The function $\omega$ is a well-defined group homomorphism. \end{prop} \begin{proof} That $\omega$ is a group homomorphism follows from a straightforward computation, so we'll just show that it is well-defined. This is a consequence of the fact that $\pi$ is a covering map. Explicitly, suppose that $\phi_1,\phi_2:[0,1]\rightarrow C_n(\Sigma_g)$ are homotopic loops via $h:[0,1]\times[0,1]\rightarrow C_n(\Sigma_g)$ rel $\{0,1\}$. Fix unique lifts $\tilde{\phi}_1, \tilde{\phi}_2:[0,1]\rightarrow F_n(\Sigma_g)$ with $\tilde{\phi}_1(0)=\tilde{\phi}_2(0)=(x_1, \ldots, x_n)$. We need to show that $\sum_{i=1}^n(p_i\circ\tilde{\phi}_1-p_i\circ\tilde{\phi}_2)\in\mathrm{im}(\partial_2)$. By the Homotopy Lifting Property, we can lift $h$ to a homotopy $\tilde{h}:[0,1]\times[0,1]\rightarrow F_n(\Sigma_g)$ rel $\{0,1\}$ between $\tilde{\phi}_1$ and $\tilde{\phi}_2$. 
It follows that $p_i\circ\tilde{\phi}_1$ and $p_i\circ\tilde{\phi}_2$ are homotopic, so that $p_i\circ\tilde{\phi}_1-p_i\circ\tilde{\phi}_2\in\mathrm{im}(\partial_2)$, whence the result. \end{proof} We call $\omega$ the \textit{total winding number map} and refer to elements $[\sigma]\in\mathrm{ker}(\omega)$ as \textit{balanced braids} (See Figure \ref{balance}). Elements in $P_n(\Sigma_g)\cap\mathrm{ker}(\omega)$ are referred to as \textit{pure balanced braids}. So, balanced braids are those braids whose individual strands ``wind" around with orientations that cancel. \begin{figure}\label{balance} \centering \begin{tikzpicture} \draw[gray, ultra thick] (0,0) -- (3,0) -- (3,3) -- cycle; \draw[gray, ultra thick] (0,0) -- (0,3) -- (3,3) -- cycle; \draw[gray, ultra thick] (3,0) -- (6,0) -- (6,3) -- cycle; \draw[gray, ultra thick] (3,0) -- (3,3) -- (6,3) -- cycle; \draw[gray, ultra thick] (3,0) -- (0,-3) -- (3,-3) -- cycle; \draw[gray, ultra thick] (0,0) -- (0,-3) -- (3,0) -- cycle; \draw[gray, ultra thick] (3,0) -- (3,-3) -- (6,0) -- cycle; \draw[gray, ultra thick] (6,0) -- (3,-3) -- (6,-3) -- cycle; \draw[gray, ultra thick] (6,0) -- (9,0) -- (9,3) -- cycle; \draw[gray, ultra thick] (6,0) -- (6,3) -- (9,3) -- cycle; \draw[gray, ultra thick] (6,0) -- (6,-3) -- (9,0) -- cycle; \draw[gray, ultra thick] (9,0) -- (6,-3) -- (9,-3) -- cycle; \draw[gray, ultra thick] (0,-3) -- (3,-3) -- (0,-6) -- cycle; \draw[gray, ultra thick] (3,-3) -- (0,-6) -- (3,-6) -- cycle; \draw[gray, ultra thick] (3,-3) -- (3,-6) -- (6,-3) -- cycle; \draw[gray, ultra thick] (6,-3) -- (6,-6) -- (3,-6) -- cycle; \draw[gray, ultra thick] (6,-3) -- (9,-3) -- (6,-6) -- cycle; \draw[gray, ultra thick] (9,-3) -- (9,-6) -- (6,-6) -- cycle; \draw[->] (0,0) .. controls (3,2.5) .. (5.8, 2.9); \draw[->] (6,3) .. controls (5,1) .. (3.2, 0.1); \draw[->] (3,0) .. controls (1.5,-1) .. 
(0.1, -0.1); \end{tikzpicture} \caption{Balanced Braid on a Triangulation of the Torus $\Sigma_1$ (Viewed on its fundamental polygon)} \end{figure} \begin{comment} \begin{rem}\label{hhhh} For any $n\in\mathbb{N}$, the following diagram commutes \begin{displaymath}\xymatrixcolsep{1.0cm}\xymatrixrowsep{1.0cm} \xymatrix{ B_n(\Sigma_g)\ar[d] \ar[r]^{\omega} & H_1(\Sigma_g;\mathbb{Z}) \\ B_n(\Sigma_g)/[B_n(\Sigma_g),B_n(\Sigma_g)]\ar[r]^{\hspace{6mm}\simeq} & H_1(C_n(\Sigma_g);\mathbb{Z})\ar[u]} \end{displaymath} in which the left vertical map is the canonical quotient map, the bottom map is the Hurewicz isomorphism, and the right vertical map sends $[\sum c_j\gamma_j]\mapsto\sum_{i=1}^np^i_*([\sum c_j\gamma_j])$. \end{rem} \end{comment} Let $j\in\{1, \ldots, n\}$ and let $B_j(\Sigma_g)$ be based at $[x_1, \ldots, x_j]\in C_n(\Sigma_g)$. For each such $j$, there is a natural group homomorphism $$B_j(\Sigma_g)\longrightarrow B_n(\Sigma_g)$$ given by ``adding constant strands" to the points $x_{j+1}, \ldots, x_n$. In particular, for $j=1$, we obtain a homomorphism $$\upsilon:\pi_1(\Sigma_g)\longrightarrow B_n(\Sigma_g)$$ \begin{comment} Fix some $i\in\{1, 2, \ldots, n\}$ and let $x_1,\ldots, x_n$ be a set of $n$ distinct points in $\Sigma_g$. For each $n\in\mathbb{N}$, define a map $u_i:\Sigma_g\rightarrow F_n(\Sigma_g)$ sending $x\mapsto(x_1, \ldots x, \ldots, x_n)$, where $x$ replaces $x_i$. Post-composing with the quotient map $\pi$ gives a continuous map $\pi\circ u_i:\Sigma_g\rightarrow C_n(\Sigma_g)$. 
Note that $\pi\circ u_i$ can be regarded as a map of pointed spaces $(\Sigma_g,x_i)\rightarrow (C_n(\Sigma_g),[x_1, \ldots, x_n])$, so there is an induced group homomorphism $$\alpha_i=(\pi\circ u_i)_*:\pi_1(\Sigma_g)\longrightarrow B_n(\Sigma_g)$$ \end{comment} which fits into a commutative triangle of groups \begin{displaymath} \xymatrix{ B_n(\Sigma_g) \ar[r]^{\omega} & H_1(\Sigma_g;\mathbb{Z}) \\ \pi_1(\Sigma_g) \ar[u]^{\upsilon}\ar[ur]_{\Phi} } \end{displaymath} where $\Phi$ denotes the Hurewicz homomorphism. Since $\Phi$ is surjective, we immediately see that $\omega$ is surjective as well. \begin{comment} \begin{prop} $\omega$ is surjective \end{prop} \begin{proof} Choose some $x_j\in\{x_1, \ldots, x_n\}$, where $[x_1, \ldots, x_n]$ is the basepoint of $B_n(\Sigma_g)$. Let $\pi_1(\Sigma_g)$ be based at $x_j$. The Hurewicz map $\Phi$ factors as a composition \begin{displaymath} \xymatrix{ B_n(\Sigma_g) \ar[r]^{\omega} & H_1(\Sigma_g;\mathbb{Z}) \\ \pi_1(\Sigma_g) \ar[u]^{i}\ar[ur]_{\Phi} } \end{displaymath} where $i$ maps $[\phi]\in\pi_1(\Sigma_g)$ to the unique braid whose only nontrivial vertex loop is at $x_j$ and is $[\phi]$ and $i([\phi])_{x_{j}}$ has trivial winding number around all points $x_i\in\{x_1, \ldots, x_n\}$ with $i\neq j$ (so that if $([\phi])_{x_{j}}$ is non-trivial, then $i([\phi])_{x_{j}}\notin\mathrm{ker}(u)$, where $$u:\pi_1(\Sigma_g-\{x_1, \ldots, x_{j-1}, x_{j+1}, \ldots, x_n\})\longrightarrow\pi_1(\Sigma_g)$$ is the natural map). The surjectivity of $\omega$ follows from the fact that $\Phi$ is surjective since $\Sigma_g$ is connected\footnote{By the Hurewicz theorem, the same proof works if $\Sigma_g$ is replaced with any connected topological space.}. \end{proof} \end{comment} \subsection{Braids from Triangulations} Let $K$ be a finite simplicial complex with geometric realization $|K|$ and let $\theta:|K|\xrightarrow{\simeq}\Sigma_g$ be an arbitrary triangulation of $\Sigma_g$. We fix this triangulation throughout the paper. 
We will identify $|K|$ with its image in $\Sigma_g$ under $\theta$, so that in particular, a ``vertex" in the triangulation of $\Sigma_g$ refers to a 0-simplex of $K$. Let the basepoint of $B_n(\Sigma_g)$ be the vertex set $K_0$ of our triangulation. Fix some directed edge $e=(v_0, v_1)$ in the triangulation of $\Sigma_g$ which is a 1-face of two 2-simplices, say $(v_0, v_1, v_2)$ and $(v_0, v_1, v_3)$. We can obtain a braid by rotating $v_0$ and $v_1$ clockwise around $e$ until $v_0$ and $v_1$ have swapped positions whilst remaining in the interior of $(v_0, v_1, v_2)\cup(v_0, v_1, v_3)$ (see Figure \ref{edge}). All the strands starting at points in $K_0 -\{v_0, v_1\}$ remain constant. \begin{figure}\label{edge} \centering \begin{tikzpicture} \draw[gray, ultra thick] (0,0) -- (3,0) -- (3,3) -- cycle; \draw[gray, ultra thick] (0,0) -- (0,3) -- (3,3) -- cycle; \draw[->] (0,0) .. controls (0.5,2.5) .. (2.8, 2.9); \draw[->] (3,3) .. controls (2.5,0.5) .. (0.2, 0.1); \end{tikzpicture} \caption{Edge Braid on a Local Piece of a Triangulation} \end{figure} Braids constructed in this fashion are called \textit{edge braids}. Every edge $e=(v_0, v_1)$ in $\Sigma_g$ yields two mutually inverse edge braids, which we denote by $b_e$ (the ``clockwise" edge braid) and $b_e^{-1}$ (the ``counter-clockwise" edge braid). Let $E_n^{\theta}(\Sigma_g)$ denote the subgroup of $B_n(\Sigma_g)$ generated by the edge braids corresponding to the triangulation $\theta$. If the context is clear, we will write $E_n(\Sigma_g)$ instead of $E_n^{\theta}(\Sigma_g)$. We refer to elements of $E_n(\Sigma_g)$ as \textit{quasi-edge braids}. Note that each edge braid vanishes under $\omega$, so that there is an inclusion $E_n(\Sigma_g)\subset\mathrm{ker}(\omega)$.\\ \indent An \textit{edge path of length $k$} in $K$ is a concatenation of directed edges $\lambda=e_1 \ast e_2\ast\ldots\ast e_k$ in the triangulation of $\Sigma_g$ such that the target of $e_i$ is the source of $e_{i+1}$ for $1\leq i\leq k-1$. 
The path $\lambda$ is called an \textit{edge loop} if the target of $e_k$ is the source of $e_1$. We will abuse terminology by referring to edge paths in $K$ as edge paths in $\Sigma_g$. An edge path/loop is \textit{simple} if it is non-self-intersecting. A ``vertex" in $\lambda$ refers to a vertex of one of the edges contained in $\lambda$.\\ Recall that for any vertex $v$ of $K$ there are isomorphisms $E(K,v)\simeq\pi_1(|K|,v)\simeq\pi_1(\Sigma_g,v)$, where $E(K,v)$ denotes the \textit{edge path group} of $K$. Thus, we can naturally consider (edge-equivalence classes of) edge loops in the triangulation of $\Sigma_g$ based at $v$ as (homotopy classes of) loops also based at $v$. \section{Main Results} Our main result is Theorem \ref{theorem}, which, given any triangulation of $\Sigma_g$, gives a characterization of $\mathrm{ker}(\omega)$ using edge braids. \begin{thm}\label{theorem} Let $\Sigma_g$ be equipped with an arbitrary simplicial triangulation $|K|\xrightarrow{\simeq}\Sigma_g$ and let $n=\# K_0$. Let the surface braid group $B_n(\Sigma_g)$ have basepoint $K_0$. Then the kernel of the total winding number map $\omega:B_n(\Sigma_g)\rightarrow H_1(\Sigma_g; \mathbb{Z})$ is precisely $E_n(\Sigma_g)$. \end{thm} Since $\omega$ is surjective, Theorem \ref{theorem} implies that any triangulation of $\Sigma_g$ induces a short exact sequence of groups $$ 1\longrightarrow E_n(\Sigma_g)\longrightarrow B_n(\Sigma_g)\longrightarrow H_1(\Sigma_g;\mathbb{Z})\longrightarrow 0. $$ \begin{cor} \textit{$E_n(\Sigma_g)$ contains the commutator subgroup $[B_n(\Sigma_g),B_n(\Sigma_g)]$.} \end{cor} \begin{proof} Theorem \ref{theorem} supplies an isomorphism of groups $H_1(\Sigma_g;\mathbb{Z})\simeq B_n(\Sigma_g)/E_n(\Sigma_g)$. Since $H_1(\Sigma_g;\mathbb{Z})$ is abelian, we have that $[B_n(\Sigma_g),B_n(\Sigma_g)]\subset E_n(\Sigma_g)$.
\end{proof} \begin{rem} By Heawood's bounds [3], the number of vertices $n$ of the simplicial complex $K$ used in any triangulation of $\Sigma_g$ for $g\neq 2$ must satisfy \begin{equation} n\geq \frac{7+\sqrt{49-24\chi(\Sigma_g)}}{2}=\frac{7+\sqrt{1+48g}}{2} \end{equation} where $\chi(\Sigma_g)=2-2g$ denotes the Euler characteristic of $\Sigma_g$. In particular, Theorem \ref{theorem} only applies when $n$ is sufficiently large relative to the genus $g$; for $g=1$, for instance, the bound gives $n\geq 7$, which is attained by the $7$-vertex M\"obius triangulation of the torus. \end{rem} The remainder of this paper is dedicated to proving Theorem \ref{theorem}. The genus $g=0$ case is well-known; we provide a proof for completeness. \subsection{Proof of Theorem \ref{theorem} for $g=0$}\label{prooooof} Since $H_1(\Sigma_0;\mathbb{Z})\simeq0$, the genus $0$ case is the statement that $B_n(\Sigma_0)=E_n(\Sigma_0)$ given any triangulation of $\Sigma_0$, where $n$ is the number of vertices in the triangulation. Recall that $B_n(\Sigma_0)$ is generated by the ``transposition" braids $\sigma_i$ for $1\leq i\leq n-1$ analogous to the generators of Artin's braid group $B_n$. Thus, it suffices to show that each $\sigma_i$ is a quasi-edge braid. Let $x_i$ and $x_{i+1}$ be the elements of the basepoint of $B_n(\Sigma_0)$ that $\sigma_i$ swaps. Fix a simple edge path $\delta=e_{i_1}\ast\ldots\ast e_{i_n}$ from $x_i$ to $x_{i+1}$. Then the quasi-edge braid \begin{equation}\label{qdelta} q_{\delta}=b_{e_{i_1}}b_{e_{i_2}}\ldots b_{e_{i_n}}b_{e_{i_{n-1}}}^{-1}b_{e_{i_{n-2}}}^{-1}\ldots b_{e_{i_1}}^{-1} \end{equation} is equal to $\sigma_i$ (it swaps $x_{i}$ and $x_{i+1}$ while leaving all other elements of the basepoint fixed), so we're done.\\ For any edge path $p$ in the triangulation of $\Sigma_g$, $q_{p}$ denotes the quasi-edge braid constructed in the manner of Equation (5).
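Under the permutation homomorphism $f$, the edge braid $b_e$ of an edge $e=(u,v)$ maps to the transposition swapping $u$ and $v$, so the permutation induced by $q_\delta$ can be computed mechanically. A minimal Python sketch (the helper names are ours, not from the text) checks that the palindrome word of Equation (5), built along the edge path $0$--$1$--$2$--$3$, induces exactly the transposition of the path's two endpoints:

```python
# f(b_e) is the transposition of the endpoints of e; the quasi-edge braid
# q_delta = b_{e_1}...b_{e_k} b_{e_{k-1}}^{-1}...b_{e_1}^{-1} of Equation (5)
# should therefore map under f to the transposition of the endpoints of delta.

def transposition(n, a, b):
    p = list(range(n))
    p[a], p[b] = p[b], p[a]
    return tuple(p)

def compose(p, q):
    """(p . q)(x) = p(q(x))."""
    return tuple(p[q[x]] for x in range(len(p)))

n = 5
path = [0, 1, 2, 3]                 # vertices of the simple edge path delta
edges = list(zip(path, path[1:]))   # e_1 = (0,1), e_2 = (1,2), e_3 = (2,3)

# transpositions are involutions, so f(q_delta) is the palindrome product
word = edges + edges[-2::-1]        # e_1 e_2 e_3 e_2 e_1
q = tuple(range(n))
for (a, b) in word:
    q = compose(q, transposition(n, a, b))

assert q == transposition(n, 0, 3)  # swaps only the endpoints of delta
```

This is only the shadow of the braid computation in $S_n$; the full claim that $q_\delta=\sigma_i$ in $B_n(\Sigma_0)$ also tracks the homotopy classes of the strands.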
\begin{exmp} The most basic example of the genus $0$ case of Theorem \ref{theorem} is when the triangulation is the canonical homeomorphism $\partial(\Delta^3)\simeq\Sigma_0$, where $\partial(\Delta^3)$ denotes the boundary of the standard 3-simplex. By the assumptions of the theorem, the elements of the basepoint of $B_4(\Sigma_0)$ are the four vertices of $\Delta^3$, all of which are pairwise adjacent. Hence, each generator of $B_4(\Sigma_0)$ is actually an edge braid. \end{exmp} \subsection{Reduction of Theorem \ref{theorem} to Pure Balanced Braids on $\Sigma_g$} We start with the following observation. \begin{prop}\label{surj} The restriction $f|_{E_n(\Sigma_g)}:E_n(\Sigma_g)\rightarrow S_n$ is surjective. \end{prop} \begin{proof} It suffices to show each transposition $s$ in $S_n$ is hit by $f|_{E_n(\Sigma_g)}$. Let $[x_1, \ldots, x_n]$ be the basepoint of $B_n(\Sigma_g)$. The proof is the same as in the $g=0$ case. Explicitly, let $s$ swap $i$ and $j$, where we assume without loss of generality that $1\leq i< j\leq n$. Fix a simple edge path $\lambda$ from $x_i$ to $x_j$. Then $f(q_{\lambda})=s$, so we're done. \end{proof} Let $l:S_n\longrightarrow\mathbb{Z}_{\geq 0}$ denote the \textit{length function} of $S_n$ relative to the generating transpositions $s_i\in S_n$ of the usual Coxeter presentation. $l(\gamma)$ is defined to be the minimum number of transpositions required to express the permutation $\gamma\in S_n$. The following fact is now an easy consequence of Proposition \ref{surj}. \begin{prop}\label{gen} Every element in $B_n(\Sigma_g)$ can be written as a product of pure braids and edge braids. \end{prop} \begin{proof} Define a function $$l_{\star}:B_n(\Sigma_g)\longrightarrow\mathbb{Z}_{\geq 0}$$ $$[\sigma]\mapsto l(f([\sigma])).$$ This is clearly well-defined. We induct on $l_{\star}([\sigma])$. The base case $l_{\star}([\sigma])=0$ is clear since $[\sigma]$ must be a pure braid.
Suppose the result holds for all $[\sigma]$ with $l_{\star}([\sigma])=k$. To complete the inductive step, it suffices to show that any braid $[\sigma]$ of length $k+1$ can be multiplied by some quasi-edge braid such that the resulting braid has length $k$. Write $f([\sigma])=s_{i_1} s_{i_2}\ldots s_{i_{k+1}}$ for transpositions $s_{i_j}\in S_n$, $1\leq j\leq k+1$. By Proposition \ref{surj}, we may choose some quasi-edge braid $[\mu]$ such that $f([\mu])=s_{i_{k+1}}^{-1}$. Thus, $l_{\star}([\sigma\mu])=k$, which completes the proof. \end{proof} By Proposition \ref{gen}, it is sufficient to show that the subgroup $\mathrm{ker}(\omega|_{P_n(\Sigma_g)})=P_n(\Sigma_g)\cap\mathrm{ker}(\omega)$ of $\mathrm{ker}(\omega)$ is generated by edge braids in order to deduce Theorem \ref{theorem}. This is a useful reduction since we may think of pure braids as collections of homotopy classes of loops on $\Sigma_g$. \begin{comment} \subsection{Pure Braid Invariants} By Proposition \ref{gen}, it suffices to show that the edge braids generate $P_n(\Sigma_g)\cap \mathrm{ker}(\omega)$. We construct an invariant of pure braids which we use to reduce \ref{theorem} to studying a smaller subgroup of $P_n(\Sigma_g)\cap \mathrm{ker}(\omega)$. Let $[\varphi]\in P_n(\Sigma_g)$ and fix the canonical basis $\{[e_1], [e_2], \ldots, [e_{2g}]\}$ of $H_1(\Sigma_g; \mathbb{Z})\simeq\mathbb{Z}^{2g}$. \begin{lem} Each representative $e_{i}$ of any basis class of of $H_1(\Sigma_g; \mathbb{Z})$ is homotopic to some edge loop $e^{\star}_i$ in $K$ that passes through any given vertex $v\in K$. \end{lem} We have a natural function $$\tau:H_1(\Sigma_g; \mathbb{Z})\rightarrow\mathbb{Z}_{\geq 0}$$ sending $c_1[e_1]+\ldots +c_{2g}[e_{2g}]\mapsto |c_1|+\ldots+|c_{2g}|$. Let $\Phi$ denote the Hurewicz homomorphism $\Phi:\pi_1(\Sigma_g)\rightarrow H_1(\Sigma_g; \mathbb{Z})$ and let $p_i:F_n(\Sigma_g)\rightarrow\Sigma_g$ denote the projection onto the $i^{th}$ coordinate. 
Now define the function $$\lambda:P_n(\Sigma_g)\rightarrow\mathbb{Z}_{\geq 0}$$ $$[\varphi]\mapsto\sum_{i=1}^n (\tau\circ\Phi\circ p_i)[\varphi]$$ We refer to the value of $\lambda[\sigma]$ as the \textit{complexity} of $[\sigma]$. We note that $\lambda[\sigma]=0$ if and only if each vertex loop (viewed under $\Phi$ as elements in $H_1(\Sigma_g; \mathbb{Z})$) induced by $[sigma]$ is null-homologous, which in turn is equivalent to the vertex loops being null-homotopic when $\pi_1(\Sigma_g)$ is abelian. We state the following lemmae without proof. \begin{lem} Fix a basis class $e_i\in H_1(\Sigma_g; \mathbb{Z})$ and choose two vertices $x_0, x_1\in K$. Then there exists some representative $e_i'$ of $[e_i]$ such that $e_i'$ is an edge loop that passes through $x_0$ and $x_1$. \end{lem} \end{comment} \subsection{Some Properties of $E_n(\Sigma_g)$}\label{braids} We prove some results concerning which braids on $\Sigma_g$ are quasi-edge braids and describe some relations that the edge braids satisfy. \subsubsection{Conjugation action of $E_n(\Sigma_g)$}\label{conjj} We briefly describe a property of the conjugation action of certain quasi-edge braids which is relevant to the proof of Lemma \ref{super}. In particular, conjugation by certain quasi-edge braids has a useful property when the pure braid being conjugated has exactly one non-trivial vertex loop. Let $\varphi:B_n(\Sigma_g)\rightarrow\mathrm{Aut}P_n(\Sigma_g)$ denote the conjugation homomorphism and let $[\gamma]\in P_n(\Sigma_g)$ be a pure braid with exactly one non-trivial vertex loop, say $[\gamma]_{x_i}$ for some $i\in\{1, \ldots, n\}$. \\ \indent Let $x_j$ be a vertex adjacent to $x_i$ in the triangulation of $\Sigma_g$ and let $e$ be an edge connecting them. By a direct computation, we see that the braid $b_{e}[\gamma]b_{e}^{-1}$ still has exactly one non-trivial vertex loop, except that it is located at $x_j$ instead of $x_i$, so that conjugating by $b_{e}$ ``moves" the loop at $x_i$ to $x_j$. 
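The ``moving'' behavior just described can be modeled at the bookkeeping level: record only the vertex at which each non-trivial vertex loop sits, and let conjugation by an edge braid relabel that vertex by the induced transposition. The following toy Python sketch (the function `conjugate_loops` is ours, and it deliberately ignores the basepoint-change conjugation of the loop itself) illustrates the discussion above:

```python
# Toy model of the conjugation action: a pure braid with one non-trivial
# vertex loop at x_i, conjugated by the edge braid of e = (x_i, x_j),
# again has one non-trivial vertex loop, now based at x_j. We track only
# the location of each loop, not its homotopy class.

def conjugate_loops(loops, swap):
    """Relocate vertex loops under conjugation by a braid inducing `swap`."""
    i, j = swap
    perm = {i: j, j: i}
    return {perm.get(v, v): w for v, w in loops.items()}

loops = {2: "gamma"}                    # one non-trivial loop, at vertex 2
moved = conjugate_loops(loops, (2, 5))  # conjugate by the edge braid of (2,5)
assert moved == {5: "gamma"}            # the loop now sits at vertex 5

# iterating along a simple edge path 2 -> 5 -> 7 models conjugation by q_eta
assert conjugate_loops(moved, (5, 7)) == {7: "gamma"}
```

Iterating the relabeling along a simple edge path is exactly how conjugation by $q_\eta$ transports a single vertex loop from $x_i$ to $x_j$ in the argument below.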
Suppose that $x_i$ and $x_j$ are vertices in the triangulation that are not necessarily adjacent. Fix a simple edge path $\eta$ from $x_i$ to $x_j$. Extrapolating from the above case, we see that $[\gamma]$ conjugated by the quasi-edge braid $q_\eta$ (constructed in the fashion of Equation (5) of Section \ref{prooooof}) has exactly one non-trivial vertex loop located at $x_j$.\\ \indent For any braid $[\alpha]\in B_n(\Sigma_g)$, let $C_{[\alpha]}$ denote the subset of the basepoint $\{x_1, \ldots, x_n\}$ consisting of the points whose induced vertex loops are trivial. Since $[\gamma]$ has only one non-trivial vertex loop, it can naturally be regarded as an element of $\pi_1(\Sigma_g-C_{[\gamma]},x_i)$. We will not make a distinction between such braids and elements of $\pi_1(\Sigma_g-C_{[\gamma]},x_i)$ for the rest of the paper. Similarly, $q_\eta[\gamma]q_\eta^{-1}$ can be seen as an element of $\pi_1(\Sigma_g-C_{{q_{\eta}[\gamma]q_{\eta}^{-1}}},x_j)$. The above discussion can be rephrased via the following proposition. \begin{prop} Let $x_i$ and $x_j$ be vertices in the triangulation of $\Sigma_g$ and let $\lambda$ be a simple edge path from $x_i$ to $x_j$. Then the conjugation map $\varphi(q_{\lambda})\in\mathrm{Aut}\,P_n(\Sigma_g)$ restricts to an isomorphism $$\pi_1(\Sigma_g-C_{[\gamma]},x_i)\xrightarrow{\simeq}\pi_1(\Sigma_g-C_{q_{\lambda}[\gamma]q_{\lambda}^{-1}},x_j)$$ \end{prop} \subsubsection{Quasi-Edge Braid Constructions and Relations} \begin{lem}\label{super} Let $\Lambda=e_1\ast \ldots\ast e_k$ be a simple edge loop in $\Sigma_g$ and fix two vertices $v_i, v_j$ contained in $\Lambda$. Then there exists a quasi-edge braid $w(v_i,v_j)$ such that the vertex loop $w_{v_i}$ is homotopic to $\Lambda$ and $w_{v_j}^{-1}$ is homotopic to $\Lambda$. \end{lem} \begin{proof} For ease of notation, the indices of the vertices and edges in $\Lambda$ will be taken modulo $k$ (i.e.\ $v_k=v_0$).
Let $e_i=(v_i, v_{i+1})$ and let $\Omega$ denote the quasi-edge braid \begin{equation} \Omega=b_{e_i}b_{e_{i+1}}\ldots b_{e_{i-2}}b_{e_{i-1}}. \end{equation} It is clear that $\Omega_{v_i}$ is homotopic to $\Lambda$. Then $\widehat{\Omega}=\Omega b_{e_{i-1}}b_{e_{i-2}}\ldots b_{e_{i+2}}b_{e_{i+1}}$ is such that $\widehat{\Omega}_{v_{i+1}}^{-1}$ is also homotopic to $\Lambda$. Furthermore, the only non-trivial vertex loops of $\widehat{\Omega}$ are located at $v_i$ and $v_{i+1}$. Let $\mu$ be the unique edge path from $v_{i+1}$ to $v_j$ that does not contain $v_i$ and is a subset of $\Lambda$. Then $w(v_i, v_j)=q_{\mu}\widehat{\Omega}q_{\mu}^{-1}$ gives the desired braid via the discussion in Section \ref{conjj}. \end{proof} \begin{lem}\label{cool} Let $\lambda =e_1\ast \ldots\ast e_k$ be a simple edge path from $v_1$ to $v_{k+1}$, where $e_i=(v_i, v_{i+1})$ for $1\leq i\leq k$. Then there exists a quasi-edge braid such that the induced vertex loop at $v_1$ has a winding number of one about $v_{k+1}$ and zero around any other element in the basepoint of $B_n(\Sigma_g)$ (which are precisely the other vertices in the triangulation). \end{lem} \begin{proof} The quasi-edge braid \begin{equation} w_{\lambda}=b_{e_1}b_{e_2}\ldots b_{e_{k-1}}b_{e_k}^{-2}b_{e_{k-1}}\ldots b_{e_2}b_{e_1} \end{equation} gives the desired braid. The fact that $w_{\lambda}$ has trivial winding number around any other vertex follows from the definition of edge braids. \end{proof} \begin{rem} Note that $w_{\lambda}$ can be viewed as a non-identity element of $\pi_1(\Sigma_g-v_{k+1},v_1)$. It is constructed so that the set $\{w_{\lambda}\}\cup S$ generates $\pi_1(\Sigma_g-v_{k+1},v_1)$, where $S$ is the set of generators of $\pi_1(\Sigma_g,v_1)$ (viewed on the punctured surface $\Sigma_g-v_{k+1}$). The quasi-edge braids built in Lemmata \ref{super} and \ref{cool} are used in Proposition \ref{one}.
\end{rem} \begin{lem} Let $\alpha=(v_0, v_1, v_2)$ and $\alpha'=(v_0, v_2, v_3)$ be two 2-simplices in the triangulation such that $\alpha\cap\alpha'=(v_0, v_2)$. Let $e_{i}=(v_i,v_{i+1})$ for $0\leq i\leq 2$ and $e_{3}=(v_3,v_{0})$. Then there exists a quasi-edge braid that is homotopic to $e_0\ast e_1\ast e_2\ast e_3$. \end{lem} \begin{proof} The braid $b_{e_0}b_{e_1}b_{e_0}b_{e_2}b_{e_0}b_{e_2}$ gives the desired braid. \end{proof} \begin{lem}{(Local Edge Braid Relations).}\label{local} Fix a 2-simplex $\alpha=(v_0, v_1, v_2)$ in the triangulation of $\Sigma_g$ with boundary $\partial(\alpha)=\{e_0, e_1, e_2\}$, where $e_0=(v_0,v_{1})$, $e_1=(v_1,v_{2})$, and $e_2=(v_2,v_{0})$. Then the following relations hold: \begin{equation} b_{e_1}b_{e_0}=b_{e_2}b_{e_1}=b_{e_0}b_{e_2} \end{equation} \begin{equation} b_{e_0}b_{e_1}b_{e_0}=b_{e_1}b_{e_0}b_{e_1} \end{equation} \begin{equation} b_{e_1}b_{e_2}b_{e_1}=b_{e_2}b_{e_1}b_{e_2} \end{equation} \begin{equation} b_{e_0}b_{e_2}b_{e_0}=b_{e_2}b_{e_0}b_{e_2} \end{equation} \end{lem} \begin{proof} Relation (8) follows from a direct computation. By multiplying both sides of $b_{e_1}b_{e_0}=b_{e_2}b_{e_1}$ by $b_{e_0}$ on the left and using the relation $b_{e_0}b_{e_2}=b_{e_1}b_{e_0}$, we deduce relation (9): $$ b_{e_0}b_{e_1}b_{e_0}=b_{e_0}b_{e_2}b_{e_1} $$ $$ =b_{e_1}b_{e_0}b_{e_1}. $$ Relations (10) and (11) follow by symmetry. \end{proof} \begin{comment} $b_{e_0}b_{e_1}b_{e_0}=b_{e_1}b_{e_2}b_{e_0}=b_{e_1}b_{e_0}b_{e_1}$. The result follows by symmetry.\end{comment} \begin{comment} \begin{figure} \centering \begin{tikzpicture} \draw[gray, ultra thick] (0,0) -- (4,0) -- (2,3.46) -- cycle; \draw[->] (4,0) .. controls (2,-1) .. (0.1, -0.1); \draw[->] (0,0) .. controls (0.1,2.1) .. 
(1.9, 3.36); \end{tikzpicture} \caption{Local Edge Relation (1)} \end{figure} \end{comment} \begin{rem} The relations between edge braids described in Lemma \ref{local} are very similar to those between the generators of Artin's classical braid groups $B_n$, which themselves induce relations between the generating transpositions in the Coxeter presentation of $S_n$. \end{rem} \subsection{Proof of Theorem \ref{theorem} for $g\geq 1$} \subsubsection{Outline of Approach}\label{out} Fix an arbitrary element $[\sigma]\in P_n(\Sigma_g)\cap\mathrm{ker}(\omega)$. We develop a procedure to successively multiply $[\sigma]$ by quasi-edge braids until the resulting braid $[\sigma']$ has at most one non-trivial vertex loop at some vertex $x_i$ in the triangulation. This makes use of the results of Sections \ref{braids} and \ref{absd}. The braid $[\sigma']$ can then be regarded as an element of $\pi_1(\Sigma_g-C_{[\sigma']},x_i)$. We then use topological and group-theoretic methods involving the fundamental groups of (punctured) surfaces to deduce that $[\sigma']$ itself is a quasi-edge braid, which implies the result. \subsubsection{Generators of $\pi_1(\Sigma_g)$ and Edge Loops}\label{absd} Proposition \ref{hom} and Lemma \ref{unzip} allow us to conveniently describe generators of $\pi_1(\Sigma_g)$ using edge loops. Recall that $\pi_1(\Sigma_g)$ has generators $ [f_1], [f_2], \ldots, [f_{2g}]$. If we think of $\Sigma_g$ as the connected sum $\Sigma_g=\Sigma_1\#\ldots\#\Sigma_1$ of $g$ tori, then $[f_{i}]$ and $[f_{i+1}]$ for $i \equiv 1 \Mod{2}$ can be realized as generators of the fundamental group of the $\lceil i/2\rceil^{\mathrm{th}}$ torus in the connected sum. We refer to the $[f_{i}]$'s as the \textit{standard generators} of $\pi_1(\Sigma_g)$. \begin{prop}\label{hom} Let $v\in\Sigma_g$ be a vertex in the triangulation of $\Sigma_g$. Fix the usual generators $[f_1], \ldots, [f_{2g}]$ of $\pi_1(\Sigma_g,v)$.
Then there exist representatives of each class $[f_i]$ for $1\leq i\leq 2g$ that are simple edge loops in the triangulation of $\Sigma_g$, i.e.\ each $f_i$ is homotopic to a simple edge loop. \end{prop} \begin{proof} Clearly we can assume that $f_i$ is a simple loop. Locally deform $f_i$ to a homotopic loop $f_i'$ based at $v$ such that the only vertex in the triangulation of $\Sigma_g$ that $\mathrm{im}(f_i')$ intersects is $v$ and $f_i'$ remains simple. Let $S$ denote the set of simplices $\beta$ in the triangulation such that $\beta\cap\mathrm{im}(f_i')$ is nonempty. Let $S(v)$ denote the star of $v$, i.e.\ the set of simplices in the triangulation that contain $v$ as a $0$-face. It is clear by inspection that $\mathrm{im}(f_i')$ intersects exactly two elements of $S(v)$ (or that $f_i'$ can be homotoped into a loop whose image satisfies this), which we denote by $\gamma_1$ and $\gamma_2$. Further homotope $f_i'$ into a simple loop $f_i''$ such that $f_i''$ intersects precisely two 1-faces of each element of $S-\{\gamma_1,\gamma_2\}$. Let $S'$ denote the set of all 1-faces of elements in $S-\{\gamma_1,\gamma_2\}$ that are disjoint from $\mathrm{im}(f_i'')$. We construct an algorithm which helps us build an edge loop that is homotopic to $f_i''$. \begin{itemize} \item Step $0$: Choose a vertex $w_1$ in $S(v)$ that is also an endpoint of an element of $S'$. \item Step $1$: Let $e_1$ denote the unique element of $S'$ of which $w_1$ is an endpoint. Set $P_1=\{e_1\}$. \item Step $2$: Let $e_2$ denote the unique element of $S'-\{e_1\}$ that contains as an endpoint the endpoint of $e_1$ that is not $w_1$. Let $w_2$ denote this endpoint and set $P_2=\{e_1,e_2\}$. \item Step $3$: Continue in the same fashion as Steps 1 and 2 by letting $e_j$ denote the unique element of $S'-\{e_1, \ldots, e_{j-1}\}$ that contains as an endpoint the endpoint of $e_{j-1}$ that is not $w_{j-1}$, letting $w_{j}$ denote this endpoint and setting $P_j=P_{j-1}\cup \{e_j\}$.
Repeat until $w_{j+1}$ (the endpoint of $e_j$ that is not $w_j$) is a vertex in $S(v)$ for $j\geq 2$. \end{itemize} Since $w_i$ is always a $0$-face of a 2-simplex in the triangulation which intersects $\mathrm{im}(f_{i}'')$, the process in Step 3 will terminate, say after $k$ total iterations. Since $w_1, w_{k+1}\in S(v)$, there are edges $r=(w_{k+1},v)$ and $r'=(v,w_1)$. Then $l=e_1\ast e_2\ast\ldots\ast e_k\ast r\ast r'$ (the concatenation of the elements in $P_{k}$ with $r\ast r'$) is a simple edge loop which is homotopic to $f_{i}''$, and hence to $f_{i}$. That $l$ is simple follows from the fact that $f_i''$ is simple, so the proposition follows. \end{proof} The next lemma lets us ``extend'' edge loops to homotopic edge loops that intersect certain vertices. This is a key ingredient in the proof of Proposition \ref{one}. \begin{lem}\label{unzip} Fix vertices $v$, $v'$ in the triangulation and let $L$ be a simple edge loop. Then $L$ is homotopic to a simple edge loop that intersects $v$ and $v'$. \end{lem} \begin{proof} We show that $L$ can be homotoped so that it intersects $v$ (a similar argument proves the full lemma). Choose a simple edge path $\lambda$ with endpoints $v$ and some vertex $x$ contained in $L$. Let $k$ be the length of $\lambda$. By induction, it suffices to show that there exists an edge loop $\xi$ homotopic to $L$ such that there is a simple edge path from $v$ to some vertex in $\xi$ of length less than $k$. Let $e_1, \ldots, e_r$ denote the ordered list of edges that defines $L$. Let $\lambda'$ denote the edge path $e_1, \ldots, e_{\floor{\frac{r}{2}}}$ and let $\lambda''$ denote the edge path $e_{\ceil{\frac{r}{2}}}, \ldots, e_r$. Let $E(x)$ denote the set of edges in the triangulation that contain $x$ as an endpoint. Denote the unique edge in $E(x)\cap\lambda'$ by $\mu_0$.
Label the elements of $E(x)$ that lie between $\lambda'$ and $\lambda$ in sequential order starting with $\mu_1=e_1\in\lambda'$ and ending at $\mu_p\in\lambda$, so that there are no edges in $E(x)$ that are between $\mu_i$ and $\mu_{i+1}$. \begin{claim} The edges $\mu_i$ and $\mu_{i+1}$ must be 1-faces of a common 2-simplex $\sigma_{i,i+1}$. \end{claim} \begin{proof} Suppose not. Since there are no edges in $E(x)$ between $\mu_i$ and $\mu_{i+1}$, this would imply that $\Sigma_g$ is homotopy equivalent to a wedge sum of $2g$ circles, which is a contradiction. \end{proof} For such $\mu_i$ and $\mu_{i+1}$, let $\mu_{i,i+1}$ denote the 1-face of $\sigma_{i,i+1}$ that is not $\mu_i$ or $\mu_{i+1}$. Let $w_i$ denote the endpoint of $\mu_i$ that is not $x$. We proceed by casework on the configuration of edges in $E(x)$. \begin{itemize} \item Case 1: Suppose that there does not exist an edge $\mu_{j}$ (with $2\leq j\leq p-1$) such that $w_j\in\lambda\cup\lambda'\cup\lambda''$ (see Figure 3). Let $g_1$ denote the unique edge in $E(x)\cap\lambda$. Set $\xi=g_1\ast\mu_{p-1, p}\ast\mu_{p-2, p-1}\ast\ldots\ast\mu_{1,2}\ast e_1\ast\ldots \ast e_{k}$; this is homotopic to $L$ since the subcomplex $\sigma_{1,2}\cup\ldots\cup\sigma_{p-1,p}$ is contractible. There is clearly a sub-path of $\lambda$ from $v$ to $w_p\in\xi$ of length $k-1$, so this completes the induction. \end{itemize} In the following cases, we assume that there exists an edge $\mu_j$ (with $2\leq j\leq p-1$) such that $w_j\in\lambda\cup\lambda'\cup\lambda''$. Let $i$ be the smallest index such that $\mu_i$ satisfies this condition. \begin{itemize} \item Case 2: $w_i\in\lambda$ (see Figure 4). The argument in the proof of Case 1 holds. \vspace{2mm} \item Case 3: $w_i\in\lambda''$ (see Figure 5). Note that for all $j$ such that $i+1\leq j\leq p-1$, we must have that $w_{j}\in\lambda\cup\lambda''$. There are two possible subcases.
\begin{itemize} \item Subcase 3.1: There exists a $j$ such that $i+1\leq j\leq p-1$ and $w_j\in \lambda$. Then there is some edge path $\Delta$ from $w_i$ to $w_j$. Let $\Delta'$ denote the unique sub-path of $\lambda$ with endpoints $w_j$ and $x$. Let $e_k$ denote the unique edge in $L$ such that the head of $e_k$ is $w_i$. Then the edge loop $\Delta'\ast e_1\ast e_2\ast\ldots\ast e_k\ast\Delta$ is homotopic to $L$ and there is evidently a proper sub-path of $\lambda$ from $x$ to $v$. \vspace{2mm} \item Subcase 3.2: No such $j$ exists. \begin{itemize} \item 3.2a: There is some $n$ such that $i+1\le n\leq p-1$ and $w_n\in \lambda''$. Let $k$ be the maximum index such that $i+1\leq k\leq p-1$ and $w_k\in\lambda''$. Then there is an edge path $\Upsilon$ from $w_k$ to $w_p$. Let $e_t$ be the unique edge in $L$ such that the head of $e_t$ is $w_k$. Then the edge loop $\Upsilon\ast\mu_p\ast e_1\ast e_2\ast \ldots\ast e_t$ satisfies the desired conditions. \item 3.2b: No such $n$ exists. Let $\zeta$ denote the unique edge in $E(x)\cap\lambda''$. Since none of the edges in $E(x)$ that lie between $\mu_p$ and $\zeta$ can have an endpoint in $\lambda'$, the proof reduces to the same arguments given in Case 2 and Case 4 (see below), except they are applied to the edges in $E(x)$ that lie between $\mu_p$ and $\zeta$ instead of the edges in $E(x)$ that lie between $\mu_1$ and $\mu_p$. \end{itemize} \end{itemize} \vspace{2mm} \item Case 4: $w_i\in\lambda'$ (see Figure 6). \begin{itemize} \item Subcase 4.1: For all $j$ with $i+1\leq j\leq p-1$, $w_j\notin \lambda\cup\lambda'\cup\lambda''$. This is handled the same way as Case 1. \item Subcase 4.2: There is some $j$ such that $i+1\leq j\leq p-1$ and $w_j\in \lambda\cup\lambda'\cup\lambda''$. In the remaining subcases, $q$ denotes the largest index such that $i+1\leq q\leq p-1$ and $w_q\in\lambda\cup\lambda'\cup\lambda''$. \begin{itemize} \item Subcase 4.2a: $w_q\in\lambda$.
Clearly, $w_q\neq w_p$ since the equality $w_q= w_p$ would contradict the definition of a simplicial complex. Let $\chi$ be the edge path obtained by concatenating $\mu_q$ with the unique sub-path of $\lambda$ with endpoints $w_q$ and $v$. Then $\chi$ is a simple edge path from $x$ to $v$ that is shorter than $\lambda$, as desired. \item Subcase 4.2b: $w_q\in\lambda'$. Then for any $m$ with $i+1\leq m\leq q-1$, it must be that $w_{m}\in\lambda'$ or $w_{m}\notin\lambda\cup\lambda'\cup\lambda''$. By the same logic as the proof of Subcase 3.1, there is an edge path $\gamma$ from $w_p$ to $w_q$. Let $e_s$ denote the unique edge in $L$ with tail $w_q$. Then by similar logic to the proof of Case 1, we see that the edge loop $\mu_p\ast\gamma\ast e_s\ast e_{s+1}\ast\ldots\ast e_r$ works. \item Subcase 4.2c: $w_q\in\lambda''$. This is handled using arguments similar to those in Subcase 3.1. \end{itemize} \end{itemize} \end{itemize} All cases are covered, so the proof is complete. \end{proof} \begin{center} \begin{figure} \includegraphics[width=0.7\textwidth]{test.png} \caption{Illustration of Case 1 of Lemma \ref{unzip}. The blue edge path is a portion of the homotoped edge loop $\xi$.} \end{figure} \begin{figure} \includegraphics[width=0.7\textwidth]{Case2x.png} \caption{Illustration of Case 2 of Lemma \ref{unzip}} \end{figure} \begin{figure} \includegraphics[width=0.7\textwidth]{Case3.png} \caption{Illustration of Case 3 of Lemma \ref{unzip}} \end{figure} \begin{figure} \includegraphics[width=0.7\textwidth]{Case4.png} \caption{Illustration of Case 4 of Lemma \ref{unzip}} \end{figure} \end{center} \subsubsection{Words} We prove a combinatorial lemma which is used in the next section. Throughout this subsection, $G$ denotes a finitely generated group with generators $a_1, \ldots, a_n$. By a \textit{word} in $F_n$, we mean a potentially unreduced word (a string consisting of generators in which terms such as $xx^{-1}$ need not be simplified to $1$).
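Since the equalities of words asserted below hold already after free reduction once the commutators are expanded, they can be checked mechanically. The following small sketch is illustrative only (the encoding of letters as signed integers is our own convention, not part of the paper); it verifies by free reduction the identity appearing in the example at the end of this subsection.

```python
# Toy check of a word identity by free reduction in the free group F_5.
# Letters are signed integers: +i encodes the generator a_i, -i its inverse.
# (Illustrative convention only; not part of the paper's argument.)

def reduce_word(word):
    """Freely reduce a word by repeatedly cancelling adjacent x x^{-1} pairs."""
    out = []
    for letter in word:
        if out and out[-1] == -letter:
            out.pop()          # cancel with the previous letter
        else:
            out.append(letter)
    return out

def commutator(u, v):
    """[u, v] = u v u^{-1} v^{-1} as an unreduced word."""
    inv = lambda w: [-x for x in reversed(w)]
    return u + v + inv(u) + inv(v)

# w = a1 a3 a2 a5 a4 with S = {a2, a4}, as in the example below:
w = [1, 3, 2, 5, 4]
# w_star = a1 a3 [a2, a5] a5 a2 a4 is S-connected:
w_star = [1, 3] + commutator([2], [5]) + [5, 2, 4]

assert reduce_word(w_star) == reduce_word(w)  # the two words agree in F_5
```

Free reduction suffices here because the rewriting only inserts a commutator $[a,B]$ in front of $Ba$, and $[a,B]Ba$ cancels to $aB$ in the free group.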
Let $S\subset\{a_1, \ldots, a_n\}$ be a subset. Fix a word $w=a_{i_1}\ldots a_{i_k}$ in $G$. Let $w_{S}^\mathrm{min}$ (resp. $w_{S}^\mathrm{max}$) denote the minimum (resp. maximum) index $m$ such that $a_{i_m}\in S$. A word $w$ is \textit{$S$-connected} if $a_{i_j}\in S$ for all $w_{S}^\mathrm{min}\leq j\leq w_{S}^\mathrm{max}$. \begin{prop}\label{represent} Let $w$ be a word in $G$. Then for any non-empty ordered subset $S\subset\{a_1, \ldots, a_n\}$, there exists an $S$-connected word $w_{\star}$ such that the following conditions hold. \begin{itemize} \item All elements in $w_{\star}$ that are also in $S$ appear in the same order that they appear in $w$. \item $w=w_{\star}$. \item All elements in $w_{\star}$ are either elements of $w$ or commutators of elements in $w$. \end{itemize} \end{prop} \begin{proof} We induct on the length $l(w)$ of $w$. The base case $l(w)=1$ is trivial. Suppose the result holds for all words of length less than $k$ (for $k>1$) and let $w=a_{i_1}\ldots a_{i_k}$ be a word of length $k$. Let $m=w_{S}^\mathrm{min}$ and set $r=a_{i_{m+1}}\ldots a_{i_{k}}$. Then by the inductive hypothesis there exists an $(S-\{a_{i_{m}}\})$-connected word $r_{\star}$ such that $r=r_{\star}$ and all elements in $r_{\star}$ that are also in $S-\{a_{i_{m}}\}$ appear in the same order that they appear in $r$. Consider the word $w'=a_{i_1}\ldots a_{i_{m}} r_{\star}$. Let $b_{i_{m+j}}$ denote the element in the $j^{th}$ position of $r_{\star}$, so that $w'=a_{i_1}\ldots a_{i_{m}}b_{i_{m+1}}\ldots b_{i_{k}}$. Let $u$ denote the minimum index such that $b_{i_{u}}\in S-\{a_{i_{m}}\}$. Then we have that the word $$w_{\star}=a_{i_1}\ldots a_{i_{m-1}}[a_{i_{m}},b_{i_{m+1}}\ldots b_{i_{u-1}}]b_{i_{m+1}}\ldots b_{i_{u-1}}a_{i_{m}}b_{i_{u}}b_{i_{u+1}}\ldots b_{i_{k}}$$ satisfies the desired conditions, so the proof is complete. \end{proof} \begin{exmp} Let $w=a_1 a_3 a_2 a_5 a_4$ and $S=\{a_2, a_4\}$.
Then $w_{\star}=a_1a_3[a_2,a_5]a_5a_2a_4$ is $S$-connected and equal to $w$. \end{exmp} \subsubsection{Unwinding Pure Balanced Braids} Recall that for any $[\sigma]\in B_n(\Sigma_g)$, $\pi_1(\Sigma_g-C_{[\sigma]})$ is the free group on the generators $[f_1], [f_2], \ldots, [f_{2g+\#C_{[\sigma]}-1}]$. The generators $[f_i]$ for $1\leq i\leq 2g$ can be regarded as standard generators of $\pi_1(\Sigma_g)$, while the generators $[f_i]$ for $2g+1\leq i\leq 2g+\#C_{[\sigma]}-1$ each wind around an element of $C_{[\sigma]}$ once. \begin{prop}\label{one} Let $[\sigma]$ be a pure balanced braid such that $\# C_{[\sigma]}\leq n-2$. Fix $x_j\in \{x_1, \ldots, x_n\}-C_{[\sigma]}$ and let $[f_1], \ldots, [f_{2g+\#C_{[\sigma]}-1}]$ be the usual generators of $\pi_1(\Sigma_g-C_{[\sigma]}, x_j)$. Then for each $[f_i]$, $1\leq i\leq 2g+\#C_{[\sigma]}-1$, there exists a quasi-edge braid $[\gamma]$ such that the induced vertex loop of $\gamma$ on $x_j$ is $[f_i]$, while the induced vertex loops on all other vertices whose vertex loops under $[\sigma]$ were initially constant remain constant. \end{prop} \begin{proof} There are two possible cases. \begin{itemize} \item Case 1: $1\leq i\leq 2g$. Then we can regard $[f_i]$ as a standard generator of $\pi_1(\Sigma_g)$; it is clear that there exists a representative $f_i$ of its homotopy class that has trivial winding number around all $y\in C_{[\sigma]}$. Since $\# C_{[\sigma]}\leq n-2$, we can choose some $x_{k}\in \{x_1, \ldots, x_n\}-C_{[\sigma]}$ with $j\neq k$. Use Proposition \ref{hom} and Lemma \ref{unzip} to find a simple edge loop $l$ which is homotopic to $f_i$ and passes through $x_j$ and $x_k$. Then take our desired quasi-edge braid to be the braid $[\gamma]=w(x_j,x_k)$ constructed in Lemma \ref{super}. \vspace{2mm} \item Case 2: $2g+1\leq i\leq 2g+\#C_{[\sigma]}-1$. In this case, $f_i$ can be taken to be a vertex loop which has a winding number of one about some $x_m\in C_{[\sigma]}$ and zero with respect to all points in $C_{[\sigma]}-\{x_m\}$.
Choose a simple edge path $\lambda$ from $x_j$ to $x_m$ in the triangulation of $\Sigma_g$. Then take $[\gamma]=w_{\lambda}$ to be the quasi-edge braid constructed in Lemma \ref{cool}. \end{itemize} \end{proof} Fix $[\sigma]\in P_n(\Sigma_g)$ and write each vertex loop $[\sigma]_{x_i}$ for $x_i\in\{x_1, \ldots, x_n\}-C_{[\sigma]}$ as a product of the standard generators of $\pi_1(\Sigma_g - C_{[\sigma]})$. Using Proposition \ref{one}, we may sequentially multiply $[\sigma]$ by quasi-edge braids whose vertex loops equal the inverses of the generators of $\pi_1(\Sigma_g - C_{[\sigma]})$ contained in the expression of $[\sigma]_{x_i}$ as a product of generators until it becomes the trivial vertex loop. This process can be repeated until all but one element of the basepoint set $\{x_1, \ldots, x_n\}$ has a trivial vertex loop. From the discussion in Section \ref{out}, Theorem \ref{theorem} follows from Proposition \ref{two}.\\ \begin{lem}\label{split} Let $[f_i]$ be such that $2g+1\leq i\leq 2g+\#C_{[\sigma]}-1$. Then $[f_i]\in E_n(\Sigma_g)$. \end{lem} \begin{proof} This follows from the same argument given in Case 2 of Proposition \ref{one}. \end{proof} \begin{lem}\label{comm} Commutators of the generators $[f_i]$ of $\pi_1(\Sigma_g-C_{[\sigma]},x)$ are quasi-edge braids. \end{lem} \begin{proof} Fix arbitrary generators $[f_i]$ and $[f_j]$ with $i\neq j$. We have the following possibilities: \begin{itemize} \item Case 1: $1\leq i,j\leq 2g$. Once again, we may regard $[f_i]$ and $[f_j]$ as two of the standard generators of $\pi_1(\Sigma_g,x)$. By Proposition \ref{hom}, we may assume that $f_i$ and $f_j$ are simple edge loops that intersect only at the basepoint $x\in\Sigma_g$. Let $\lambda=e_1, \ldots, e_n$ (resp. $\lambda'=z_1, \ldots, z_k$) be the ordered list of edges constituting $f_i$ (resp. $f_j$). For ease of notation, the indices of the vertices and edges in $\lambda$ (resp. $\lambda'$) will be taken modulo $n$ (resp. modulo $k$).
Let $e_r=(v_r, v_{r+1})$ and $z_l=(w_l, w_{l+1})$. We may assume that $x=v_1=w_1$. Then a direct computation shows that the commutator $ [b_{e_1}b_{e_2}\ldots b_{e_0},b_{z_1}b_{z_2}\ldots b_{z_0}] $ is precisely $\big[[f_i],[f_j]\big]$. \vspace{2mm} \item Case 2: $2g+1\leq i,j\leq 2g+\#C_{[\sigma]}-1$. This is immediate from Lemma \ref{split}. \vspace{2mm} \item Case 3: $1\leq i\leq 2g$ and $2g+1\leq j\leq 2g+\#C_{[\sigma]}-1$. Let $f_i$ and $\lambda$ be as in Case 1. As before, $f_j$ can be taken to be a vertex loop which has a winding number of one about some $x_m\in C_{[\sigma]}$ and zero with respect to all points in $C_{[\sigma]}-\{x_m\}$. Fix a simple edge path $\mu$ from $x$ to $x_m$ and let $w_{\mu}$ be the quasi-edge braid constructed in Lemma $\ref{cool}$. Then the commutator $ [b_{e_1}b_{e_2}\ldots b_{e_0},w_{\mu}] $ is $\big[[f_i],[f_j]\big]$, as desired. \end{itemize} The remaining case follows by symmetry, so the proof is complete. \end{proof} \begin{rem}\label{qebbb} Furthermore, one can show via a similar argument that \textit{conjugates} of commutators of the standard generators of $\pi_1(\Sigma_g-C_{[\sigma]},x)$ are quasi-edge braids. This implies the commutator subgroup $[\pi_1(\Sigma_g,x),\pi_1(\Sigma_g,x)]$ is contained in $E_n(\Sigma_g)$. \end{rem} \begin{comment} For any $i\in \{1, \ldots, n\}$, let $A_i=\mathrm{im}(\alpha_i)$, where $\alpha_i:\pi_1(\Sigma_g)\rightarrow B_n(\Sigma_g)$ is the homomorphism defined in Section \ref{pre}. Note that $A_i\simeq\pi_1(\Sigma_g)$. \begin{lem} For each $i\in\{1, \ldots, n\}$, $A_i\cap\mathrm{ker}(\omega)\subset E_n(\Sigma_g)$ \end{lem} \begin{proof} Let $[\Omega]\in A_i\cap\mathrm{ker}(\omega)$. We will identify $[\Omega]$ with its pre-image in $\pi_1(\Sigma_g)$. It is clear that $[\Omega]\in\mathrm{ker}(\Phi)$ (where $\Phi$ is the Hurewicz homomorphism on $\Sigma_g$). Thus, $[\Omega]\in [A_i,A_i]$. So, it suffices to show that commutator subgroup of $A_i$ is contained within $E_n(\Sigma_g)$. 
By identifying the generators $\{[e_i]\}_{i=1}^{2g}$ of $\pi_1(\Sigma_g)$ with their images in $A_i$, we are reduced to showing that conjugates of commutators of the $[e_i]$'s are quasi-edge braids. \end{proof} \end{comment} \begin{prop}\label{two} Let $[\sigma]$ be a pure balanced braid with $\#C_{[\sigma]}=n-1$, so that $[\sigma]$ has exactly one non-trivial vertex loop, say $[\sigma]_{x_j}$. Then $[\sigma]\in E_n(\Sigma_g)$. \end{prop} \begin{proof} Regard $[\sigma]$ as an element of $\pi_1(\Sigma_g - C_{[\sigma]}, x_j)$ and let $$i_*:\pi_1(\Sigma_g-C_{[\sigma]},x_j)\rightarrow\pi_1(\Sigma_g,x_j)$$ be the surjection induced by the inclusion of spaces $$i:(\Sigma_g-C_{[\sigma]},x_j)\hookrightarrow(\Sigma_g,x_j).$$ We have that $$i_*([f_i])= \begin{cases} [f_i] &\mbox{if } 1\leq i\leq 2g \\ 1 & \mbox{if } 2g+1\leq i\leq 2g+\# C_{[\sigma]}-1 . \end{cases}$$ Since $[\sigma]\in\mathrm{ker}(\omega)$, it is clear that $i_*([\sigma])\in\mathrm{ker}(\Phi)$, where $$\Phi:\pi_1(\Sigma_g,x_j)\rightarrow H_1(\Sigma_g;\mathbb{Z})$$ denotes the Hurewicz map. By the Hurewicz theorem, $i_*([\sigma])$ is in the commutator subgroup $\big[\pi_1(\Sigma_g,x_j),\pi_1(\Sigma_g,x_j)\big]$ of $\pi_1(\Sigma_g,x_j)$. Using Lemma \ref{comm} and Remark \ref{qebbb}, it follows that $i_*([\sigma])$ is a quasi-edge braid. Writing $[\sigma]$ as a reduced word in the generators, this means that the product of all generators $[f_i]$ with $1\leq i\leq 2g$ in the word (in order of appearance) is a quasi-edge braid. Let $S=\{[f_1], \ldots, [f_{2g}]\}$. Then by Proposition \ref{represent}, we may assume that $[\sigma]$ is expressed by an $S$-connected word $w$ in which every element not in $S$ is either a commutator or $[f_{i}]$ for some $2g+1\leq i\leq 2g+\#C_{[\sigma]}-1$. By the $S$-connectivity of $w$, it suffices to show that the product of all elements in $w$ that are \textit{not} in $S$ is a quasi-edge braid.
This holds by Lemma \ref{split} and Remark \ref{qebbb}, so it follows that $[\sigma]$ is a quasi-edge braid. This completes the proof. \begin{comment} The following lemma provides a useful reduction. \begin{lem}\label{as} In the notation of Proposition \ref{one}, there exists a quasi-edge braid $[\Omega]$ such that $[\Omega\sigma]_{x_j}\in\mathrm{ker}(i_*)$. \end{lem} \begin{proof} Let $\Phi:\pi_1(\Sigma_g)\rightarrow H_1(\Sigma_g; \mathbb{Z})$ denote the Hurewicz map. Since $[\sigma]\in\mathrm{ker}(\omega)$, it must be that $i_*([\sigma]_{x_j})\in\mathrm{ker}(\Phi)=[\pi_1(\Sigma_g),\pi_1(\Sigma_g)]$. To prove the lemma, it suffices to show that all conjugates of commutators $[[e_{i}], [e_j]]$ for $1\leq i,j\leq 2g$ are quasi-edge braids (since such braids generate the derived subgroup $[\pi_1(\Sigma_g),\pi_1(\Sigma_g)]$). Locally deform each generator $e_i\in \pi_1(\Sigma_g)$ so that $e_i$ passes through no other vertices in the triangulation except for $v$ while remaining simple. Then cut $\Sigma_g$ along each $e_i$ to obtain the fundamental polygon of $\Sigma_g$ as a triangulated $4g$-gon. For here, we can argue using quasi-edge braid constructions similar to those in Section \ref{braids} to obtain all conjugates of commutators of generating elements. \end{proof} Let $B =\{ [e_1], [e_2], \ldots, [e_{2g}]\}$ denote the generating set of $\pi_1(\Sigma_g)$ and let\\ $B'=\{ [e_1], [e_2], \ldots, [e_{2g}], \ldots, [e_{2g+n-2}]\}$ denote the generating set of $\pi_1(\Sigma_g-C_{[\sigma]})$ (so that the set $B'-B$ generates $\mathrm{ker}(i_*)$). By Lemma \ref{as}, we may assume that $[\sigma]_{x_j}\in\mathrm{ker}(i_*)$. Thus, we can write $[\sigma]_{x_j}$ as a product of elements in $B'-B$ and argue as in Case 2 of the proof of Proposition \ref{one} to deduce the result. 
\end{comment} \end{proof} \section{Future Work} In this paper, we constructed a natural map $\omega:B_n(\Sigma_g)\rightarrow H_1(\Sigma_g; \mathbb{Z})$ and studied its kernel using simplicial triangulations of $\Sigma_g$. In particular, we showed that $\mathrm{ker}(\omega)$ is generated by canonical braids which are constructed using edges in the triangulation. There are several avenues for future investigation.\\ \begin{itemize} \item Since $\mathrm{ker}(\omega)$ is generated by edge braids, it would be useful to find a minimal set of relations between the edge braids so that one can build a presentation of $\mathrm{ker}(\omega)$. Lemma \ref{local} gives evidence that such a group presentation might be similar to Artin's \cite{name} presentation of the classical braid groups $B_n$.\\ \item One can also consider braid groups on non-orientable surfaces (see \cite{non}). Furthermore, the homomorphism $\omega$ can be constructed in exactly the same way for general topological spaces. Thus, it is natural to ask whether there is an analog of Theorem \ref{theorem} for surfaces more general than the closed orientable surfaces $\Sigma_g$ (e.g.\ non-orientable surfaces). The proof of such a result would probably be similar, except it would depend on properties of the fundamental groups of non-orientable surfaces. \end{itemize} \newpage
\section{Introduction} In 2010, the CREMA (Charge Radius Experiment with Muonic Atoms) collaboration extracted the proton charge radius from measurements of the $2S-2P$ transition in muonic hydrogen ($\mu$H), a proton orbited by a muon. It was found to deviate by about 7$\sigma$~\cite{Pohl:2010zza, Antognini:1900ns} with respect to the value obtained in decades of experiments on both hydrogen spectroscopy and electron scattering off the proton. This large discrepancy hinted at new physics and created a lot of excitement in the community. Interpretations of the discrepancy are being sought in systematic experimental errors, novel aspects of hadronic structure, or beyond-the-standard-model theories leading to lepton-universality violations. To investigate whether the discrepancy persists or changes with the nuclear mass number $A$ and proton number $Z$, the CREMA collaboration has embarked on a strong experimental program to extract the charge radii of light nuclei by measuring the Lamb shifts in $\mu$-D, $\mu$-$^3$He$^+$ and $\mu$-$^4$He$^+$. The Lamb shift is related to the charge radius $R_{c}$ by \begin{equation} \label{eq:LS} \Delta E_{\rm LS} = \delta_{\rm QED}+\delta_{\rm FS}(R_{c})+\delta_{\rm TPE}. \end{equation} The three terms, from the largest to the smallest, are the quantum electrodynamics (QED) contributions, the leading correction due to the finite size of the nucleus, $\delta_{\rm FS}(R_{c}) = \frac{m^{3}_{r}(Z\alpha)^{4}}{12}R^{2}_{c}$ (in $\hbar=c=1$ units and with $Z$ and $\alpha$ being the proton number and fine structure constant, respectively), and the two-photon exchange (TPE) contribution. \begin{figure}[htb] \centerline{\includegraphics*[width=5.cm]{TPE.pdf}} \caption{The lepton-nucleus two-photon-exchange.
The blob denotes the excitation of the nucleus in the intermediate states between the two photons.} \label{fig:tp} \end{figure} While quantum electrodynamical calculations of these atoms are extremely precise, effects due to the structure of the nucleus constitute the main source of uncertainty and are the bottleneck to increasing the precision of the extracted radius. Nuclear structure corrections appear via finite nuclear size effects -- precisely those effects that enable the extraction of the radius -- as well as via nuclear excitations in the TPE diagram. Here, virtual photons are exchanged between the lepton and the nucleus/hadron as shown in Fig.~\ref{fig:tp}. The precision with which $\delta_{\rm TPE}$ can be calculated determines the precision of the extracted charge radius. Independently of whether the puzzle is due to beyond-the-standard-model physics or not, precise calculations of nuclear structure corrections will always be needed and must accompany the experimental program aimed at extracting radii. \begin{table}[htb] \centering \caption{Experimental error bar in the measured Lamb-shift energy of muonic atoms $\delta_{\rm exp}(\Delta E_{\rm LS})$ compared to the error bar in the theoretical calculation of the TPE energy corrections to the Lamb-shift $\delta_{\rm th}(\Delta E_{\rm LS})$. Data taken from Refs.~\cite{Antognini:1900ns,science2016,Krauth_paper,Franke}.
} \label{tab:1} \begin{tabular}{l|l|l} \hline\noalign{\smallskip} &$\delta_{\rm exp}(\Delta E_{\rm LS})$& $\delta_{\rm th}(\Delta E_{\rm LS})$\\ \noalign{\smallskip}\hline\noalign{\smallskip} $\mu$-H &2.3 $\mu$eV & 2 $\mu$eV \\ $\mu$-D & 3.4 $\mu$eV & 20 $\mu$eV \\ $\mu$-$ ^{3} {\rm He}^{+}$ & 0.08 meV & 0.52 meV\\ \noalign{\smallskip}\hline \end{tabular} \end{table} To appreciate the importance of nuclear structure corrections in nuclei, it is interesting to look at the experimental error bar with which the Lamb shift energy can be measured, $\delta_{\rm exp}(\Delta E_{\rm LS})$, and compare it to the theoretical error bar in the TPE calculations, $\delta_{\rm th}(\Delta E_{\rm LS})$. As shown in Table~\ref{tab:1}, while for the $\mu$-H case both errors are of the same order of magnitude, for $\mu$-D and $\mu$-$^3$He$^+$ the ratio between $\delta_{\rm th}(\Delta E_{\rm LS})$ and $\delta_{\rm exp}(\Delta E_{\rm LS})$ is about 6. This indicates that for light muonic atoms TPE corrections constitute the real bottleneck to exploiting the experimental precision in the extraction of the charge radius. So far, the TRIUMF--Hebrew University group has provided the most precise determination of $\delta_{\rm TPE}$ corrections to the Lamb shift for $\mu$-D~\cite{Hernandez2014}, $\mu$-$^3$He$^+$ and $\mu$-$^3$H~\cite{Nevo16}, and $\mu$-$^4$He$^+$~\cite{Ji2013,Nevo2014} using chiral effective field theory~\cite{entem2003,Epelbaum09} and phenomenological potentials~\cite{AV18} combined with state-of-the-art few-body calculational tools. Contributions to $\delta_{\rm TPE}$ can be divided into the elastic Zemach term and the inelastic polarization term, i.e., $\delta_{\rm TPE} = \delta_{\rm Zem}+\delta_{\rm pol}$. Both can be further separated into nuclear $(\delta^{A})$ and nucleonic $(\delta^{N})$ components, i.e., $\delta_{\rm TPE} = \delta^{A}_{\rm Zem}+\delta^{A}_{\rm pol}+\delta^{N}_{\rm Zem}+\delta^{N}_{\rm pol}$.
The inelastic nuclear term is called the polarization term, since it is related to the polarizability of the nucleus, i.e., the excitations of the nucleus over all of its continuum spectrum due to the virtual absorption and subsequent emission of photons, expressed by the blob in Fig.~\ref{fig:tp}. Below we report our results for the various nuclei, also shown in Ref.~\cite{Javier2016}. The uncertainty associated with each value is given in brackets and includes the numerical, nuclear model, and atomic physics errors. It is worth noticing that the uncertainties in $\delta_{\rm TPE}$ are slightly different than those shown in Table~\ref{tab:1}. This is due to the fact that the uncertainties in Table~\ref{tab:1} are taken from the analyses performed by colleagues~\cite{Krauth_paper,Franke} and are not based on our calculations alone, but on an average that includes results of other groups as well~\cite{Pachucki11,Friar13,Carlsson14,Pachucki15}. \begin{table}[htb] \centering \caption{Contributions to $\delta_{\rm TPE}$ of the Lamb shift in light muonic atoms, in meV, where we omit the proton-neutron subtraction term~\cite{Krauth_paper}.} \label{tab:2} \begin{tabular}{l|llll|l} \hline\noalign{\smallskip} & $\delta^{A}_{\rm Zem}$ & $\delta^{A}_{\rm pol}$ & $\delta^{N}_{\rm Zem}$ & $\delta^{N}_{\rm pol}$ & $\delta_{\rm TPE}$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} $\mu$-D & -0.424(3) & -1.245(19) & -0.030(2) & -0.028(2) & -1.727(20) \\ $\mu$-$ ^{3}{\rm H}$ & -0.227(6) & -0.473(17) & -0.033(2) & -0.034(16) & -0.767(25) \\ $\mu$-$ ^{3} {\rm He}^{+}$ & -10.49(24) & -4.17(17)& -0.52(3) & -0.28(12) & -15.46(39) \\ $\mu$-$ ^{4}{\rm He}^{+}$ & -6.29(28) & -2.36(14) & -0.54(3) & -0.38(22) & -9.58(38) \\ \noalign{\smallskip}\hline \end{tabular} \end{table} In particular, here we want to concentrate on the deuteron, for which we have so far performed the most thorough calculations, also analyzing the convergence of the chiral expansion; see Ref.~\cite{Hernandez2014}.
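As a quick numerical sanity check (illustrative only), the totals in Table~\ref{tab:2} can be compared with the sum of their four components, $\delta_{\rm TPE} = \delta^{A}_{\rm Zem}+\delta^{A}_{\rm pol}+\delta^{N}_{\rm Zem}+\delta^{N}_{\rm pol}$. The central values below are transcribed from the table; the tolerance is our own choice, made to absorb the rounding of the quoted totals.

```python
# Check that the quoted delta_TPE totals equal the sum of the four
# components (central values in meV, transcribed from Table 2).
# The quoted totals are rounded, so a 0.015 meV tolerance is allowed.

rows = {
    "mu-D":     ((-0.424, -1.245, -0.030, -0.028), -1.727),
    "mu-3H":    ((-0.227, -0.473, -0.033, -0.034), -0.767),
    "mu-3He+":  ((-10.49, -4.17,  -0.52,  -0.28),  -15.46),
    "mu-4He+":  ((-6.29,  -2.36,  -0.54,  -0.38),  -9.58),
}

for atom, (parts, total) in rows.items():
    assert abs(sum(parts) - total) < 0.015, atom
```

For every atom the four components reproduce the quoted total to within the rounding of the last digit.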
Our results, together with others, have already been used by the CREMA collaboration to extract the value of the charge radius from muonic deuterium Lamb shift measurements~\cite{science2016}. Interestingly, in analogy to the proton case, this radius turned out to be smaller, with about a $7\sigma$ deviation with respect to the CODATA-2010 evaluation~\cite{Mohr:2012tt} and a 3.5$\sigma$ deviation with respect to spectroscopic extractions from ordinary deuterium alone~\cite{deut_spect}. Differently from the proton case, in the so-called ``deuteron-radius puzzle'' electron scattering data~\cite{Sick} are not precise enough to discriminate between muonic and electronic deuterium spectroscopy. By using Eq.~(\ref{eq:LS}) one can also experimentally determine the size of $\delta_{\rm TPE}$. Indeed, the $\mu$-D measurements in \cite{science2016} provided the left-hand side of Eq.~(\ref{eq:LS}), while the size of the deuteron can be determined from a combination of measurements of the isotope shift in ordinary hydrogen/deuterium and of the muonic Lamb shift in the proton. Interestingly, the measured $\delta_{\rm TPE}$ turns out to deviate by 2.5$\sigma$~\cite{science2016} from theoretical computations, including our work~\cite{Hernandez2014} and calculations by others, see, e.g., Refs.~\cite{Pachucki11,Pachucki15}. While this fact certainly needs to be further investigated, compared to the $\sim 7\sigma$ deviation between the muonic deuterium and CODATA-2010 values this difference is minor. In the past we investigated the dependence of $\delta_{\rm TPE}$ on the nuclear potential used as input and found it to be small.
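For orientation on the scales involved, the leading finite-size term $\delta_{\rm FS}(R_{c})=\frac{m^{3}_{r}(Z\alpha)^{4}}{12}R^{2}_{c}$ entering Eq.~(\ref{eq:LS}) can be evaluated directly. The sketch below does so for $\mu$H; the mass values and the radius $R_c = 0.84$~fm are standard inputs assumed here, not taken from the text.

```python
# Illustrative evaluation of the leading finite-size term of Eq. (1),
#   delta_FS = m_r^3 (Z alpha)^4 R_c^2 / 12   (hbar = c = 1),
# for muonic hydrogen. All numerical inputs are standard values assumed
# for this sketch.

alpha  = 1.0 / 137.036           # fine-structure constant
hbar_c = 197.327                 # conversion constant in MeV fm
m_mu, m_p = 105.658, 938.272     # muon and proton masses in MeV
Z = 1

m_r = m_mu * m_p / (m_mu + m_p)  # reduced mass in MeV
R_c = 0.84 / hbar_c              # charge radius 0.84 fm in MeV^-1

delta_FS = m_r**3 * (Z * alpha)**4 * R_c**2 / 12.0  # in MeV
delta_FS_meV = delta_FS * 1e9                       # MeV -> meV
print(f"delta_FS(muH) ~ {delta_FS_meV:.2f} meV")    # about 3.7 meV
```

The result, a few meV, sets the scale against which the $\mu$eV-level uncertainties of Table~\ref{tab:1} should be read.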
\begin{figure}[htb] \centering \includegraphics*[width=14.cm]{fig2.pdf} \caption{Graphic representation of the various contributions to $\delta^A_{pol} + \delta^A_{Zem}$ in the \mbox{2S-2P} Lamb shift of $\mu$D, calculated with the AV18~\cite{AV18} and a chiral effective field theory nuclear potential at N3LO~\cite{entem2003}.} \label{fig:deut} \end{figure} Below, we present a graphic representation of the various contributions to $\delta^A_{pol} + \delta^A_{Zem}$ for the muonic deuterium case. This corresponds to $\delta_{\rm TPE}$, apart from the $\delta^N_{pol}$ term, which is tabulated in Table~\ref{tab:2} and is independent of the nuclear interaction. We use two potentials: one of phenomenological nature, the AV18~\cite{AV18}, and one chiral interaction at next-to-next-to-next-to-leading order (N3LO)~\cite{entem2003}. Details on the expressions of the various terms can be found in Refs.~\cite{Hernandez2014,Nevo16,Ji2013}. As one can see from Fig.~\ref{fig:deut}, the potential dependence is quite small, of the order of 0.5$\%$. Rather than sampling potentials among those available in the literature, in the future we aim at performing a statistical analysis of $\delta_{\rm TPE}$ by propagating the error bars associated with the parameters in the interaction through to the observables themselves. This should enable us to investigate whether the above mentioned 2.5$\sigma$ deviation originates from the procedures used in nuclear physics or not. Work in this direction is in progress. Similarly to the case of the Lamb shift, the hyperfine splitting energy is related to the magnetic radius $R_Z$ as \begin{equation} \label{eq:HFS} \Delta E_{\rm hfs} = \delta^{\rm hfs}_{\rm QED}+\delta^{\rm hfs}_{\rm FS}(R_{Z})+\delta^{\rm hfs}_{\rm TPE}\,, \end{equation} where nuclear structure corrections come mostly from a TPE diagram.
Because measurements of the hyperfine splitting are planned for $\mu$-D and $\mu$-$^3$He$^+$, we are refining our tools to perform calculations of $\delta^{\rm hfs}_{\rm TPE}$ as well. In the case of the hyperfine splitting, $\delta^{\rm hfs}_{\rm TPE}$ is expected to be related also to magnetic properties of the nucleus~\cite{Pachucki,Friar,Chen_HFS}. To enhance our capabilities to compute magnetic properties, we investigate sum rules of the magnetic response function, starting from the deuteron. The magnetic response function is defined as \begin{equation} \label{resp} R(\omega)= \frac{1}{2J_0+1}\int \!\!\!\!\!\!\!\sum _{f} \left|\left\langle \Psi_{f} \left|\left| {\bm \mu} \right|\right| \Psi _{0}\right\rangle \right| ^{2}\delta\left(E_{f}-E_{0}-\omega \right)\,, \end{equation} where ${\bm \mu}$ is the magnetic dipole operator. Here, $\left| \Psi _{0}\right\rangle$ and $\left| \Psi _{f}\right\rangle $ denote the ground and excited states, respectively, while the sum/integral symbol denotes a sum over discrete states plus an integral over continuum quantum numbers and states. The double bar denotes the reduced matrix element, and the prefactor is an average over the projections of the ground-state angular momentum $J_0$. It is known that magnetic transitions in nuclei are not well described in the impulse approximation, i.e., using one-body operators, and that two-body currents are important. Their expressions have been derived in chiral effective field theory, and their effect has been found to be very important for magnetic dipole moments and magnetic dipole transitions of light nuclei~\cite{Pastore13}. Here, we will develop our tools to accommodate the effect of leading-order two-body currents from chiral effective field theory in the magnetic transitions of the deuteron.
\section{Two-body currents in the magnetic operator at next-to-leading order} In chiral effective field theory, similarly to what is done for the strong force, the electromagnetic current can be expanded into many-body operators as \begin{equation} \label{eq:j} {\bf j}= \sum_i ~{\bf j}_i + \sum_{i<j}~ {\bf j}_{ij} +\dots \ . \end{equation} Calculations performed using one-body operators only are called impulse-approximation calculations and are based on the idea that nuclear properties are expressed as if the probing photon interacted only with individual nucleons. The impulse approximation corresponds to the leading order (LO) in chiral effective field theory. This description is improved by accounting for the effects of two-nucleon interactions on the electromagnetic currents associated with nucleon pairs. Two-body currents follow naturally once a meson-exchange mechanism is invoked to describe the interactions among nucleons. If one considers only the long-range part of the nucleon-nucleon force, mediated by a one-pion exchange, two-body currents of one-pion nature emerge. They result from photons hooking up with exchanged pions and are shown in Fig.~\ref{fig:mec} by the seagull and pion-in-flight diagrams. \begin{figure}[htb] \centering \includegraphics*[width=5.cm]{MEC.pdf} \caption{Two-body currents in chiral effective field theory from a one-pion exchange diagram between the two nucleons: seagull (left) and pion in flight (right). The wiggle represents the electromagnetic interaction.} \label{fig:mec} \end{figure} The one-body electromagnetic current operator in the non-relativistic limit consists of the usual convection and spin-magnetization currents, which in coordinate space read~\cite{pionnuclei} \begin{eqnarray} {\bf j}_i^c({\bf x})&=& \frac{e_i}{2m}\{{\bf p}_i,\delta({\bf x}-{\bf r}_i)\} ,\\ {\bf j}_i^s({\bf x})&=& i\frac{e\mu_i}{2m}{\bm \sigma}_i \times [{\bf p}_i,\delta({\bf x}-{\bf r}_i)]\,.
\end{eqnarray} Here $m$ is the nucleon mass (we take the proton and neutron masses to be equal) and $e_i$ and $\mu_i$ are the electric charge and magnetic moment of the nucleon, respectively, defined as \begin{eqnarray} e_i&=&\left( \frac{1+\tau^3_i}{2}\right)\\ \mu_i&=&\mu_p\left( \frac{1+\tau^3_i}{2}\right) + \mu_n \left( \frac{1-\tau^3_i}{2}\right)\,, \end{eqnarray} with $\tau_i^3$ being the third component of the nucleon isospin and $\mu_p=2.793$ and $\mu_n=-1.913$ in nucleon magneton $\mu_{N}$ units. Here, nucleon coordinates and momenta are denoted by ${\bf r}_i$ and ${\bf p}_i$, respectively, while ${\bm \sigma}_i$ is the spin of the nucleon. The one-pion-exchange two-body currents of Fig.~\ref{fig:mec} appear at next-to-leading order (NLO) in chiral effective field theory and constitute the leading two-body contribution. The effect of NLO currents amounts to 70--80$\%$ of the total two-body current contribution to magnetic properties of few-body systems~\cite{Piarulli}. We will call them ${\bf j}_{ij}^{\rm NLO}$ and separate them into the seagull current ${\bf j}_{ij}^{s}$ and the pion-in-flight current ${\bf j}_{ij}^{\pi}$.
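As a quick sanity check (our own sketch, not part of the original calculation), the isospin projectors above can be evaluated directly; they also yield the isoscalar and isovector combinations $\mu^S=\mu_p+\mu_n$ and $\mu^V=\mu_p-\mu_n$ that appear later in the one-body magnetic dipole operator.

```python
# Sketch (ours): one-body charges and magnetic moments from the isospin
# projection tau3 = +1 (proton), -1 (neutron), following the definitions above.
MU_P, MU_N = 2.793, -1.913  # nucleon magnetic moments in mu_N units

def charge(tau3):
    # e_i = (1 + tau3)/2, in units of the elementary charge e
    return (1 + tau3) / 2

def moment(tau3):
    # mu_i = mu_p (1 + tau3)/2 + mu_n (1 - tau3)/2, in mu_N units
    return MU_P * (1 + tau3) / 2 + MU_N * (1 - tau3) / 2

mu_S, mu_V = MU_P + MU_N, MU_P - MU_N  # isoscalar ~0.88, isovector ~4.71
print(charge(+1), moment(+1))  # proton:  1.0 2.793
print(charge(-1), moment(-1))  # neutron: 0.0 -1.913
```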
Their expressions, more commonly found in momentum space, read~\cite{Piarulli} \begin{eqnarray} {\bf j}^{s}_{ij}({\bf k}_{i},{\bf k}_{j})&=&-ie\frac{g^{2}_{A}}{F^{2}_{\pi}} G^{V}_{E}(q^{2})\left({\bm \tau}_{i} \times {\bm \tau}_{j} \right)^{3}\left( {\bm \sigma}_{i}\left( \frac{{\bm \sigma}_{j}\cdot {\bf k}_{j}}{\omega^{2}_{k_{j}}}\right)- {\bm \sigma}_{j}\left( \frac{{\bm \sigma}_{i}\cdot {\bf k}_{i}}{\omega^{2}_{k_{i}}}\right) \right)\,,\\ {\bf j}^{\pi}_{ij}({\bf k}_{i},{\bf k}_{j})&=&-ie \frac{g^{2}_{A}}{F^{2}_{\pi}} G^{V}_{E}(q^{2})\left({\bm \tau}_{i} \times {\bm \tau}_{j} \right)^{3} \left({\bf k}_{j}-{\bf k}_{i} \right) \left( \frac{{\bm \sigma}_{i}\cdot {\bf k}_{i}}{\omega^{2}_{k_{i}}} \right) \left( \frac{{\bm \sigma}_{j}\cdot {\bf k}_{j}}{\omega^{2}_{k_{j}}} \right)\,, \end{eqnarray} with \begin{equation} {\bf j}_{ij}^{\rm NLO}({\bf k}_{i},{\bf k}_{j})={\bf j}^{s}_{ij}({\bf k}_{i},{\bf k}_{j}) +{\bf j}^{\pi}_{ij}({\bf k}_{i},{\bf k}_{j}) \,. \end{equation} Here, ${\bf k}_{i/j}$ is the momentum transferred to nucleon $i$ or $j$, $\omega_{k_{i/j}}^2=k_{i/j}^2+m_\pi^2$ is the squared energy of the exchanged pion, and ${\bm \tau}_{i/j}$ are the nucleon isospin Pauli matrices.
By performing the Fourier transform of these two-body currents, we obtain the expressions in coordinate space~\cite{Dubach01} \begin{eqnarray} \nonumber {\bf j}^{s}_{ij}({\bf q})&= &-e\frac{m^{2}g^{2}_{A}}{4\pi F_{\pi}^{2}}G^{V}_E(q^{2})\left( {\bm \tau}_{i} \times {\bm \tau}_{j} \right)^{3}e^{i{\bf q}\cdot {\bf R}} \left[ e^{\frac{1}{2} i{\bf q}\cdot {\bf r}}{\bm \sigma}_{i}\left({\bm \sigma}_{j}\cdot \hat{r}\right)+ e^{-\frac{1}{2}i{\bf q}\cdot {\bf r}}{\bm \sigma}_{j}\left({\bm \sigma}_{i}\cdot \hat{r}\right)\right]\left( 1+\frac{1}{mr} \right)\frac{e^{-mr}}{mr}\,,\\ {\bf j}^{\pi}_{ij}({\bf q})& =& e\frac{2 g^{2}_{A}}{(2\pi)^{3}F^{2}_{\pi}} G^{V}_{E}(q^{2})\left({\bm \tau}_{i} \times {\bm \tau}_{j} \right)^{3} e^{i{\bf q}\cdot {\bf R}} \left( {\bm \sigma}_{i}\cdot \left(\frac{1}{2}{\bf q}-i{\bm \nabla}_{r} \right) \right)\!\!\!\left( {\bm \sigma}_{j}\cdot \left(\frac{1}{2}{\bm q}+i{\bm \nabla}_{r} \right) \right){\bm \nabla}_{r}I\left({\bf q},{\bf r} \right) \,, \end{eqnarray} where ${\bf q}$ is the momentum transfer and we use the relative and center-of-mass coordinates of the two interacting particles \begin{eqnarray} \nonumber {\bf R} &=& \frac{1}{2}\left({\bf r}_{i}+{\bf r}_{j} \right)\,, \\ {\bf r} &=& {\bf r}_{i}-{\bf r}_{j} \,. \end{eqnarray} In the current expression, the function $I({\bf q},{\bf r})$ arises when taking the Fourier transform of the pion-in-flight term and is defined as \begin{equation} I({\bf q},{\bf r}) = \int d^{3}p \frac{e^{i{\bf p}\cdot {\bf r}}}{\left( m^{2}+\left( {\bf p}-\frac{1}{2}{\bf q} \right)^{2} \right)\left( m^{2}+\left( {\bf p}+\frac{1}{2}{\bf q} \right)^{2} \right)}\,. \end{equation} Given a current operator in coordinate space, the magnetic dipole operator is obtained from the latter using \begin{equation} {\bm \mu} = \frac{1}{2} \int d^{3}x \ {\bf x} \times {\bf j}({\bf x})\,.
\end{equation} This general expression can be rewritten as \begin{equation} {\bm \mu} = \frac{1}{2}{\bf R} \times \int d^3x~ {\bf j}({\bf x}) + \frac{1}{2}\int d^{3}x ~({\bf x}-{\bf R})\times {\bf j}({\bf x})\,, \end{equation} and thus decomposed into two parts, where ${\bf R}$ is our center-of-mass coordinate. The first term of the above equation vanishes if one considers an $A=2$ body problem in the center-of-mass frame. Since we will be studying the deuteron, we will therefore only consider the second term. Since the ${\bf R}$ dependence of the current can be written as $e^{i{\bf q}\cdot{\bf R} }{\bf j}({\bf q},{\bf r})$, the magnetic dipole operator obtained from the second term can be written as the curl of the translationally invariant current operator at low ${\bf q}$ as~\cite{Pastore2008} \begin{equation} \label{curl} {\bm \mu}({\bf r}) = \lim_{{\bf q} \rightarrow 0} -\frac{i}{2}{\bm \nabla}_{{\bf q}}\times {\bf j}({\bf q},{\bf r} )\,. \end{equation} Using the one-body current in Eq.~(\ref{curl}) one obtains the usual magnetic dipole operator as \begin{equation} {\bm \mu}^{\rm LO}_i= \mu_N \left[ \left( \frac{\mu^S+\mu^V\tau^3_i}{2}\right){\bm \sigma}_i +\left( \frac{1+\tau^3_i}{2}\right) {\bm \ell }_i \right ] \,, \end{equation} where $\mu^{S/V}$ are the isoscalar and isovector nucleon magnetic moments, $0.88$ and $4.7$ in nucleon magneton $\mu_N$ units, respectively. This one-body operator is the leading-order term in chiral effective field theory.
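As a numerical cross-check of the function $I({\bf q},{\bf r})$ introduced above (our own sketch, with the pion mass in fm$^{-1}$ as an illustrative input), at ${\bf q}=0$ the angular integration reduces it to a one-dimensional integral with the closed form $I(0,r)=\pi^2 e^{-mr}/m$, which a direct quadrature reproduces:

```python
import math

def I_q0(r, m, p_max=400.0, n=400_000):
    # I(q=0, r) = (4*pi/r) * Int_0^inf dp p*sin(p*r)/(p^2 + m^2)^2,
    # evaluated with a trapezoidal rule truncated at p_max (our sketch).
    h = p_max / n
    s = 0.0
    for k in range(1, n):
        p = k * h
        s += p * math.sin(p * r) / (p * p + m * m) ** 2
    return 4.0 * math.pi / r * h * s

m_pi = 0.70  # pion mass in fm^-1 (illustrative value)
r = 1.5      # fm
numeric = I_q0(r, m_pi)
analytic = math.pi ** 2 * math.exp(-m_pi * r) / m_pi
print(numeric, analytic)  # agree to better than 0.1%
```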
To obtain the two-body corrections to the above one-body operator, we can plug the expressions of the seagull and pion-in-flight currents into Eq.~(\ref{curl}), obtaining the magnetic dipole operators due to the seagull and pion-in-flight diagrams, respectively, as \begin{eqnarray} {\bm \mu}^{s}_{ij} &=& -e\frac{m g^{2}_{A}}{16 \pi F^{2}_{\pi}}\left( {\bm \tau}_{i} \times {\bm \tau}_{j} \right)^{3}\left[\hat{\bf r}(\hat{\bf r}\cdot \left({\bm \sigma}_{i}\times {\bm \sigma}_{j} \right))-{\bm \sigma}_{i}\times {\bm \sigma}_{j} \right] f(r)\\ \nonumber {\bm \mu}^{\pi}_{ij}&=& -\frac{eg^{2}_{A}m}{16\pi F^{2}_{\pi}}\left( {\bm \tau}_{i} \times {\bm \tau}_{j} \right)^{3}\left[(\hat{\bf r} \cdot {\bm \sigma}_{j})(\hat{\bf r}\times {\bm \sigma}_{i})-(\hat{\bf r} \cdot {\bm \sigma}_{i})(\hat{\bf r}\times {\bm \sigma}_{j}) \right] f(r) \\ &-&\frac{eg^{2}_{A}m}{8\pi F^{2}_{\pi}}\left( {\bm \tau}_{i} \times {\bm \tau}_{j} \right)^{3} \left({\bm \sigma}_{i}\times {\bm \sigma}_{j} \right) Y(r)\,. \end{eqnarray} Here, the functions $f(r)$ and $Y(r)$, with $r=|{\bf r}|$, are \begin{eqnarray} \nonumber f(r) & =& \left( 1+\frac{1}{mr} \right) {e^{-mr}}\,, \\ Y(r) & = & \frac{e^{-mr}}{mr}\,. \end{eqnarray} Thus, at next-to-leading order the two-body magnetic moment is given by the sum ${\bm \mu}^{\rm NLO}_{ij} = {\bm \mu}^{s}_{ij}+{\bm \mu}^{\pi}_{ij}$ as~\cite{pionnuclei} \begin{equation} {\bm \mu}^{\rm NLO}_{ij} = -\frac{eg^{2}_{A}m}{8\pi F^{2}_{\pi}}\left({\bm \tau}_{i}\times {\bm \tau}_{j} \right)^{3}\left[\left(1+ \frac{1}{mr} \right)\left(\left({\bm \sigma}_{i}\times {\bm \sigma}_{j} \right)\cdot \hat{\bf r} \right)\hat{\bf r} -\left({\bm \sigma}_{i}\times {\bm \sigma}_{j} \right) \right]e^{-mr}\,. \end{equation} Finally, the magnetic dipole operator is given by a leading-order one-body component and a next-to-leading-order two-body component as \begin{equation} {\bm \mu}=\sum_i {\bm \mu}^{\rm LO}_i + \sum_{i<j} {\bm \mu}^{\rm NLO}_{ij}\,.
\end{equation} Next, we will implement these operators in our calculation of deuteron magnetic properties. It is important to remember that, in a many-body nucleus with $A>2$, the NLO two-body correction contains another term which explicitly depends on ${\bf R}$ even at small ${\bf q}$, called the Sachs term~\cite{pionnuclei}. Furthermore, as already mentioned, other corrections exist at higher orders in chiral effective field theory and have been accounted for, e.g., in Refs.~\cite{Pastore13,Piarulli}. From those calculations, it is evident that the NLO two-body currents account for up to 70--80$\%$ of the total two-body current effects. \section{Results} Using the above expressions for the one- and two-body magnetic dipole operators, we now study some magnetic observables in the deuteron. First of all, because ${\bm \mu}^{\rm NLO}_{ij}$ is of isovector nature, contributions of two-body currents at next-to-leading order vanish in the magnetic moment of the deuteron. Thus, we concentrate on break-up observables, such as sum rules of the magnetic dipole transition strength function. In the following we will investigate quantities of the form \begin{equation} \label{sumrule} m_n = \int d\omega R(\omega) \omega^{n}\,, \end{equation} with $n=-1$ and $0$, and $R(\omega)$ as in Eq.~(\ref{resp}). In particular, for $n=-1$ this quantity is related to the magnetic susceptibility and is the magnetic analog of the electric dipole polarizability. Such sum rules have been calculated in the past, see, e.g., Ref.~\cite{Arenhovel}, so we can compare our results with similar theoretical calculations. A comparison with experiment is more difficult, because in sum rules one has to integrate the strength up to infinity and to cleanly separate out the contributions from other multipoles. We perform our analysis by solving for the deuteron via a diagonalization of the intrinsic Hamiltonian in a harmonic oscillator basis.
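The moments $m_n$ defined in Eq.~(\ref{sumrule}) can be illustrated with a toy strength function (our own sketch, not the deuteron response): for $R(\omega)=\omega e^{-\omega}$ in arbitrary units, both $m_0$ and $m_{-1}$ equal one exactly, which a simple quadrature reproduces.

```python
import math

def sum_rule(R, n, w_min=1e-6, w_max=50.0, steps=200_000):
    # m_n = Int dw R(w) w^n, midpoint rule on [w_min, w_max] (our sketch)
    h = (w_max - w_min) / steps
    total = 0.0
    for k in range(steps):
        w = w_min + (k + 0.5) * h
        total += R(w) * w ** n
    return total * h

R = lambda w: w * math.exp(-w)  # toy strength function, arbitrary units
m0 = sum_rule(R, 0)       # analytic value: 1
m_inv = sum_rule(R, -1)   # analytic value: 1
print(m0, m_inv)
```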
We can perform calculations with any realistic two-body potential and will show here results with either the AV18~\cite{AV18} or the N3LO chiral potential~\cite{entem2003}. The calculation of the sum rules follows Ref.~\cite{Hernandez2014}. First, to check our numerical ability to calculate magnetic sum rules, we compare our LO calculations, corresponding to the use of the one-body operator only, ${\bm \mu}=\sum_i {\bm \mu}^{\rm LO}_i$, with results by Arenh\"{o}vel~\cite{Arenhovel,Arenhovel_private}. In Ref.~\cite{Arenhovel}, results were obtained with the Bonn r-space potential, but here we present a comparison with a more modern interaction, the AV18~\cite{AV18} potential. As one can see in Table~\ref{tab:3}, we obtain rather good agreement. The small sub-percent difference is attributable to the fact that we integrated the magnetic dipole strength obtained from Ref.~\cite{Arenhovel_private} using Eq.~(\ref{sumrule}), while in our case we computed the sum rule directly as an expectation value on the ground state. To confirm our numbers, we have performed two independent implementations and obtained very good numerical agreement between them, at the level of 0.1$\%$ or better. \begin{table}[htb] \centering \caption{Sum rules of the magnetic response function of the deuteron, calculated with the AV18 potential~\cite{AV18}, using a one-body magnetic dipole operator.} \label{tab:3}
\begin{tabular}{l|l|l}
\hline\noalign{\smallskip}
&$m_{-1}$& $m_0$\\% & $m_1$\\
\noalign{\smallskip}\hline\noalign{\smallskip}
This work & 13.9 fm$^3$ & 0.245 fm$^2$ \\% & 0.0137 fm \\
Ref.~\cite{Arenhovel_private} & 14.0 fm$^3$ & 0.244 fm$^2$ \\% & 0.0126 fm\\
\noalign{\smallskip}\hline
\end{tabular} \end{table} Next, we introduce the two-body correction to the magnetic dipole operator at NLO and compare it to the LO calculation in Table~\ref{tab:4}. In this case we use a potential from chiral effective field theory at N3LO~\cite{entem2003}.
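The equivalence exploited here, between summing discretized strength and taking a ground-state expectation value, can be sketched with a toy discrete spectrum (ours, not the actual deuteron calculation): for a Hermitian transition operator $M$ written in the energy eigenbasis, $m_0=\sum_{f\neq 0}|\langle f|M|0\rangle|^2$ coincides with $\langle 0|M^2|0\rangle-\langle 0|M|0\rangle^2$ by closure, so no explicit sum over final states is needed.

```python
# Toy model (ours): diagonal Hamiltonian with eigenenergies E and a real
# symmetric "dipole" operator M in the energy eigenbasis, ground state first.
E = [0.0, 1.3, 2.7, 4.1]
M = [[0.2, 0.5, -0.3, 0.1],
     [0.5, 0.0, 0.4, 0.2],
     [-0.3, 0.4, 0.1, 0.6],
     [0.1, 0.2, 0.6, -0.2]]

# m_0 from an explicit sum over excited states ...
m0_sum = sum(M[f][0] ** 2 for f in range(1, len(E)))
# ... and from the ground-state expectation value via closure
m0_closure = sum(M[0][k] * M[k][0] for k in range(len(E))) - M[0][0] ** 2
# inverse energy-weighted sum rule m_{-1}
m_inv = sum(M[f][0] ** 2 / (E[f] - E[0]) for f in range(1, len(E)))
print(m0_sum, m0_closure, m_inv)
```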
It has to be noted that, from the chiral effective field theory standpoint, such calculations are not fully consistent, since potential and currents are not taken at the same order; but at this point our objective is to prepare our tools for a more sophisticated calculation to be carried out in the future. \begin{table}[htb] \centering \caption{Sum rules of the magnetic response function of the deuteron, calculated with a chiral interaction at N3LO~\cite{entem2003} and a magnetic dipole operator at LO and at NLO.} \label{tab:4}
\begin{tabular}{l|l|l}
\hline\noalign{\smallskip}
&$m_{-1}$& $m_0$ \\%& $m_1$\\
\noalign{\smallskip}\hline\noalign{\smallskip}
LO & 14.0 fm$^3$ & 0.245 fm$^2$ \\% & 0.0106 fm \\
LO+NLO & 15.1 fm$^3$ & 0.277 fm$^2$ \\% & 0.0184 fm\\
\noalign{\smallskip}\hline
\end{tabular} \end{table} Given that, to the best of our knowledge, no calculation with just the NLO two-body current is available in the literature, we have compared our numerics against an independent computation of the multipole matrix elements of tensor currents~\cite{Wendt} and found numerical agreement at the $0.1\%$ level or better. Overall, we find the effect of two-body currents to be between $5$ and $11\%$, depending on the order of the sum rule. Notably, the effect is bigger on the $m_0$ sum rule than on $m_{-1}$. This indicates that two-body currents have more of an effect at larger energies than at lower energies, which is consistent with results in the literature~\cite{Arenhovel} obtained from phenomenological currents and potentials. Clearly, even if their effect might be as small as $5\%$, two-body currents need to be taken into account when doing precision physics, as in the studies of nuclear structure corrections in muonic atoms. In the case of the Lamb shift, magnetic contributions to the $\delta_{\rm TPE}$ diagram appear via $\delta_{M}$. This term is very small, amounting to $0.007$ meV in the deuteron when the LO magnetic operator is used.
With the addition of the two-body contributions at NLO, its contribution goes from $0.007$ to $0.009$ meV, a roughly 20$\%$ enhancement. While the overall contribution of two-body magnetic currents is very small in the Lamb shift, because $\delta_M$ itself is small, it is expected to be larger in the hyperfine splitting, where the magnetic current distributions play a role; see Refs.~\cite{Pachucki,Friar,Chen_HFS}. \section{Conclusion} We have reviewed the status of the nuclear structure calculations performed by the TRIUMF--Hebrew University group for the Lamb shift in muonic atoms and presented new calculations of magnetic sum rules in the deuteron with two-body currents. We find effects of 5 and 11$\%$ on the $m_{-1}$ and $m_0$ magnetic sum rules, respectively, which is consistent with previous investigations. Two-body currents are expected to provide a non-negligible contribution to nuclear structure corrections to the hyperfine splitting in muonic atoms. While the presented results constitute a necessary ingredient for a detailed study of the hyperfine splitting corrections, a complete analysis is left for future work. \section{Acknowledgments} We thank Saori Pastore and the other members of the TRIUMF--Hebrew University collaboration for helpful discussions. We are indebted to Hartmuth Arenh\"{o}vel for providing us with the magnetic dipole strength of the deuteron that served as an important check. This work was supported in part by the Natural Sciences and Engineering Research Council (NSERC) and the National Research Council of Canada. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. Funding was provided by the LLNL Lawrence Fellowship Program.
\section{Introduction} The Landau Fermi liquid (FL) theory is one of the cornerstones of condensed matter physics and explains most of the basic properties of materials~\cite{paper47}. Another known generic liquid is the Luttinger liquid~\cite{paper48}, which appears to be another fixed point of interacting fermionic systems~\cite{paper49}. However, in many strongly correlated materials, various non-Fermi-liquid (NFL) behaviors have been observed experimentally that are qualitatively distinct from these known generic liquid behaviors. The best-known example is the normal state of cuprate superconductors~\cite{paper3,paper2}. The observed resistivity shows a linear temperature dependence over a wide doping and temperature range~\cite{paper7,paper8}. This so-called bad-metal behavior is in stark contrast to the generic quadratic dependence in the FL and has led to the suggestion of a ``hidden Fermi liquid''~\cite{paper53,dmft1}. Furthermore, in the hole-underdoped ``pseudogap'' regime, the Fermi surface becomes an open ``arc,'' beyond which the spectral function demonstrates an incomplete gap-like feature near momentum $k=(\pi,0)$~\cite{paper10,paper11}, without a well-defined quasiparticle peak~\cite{paper50,paper51,paper52}. Even on the Fermi arc, where a quasiparticle-like peak can be observed by angle-resolved photoemission spectroscopy (ARPES), the peak is accompanied by a ``background'' spanning a large energy range that takes a significant amount (at least half) of the weight from the peak~\cite{paper3}. Furthermore, in optimally doped samples, the scattering rate of the quasiparticles near the chemical potential is believed to have a $\sqrt{T^2+\omega^2}$-like dependence on temperature $T$ and energy $\omega$~\cite{paper31}.
This is very exotic, as it implies a nonanalyticity in the zero-temperature, zero-energy limit of the electronic self-energy, qualitatively different from the analytic $T^2+\omega^2$ dependence of the FL~\cite{paper33} and the $T+\omega^2$ dependence of the hidden Fermi liquid~\cite{paper53,hdl08}. In fact, the smooth FL behavior has a profound origin related to the diminishing phase space of clean fermionic systems (not just the FL) at low energy, which reduces the scattering rate to zero in this limit. (In other words, given the Pauli principle, clean fermionic systems are not supposed to have dissipation near the ground state.) The observation implies unusual nonanalytic behavior that perhaps further promotes the notion of a quantum critical point~\cite{paper31,paper55,paper56}, whose associated quantum fluctuations can in principle lead to unconventional superconductivity~\cite{paper57,paper58,paper59}. This exotic scattering rate has been one of the most essential puzzles of condensed matter physics, in association with the above bad-metal behavior. However, its microscopic origin remains elusive. A phenomenological interpretation is the marginal Fermi liquid (MFL), which hypothesizes charge and spin polarizabilities~\cite{paper24} of unknown physical origin. More recently, the same nonanalytic behavior was shown to appear via holographic gauge/gravity duality~\cite{paper34,paper35}. A realistic physical picture of this exciting new line of consideration still requires further development. Even more unexpectedly, a recent experiment~\cite{paper23} found very similar structures in the high-temperature normal-state self-energy (which gives the scattering rate) and the anomalous self-energy in the low-temperature superconducting state (which gives the superconducting gap).
This is in excellent agreement with the earlier observation~\cite{paper36} that the normal-state quasiparticle scattering rate on the Fermi surface correlates directly with the low-temperature superconducting gap in multiple materials near optimal doping. Together, these observations indicate that whatever constitutes the microscopic mechanism of superconductivity at low temperature has already been encoded in the scattering of the normal state, a feature of the large energy scale of the essential correlations that is absent in all weak-coupling pictures. In addition to these unusual behaviors, which connect profoundly to the most basic concepts of condensed matter physics, and the recent studies on the charge-density wave~\cite{paper12,paper13,paper14,paper15}, quasiparticles in the cuprates present another universal and distinct ``kink'' in their dispersion~\cite{paper16,paper17,paper18,paper19,paper20,paper21,paper22}. Coupling a MFL to the magnetic resonance in the superconducting state~\cite{paper18,paper20,paper25} was proposed as the origin of the kink, but the lack of a magnetic resonance above the superconducting transition temperature $T_c$ appears to contradict the observation of the kink above $T_c$~\cite{paper16,paper21}. Coupling to the phonon~\cite{paper16,paper19} provides another possible origin, but its strength was questioned by a later calculation~\cite{paper26}. A similar structure can also be produced by replacing the phonon with the spin fluctuation~\cite{paper27,paper28}, but no consensus has been reached on such a mechanism. Notice, however, that none of these proposals properly includes the essential NFL scattering mentioned above, which is obviously controlling the low-energy physics. The combination of these four characteristics in the one-particle spectral function indicates unambiguously that the cuprates are in a many-body state \textit{completely} distinct from the usual Fermi liquid.
Then, other than a vague ``strongly correlated electronic system,'' what exactly are the cuprates? The best-known attempt to answer this essential question is probably Anderson's ``hidden Fermi liquid''~\cite{paper53,hdl08}, which, however, does not naturally incorporate the above-mentioned strong correspondence between the superconducting gap and the normal-state scattering rate. In this paper, we show that the NFL scattering rate results naturally from scattering against an emergent Bose liquid of tightly bound pairs. Near optimal doping, we find a \textit{finite} scattering rate even in the zero-temperature, zero-energy limit that grows linearly with temperature, in contrast to the typical FL behavior. In essence, the formation of bosonic pairs allows finite thermal fluctuation (and thus dissipation) in the low-temperature, low-energy limit, in the absence of condensation. Note that such a NFL scattering rate is produced with an analytic self-energy and thus does not require a quantum critical point. Most unexpectedly, the same scattering also produces a kink in the quasiparticle dispersion at the experimentally observed energy, revealing that the kink is essentially another manifestation of the underlying NFL scattering process. Furthermore, our results reproduce the observed direct correspondences between the normal and superconducting states in several cuprates, including the structures of their self-energies and the scattering rate vs. the superconducting gap. Our study demonstrates a generic route for clean fermionic systems to break the fermionic zero-dissipation characteristic. The simultaneous description of these seemingly unrelated experimental observations in the cuprates by a \textit{single} model strongly suggests that by room temperature a large number of the doped holes in the cuprates have formed an ``emergent Bose liquid,'' whose condensation at low temperature gives the unconventional superconductivity.
\begin{figure}[h] \vspace*{-0.6cm} \includegraphics[width=1.1\columnwidth,clip=true]{FIG0.pdf} \vspace*{-1.2cm} \caption{\label{fig:fig0} (a) Illustration of the pivoting motion of the EBL in Eq.~(\ref{eq6}). The green solid ellipse denotes a bosonic tightly bound pair of holes located at the blue and red solid squares. Through the second- and third-nearest-neighbor hoppings of holes (open squares), $\tau^\prime$ and $\tau^{\prime\prime}$, the boson can hop to the first- and second-nearest-neighbor bosonic sites (open ellipses). The resulting bosonic lattice (black ellipses) forms a checkerboard lattice. (b) and (c) Illustration of the scattering process $\tau_{ii^\prime}b^{\dagger}_{ij}f_{j}f^{\dagger}_{j^\prime}b_{i^\prime j^\prime}$ of a photohole (yellow circle) against a boson in Eq.~(\ref{eq1}). } \vspace*{-0.2cm} \end{figure} \section{ Model} We assume a model system with very strong short-range correlations in the spin, charge, and pairing channels, corresponding to energy scales much larger than the temperature and energy range of experimental interest. In such a limiting case, these correlations appear ``frozen,'' or saturated, in the experimentally observed low-energy physics. We further assume~\cite{paper37,paper43} that, concerning the charge and pairing degrees of freedom, the essential correlations manifest themselves as three constraints on the doped holes in the system: 1) no double occupancy of sites, 2) the formation of tightly bound nearest-neighbor pairs of doped holes, and 3) a fixed total number of bosons (since the pair-breaking fluctuation is assumed to be of higher energy and can be integrated out).
These assumptions lead to a simple model~\cite{paper37,paper43} of an emergent Bose liquid (EBL) on a checkerboard lattice (a two-orbital Hamiltonian corresponding to the two types of neighboring bonds, vertical and horizontal), as shown in Fig.~\ref{fig:fig0}(a): \begin{equation} \label{eq6} H^{b}=\sum_{ii^\prime, j\in \text{NN}(i)\cap \text{NN}(i^\prime)}\tau_{ii^\prime }b^{\dagger}_{ij}b_{i^\prime j}, \end{equation} where $b_{i^\prime j}$ denotes the annihilation of a boson composed of fermions sitting at Cu site $i^\prime$ and its \textit{adjacent} site $j$. $\tau_{ii^\prime}=\tau^\prime$ or $\tau^{\prime\prime}$ is the strength of a fully dressed kinetic process involving second- or third-nearest-neighbor sites, describing the pivoting motion of the two-legged boson. The resulting non-interacting band structure and density of states of typical solutions of this two-orbital model are illustrated in Fig.~\ref{fig:fig3} below and correspond to the one-particle propagator of the boson, $D=1/(\omega-H^b)$~\cite{supp1}. Justifications for applying this idealized model to the actual cuprates can be argued on general theoretical grounds~\cite{paper37,paper43} and are at least consistent with interpretations of many experimental observations~\cite{paper38,paper39,paper40,ong1,ong2,ong3,shi,bollinger2011}. This model also takes into account the importance of phase fluctuations~\cite{doniach1990,paper41,paper42} for the superconductivity in the underdoped cuprates. But of course, the ultimate justification for this model, particularly in contrast to the various other proposals of ``preformed pairs''~\cite{preformed1,preformed2,preformed3,preformed4}, should come from verification of its physical properties against \textit{all} available experiments.
Previously, \textit{without using any free parameter}, this model successfully explained quantitatively the demise of superconductivity at 5.2\% doping~\cite{paper43}, in excellent agreement with experiments, and produced a kinetics-driven second kind of superconducting gap with the correct experimental gap size~\cite{paper37}. Below we will use this model to explain intuitively the novel physics behind all four main characteristics of the electronic spectral functions, giving further credibility to this model. \begin{figure}[h] \vspace{-0.3cm} \includegraphics[width=1.0\columnwidth,clip=true]{FIG1.pdf} \vspace*{-3.0cm} \caption{\label{fig:fig1} Feynman diagrams of (a) a kernel of quasiparticle scattering against the two-orbital EBL, (b) Dyson's equation for the dressed hopping $F$, (c) the self-energy $\Sigma$, and (d) Dyson's equation for the dressed fermionic propagator $G$. The dotted blue line stands for the bare hopping $\tau_{ii^\prime}$. The thick red line denotes the extracted renormalized propagator $G$. } \vspace*{0.2cm} \end{figure} We are most concerned with the effects on the electronic one-particle propagator $G$ of the injected photohole and the small number of residual unpaired holes (created by $f^\dagger$) via scattering against the bosonic pairs composed of holes \textit{indistinguishable} from them. As an illustration, we consider the inelastic scattering process~\cite{paper43} that conserves the bosonic particle number, shown in Figs.~\ref{fig:fig0}(b) and (c): \begin{equation} \label{eq1} {\sum_{ii^\prime}}{\sum_{\substack{j\in \text{NN}(i),\\ j^\prime\in \text{NN}(i^\prime)\setminus j}}}\tau_{ii^\prime}b^{\dagger}_{ij}f_{j}f^{\dagger}_{j^\prime}b_{i^\prime j^\prime}\,. \end{equation} Treating this process as a perturbation and making use of Wick's theorem, we derive the corresponding Feynman diagrams and their rules~\cite{supp2}.
We then perform the following partial sum of fermionic self-energy diagrams at finite temperature (see Fig.~\ref{fig:fig1}) \begin{equation} \label{eq2} \Sigma(1,1^\prime)=F(\overline{2^\prime},\overline{2})D(1,\overline{2};1^\prime,\overline{2^\prime}), \end{equation} [in the 1 $\rightarrow$ (space,time) notation, with variables with an overline denoting dummy ones to be summed over]. Here $D(1,2;1^\prime,2^\prime)$ denotes the propagation of the boson from $1^\prime$ and its adjacent $2^\prime$ to 1 and its adjacent 2. $F$ denotes the dressed hopping obtained from (in matrix notation) \begin{equation} \label{eq3} F=\tau S\tau+\tau SF, \end{equation} which is dressed by \begin{equation} \label{eq4} S(1,1^\prime)=G(\overline{2^\prime},\overline{2})D(1,\overline{2};1^\prime,\overline{2^\prime}) \end{equation} via the dressed fermion one-particle propagator $G$, which itself is self-consistently obtained with the self-energy (in matrix notation) \begin{equation} \label{eq5} G=G_{0}+G_{0}\Sigma G. \end{equation} Note that in Eq.~(\ref{eq3}), the lowest-order term containing only the bare hopping is removed since its contribution to Eq.~(\ref{eq2}) leads to a nearly $k$-independent constant that can be absorbed by the chemical potential. \begin{figure*}[t] \begin{center} \vspace{-2.0cm} \resizebox*{1.5\columnwidth}{!}{\includegraphics{FIG2.pdf}} \end{center} \vspace{-2.0cm} \caption{\label{fig:fig2} Real (blue) and imaginary (red) parts of the (a) experimental~\cite{paper23} and (b)-(f) calculated self-energy at different doping levels and temperatures. Solid arrows mark two distinct features in Im$\Sigma$. Open arrows and black lines indicate the direct correspondence of the peak feature and (g) the kink observed in the dispersion. The background removed in (a) is a quasilinear analytical $\sqrt{c^2+\omega^2}-c$ function with $c$ much smaller than the first feature around 20 meV, so that it does not introduce any visible artificial feature.
} \vspace*{-0.4cm} \end{figure*} To best account for realistic cuprate materials, we use the same doping-dependent $\tau$ parameters as in Refs.~\cite{paper37,paper43}, obtained from the dispersion $\epsilon$ near the chemical potential in the ARPES measurement of La$_{2-x}$Sr$_x$CuO$_4$. We further use the same experimental dispersion to construct an approximate $G\sim \widetilde{G}(\vec{q},\omega)=W/[\omega-\epsilon (\vec{q})]$ with a reduced quasiparticle weight $W\sim 0.5$, roughly estimated from the experimental spectra~\cite{paper3} that show a large weight loss in the incoherent features. We then choose a featureless reference $G_0$ to ensure that the low-energy part of the resulting $G$ from Eq.~(\ref{eq5}) agrees well with $\widetilde{G}$ (experiment), to respect as much as possible the self-consistency of our formalism. It is important to note that the third assumption above dictates that the chemical potential of the boson needs to be calculated at each temperature to guarantee the fixed particle number ($\sim x/2$) of bosons. \section{Results} Figure~\ref{fig:fig2} shows our calculated normal-state self-energy at two temperatures (40 and 80 K). Also shown in Fig.~\ref{fig:fig2}(a) is the measured Im$\Sigma$ with high resolution~\cite{paper23} for optimally doped Bi$_{2}$Sr$_{2}$CaCu$_{2}$O$_{8+\delta}$, which contains two distinct features at low energy (easier to see after removing a featureless background associated with other decay channels). Amazingly, these features and, particularly, their energies are very nicely captured by our results in Fig.~\ref{fig:fig2}(b). The agreement in their energies is not to be taken lightly, considering that it results from a framework that has \textit{no free parameter}: the essential parameters $\tau$ and $\widetilde{G}$ are obtained directly from the ARPES experimental dispersion, and $D$ is obtained directly from $\tau$.
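The self-consistent structure of Eqs.~(\ref{eq2})--(\ref{eq5}) amounts to a fixed-point problem for $G$. A minimal scalar caricature of that iteration, with a hypothetical toy self-energy $\Sigma = g^2 G$ standing in for the full momentum-resolved convolution with $D$, looks as follows:

```python
import numpy as np

def solve_dyson(omega, eps=0.0, g=0.3, eta=1e-2, tol=1e-10, mix=0.5):
    """Iterate Dyson's equation G = 1/(omega - eps - Sigma) with a toy
    self-energy Sigma = g**2 * G until self-consistency.  This is a
    scalar caricature of Eqs. (2)-(5); the real Sigma involves the
    bosonic propagator D and momentum sums."""
    z = omega + 1j * eta                 # small eta keeps G retarded
    G = 1.0 / (z - eps)                  # start from the bare propagator G0
    for _ in range(10000):
        G_new = 1.0 / (z - eps - g**2 * G)
        if abs(G_new - G) < tol:
            return G_new
        G = mix * G_new + (1 - mix) * G  # linear mixing for stability
    raise RuntimeError("Dyson iteration did not converge")

G = solve_dyson(omega=0.5)
```

The linear mixing is a standard stabilizer for such loops; in the actual calculation the same logic runs over the full $(\vec{k},\omega)$ grid with the bosonic chemical potential readjusted at each temperature.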
(The weight reduction factor of 0.5 in $\widetilde{G}$, roughly estimated from the experiments, mostly just fine-tunes the intensity of our results and does not alter their energies much.) In fact, in our calculation, the stronger peak around 50 meV obtains its energy approximately from the binding energy of the Van Hove singularity at ($\pi$,0) given directly by the ARPES dispersion of La$_{2-x}$Sr$_x$CuO$_4$~\cite{paper37,supp3}. (A similar energy of the Van Hove singularity was observed in optimally doped Bi$_{2}$Sr$_{2}$CaCu$_{2}$O$_{8+\delta}$ as well~\cite{paper3}.) Consistently, Figs.~\ref{fig:fig2}(c)-\ref{fig:fig2}(f) show that at lower (7$\%$) doping the feature grows to 70 meV, again following the well-known nonrigid band shift of the Van Hove singularity at ($\pi$,0)~\cite{paper44}. Microscopically, this comes simply from the significant opening of the scattering phase space near the Van Hove singularity. This realization accounts naturally for the observed sudden change in the momentum distribution curves of ARPES~\cite{paper16,paper17,paper18,paper19,paper21} as well. Most unexpectedly, Figs.~\ref{fig:fig2}(c) and~\ref{fig:fig2}(g) show that when applied to a smooth featureless reference $G_0=W/[\omega-\epsilon_{0} (\vec{q})]$, this stronger feature produces a clear kink structure in the dispersion of the resulting $G$, a structure intensively studied by ARPES. A careful examination of the resulting kink structure makes clear that both the experimental and our calculated dispersions actually contain two kinks (marked by thin lines), between which the dispersion is steeper. This behavior is also clearly observed in the experimental data shown here (6.3\%, 20 K)~\cite{paper50} and in other experiments~\cite{paper20,paper21,paper22}.
Such a flat-steep-flat, two-kink structure is qualitatively distinct from the flat-vertical, one-kink structure produced by coupling to phonons~\cite{paper26} and spin fluctuations~\cite{paper28}, as it requires a peak, not a dip, in Im$\Sigma$. Since the kink energy is closely related to the Van Hove singularity derived peak in Im$\Sigma$, one should expect a systematic correspondence between these two measured quantities. Indeed, in overdoped Bi$_{2}$Sr$_{2}$Cu$_{2}$O$_{8+\delta}$, the Van Hove singularity occurs at a higher energy around 100 meV~\cite{feng2001}, and correspondingly, a kink of similar energy was reported by ARPES~\cite{paper20}. \begin{figure*}[t] \begin{center} \vspace*{-2.4cm} \includegraphics[width=1.8\columnwidth,clip=true]{FIG3.pdf} \end{center} \vspace*{-3.0cm} \caption{ \label{fig:fig3} (a), (b), (e) Temperature-dependent scattering rate $\Gamma$ of the nodal quasiparticle at the Fermi wavevector $k_F$ (black open circles) and at the chemical potential $k_\mu$ (red solid circles) for $7\%$ (top panel) and $15\%$ (bottom panel) doping. Both show NFL behavior with a linear temperature dependence and a constant offset at the zero-temperature limit. The shaded region is below $T_C$. (c) and (f) The corresponding band structures and (d) and (g) densities of states of the bosonic pairs. (h) The experimentally extracted Eliashberg function $\alpha^{2}F(\omega)$~\cite{paper23} at optimal doping, showing a strong peak around 40 meV, a weak peak around 140 meV, and a large energy range around 250 meV, all of which are well captured by the calculated bosonic density of states in (g). } \vspace{-0.3cm} \end{figure*} We now show that near optimal doping, this model produces an exotic NFL scattering rate at low temperature. Figure~\ref{fig:fig3}(a) shows a linear temperature ($T$) dependence of the scattering rate $\Gamma(k_F)$, measured from the full width at half maximum of the peak in our resulting spectral function at the fixed Fermi wave vector $k_F$.
Such a linear temperature dependence signifies an exotic scattering, qualitatively distinct from the standard $T^2$-dependent scattering of a FL. This linear dependence has been observed experimentally near optimal doping~\cite{paper29,paper31,kondo2015}, and is regarded as the phenomenological MFL~\cite{paper24}. Even more exotically, Fig.~\ref{fig:fig3}(b) shows that the scattering rate approaches a \textit{finite} value at low temperature, indicating that the low-energy carriers can dissipate even in the low-temperature limit without disorder. This is quite unexpected, since generally speaking, due to the Pauli principle, a typical \textit{clean} fermionic system should have a diminishing phase space of scattering at zero temperature and cannot dissipate at the chemical potential. This is why even the phenomenological MFL picture assumes a zero Im$\Sigma$ at the chemical potential as temperature approaches zero, and why the hidden Fermi liquid shows the same behavior. This is also the reason why such a finite scattering rate is always ignored in experimental analysis~\cite{paper21,paper23} by regarding disorder as its origin. Our results open an entirely new possibility that such finite scattering might be intrinsic to the clean fermionic system, and should be analyzed with care in future experiments. This issue has a significant physical consequence. If, indeed, the scattering rate must be zero at the chemical potential, the observed linear $\omega$ dependence necessarily dictates a non-analytic function of $\omega$. Such non-analytic behavior is, of course, quite special and might support the notion of a quantum critical point~\cite{paper31,paper55,paper56}, for example. However, if the scattering rate is allowed to be finite at the chemical potential, as found here, a linear $\omega$ dependence comes simply from the lowest-order expansion of an analytic function, $\text{Im}\Sigma(\vec{k}_{F},\omega ,T)\approx a_{0}+a_{1}T+a_{2}\omega$.
So how can our model break the above generic phase-space limitation on the dissipation of fermions? The answer lies in the nontrivial EBL. On the one hand, the indistinguishability between the photohole and the holes that constitute the boson results in scattering processes like that in Eq.~(\ref{eq1}). On the other hand, the larger thermal fluctuations peculiar to uncondensed bosons produce incoherent scattering even in the zero-temperature limit. In essence, by forming an EBL, the fermionic system can escape from its fermionic constraints. Note that this consideration is clearly very general and does not rely on the details of our specific model. Such NFL behavior is possible only through the limitless richness of emergence in many-body systems. \begin{figure*}[t] \begin{center} \vspace{-2.0cm} \resizebox*{1.7\columnwidth}{!}{\includegraphics{FIG4.pdf}} \vspace{-2.7cm} \end{center} \caption{ \label{fig:fig4} A scaling relationship between the kinetics-driven superconducting gap~\cite{paper37} at zero temperature and the normal-state quasiparticle scattering rate slightly above $T_{C}$ in La$_{2-x}$Sr$_x$CuO$_4$, as a function of the rescaled Fermi surface angle $\phi/\phi_{c}$. (a) The calculated trend resembles (b) the observed one in Bi-series cuprates~\cite{paper36}. The inset in (a) is an illustration of the Fermi surface angle $\phi$, with $\phi_{c}$ being the angle of the endpoint of the Fermi surface.} \vspace{-0.4cm} \end{figure*} The linear temperature dependence of the scattering rate can be visualized by rewriting Eq.~(\ref{eq2}) approximately in the form of the Eliashberg function: \begin{equation} \label{eq7} \text{Im}\Sigma(\vec{k},0,T)=-\int \alpha^{2}F(\vec{k},u)[n_{b}(u,T)+n_{f}(u,T)]du, \end{equation} where the Eliashberg function $\alpha^{2}F$ is approximately proportional to the bosonic density of states (DOS).
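The linear-in-$T$ behavior of the $n_f$ term can be checked directly. For a flat, hypothetical $\alpha^{2}F$ on a band of width $W \gg k_B T$, the Fermi-factor integral evaluates to $k_B T \ln 2$, i.e.\ exactly linear in $T$ on top of the constant bosonic contribution. A numerical sketch, in units with $k_B = 1$:

```python
import numpy as np

kB = 1.0  # work in units with k_B = 1

def trapezoid(y, x):
    """Simple trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def im_sigma(T, W=10.0, n=4000):
    """-Im Sigma from the Eliashberg-form expression with a flat,
    hypothetical alpha^2 F = 1 on [0, W].  The n_b term is modeled as
    a constant, mimicking the fixed number of tightly bound pairs."""
    u = np.linspace(0.0, W, n)
    nf = 1.0 / (np.exp(u / (kB * T)) + 1.0)  # Fermi occupation factor
    const_pairs = 1.0                        # fixed boson number
    return const_pairs + trapezoid(nf, u)

# For W >> k_B T the Fermi integral is k_B T ln 2: a constant offset
# plus a strictly linear temperature dependence.
Ts = np.linspace(0.05, 0.5, 10)
rates = np.array([im_sigma(T) for T in Ts])
slopes = np.diff(rates) / np.diff(Ts)
```

The same exercise with a strongly structured DOS shows the linearity degrading once $k_B T$ resolves the DOS features, consistent with the discussion below.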
The $n_b$-related first term yields a constant (since the number of tightly bound pairs is fixed), while the $n_f$-related second term yields a linear temperature dependence~\cite{supp4}, as long as the DOS does not change too fast on the energy scale of $k_{B}T$. This is why such linearity persists longer in the optimally doped system, in which the bandwidth of the lower-energy band is the biggest [see Fig.~\ref{fig:fig3}(g)]. Of course, this approximate analysis is limited to low temperature, where the chemical potentials for the boson and the photohole do not shift strongly with temperature. Otherwise, Fig.~\ref{fig:fig3}(e) shows that at high temperature, the peak of the spectral function at the chemical potential will occur at a different wave vector $k_\mu$ and experience a different scattering rate. In particular, the shift of the bosonic chemical potential will cause the scattering channels to deviate from a linear increase. Interestingly, a similar reduction of the quasiparticle scattering rate at high temperature was recently produced by a dynamical mean-field calculation~\cite{dmft1}, even though the physics in play is quite different. Finally, our picture also offers a natural explanation of the puzzling correspondence between the normal-state scattering rate and the superconducting gap~\cite{paper36}. For example, ARPES measurements reported an unexpected correlation between the normal-state scattering rate at the transition temperature $\Gamma_{T_c}$ and the low-temperature superconducting gap $\Delta_0$, shown in Fig.~\ref{fig:fig4}(b): \begin{equation} \label{eq8} \Delta_{0}(\phi)\propto \frac{\phi}{\phi_{c}} \Gamma_{T_c}(\phi), \end{equation} where $\phi$ denotes the $k$-space angle from the nodal point ($\pi$,$\pi$)/2, and $\phi_c$ denotes the same for the end of the Fermi arc [see the inset in Fig.~\ref{fig:fig4}(a)].
In the traditional weak-coupling theory of superconductivity, the superconducting gap is controlled by the strength of pairing, which does not leave much of a signature in the normal state when amplitude fluctuation overwhelms the system. So, this correspondence is quite unimaginable in the weak-coupling regime. In our picture, on the other hand, it is quite straightforward. As reported in a previous study~\cite{paper37}, at low temperature, a second kind of superconducting gap appears in the one-particle spectral function through coherent kinetic scattering against the condensed EBL. This ``superconducting gap'' is a simple analytical function of the condensation density ($\sim x/2$, half of the doping level at zero temperature) and the fully renormalized hopping $\tau_{ii^\prime}$. With a $d$-wave condensate~\cite{paper43}, the momentum dependence becomes simply linear near the nodal point: $\Delta_0 \propto \sqrt{x}\, \phi$. On the other hand, the normal-state scattering rate results from inelastic scattering against the \textit{same} set of bosons, except that they are not yet condensed, so $\Gamma \propto x$ with very weak $k$ dependence due to the heavy convolution in Eq.~(\ref{eq5}). Given that $\phi_{c}$ is approximately proportional to the Fermi arc length, which scales as the square root of the hole pocket size, itself proportional to the doping level $x/2$, so that $\phi_c \propto \sqrt{x}$, the observed trend is easily understood. Indeed, our results shown in Fig.~\ref{fig:fig4}(a) reproduce very nicely the observed trend of Eq.~(\ref{eq8}) in Fig.~\ref{fig:fig4}(b). In essence, in this picture, all the short-range correlations are so strong that they become frozen at low temperature, including in the normal state slightly above $T_c$. In other words, all the relevant information concerning the lower-temperature condensed state is already available in the normal state.
Therefore, this kind of direct correspondence between many properties of the normal state and the superconducting state is natural. This consideration immediately applies to yet another observed correspondence. The Eliashberg function $\alpha^2F$ of the normal and pairing self-energies extracted from high-resolution laser ARPES data was found to have the same characteristics~\cite{paper23} [see Fig.~\ref{fig:fig3}(h)]: a strong peak around 40 meV, a weak peak around 140 meV, and a broad feature extending to 250 meV. (Calculation of conductivity data~\cite{paper46} also suggests such a large cutoff.) It is obviously very hard to imagine a phonon extending to such a high energy, or spin fluctuations demonstrating such a rich structure. However, compared to the DOS of our boson [see Fig.~\ref{fig:fig3}(g)], one immediately recognizes the resemblance in all three characteristics. Again, both states are scattering against the \textit{same} set of bosons, condensed or not. \section{Summary} In summary, we showed that the non-Fermi liquid scattering rate results naturally from scattering against an emergent Bose liquid of tightly bound pairs, designed to model the hole-doped cuprates. At the chemical potential, even clean fermionic systems develop a \textit{finite} scattering rate in the zero-temperature limit that grows linearly with temperature, in contrast to the usual non-dissipative fermionic characteristics. Such exotic behavior does not involve a non-analytic self-energy and does not require proximity to a quantum critical point. Unexpectedly, the same non-Fermi liquid scattering process also generates a kink structure in the resulting one-particle propagator at the experimentally observed energy, revealing that the kink is another manifestation of the non-Fermi liquid scattering.
Our results further produced the observed direct correspondence between the normal-state scattering rate and the superconducting gap, as well as their underlying structures in the self-energy. Our findings provide a generic route for fermionic systems to demonstrate non-Fermi liquid behavior. They also suggest that the cuprates are in this exotic regime in which a large number of doped holes develop bosonic features by forming an emergent Bose liquid of tightly bound pairs that condense into a superfluid at lower temperature. We thank V. Dobrosavljevic, A. Hegg and S. Sen for useful discussions. Work was supported by National Natural Science Foundation of China Grants No. 11674220 and No. 11447601, and Ministry of Science and Technology Grants No. 2016YFA0300500 and No. 2016YFA0300501.
\section{Introduction} Market timing is an investment technique whereby an investment manager (professional or individual) attempts to anticipate the price movement of asset classes of securities, such as stocks and bonds, and to switch investment money away from assets with lower anticipated returns into assets with higher anticipated returns. Market timing managers use economic or other data to calculate propitious times to switch. Market timing seems a popular approach to investment management, with Morningstar listing several hundred funds in its tactical asset allocation (TAA) category---TAA being an industry name for market timing---and mainstream fund managers advertising their ability to switch to defensive assets when stock markets seem poised for a downturn. The antithesis of market timing, and another broadly popular investing approach, is buy-and-hold, whereby investment managers allocate static fractions of their monies to the available asset classes and then ignore market price gyrations. Is market timing likely to be successful relative to investing in a static allocation to the available asset classes? The literature in this area is focused on developing sophisticated statistical tools that can detect and measure the market timing ability of professional fund managers \cite{Henriksson_timing_1984}. Numerous uses of these techniques over decades have produced mixed results \cite{Bello_timing_1997,Becker_timing_1999,Goetzmann_daily_2000,Chance_timers_2001,Jiang_time_2007,Ptak_practice_2012}. Some authors detect no market timing ability, while others report statistically significant evidence of market timing ability. On the other hand, Dalbar measures the market timing results of the average individual investor through mutual fund sales, redemptions and exchanges \cite{Dalbar_QAIB_2016}. These studies find unambiguously that market timing by the average investor is unsuccessful relative to a static allocation. 
The ambiguous results for successful market timing by professional managers suggest that, at minimum, it is difficult to market time successfully, while the unambiguous results for individuals strongly suggest that it is easy to market time unsuccessfully. My goal here is both different from and simpler than statistical tests to detect market timing. I want to create a simple model to ask the question, what is the likelihood of successful market timing? Or more precisely, what is the return probability distribution function (PDF) for market timing? Is the PDF of market timing returns symmetric? If it is hard to obtain above-average returns by market timing, is it also hard to obtain below-average returns? What is the most basic mathematics of market timing? I try in this paper to evoke a spirit similar to Sharpe's ``The Arithmetic of Active Management'' \cite{Sharpe_arithmetic_1991}, in which elementary arithmetic is all that is required to demonstrate why active management must in aggregate underperform low-cost index funds. While I will need to invoke elementary probability theory, the analysis will show that the most probable outcome of market timing is to underperform a buy-and-hold, suitably weighted average of the available asset classes. Moreover, as I build the simple model from the returns of US stock and bond total market index funds since 1993, market returns over that time period mean that the suitably weighted average portfolio, while not identical to the 60:40 stock:bond balanced fund, is in practice barely distinguishable from it. In the rest of the paper my approach will be to calculate the boundaries of the feasible set of market timing portfolios using fund data for perfectly timed (by hindsight) switching between two asset classes, stocks and bonds.
From this analysis I also obtain the historically optimal timing path of switches, which the NIST\footnote{National Institute of Standards and Technology, U.S.\/ Department of Commerce, \texttt{www.nist.gov}.} suite of tests for randomness shows is indistinguishable from a random sequence. The key elementary result is that the geometric mean of market timing returns has an asymmetric PDF. One implication of this is that the most probable market timing return is below the median return, which can be directly calculated to be given by a static portfolio weighted by the relative fraction of time periods that each asset class outperforms the other. These results are illustrated through Monte Carlo sampling of timing paths within the feasible set and by the return paths of several market timing funds with comparably long, publicly available data. To begin, in the next section I describe the data. \section{Data} \label{sec:data} \begin{figure} \centering \includegraphics[width=0.75\textwidth]{quarterly_returns.png} \caption{Quarterly return time series for stock and bond total market index funds, 1993--2017. Returns are in multiplicative form.} \label{fig:quarterlyreturns} \end{figure} The data consists of time series of quarterly returns for three index funds starting in 1993, the advent of the youngest of the three funds, and ending in Q3 2017. The series covers 24 years, and there are $N = 99$ data points per series. The funds, all from Vanguard, are Total Stock Market, Total Bond Market, and Balanced Index, the last a static portfolio of 60\% Total Stock and 40\% Total Bond. Other information on these funds is in appendix~\ref{sec:index_funds}. Figure~\ref{fig:quarterlyreturns} shows the quarterly return time series for stocks and bonds. Because the data are from live funds, calculated return paths are net of management and trading costs; however, tax consequences are ignored. 
For quarterly switching, taxes would likely be substantial, but the effect would only dampen the spread of net returns and change only the quantitative, not the qualitative, results of the model. Note that because fund data are the basic building blocks of the model, all return paths calculated could have been obtained by an investor during the time period. Since the way to calculate total return is to multiply the sub-period returns together, I trivially transform the original data to multiplicative form, e.g.\/ a $+3$\% return becomes $1.03$ and a $-3$\% return becomes $0.97$. The differences between multiplicative and additive random processes will be important in the subsequent analysis. \section{Two Asset, All or Nothing Market Timing Model} \label{sec:model} Here I define the simple two asset market timing model with all or nothing quarterly switches. Using perfect hindsight, it is easy to identify the best and worst possible market timing portfolios, which form the boundaries of the feasible return paths for all market timing portfolios, i.e.\/ all possible market timing portfolios lie between the boundaries of the feasible set\footnote{Technically it is all market timing portfolios that conform to the assumptions of the model; however, in section~\ref{sec:discussion} we will see that real, non-conforming market timing funds fall within the feasible set.}. I reveal the optimal (highest possible return) timing sequence and test it for randomness. Section~\ref{sec:pdf} focuses on deriving the return PDF for the model. \subsection{Model} The model consists of quarterly all or nothing switches between stocks and bonds. In the $i$th time period $t_i$ the return of stocks is denoted $r_{si}$ and the return of bonds is denoted $r_{bi}$.
A {\em timing path} is the binary sequence $f_i$ that is \begin{equation} \label{eq:timing_path} f_i = \begin{cases} 1 & \quad \mathrm{if \: during} \quad t_i \quad r_{si} > r_{bi} \\ 0 & \quad \mathrm{if \: during} \quad t_i \quad r_{si} < r_{bi}. \end{cases} \end{equation} In other words $f$ is set to $f = 1$ when the stock return is larger than the bond return and set to $f = 0$ when the bond return is larger than the stock return. A special class of timing path has $f = \textrm{constant}$ and is termed a static allocation or buy-and-hold portfolio. I call a {\em return path}, denoted $\rho$, the sequence of returns generated by a particular timing path $f_i$. The $j$th return path is given by \begin{equation} \label{eq:return_path} \rho_j = \prod_i^N \left( f_{ij} r_{si} + (1 - f_{ij}) r_{bi} \right). \end{equation} The geometric mean of a return path is given by $\rho_j^{1/N}$. \subsection{Feasible Set} With perfect hindsight the best and worst performing return paths are easily found. In the notation of Matlab code\footnote{Matlab code and data are available at \\ \texttt{https://www.dropbox.com/s/6i82p9phq7q56be/timing.m?dl=0}.} equation~\ref{eq:return_path} becomes for the best $\rho_b$ and worst $\rho_w$ possible return paths \begin{subequations} \label{eq:optimum_paths} \begin{align} \rho_b &= \textrm{cumprod(max(stocks, bonds))} \\ \rho_w &= \textrm{cumprod(min(stocks, bonds))}, \end{align} \end{subequations} and the timing path for $\rho_b$ is given by $f_b = (\textrm{stocks} > \textrm{bonds})$; similarly for $\rho_w$. Figure~\ref{fig:envelope}(a) shows the quarterly return series for $\rho_b$ and $\rho_w$, while figure~\ref{fig:envelope}(b) shows histograms of quarterly returns for stocks, bonds, $\rho_b$, and $\rho_w$. There are no surprises: partitioning returns by equation~\ref{eq:optimum_paths} puts the positive return, right tail of the stocks distribution into $\rho_b$, while excluding the negative return, left tail. 
The reverse happens to $\rho_w$. \begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=0.5\textwidth]{quarterly_returns_best_worst.png} & \includegraphics[width=0.5\textwidth]{quarterly_returns_histogram.png} \\ (a) & (b) \\ \multicolumn{2}{c}{\includegraphics[width=0.75\textwidth]{feasible_envelope.png}} \\ \multicolumn{2}{c}{(c)} \end{tabular} \caption{Two asset, all or nothing market timing model switches to whichever of the two asset classes will have the better return that quarter. (a) Quarterly returns of the best and worst market timing portfolios as a function of time in multiplicative form. (b) Histograms of returns for the indicated data sets. (c) Feasibility envelope plotted on semi-log axes. Thick red lines are the best and worst possible return paths over this time period. Blue lines are the three data sets: stocks $(f = 1)$, bonds $(f = 0)$, and balanced $(f = 0.6)$. The fixed portfolio lines order as expected from $f = 0$ to $f = 1$.} \label{fig:envelope} \end{figure} Figure~\ref{fig:envelope}(c) plots several return paths on semi-log axes. The best and worst possible return paths for this period are thick red lines. Blue lines are the fund data for stocks $(f = 1)$, bonds $(f = 0)$ and balanced $(f = 0.6)$. The returns of the fixed portfolios are ordered as expected, with $f = 0$ producing the lowest returns of the fixed $f$ portfolios and $f = 1$ producing the highest. Note, however, that the large difference in returns normally associated with stocks and bonds is dwarfed by the difference in returns between the best and worst market timing portfolios. The potential reward for successful market timing is clearly enormous; however, just as enormous is the potential penalty for unsuccessful market timing. The best and worst possible return paths demarcate the feasible set of return paths for the two asset model. All possible return paths (all possible market timing paths $f_i$) fall inside the envelope made by $\rho_b$ and $\rho_w$.
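The Matlab one-liners of equation~\ref{eq:optimum_paths} translate line for line into NumPy. The sketch below uses synthetic Gaussian quarterly returns as stand-ins for the fund data:

```python
import numpy as np

# Synthetic stand-ins for the quarterly fund returns (multiplicative
# form); the actual series in the paper has N = 99 quarters.
rng = np.random.default_rng(0)
stocks = 1.0 + rng.normal(0.02, 0.08, size=99)
bonds = 1.0 + rng.normal(0.01, 0.03, size=99)

# Perfect-hindsight boundaries of the feasible set (eq:optimum_paths):
rho_b = np.cumprod(np.maximum(stocks, bonds))  # best possible path
rho_w = np.cumprod(np.minimum(stocks, bonds))  # worst possible path

# Optimal timing path and the fraction p of quarters with stocks > bonds:
f_b = (stocks > bonds).astype(int)
p_b = f_b.sum() / len(f_b)

def return_path(f, rs=stocks, rb=bonds):
    """Equation (eq:return_path): cumulative return of timing path f."""
    return np.cumprod(f * rs + (1 - f) * rb)
```

Any timing path fed to `return_path` produces a curve lying inside the envelope bounded by `rho_b` and `rho_w`.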
As the model has all or nothing switches, the number of possible paths of length $N$ is $2^N$. As the data set has $N = 99$, the number of possible return paths is $2^{99} \sim 10^{29}$, which is large. \subsection{The Unpredictable Optimal Timing Path} \begin{figure} \vspace*{-4em} \centering \includegraphics[width=\textwidth]{best_timing_sequence.png} \vspace{-4em} \caption{Optimal timing path $f_b$ that would have produced the highest possible return path $\rho_b$ over the time period. Black regions have $f_i = 1$ (stocks $>$ bonds). White regions have $f_i = 0$ (bonds $>$ stocks).} \label{fig:optimal_timing_path} \end{figure} Figure~\ref{fig:optimal_timing_path} shows the historical optimal timing path $f_b$ that produces the highest possible return path $\rho_b$ over the time period. Black regions have $f_i = 1$ (stock return $>$ bond return). White regions have $f_i = 0$ (bond return $>$ stock return). It will be convenient to define $p$ as the fraction of time periods in which $f = 1$, which is easily calculated by summing $f_b$ and dividing by $N$. For this data $p = p_b \approx 0.64$: over this time period approximately 2/3 of the time stocks returned more than bonds. While the optimal timing path $f_b$ is not random like a coin flip ($p_b \ne 1/2$), figure~\ref{fig:optimal_timing_path} shows no pattern readily discernible to the eye. Is $f_b$ random? It is worth distinguishing random and unpredictable. The historically optimal timing path is not a random bit sequence because ones occur about two-thirds of the time. Nonetheless, the important question is can I predict the next element in the sequence, given knowledge of the previous elements of the sequence? How can a sequence be not random but at the same time unpredictable? Consider a 6-sided die, of which four sides have a one and two sides have a zero. For each fair roll of the die there is a two-thirds probability of a one and a one-third probability of a zero. 
Since each fair roll of the die is independent of all rolls that have come before, there is no way to predict from the past sequence of rolls what the next roll of the die will produce. The analogy is not perfect because $p_b$ is not known {\it a priori}, and in fact $p_b$ could be different over different time periods. Leaving the details to appendix~\ref{sec:nist_details}, I use the suite of 15 tests published by NIST \cite{Bassham_NIST_2010} and designed for the purpose of verifying random number generators for cryptography. While most of the NIST tests, in order to ensure an accurate result, require orders of magnitude longer bit sequences than the financial time series provides, for four of the tests the $N = 99$ bit length of $f_b$ is close to the suggested minimum length. Again leaving details to appendix~\ref{sec:nist_details}, the result of those four tests is that $f_b$ is random (unpredictable) at the 99\% confidence level. While the historically optimal timing sequence $f_b$ is clearly special in some sense---the probability of that particular sequence occurring is $2^{-99}$---the question is what, if anything, distinguishes $f_b$ from any other random timing path? If we look at $f_b$ and randomly generated timing paths {\em without knowing which is which}, can we distinguish $f_b$ from the masses of possible timing paths? If $f_b$ is random, as the NIST tests say it is, then there is nothing to tell why it is special. It was special for this time period only by a $2^{-99}$ random chance, and in itself $f_b$ is unpredictable, i.e.\/ it contains {\em no} information about any future optimal timing path.
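For concreteness, the simplest member of the NIST suite, the frequency (monobit) test, can be sketched in a few lines. Note that applying it directly to $f_b$ would first require accounting for the bias $p_b \ne 1/2$; the example below therefore uses reference sequences built for an unbiased null hypothesis:

```python
import math

def monobit_pvalue(bits):
    """NIST SP 800-22 frequency (monobit) test: under the null of an
    unbiased random sequence, s_n = sum(2b - 1) ~ Normal(0, n), and
    the two-sided p-value is erfc(|s_n| / sqrt(2n))."""
    n = len(bits)
    s = sum(2 * b - 1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * n))

# An alternating (balanced) length-99 sequence passes easily ...
balanced = [i % 2 for i in range(99)]
# ... while a heavily biased one is rejected at any usual level.
biased = [1] * 90 + [0] * 9

p_balanced = monobit_pvalue(balanced)
p_biased = monobit_pvalue(biased)
```

NIST's convention is to reject randomness when the p-value falls below 0.01, the 99\% confidence level quoted above.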
\section{Probability Distribution of Return Paths} \label{sec:pdf} As the optimal timing path is indistinguishable from a random sequence, I review elementary properties of random multiplicative processes, from which it follows that the highest probability outcome of market timing is a return less than the median of the PDF of market timing returns. The return PDF is estimated by Monte Carlo sampling of random timing paths. The median of the return PDF can be directly calculated as the weighted average of the returns of the assets with the weights given by the fraction of time each asset has a higher return than the other. For the time period covered by the data the median return was close to the $f = 0.6$ balanced index fund. \subsection{Monte Carlo} \begin{figure}[tb] \centering \includegraphics[width=\textwidth]{paths_M=1e5.png} \caption{Return paths (gray) for $M = 10^5$ randomly generated timing paths. Red lines are the best and worst market timing return paths. The black line is the observed $f = 0.6$ balanced fund returns.} \label{fig:paths} \end{figure} The distribution of typical returns of the model can be estimated by Monte Carlo methods. Generate $M$ random timing paths of length $N$ and calculate $M$ return paths with equation~\ref{eq:return_path}. In order to match the period data, set random timing paths to have the same fraction of ones and zeros as the data, i.e.\/ the average value of $p$ for the $M$ timing paths is set to $p = p_{b}$. This is done by using Matlab's \verb|rand| function to generate a length $N$ sequence of random real numbers $n$ drawn from a uniform distribution in the range $[0, 1]$ and setting each term in the sequence equal to one if $n < p_{b}$ or to zero if $n \ge p_{b}$. Figure~\ref{fig:paths} shows $M = 10^5$ return paths as thin gray lines in a semi-log plot similar to figure~\ref{fig:envelope}(c). 
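The generation procedure just described can be sketched in Python in place of Matlab. The gross quarterly returns below are seeded synthetic stand-ins for the fund data, not the actual series, and the cumulative product form of the return path follows the form implied by equation~\ref{eq:return_path}:

```python
import random

def random_timing_path(N, p, rng):
    """Analogue of the Matlab-rand procedure: 1 if the uniform draw
    falls below p, else 0."""
    return [1 if rng.random() < p else 0 for _ in range(N)]

def return_path(f, r_s, r_b):
    """Cumulative return path: running product of the per-period gross
    return of whichever asset the timing path selects."""
    rho = [1.0]
    for fi, rs, rb in zip(f, r_s, r_b):
        rho.append(rho[-1] * (fi * rs + (1 - fi) * rb))
    return rho

rng = random.Random(1)
N, p_b = 99, 0.64
# Hypothetical gross quarterly returns standing in for the fund data.
r_s = [1.0 + rng.gauss(0.02, 0.08) for _ in range(N)]
r_b = [1.0 + rng.gauss(0.01, 0.03) for _ in range(N)]
paths = [return_path(random_timing_path(N, p_b, rng), r_s, r_b)
         for _ in range(1000)]
```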
Red lines are the boundaries of the feasible set, $\rho_b$ and $\rho_w$, while the thick black line is the data for the $f = 0.6$ balanced fund. Before further examination of the return PDF it will be useful to review several facts about distributions from random multiplicative processes, such as that of equation~\ref{eq:return_path}. \subsection{Random Multiplicative Processes} A {\em sum} of random numbers is guaranteed by the central limit theorem to converge to a Gaussian (normal) PDF in the limit of a large number of terms in the sum. A {\em product} of random numbers, such as that used in equation~\ref{eq:return_path} to calculate return, does not share this nice property. On the contrary, the PDF for a random multiplicative process (of positive numbers) depends on rare sequences that generate an asymmetric PDF with a long tail. The average value of the PDF (or any higher moment) depends sensitively on the sample size $M$ and, until $M$ approaches the number of possible outcomes, grows larger and larger relative to the mode \cite{Redner_multiplicative_1990}. What can be done, however, is to take the log of the geometric mean of equation~\ref{eq:return_path} to change the product of returns into a sum of log returns: \begin{equation} \label{eq:log_geo_mean} \log\left(\rho_j^{1/N}\right) = N^{-1} \sum_i^N \log\left( f_i r_{si} + (1 - f_i) r_{bi} \right). \end{equation} Equation~\ref{eq:log_geo_mean} says that the log of the geometric mean is given by the average of the log return. The PDF of the log return therefore obeys the central limit theorem and converges to a Gaussian PDF. Moreover, if the log of something is distributed as a Gaussian, then the something has a log-normal PDF \cite{Redner_multiplicative_1990}. In other words, the return PDF for market timing is log-normal, as a simple consequence of elementary properties of the logarithm.
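Equation~\ref{eq:log_geo_mean} is an identity of the logarithm, easily checked numerically; the per-period gross returns below are seeded synthetic stand-ins for the data along one timing path:

```python
import math
import random

rng = random.Random(2)
N = 99
# Hypothetical per-period gross returns along one timing path.
r = [1.0 + rng.gauss(0.015, 0.05) for _ in range(N)]

rho = math.prod(r)                        # end-of-period return, a product
lhs = math.log(rho ** (1.0 / N))          # log of the geometric mean
rhs = sum(math.log(x) for x in r) / N     # average of the log returns
```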
Further, if $\mu$ and $\sigma$ are respectively the mean (which equals the median) and standard deviation of the Gaussian PDF, then $e^\mu$ is the median and $e^{\mu - \sigma^2}$ is the mode of the log-normal PDF: the mode, which is the most probable outcome, is less than the median of the log-normal PDF. Thus from elementary considerations the most probable outcome from market timing is a return that is less than the median of the return PDF. \begin{figure}[p] \centering \begin{tabular}{c} \includegraphics[width=\textwidth]{log_return_pdf_p=poptimum_full.png} \\ (a) \\ \begin{tikzpicture}[ every node/.style={anchor=south west,inner sep=0pt}, x=1mm, y=1mm, ] \node (fig1) at (0,0) {\includegraphics[width=\textwidth]{return_pdf_p=poptimum_short.png}}; \node (fig2) at (41,21) {\includegraphics[width=0.6\textwidth]{return_pdf_p=poptimum_full.png}}; \end{tikzpicture} \\ (b) \end{tabular} \caption{Probability distribution function of (a) log-return and (b) return estimated from $M = 10^5$ trials with $p = p_{b} \approx 0.64$. Green and purple vertical bars are respectively the worst and best timing portfolios. The orange bar is the median of the PDF and the observed return of the $f = 0.6$ balanced index fund, which so closely approximates the median as to be indistinguishable at this scale. Inset of (b) is the full data range, showing the extreme low probability position of the optimum timing portfolio (purple bar) in the tail of the distribution.} \label{fig:returnpdf} \end{figure} To illustrate, figure~\ref{fig:returnpdf}(a) plots the histogram of end of period log returns from the Monte Carlo data of figure~\ref{fig:paths}. Even though $M = 10^5$ grossly undersamples the order $10^{29}$ distinct paths in the feasible set, convergence to a Gaussian PDF is evident, as predicted by the form of equation~\ref{eq:log_geo_mean}. The green and purple bars at the extremes are the results for respectively $\rho_w$ and $\rho_b$.
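For the record, the mode claim follows by differentiating the log-normal density:

```latex
f(x) = \frac{1}{x\sigma\sqrt{2\pi}}\,
       \exp\!\left(-\frac{(\ln x-\mu)^2}{2\sigma^2}\right),
\qquad
\frac{d}{dx}\ln f(x) = -\frac{1}{x} - \frac{\ln x-\mu}{\sigma^2 x} = 0
\;\Longrightarrow\;
x_{\mathrm{mode}} = e^{\mu-\sigma^2} < e^{\mu} = x_{\mathrm{median}}.
```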
The orange bar marks the median log-return and the log-return for the $f = 0.6$ balanced index fund, which are indistinguishable in this plot, and the reason for this will be discussed in the next section. Figure~\ref{fig:returnpdf}(b) plots the histogram of the end of period return (not log-return). The predicted log-normal form with a long tail is also evident. The inset shows the entire data range to indicate how long the return tail is. Colored bars have the same meaning as in figure~\ref{fig:returnpdf}(a), just for the return PDF instead of the log return PDF. The highest probability outcome is the mode (maximum) of the distribution, which is less than the median return marked by the orange bar. \subsection{Expectation Value of the Median} The expectation value operator $\mathbf{E}$ gives the mean of a PDF, which for the Gaussian log-return PDF coincides with the median. After a calculation given in detail in appendix~\ref{sec:calculation_detailed}, the expectation value of equation~\ref{eq:log_geo_mean} for the median $\mu$ of the log return distribution is \begin{equation} \label{eq:evalue_log_geo_mean} \mu = \mathbf{E}\left[\log\left(\rho_j^{1/N}\right)\right] = \log\left( p_b \bar{r}_{s} + (1 - p_b) \bar{r}_{b} \right), \end{equation} where $\bar{r}_{s,b}$ are the geometric mean returns of the stock and bond assets. Recall $p_b$ is the observed fraction of time periods that the stock return exceeds the bond return. The median of the distribution of log returns is given by the log of the weighted average of the two assets with the weights given by the fraction of time periods $p$ that each asset's return exceeded that of the other. The median of the return PDF is $e^\mu$. Note that, because $p_b \approx 0.64$ over the time period of the data, using the log return for the $f = 0.6$ balanced fund on the right hand side of equation~\ref{eq:evalue_log_geo_mean} well approximates the exact result computed with $p_b$, which, of course, cannot be known {\it a priori\/}.
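Equation~\ref{eq:evalue_log_geo_mean} can be sanity-checked by Monte Carlo. The returns below are seeded synthetic stand-ins for the fund data, and the agreement is approximate (the exact calculation is in the appendix):

```python
import math
import random
import statistics

rng = random.Random(3)
N, p_b, M = 99, 0.64, 2000

# Hypothetical per-period gross returns standing in for the fund data.
r_s = [1.0 + rng.gauss(0.02, 0.08) for _ in range(N)]
r_b = [1.0 + rng.gauss(0.01, 0.03) for _ in range(N)]

def log_geo_mean(f):
    """Left-hand side of the identity for one timing path f."""
    return sum(math.log(fi * rs + (1 - fi) * rb)
               for fi, rs, rb in zip(f, r_s, r_b)) / N

samples = [log_geo_mean([1 if rng.random() < p_b else 0 for _ in range(N)])
           for _ in range(M)]
median_mc = statistics.median(samples)

# Geometric mean return of each asset, then the weighted-average formula.
gm_s = math.exp(sum(map(math.log, r_s)) / N)
gm_b = math.exp(sum(map(math.log, r_b)) / N)
mu = math.log(p_b * gm_s + (1 - p_b) * gm_b)
```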
As noted above, in figure~\ref{fig:returnpdf} the median return and the return for the $f = 0.6$ balanced index fund are indistinguishable at the scale of the plot. It is important to note that figure~\ref{fig:returnpdf} shows the PDF for {\em costless} market timing. In practice, market timing costs higher than the index fund costs would shift the PDF to the left, but the boundaries of the feasible set and the median of the PDF would not shift because they are calculated from fund data, which already includes the small index funds costs. In practice what the Monte Carlo simulation estimates is the lower bound of the most likely shortfall of market timing to the median return given by the appropriately weighted static portfolio. \section{Discussion} \label{sec:discussion} \begin{figure}[tb] \centering \includegraphics[width=\textwidth]{paths_M=1e5_taa_overlay.png} \caption{Reprise of figure~\protect\ref{fig:paths} with the addition of two market timing funds with publicly available data of comparable length (yellow lines). Red lines are the best and worst timing portfolio return paths. The black line is the observed $f = 0.6$ balanced index fund.} \label{fig:paths_taa_overlay} \end{figure} Several critiques could be leveled at the analysis in this paper. For example, adherents of market timing would claim that their timing systems are not random, therefore they would be able to choose timing paths to have returns far out on the right tail of the PDF, i.e.\/ that the strategy to generate random paths (random $f$ sequences) is not representative of actual market timing. There are two answers to this. One is that the feasible set is well-defined and that it is simply a fact that all market timing paths, {\em no matter how they are generated}, are contained in the feasible set. As such, any sampling of the feasible set generates valid timing paths. 
The second answer is in figure~\ref{fig:paths_taa_overlay}, which reproduces figure~\ref{fig:paths} with the addition of the return paths (yellow lines) of two funds that Morningstar classifies as TAA funds and for which there are publicly available returns data from 1994, almost as long as for the index funds data series. Appendix~\ref{sec:taa_funds} has details about these two funds, which are rated by Morningstar as above average. Although these market timing funds were neither limited to two asset classes nor restricted to all or nothing switches, their return paths are, as expected, contained inside the feasible set. The conclusion is that real-life market timers are correctly characterized---except for costs---by the PDF within the feasible set, and that random sampling of the PDF does properly characterize the return distribution expected from market timing schemes. Figure~\ref{fig:paths_taa_overlay} also nicely illustrates the main result with live, not simulated, market timing data. These long-lived, above average market timing funds trailed the median return over the time period---and its close proxy, the $f = 0.6$ balanced fund---as simple math says is the most probable outcome. This longer-term observation is consistent with more recent analysis covering a much shorter time period but many more TAA funds \cite{Ptak_practice_2012}\footnote{From \cite{Ptak_practice_2012}, "We found that very few tactical funds generated better risk-adjusted returns than Vanguard Balanced Index over the extended time period we studied. Not only has the group of tactical allocation funds underperformed, but not a single one of them outperformed the simple, low-cost, passive fund."}. A more subtle criticism is that I have not disproved market timing. This is because of the possibility of hidden variables. Hidden variables represent information, such as earnings, book value, anything, that a market timer could put into a function that produces a timing path.
While the observed optimal timing path $f_b$ is random to the extent that it passes the NIST tests, it is possible that there was a set of hidden variables that could have been combined in a function that would have produced the optimal timing path $f_b$. Good pseudo-random number generators also pass the NIST tests but are produced by deterministic systems. Taking into account the fund data of figure~\ref{fig:paths_taa_overlay}, I think it highly unlikely, but it could be true and so market timing is not mathematically disproved. Take comfort in that, dear reader, as you will. \section{Conclusions} I have examined a two asset, all or nothing market timing model with 24 years of data from US stock and bond total market index funds from 1993--2017. The model is deliberately kept simple in order to see the basic mathematics of market timing at work answering the question, what is the likelihood of successful market timing? The boundaries of the feasible set of market timing paths, within which all market timing return paths must lie, are easy in hindsight to calculate by always choosing the higher or lower returning asset each quarter. The historical optimal timing path is, however, indistinguishable from a random sequence; it is unpredictable and encodes no information about the future optimal timing path. The key observation is that return is a multiplicative process and so its PDF is log-normal. The implication is the mathematical fact that the most probable outcome from market timing is a below median return---even before accounting for costs. This stems from an elementary property of the logarithm. Put another way, simple math says the most likely outcome of market timing is underperformance.
Exactly what this underperformance is can be ascertained because the median of the market timing return PDF can be directly calculated as a weighted average of the returns of the model assets with weights given by the fraction of time periods each asset has a higher return than the other. For the time period of the data the median return was close to the return of the static 60:40 stock:bond balanced index, although the value of $p_b$ need not be fixed for all time. For simplicity of analysis and clarity of results the model in this paper has only two asset classes; however, it is clear that the methodology could be extended to any number of asset classes.
\section{Introduction} \label{section_1} \IEEEPARstart{D}{eep} Learning (DL), which is materialized by Deep Neural Networks (DNNs) for learning meaningful representations~\cite{bengio2013representation}, has been a very active research area in recent years~\cite{krizhevsky2012imagenet,farabet2013learning,tompson2014joint}. A meaningful representation is the outcome of passing the raw input data through multiple nonlinear transformations in a DNN, and it can remarkably enhance the performance of subsequent machine learning tasks. The hyper-parameter settings and parameter values in DNNs are substantially interrelated with the performance of DL algorithms. Specifically, hyper-parameters \HL{(such as the size of weights, types of nonlinear activation functions, \HL{a} priori term types, and coefficient values)} refer to the parameters that need to be assigned prior to training the models, while parameter values refer to the element values of the weights and are determined during the training phase. Due to the deficiencies of current optimization techniques for searching for optimal hyper-parameter settings and parameter values, the power of DL algorithms cannot be fully realized. To this end, an effective and efficient approach concerning the hyper-parameter settings and parameter values is proposed in this paper. \textbf{Meaningful Representations}~~Typically, arbitrary DNNs can generate/learn Deep Representations (DRs). However, DRs are not necessarily meaningful, i.e., not all DRs contribute to promising performance when they replace the raw data fed to machine learning algorithms (e.g., classification). In fact, DRs are the outcomes of passing input data through nonlinear transformations more than once~\cite{delalleau2011shallow}, and are inspired by the mammalian hierarchical visual pathway~\cite{hubel1962receptive}.
Mathematically, the representations of the input data $X\in \mathbb{R}^m$ are formulated by~(\ref{equ_deep_representation}) \begin{equation} \label{equ_deep_representation} \left\{ \begin{array}{rl} R_1= & f_1(W_1X) \\ R_2 =& f_2(W_2R_1) \\ & \cdots \\ R_n =& f_n(W_nR_{n-1}) \\ R =& R_n \end{array} \right. \end{equation} where $f_1,\cdots,f_n$ denote a set of element-wise nonlinear activation functions, $W_1,\cdots, W_n$ refer to a series of connection weights and $R_1, R_2, \cdots,R_n$ are the \HL{learned} representations (output) at the depth/layer $1, 2, \cdots, $ and $n$, among which $\textbf{R}=\{R_i|2\le i\le n\}$ refers to the DRs. In addition, Fig.~\ref{fig_dl_general} shows the \HL{flowchart} of deep representation learning and its role in machine learning tasks in a general case. \begin{figure}[htp] \centering \includegraphics[width=0.7\columnwidth]{dl_general} \caption{An example to illustrate a general \HL{flowchart} of deep representation learning and its relationship to machine learning tasks.}\label{fig_dl_general} \end{figure} Obviously, multiple different DRs can be \HL{learned} by varying $n$ in~(\ref{equ_deep_representation}), but we only pay attention to the ones that give the highest performance of the associated machine learning tasks. Based on literature reviews~\cite{bengio2009learning,sunlearning,sun2015explicit}, these DRs are often called \emph{meaningful representations}. Assuming $R_j$ are the meaningful representations, it is obvious that the hyper-parameter settings (e.g., the number of layers, $j$, and the chosen activation function types of $f_1,\cdots, f_j$) and parameter values (e.g., the values of each element in $\{W_1,\cdots, W_j\}$) largely determine whether the \HL{learned} $R_j$ are meaningful. To this end, the Back-Propagation algorithm (BP)~\cite{rumelhart1988learning}, which relies on gradient information, is the most widely employed algorithm for training parameter values.
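The forward recursion in~(\ref{equ_deep_representation}) can be sketched in a few lines of NumPy; the layer widths, random weights, and $\tanh$ activations below are illustrative choices, not those of the proposed algorithm:

```python
import numpy as np

def deep_representations(X, weights, activations):
    """Equation (1): R_1 = f_1(W_1 X), R_i = f_i(W_i R_{i-1});
    returns all layer outputs R_1..R_n (R_2..R_n are the DRs)."""
    R, outputs = X, []
    for W, f in zip(weights, activations):
        R = f(W @ R)
        outputs.append(R)
    return outputs

rng = np.random.default_rng(0)
m, sizes = 8, [16, 12, 4]                 # hypothetical input dim and layer widths
dims = [m] + sizes
weights = [rng.standard_normal((dims[i + 1], dims[i]))
           for i in range(len(sizes))]
acts = [np.tanh] * len(sizes)
reps = deep_representations(rng.standard_normal(m), weights, acts)
```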
However, its performance is highly affected by the initialization because of its local search characteristics, which make it easy to be trapped in local minima~\cite{sutton1986two}. Although multiple implementations based on BP, such as Stochastic Gradient Descent (SGD), AdaGrad~\cite{duchi2011adaptive}, RMSProp~\cite{tieleman2012rmsprop}, and AdaDelta~\cite{zeiler2012adadelta}, have been presented to reduce the adverse impact of being trapped in local minima, extra hyper-parameters (such as the initialization values of momentums and the balance factors) are introduced and also need to be carefully tuned in advance. Furthermore, multiple algorithms~\cite{bergstra2011algorithms, bergstra2013making} have been proposed for optimizing the hyper-parameters, but they often require domain knowledge and are problem-dependent. Consequently, the grid search method~\cite{lerman1980fitting} keeps its dominant position for selecting reasonable hyper-parameters. However, the grid search method is an exhaustive approach, and would frequently miss the best hyper-parameter combinations when the hyper-parameters are continuous numbers. \textbf{Deep Neural Networks}~~According to \HL{literature}~\cite{bengio2007scaling,lecun2015deep}, DL algorithms mainly include Convolutional NNs (CNNs), Deep Belief Networks (DBNs), and stacked Auto-Encoders (AEs). Specifically, CNNs are supervised algorithms for DL, and their numerous variants have been developed for various real-world applications~\cite{lecun1989backpropagation,lecun1998gradient,szegedy2015going,zeiler2014visualizing,simonyan2014very,he2015deep}. Although these CNN algorithms have shown promising performance in some tasks, sufficient labeled training data, which are a must for successfully training them, are not easy to acquire. For example, in the ImageNet benchmark~\cite{deng2009imagenet}, there are $10^9$ pictures that can be easily downloaded from the Google and Yahoo websites.
It was reported that $48,940$ workers from $167$ countries were employed to label these photos. Therefore, the unsupervised NN approaches whose training processes rely solely on unlabeled data become preferable in this situation. DBNs~\cite{hinton2006reducing} and stacked AEs~\cite{bourlard1988auto,hinton1994autoencoders} are the main unsupervised DL algorithms~\cite{bengio2007scaling,lecun2015deep} for learning meaningful representations. Because the training targets are unknown during their training phase, representations \HL{learned} by them are not necessarily \emph{meaningful}. Therefore, \textit{a priori} knowledge needs to be incorporated into their training phase. For example, DBNs and stacked AEs trained with a sparsity constraint \HL{a} priori, inspired by the benefits of sparse coding~\cite{olshausen1997sparse}, have been proposed in~\cite{NIPS2007_3313} and~\cite{bengio2007greedy}. Furthermore, denoising AEs~\cite{vincent2008extracting} have been proposed by artificially adding noise to the input data for improving the ability \HL{to learn} meaningful representations. In addition, Rifai \textit{et al.}~\cite{rifai2011contractive} have presented contractive AEs by introducing a penalty term, the derivative of the representations with respect to the input data, for reducing the sensitivity \HL{a} priori of the representations. \textbf{Evolutionary Algorithms for NNs}~~Evolutionary algorithms (EAs) are one class of population-based meta-heuristic optimization paradigms, and are motivated by the metaphors of biological evolution. During the period of evolution, individuals interact with each other and the beneficial traits are passed down to facilitate the population's adaptation to the environment. Due to the nature of \emph{gradient-free} and \emph{insensitivity to local optima}, EAs are preferred in various problem domains~\cite{yao1999evolving}.
Therefore, they have been extensively employed in optimizing NNs, which refers to the discipline of neuroevolution, such as connection weight optimization~\cite{whitley1990genetic, whitley1989genitor, montana1989training} and architecture design~\cite{fahlman1989cascade,frean1990upstart,sietsma1991creating} (more examples can be found in~\cite{yao1999evolving}). Generally, these algorithms employ direct or indirect methods to encode the optimized problems for the evolution. To be specific, each parameter in the connection weights is encoded by binary numbers~\cite{whitley1990genetic} or a real number~\cite{zi2007improvement} in the direct methods, which are effective for \HL{small-scale} problems. However, when they are used to encode problems with a large number of parameters in connection weights, such as for processing high-dimensional data, these methods become impractical due to the excessive length of the genotype explicitly representing each parameter, whether coded in binary or real form. To this end, Stanley and Miikkulainen have proposed the indirect NeuroEvolution of Augmenting Topologies (NEAT) method~\cite{stanley2002evolving} for encoding connection weights and architectures with varying lengths of chromosomes. Because NEAT employs one unit to denote the combinational information of one connection in the evolved NN, it still cannot effectively handle deep NNs where a large \HL{number} of parameters exist. To this end, an improved version of NEAT (i.e., HyperNEAT) was proposed in~\cite{stanley2007compositional} in which connection weights were evolved by composing different points in a fixed coordinate system with a series of predefined nonlinear functions. Although the indirect methods can reduce the length of the genotype representation, they limit the generalization of the neural networks and the feasible architecture space~\cite{yao1999evolving}.
\HL{In 2015, Gong \textit{et al.}~\cite{gong2015multiobjective} proposed a bi-objective evolutionary algorithm by using Differential Evolution~\cite{Storn1997Differential} to concurrently consider the reconstruction error and sparsity of the AE, and chose the optimal sparsity from the knee area of the Pareto front.} Recently, Liu \textit{et al.}~\cite{liu2017structure} presented a neural network connection pruning method by a multi-objective evolutionary algorithm to simultaneously consider the representation ability and the sparse measurement. Google~\cite{real2017large} evolved CNNs for image classification in a direct manner on over $250$ high-performance \HL{servers} for more than $400$ hours. In this regard, the evolutionary approaches would \HL{surely be capable of} evolving deep NNs, although \HL{such} computational resources are not necessarily available to all interested researchers. \textbf{Contributions}~~Based on the above investigation of the prospects of unsupervised deep NNs for learning meaningful representations and of EAs in evolving deep NNs, an effective and efficient approach named Evolving Unsupervised Deep Neural Networks (EUDNN) for learning meaningful representations \HL{through evolving unsupervised deep NNs, specifically their building blocks, }is proposed in this paper. In summary, the contributions of this paper are documented as follows: \HL{\begin{enumerate} \item A computationally efficient gene encoding scheme of evolutionary approaches has been suggested, which is capable of evolving deep neural networks with a large number of parameters for addressing high-dimensional data with limited computational resources. With this design, the proposed algorithm can be smoothly implemented in academic environments with limited computational resources.
\item A fitness evaluation strategy has been employed to drive the unsupervised models towards usefulness in advance, which can drive the \HL{learned} representations to be meaningful without any carefully designed \HL{a} priori knowledge. \item Deep neural networks with a large number of parameters involve a large-scale global optimization problem. As a result, \HL{a} purely evolutionary scheme cannot generate the best results. To this end, a local search strategy is incorporated into the proposed algorithm to guarantee the \HL{desired} performance. \end{enumerate}} \textbf{Organization}~~The remainder of this paper is organized as follows. First, related works and motivations of the proposed EUDNN are illustrated in Section~\ref{section_2}. Next, the details and discussions of the proposed algorithm are presented in Section~\ref{section_3}. To evaluate the performance, a series of experiments are performed by the proposed algorithm against selected peer competitors and the results measured by the chosen performance metric are analyzed in Section~\ref{section_4}. Finally, conclusions and future work are drawn in Section~\ref{section_5}. \section{Related works and Motivations} \label{section_2} We will detail the unsupervised DL models that motivate our work in this paper, highlight their deficiencies in learning \emph{meaningful} representations, and rationalize our motivations in Subsection~\ref{section_2_1}. In the same manner, the evolutionary algorithms which demonstrate the potential for evolving deep NNs will be documented in Subsection~\ref{section_2_2}. \subsection{Unsupervised Deep Learning Models} \label{section_2_1} In this subsection, the unsupervised DL models are reviewed first (Subsection~\ref{section_2_1_1}). Then, their building blocks are introduced (Subsection~\ref{section_2_1_2}).
Next, the mechanisms guaranteeing the \HL{learned} representations to be meaningful are formulated and discussed (Subsection~\ref{section_2_1_3}). Finally, the motivations of the proposed algorithm in reducing the adverse impact of their deficiencies are elaborated (Subsection~\ref{section_2_1_4}). \subsubsection{}\label{section_2_1_1} Unsupervised DL models cover DBNs~\cite{hinton2006reducing} and variants of stacked AEs (i.e., stacked sparse AEs (SAEs)~\cite{NIPS2007_3313,bengio2007greedy}, stacked denoising AEs (DAEs)~\cite{vincent2008extracting}, and stacked contractive AEs (CAEs)~\cite{rifai2011contractive}). Moreover, the building block of DBNs is a Restricted Boltzmann machine (RBM)~\cite{smolensky1986information}, and that of stacked AEs is an AE. Furthermore, the parameter values in DBNs and stacked AEs are optimized by the greedy layer-wise training method, which is composed of two phases~\cite{hinton2006fast}: pre-training and fine-tuning. For convenience, Fig.~\ref{fig_dl_training_1} depicts the pre-training phase, where a set of three-layer (the input layer, the hidden layer, and the output layer) NNs with varying numbers of units are individually trained by minimizing reconstruction errors. In the fine-tuning phase, which is illustrated by Fig.~\ref{fig_dl_training_2}, these hidden layers are first sequentially stacked together with the parameter values trained in the pre-training phase, then a classification layer (i.e., the classifier) is added to the \HL{tail} to perform the fine-tuning by optimizing the corresponding loss function determined by the particular task at hand.
\begin{figure}[htp] \begin{center} \subfloat[pre-training]{\includegraphics[width=0.8\columnwidth]{dl_pre_training}\label{fig_dl_training_1}} \hfil \subfloat[fine-tuning]{\includegraphics[width=0.8\columnwidth]{dl_fine_tuning}\label{fig_dl_training_2}} \caption{\HL{The training} process of unsupervised deep neural networks.} \label{fig_dl_training} \end{center} \end{figure} \subsubsection{}\label{section_2_1_2} Unsupervised DL algorithms are considerably preferred mainly due to their requiring less labeled data, especially in the current Big Data era\footnote{Even though data are abundant in the Big Data era, most raw data collected are unlabeled for a classification task, e.g., the ImageNet classification benchmark that has been discussed in Section~\ref{section_1}.}. However, a major issue of training these models is how to guarantee the \HL{learned} representations to be meaningful. Specifically, in the pre-training phase for training one NN unit (see Fig.~\ref{fig_dl_unit} as an example), let $X\in R^n$ denote the input data, \HL{$W\in R^{k\times n}$} denote the connection weight matrix from the input layer to the hidden layer, and $W'\in R^{n\times k}$ denote the connection weight matrix from the hidden layer to the output layer. The NN unit is trying to minimize the reconstruction error $L$ between the input data $X$ and the output data $X'$ by~(\ref{equ_dl_unit})\footnote{Bias terms, which are another kind of connection weights widely existing in NNs, are incorporated into $W$ and $W'$ here for simplicity.} \begin{figure}[htp] \centering \includegraphics[width=0.6\columnwidth]{dl_unit}\\ \caption{An example of unsupervised deep neural network unit model.}\label{fig_dl_unit} \end{figure} \begin{equation} \label{equ_dl_unit} \left\{ \begin{array}{lll} R &= &f(WX) \\ X' &= &f(W'R) \\ L &= &l(X,X') \end{array} \right.
\end{equation} In~(\ref{equ_dl_unit}), $R$ denotes the \HL{learned} representations (i.e., the output of the hidden layer), $f$ denotes the activation function, and $l$ denotes the function to measure the differences between $X$ and $X'$. \HL{ \subsubsection{}\label{section_2_1_3} It is obvious that the \HL{learned} representations $R$ are not necessarily meaningful merely by minimizing $L$, because no information about the associated classification task exists in this phase and an arbitrary $R$ can lead to a minimal $L$; $R$ is meaningful only when it improves the performance of the associated classification task. To this end, \HL{prior works} have presented unsupervised DL algorithms with different \HL{a} priori knowledge~\cite{olshausen1997sparse,bengio2007greedy,vincent2008extracting,rifai2011contractive}, which is denoted as $\Theta$, and then the reconstruction error is transformed to $L=l(X,X')+\lambda \Theta$ where $\lambda$ denotes a balance factor to determine the weight of the associated \HL{a} priori term. Although \HL{a} priori knowledge can help the \HL{learned} representations to be meaningful, major issues remain: \begin{itemize} \item The prior knowledge is designed with different assumptions, which do not necessarily satisfy the current situations. \item The prior knowledge is presented for general tasks, whereas one hopes to improve the performance on particular tasks. \item It is difficult to choose the most suitable a priori term for the current task. \item The balance factor $\lambda$ is a hyper-parameter whose value is not easy to assign~\cite{rifai2011contractive}. \end{itemize} } \HL{ \subsubsection{} \label{section_2_1_4} Considering this problem, the method developed in our previous work~\cite{sun2015explicit} is employed in the proposed algorithm.
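For concreteness, the single building block of~(\ref{equ_dl_unit}) can be sketched in NumPy; the squared error is one possible choice of $l$, and the sizes, weight scales, and $\tanh$ activation are illustrative assumptions:

```python
import numpy as np

def ae_unit(X, W, W_prime, f=np.tanh):
    """One AE building block, equation (2): R = f(W X), X' = f(W' R),
    with L the squared reconstruction error (one choice of l)."""
    R = f(W @ X)
    X_rec = f(W_prime @ R)
    L = float(np.sum((X - X_rec) ** 2))
    return R, X_rec, L

rng = np.random.default_rng(1)
n, k = 10, 4                              # input and hidden sizes (hypothetical)
X = np.tanh(rng.standard_normal(n))       # keep targets inside tanh's range
W = 0.1 * rng.standard_normal((k, n))     # W in R^{k x n}
W_prime = 0.1 * rng.standard_normal((n, k))  # W' in R^{n x k}
R, X_rec, L = ae_unit(X, W, W_prime)
```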
To be specific, a small proportion of labeled data is employed during the fitness evaluation of EAs, and the \HL{learned} representations are directly quantified based on the classification task that is employed in the fine-tuning phase. With the environmental selection in EAs, individuals that have a positive effect on the classification task survive into the current generation and are \HL{expected} to generate offspring with better performance in the next generation, which \HL{ultimately} leads to meaningful \HL{learned} representations. Because the employed labeled data can be borrowed from the fine-tuning phase, and the classification task is the same as that in the fine-tuning phase, this strategy for learning meaningful representations does not introduce extra cost. } \subsection{Evolutionary Algorithms for Evolving Neural Networks} \label{section_2_2} Although multiple related \HL{works} on evolving NNs have been mentioned in Section~\ref{section_1}, only the works in~\cite{stanley2002evolving,stanley2007compositional} (i.e., NEAT and HyperNEAT) are considered here because our proposed algorithm aims at evolving \emph{deep} NNs\footnote{The works in~\cite{ whitley1990genetic, whitley1989genitor, montana1989training,fahlman1989cascade,frean1990upstart,sietsma1991creating,yao1999evolving} were proposed two decades ago and cannot be applied to deep NNs, the work in~\cite{liu2017structure} concerned only weight pruning, and the work in~\cite{real2017large} employed a direct way of evolving and is not generally applicable.}. In the following, the details of NEAT\HL{,} as well as HyperNEAT, and their deficiencies in evolving deep NNs are documented in Subsections~\ref{section_2_2_1} and~\ref{section_2_2_2}, respectively. Combined with the challenge of EAs in evolving deep NNs, i.e., the upper bound encoding problem, the motivations of the proposed EUDNN are presented in Subsection~\ref{section_2_2_3}.
In addition, another challenge, i.e., that EAs cannot effectively solve optimization problems with a large number of parameters, is addressed, and the corresponding motivations are given in Subsection~\ref{section_2_2_4}. \subsubsection{}\label{section_2_2_1} NEAT~\cite{stanley2002evolving} was proposed with a method for adaptively increasing the complexity of the evolved NNs. Specifically, two types of genes, i.e., node genes and connection genes, exist in NEAT. The node genes, which are used to represent all the units of the evolved NN, are encoded with the type of the unit (i.e., the input unit, the hidden unit, or the output unit) and one identification number. The connection genes are employed to denote the connection information between node genes, and each connection gene is encoded with five elements (the numbers of the input and output units, the value of the connection, one bit indicating whether the connection is activated or not, and one innovation number which records the index of the connection gene in an increasing manner). During the evolution process, the individuals are first initialized only with the input and output units of the network, and random connections between these units. Then, individuals are recombined and mutated. To be specific, there are two types of mutations: connection mutations and node mutations. When a connection mutation occurs, one connection gene is added to the list of the connection genes to denote that a pair of node genes is connected. For a node mutation, one hidden node is generated, and the corresponding connection genes are created to split one existing connection into two parts. Although NEAT is flexible in evolving NNs, the number of output units must be determined in advance, which is impractical in DL.
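The NEAT genotype just described can be sketched as follows (a minimal illustration; the class and function names are ours, not taken from any NEAT implementation):

```python
from dataclasses import dataclass

@dataclass
class NodeGene:
    node_id: int
    node_type: str  # 'input', 'hidden', or 'output'

@dataclass
class ConnectionGene:
    in_node: int     # number of the input unit
    out_node: int    # number of the output unit
    weight: float    # value of the connection
    enabled: bool    # whether the connection is activated
    innovation: int  # index assigned in an increasing manner

def add_node_mutation(connections, nodes, split_idx, innovation):
    """Node mutation: split one existing connection into two parts."""
    old = connections[split_idx]
    old.enabled = False  # the split connection is deactivated
    new_id = max(n.node_id for n in nodes) + 1
    nodes.append(NodeGene(new_id, 'hidden'))
    # in -> new carries weight 1.0, new -> out keeps the old weight
    # (following the NEAT convention); innovation numbers keep increasing.
    connections.append(ConnectionGene(old.in_node, new_id, 1.0, True, innovation))
    connections.append(ConnectionGene(new_id, old.out_node, old.weight, True, innovation + 1))
    return innovation + 2

# Minimal genome: one input, one output, one connection between them.
nodes = [NodeGene(0, 'input'), NodeGene(1, 'output')]
conns = [ConnectionGene(0, 1, 0.5, True, 0)]
next_innovation = add_node_mutation(conns, nodes, 0, 1)
```

Because every unit and connection is an explicit gene, the genome grows linearly with the network, which is exactly the scaling problem for deep NNs noted next.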
Furthermore, because each connection and unit in NEAT is explicitly encoded, NEAT is not suitable for evolving deep NNs, which often have a large number of connections and units. To remedy this deficiency regarding the incapacity of evolving deep NNs, the connective compositional pattern producing network (CPPN)~\cite{stanley2006exploiting, stanley2007compositional} has been presented, leading to HyperNEAT. \subsubsection{}\label{section_2_2_2} HyperNEAT has been proposed by combining NEAT with the CPPN encoding scheme. Particularly, the CPPN employs one low-dimensional coordinate system and a list of predefined nonlinear functions to generate connections for NEAT. To be specific, any point in the coordinate system is picked up, and then fed into a series of compositional functions from the list to complete the transformation from the genotype to the phenotype. Because any number of points can be selected from the low-dimensional coordinate system, numerous connections can be represented with a low computational cost. In this regard, HyperNEAT has the most potential for evolving a deep NN, but the size of the output still needs to be set in advance, which \HL{faces} the same problem as NEAT in practice. Furthermore, all the values of the connections in HyperNEAT are generated by the genetic operators during the evolution, which cannot guarantee the best performance in evolving a deep NN due to the large-scale nature of the resulting global optimization problem. In addition, recurrent connections and connections within the same layer are allowed in this algorithm, which \HL{is} also not suitable for learning compact meaningful representations. \subsubsection{}\label{section_2_2_3} As we have discussed in Section~\ref{section_1}, the performance of DL algorithms is highly affected by the hyper-parameter settings and the parameter values. In the pre-training phases, one of the key hyper-parameters is the size of the hidden layers.
One problem naturally arises when EA approaches are employed to search for these sizes: how can we set the upper bound of the hidden layer sizes, given a fixed-length gene encoding strategy? Although the indirect encoding scheme can alleviate this situation somewhat, it limits the generalization of the evolved NNs and the feasible architecture space~\cite{whitley1990genetic}. On the other hand, if we employ a large number as the upper bound, it is difficult to determine how large is reasonable, because too large a number would consume more computational resources, while too small a number would deteriorate the model performance. Excitingly, Yang \emph{et al.}~\cite{yang2005kpca} have mathematically pointed out that the meaningful representations of the input data lie in its original space. Supposing that the input data is $n$-dimensional, the size of \HL{the} associated hidden layer should be no more than $n$. Furthermore, we know that $n$ orthogonal $n$-dimensional basis vectors are sufficient to span an $n$-dimensional space \HL{based on Theorem~\ref{the_1}}. Consequently, we only need to compute one basis vector $r_1$ of the $n$-dimensional space, and the other $(n-1)$ $n$-dimensional basis vectors can be explicitly computed by~(\ref{equ_null_space}) to find the null space\HL{\footnote{\HL{Theoretically, multiple solutions could be found in computing the bases of the null space. In practice, we only accept the orthonormal basis for the corresponding null space obtained from the singular value decomposition.}}}. To this end, we can efficiently model the problem with $n^2$ parameters by employing a genetic algorithm to explicitly encode about $n$ parameters, which is a computationally efficient gene encoding approach. \HL{ \begin{thm} \label{the_1} A set of $n$ nonzero orthogonal vectors $b_i\in R^n$ ($i=1,\cdots, n$) spans the space $R^n$.
\end{thm} } \begin{equation} \label{equ_null_space} \text{null space}(r_1) = \{x\in R^n \mid r_1^Tx=0\} \end{equation} \subsubsection{}\label{section_2_2_4} Here, we point out another challenge that inspires our motivation for evolving deep NNs by employing GAs. In our proposed algorithm, the computationally efficient gene encoding strategy mentioned above is employed to model unsupervised deep NNs in which a large number of parameters exist. Although the length of the encoded parameters is reduced appreciably in this regard, the number of parameters in the original problem remains constant no matter what encoding method is employed. In fact, the effect of one gene in the employed encoding strategy is equivalent to that of multiple parameters in the original problem. For example, for \HL{an} NN which has $100,000$ parameters, only $1,000$ genes are employed by the computationally efficient gene encoding strategy proposed herein. As a result, one gene represents $100$ parameters on average, and if one gene is changed by the crossover and mutation operators, this will involve changes of $100$ parameters. Moreover, it is well known that the performance of EAs is guaranteed by their exploration search (given by mutation operators) and exploitation search (given by crossover operators), which introduce the global search and local search abilities, respectively. Because a slight change of one gene in the proposed algorithm will lead to changes of many parameters, which affect the global behavior, the EA can be viewed as lacking local search ability on the problem to be solved. In addition, the data processed by DL algorithms is \HL{commonly} high-dimensional, which leads to a large number of decision variables in the encoded chromosomes of EAs, even though our employed encoding strategy saves much space compared to existing approaches.
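The null-space construction in~(\ref{equ_null_space}), with the orthonormal basis taken from the singular value decomposition as mentioned in the footnote above, can be sketched with NumPy (a minimal illustration; the function name is ours):

```python
import numpy as np

def null_space_basis(r1):
    """Return an orthonormal basis of {x in R^n : r1^T x = 0} via the SVD.
    r1 is a nonzero n-vector; the result is an n x (n-1) matrix whose
    columns, together with r1, span R^n."""
    r1 = np.asarray(r1, dtype=float).reshape(1, -1)
    # SVD of the 1 x n matrix r1: the rows of Vt beyond the first
    # (the rank is 1) form an orthonormal basis of the null space.
    _, _, vt = np.linalg.svd(r1)
    return vt[1:].T

r1 = np.array([1.0, 2.0, 3.0])
B = null_space_basis(r1)
# Every basis vector is orthogonal to r1, and the basis is orthonormal.
assert np.allclose(r1 @ B, 0)
assert np.allclose(B.T @ B, np.eye(2))
```

This is the sense in which one explicitly encoded vector $r_1$ determines the remaining $n-1$ basis vectors for free.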
Extensive experiments have shown that EAs have difficulty reaching the best performance on problems with high input dimensions. To address this issue, we incorporate a local search strategy into the proposed algorithm to assure a desirable performance. In summary, the difficulties of unsupervised deep NNs in learning meaningful representations and of EAs in evolving deep NNs have first been clarified, and then addressed by our motivations in this section. In the next section, the technical details will be implemented based on these motivations. \section{Proposed Algorithm} \label{section_3} In this section, the details of the proposed EUDNN are presented. To be specific, the framework, which is composed of two distinct stages, is depicted first (Subsection~\ref{section3_1}). Next, the specifics of each stage are elaborated (Subsections~\ref{section3_2} and~\ref{section3_3}). Furthermore, the over-fitting prevention mechanism of EUDNN and the significant differences from its peer competitor are discussed (Subsection~\ref{section3_4}). \begin{figure*} \centering \includegraphics[width=0.85\textwidth]{flow_chart}\\ \caption{The \HL{flowchart} of the proposed algorithm that is composed of two distinct stages. Especially, the first stage is for finding optimal architectures as well as \HL{desirable} initializations of the connection weight parameter values. The second stage is to fine-tune them for a \HL{potentially} better performance.}\label{fig_flowchart} \end{figure*} \subsection{Framework of EUDNN} \label{section3_1} In this subsection, the framework of the proposed EUDNN is presented.
For convenience of the development, it is assumed that the \HL{learned} representations are for a classification task in which \emph{meaningful} representations can improve its performance in terms of a higher Correct Classification Rate (CCR) (the CCR upon the training data is collected during the training/optimization phase, and that upon the test data during the test/experimental phase). Moreover, given a set of data $D$ in this classification task, a portion of $D$, denoted by $D_{train}=\{(x_1,y_1),\cdots,(x_k,y_k)\}$, is considered as the training data, in which $x_i$ denotes the input data and $y_i$ is the corresponding label, while the remaining data is regarded as the test data $D_{test}$ for checking whether the \HL{learned} representations are meaningful. Furthermore, the \HL{flowchart} of the proposed EUDNN is illustrated in Fig.~\ref{fig_flowchart}, which clearly shows the two stages of the design: 1) finding the optimal architectures of deep NNs, the desirable initialization of the connection weights, and the activation functions (pre-training), and 2) fine-tuning all of the parameter values in the connection weights from the desirable initialization. \begin{algorithm} \caption{Framework of the Proposed EUDNN} \label{alg_the_proposed_algorithm} \KwIn{Training data $D_{train}$; maximum number $p$ of layers; classifier $C(\cdot)$; test data $D_{test}$.} \KwOut{Predicted labels of $D_{test}$.} $i\leftarrow 0$;\\ \While{$i<p$} { \label{alg_framework_stage1_begin} $i\leftarrow i+1$;\\ $W_i, f_i(\cdot) \leftarrow$ Obtain the optimal connection weight and the corresponding activation function via evolving;\\ } \label{alg_framework_stage1_end} Fine-tune all the connection weights $W_1,\cdots, W_p$;\\ \label{alg_framework_stage2} $Y_{test}=C(f_p(W_p\times \cdots f_2(W_2\times f_1(W_1\times D_{test}))))$;\\ \label{alg_framework_predict_label} \textbf{Return} $Y_{test}$.
\label{alg_framework_return_label} \end{algorithm} To this end, one genetic approach with the efficient strategy introduced in Subsection~\ref{section_2_2} is employed to encode the potential architectures and the associated large numbers of parameters in connection weights by a set of individuals, and then the EA is utilized to evolve and select the individual that has the best performance based on the fitness measure. To warrant that the \HL{learned} representations are meaningful, the method introduced in Subsection~\ref{section_2_1} is employed, i.e., a small part of data $D_f$ from $D_{train}$ is randomly selected, the representations of $D_f$ are \HL{learned} based on the models encoded by the individuals, and then \HL{they} are fed to the associated classification task so as to select the individuals that give a higher CCR for further evolution. Based on the investigations in Subsection~\ref{section_2_2}, a fine-tuning approach, which introduces exploitation (local search), is additionally utilized in the second stage to achieve the best performance ever found, complementing the exploration (global search) in the first stage. In summary, these two stages collectively ensure that meaningful representations are \HL{learned} through unsupervised deep NNs. In addition, the framework of the proposed EUDNN is presented in Algorithm~\ref{alg_the_proposed_algorithm}. Specifically, lines~\ref{alg_framework_stage1_begin}-\ref{alg_framework_stage1_end} describe the first stage, while line~\ref{alg_framework_stage2} defines the second stage. Finally, the predicted labels of the test data are calculated and returned in lines~\ref{alg_framework_predict_label} and~\ref{alg_framework_return_label}. Next, the details of these two stages are documented, respectively.
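The stacked forward pass used to predict the labels in Algorithm~\ref{alg_the_proposed_algorithm}, i.e., $f_p(W_p\times \cdots f_2(W_2\times f_1(W_1\times X)))$ before the classifier $C(\cdot)$ is applied, can be sketched as follows (a minimal illustration with toy weights of our own choosing):

```python
import numpy as np

def forward(X, weights, activations):
    """Apply the stacked layers: R = f_p(W_p ... f_2(W_2 f_1(W_1 X))).
    X holds one sample per column; weights/activations are per-layer lists."""
    R = X
    for W, f in zip(weights, activations):
        R = f(W @ R)
    return R

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
relu = lambda z: np.maximum(z, 0.0)

X = np.random.default_rng(0).standard_normal((4, 3))   # 4-dim data, 3 samples
W1 = np.full((4, 4), 0.1)                              # toy weights only
W2 = np.full((2, 4), 0.1)
R = forward(X, [W1, W2], [relu, sigmoid])
assert R.shape == (2, 3)
```

The classifier $C(\cdot)$ would then be applied to `R` to obtain the predicted labels.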
\subsection{Obtaining Optimal Connection Weights and Activation Functions via Evolving} \label{section3_2} The process of obtaining all the optimal connection weights and their corresponding activation functions consists of a series of repeated subprocesses. In this subsection, we first propose, in Algorithm~\ref{alg_obtain_transformation_matrix}, how to obtain one optimal connection weight and its activation function. Then, the entire process is described. \begin{algorithm} \caption{Obtain the Optimal Connection Weight and Activation Function} \label{alg_obtain_transformation_matrix} \KwIn{Input data; size of population $m$; probability of crossover $\rho$; probability of mutation $\mu$.} \KwOut{Optimal connection weight $W$; activation function $f(\cdot)$.} Initialize the population $P$ with the size $m$;\\ \label{alg_obtain_transformation_matrix_line1} \While{stopping criteria are not satisfied} { \label{alg_otme_begin} Evaluate the fitness of individuals in $P$;\\ \label{alg_obtain_transformation_matrix_line2} $Q\leftarrow$ Generate new offspring with the probability $\rho$ from two parents selected with binary tournament selection;\\ \label{alg_obtain_transformation_matrix_line3} $Q\leftarrow$ Mutate all the individuals in $Q$ with the probability $\mu$;\\ \label{alg_obtain_transformation_matrix_line4} $S\leftarrow$ Select the individual with the best fitness from $P\cup Q$;\\ \label{alg_obtain_transformation_matrix_line5} $P\leftarrow$ $S~\cup$~Select $(m-1)$ individuals from $(P\cup Q)\setminus S$ \HL{with binary tournament selection}; \label{alg_obtain_transformation_matrix_line6} } \label{alg_otme_end} Evaluate the fitness of the individuals in $P$;\\ \label{alg_obtain_transformation_matrix_line7} $ind_{best}\leftarrow$ Select the individual with the best fitness from $P$;\\ \label{alg_obtain_transformation_matrix_line8} \textbf{Return} $W$ and $f(\cdot)$ represented by $ind_{best}$.
\end{algorithm} To be specific, in Algorithm~\ref{alg_obtain_transformation_matrix}, $m$ individuals that encode the information of potential optimal connection weights and their corresponding activation functions are initialized first (line~\ref{alg_obtain_transformation_matrix_line1}). Then, the evolution takes effect (lines~\ref{alg_otme_begin}-\ref{alg_otme_end}) until the stopping conditions, such as exceeding the maximum number of generations, are met. During each generation, the fitness of all the individuals is evaluated first (line~\ref{alg_obtain_transformation_matrix_line2}). Next, new offspring are generated with the probability $\rho$, and their parents are selected from $P$ with the binary tournament selection (line~\ref{alg_obtain_transformation_matrix_line3}). Then, all the offspring in $Q$ are mutated with the probability $\mu$ (line~\ref{alg_obtain_transformation_matrix_line4}). Furthermore, lines~\ref{alg_obtain_transformation_matrix_line5}-\ref{alg_obtain_transformation_matrix_line6} describe the environmental selection, in which the best individual is preserved first for the elitism\HL{, and then $m-1$ individuals are selected from the remaining solutions in $P\cup Q$ with binary tournament selection. Specifically, two individuals are randomly selected from $(P\cup Q)\setminus S$ first. Then the one with the better CCR is chosen, and the other is put back. With the same process, this operation is repeated $m-1$ times}. When the evolution terminates, the best solution is selected from the current population for decoding the optimal connection weight and the activation function (lines~\ref{alg_obtain_transformation_matrix_line7}-\ref{alg_obtain_transformation_matrix_line8}). Next, the details of the employed gene encoding strategy will be discussed, although its fundamental principles have been documented in Subsection~\ref{section_2_2}.
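The environmental selection just described can be sketched as follows (a minimal illustration; we read the text as removing the tournament winner from the candidate pool while the loser is put back):

```python
import random

def environmental_selection(pop, fitness, m, rng=None):
    """Environmental selection of Algorithm 2: keep the best individual
    (elitism), then fill the remaining m-1 slots by binary tournament over
    the rest of P ∪ Q; the winner is chosen, the loser is put back."""
    rng = rng or random.Random(0)
    best = max(pop, key=fitness)
    rest = [ind for ind in pop if ind is not best]
    selected = [best]
    for _ in range(m - 1):
        a, b = rng.sample(rest, 2)            # two random candidates
        winner = a if fitness(a) >= fitness(b) else b
        rest.remove(winner)                   # the winner is chosen
        selected.append(winner)               # the loser stays in the pool
    return selected

# P ∪ Q as ten individuals with fitness equal to their value.
survivors = environmental_selection(list(range(10)), lambda x: x, 5)
assert survivors[0] == 9                      # elitism: best is preserved
assert len(survivors) == 5
```

The elitist slot guarantees the best-found solution is never lost between generations, while the tournaments keep selection pressure moderate.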
It has been pointed out in~\cite{yang2005kpca} that the potential connection weight for obtaining the meaningful representations likely lies in a subspace of the original space. As a consequence, the search for the optimal connection weight can be constrained to the space of the input data. Specifically, it is assumed that the input data is $n$-dimensional. First, a set of basis vectors $S=[s_1,\cdots,s_n]$ which can span an $n$-dimensional space is given, e.g., any $n$ linearly independent $n$-dimensional vectors. Then the vector $a_1$ is formed as a linear combination of the basis vectors in $S$ with the coefficients $b=[b_1,\cdots,b_n]$ \HL{that are randomly specified}. Next, the orthogonal complements $\{a_2,\cdots,a_n\}$ of $a_1$ are computed by~(\ref{equ_null_space}). It is obvious that $\{a_1,a_2,\cdots,a_n\}$ are capable of spanning the space of the input data. Finally, a part of these basis vectors, which spans a subspace of the original space, is selected for constructing the optimal connection weight via a binary encoded string indicating whether the corresponding basis vector is selected. Furthermore, the corresponding activation function is also encoded into the chromosome. Specifically, a list of candidate activation functions with different nonlinear capacities is given, and their indexes in this list are used to indicate which one is selected. Moreover, Fig.~\ref{fig_encode_transformation_matrix} is provided to intuitively illustrate our intention of efficiently encoding the connection weight and activation function. When the optimal connection weight $W_i$ and its corresponding activation function $f_i$ have been found for the $i$-th layer with Algorithm~\ref{alg_obtain_transformation_matrix}, those for the $(i+1)$-th layer can be optimized with the same algorithm by setting the input data to $f_i(W_i\times R_i)$, where $R_i$ denotes the representations at the $i$-th layer.
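Decoding one individual into a connection weight and an activation function, following the construction just described, can be sketched as follows (a minimal illustration; normalizing $a_1$ so that the selected rows stay orthonormal is our own choice, and the names are ours):

```python
import numpy as np

def decode_individual(S, coeffs, mask, act_idx, activations):
    """Decode an individual: a_1 = S @ coeffs, its orthogonal complement
    a_2..a_n from the SVD null space, and W built from the rows whose
    mask bit is 1; the activation is picked by its index in the list."""
    a1 = S @ coeffs
    a1 = a1 / np.linalg.norm(a1)               # keep a_1 unit length
    _, _, vt = np.linalg.svd(a1.reshape(1, -1))
    basis = np.vstack([a1, vt[1:]])            # rows: a_1, a_2, ..., a_n
    W = basis[np.asarray(mask, dtype=bool)]    # selected rows -> k x n
    return W, activations[act_idx]

n = 4
S = np.eye(n)                                  # any n independent vectors
coeffs = np.array([1.0, 1.0, 0.0, 0.0])        # the randomly specified b
mask = [1, 1, 0, 1]                            # a_1, a_2, a_4 selected
acts = [np.tanh, lambda z: np.maximum(z, 0)]
W, f = decode_individual(S, coeffs, mask, 0, acts)
assert W.shape == (3, n)
assert np.allclose(W @ W.T, np.eye(3))         # rows remain orthonormal
```

The number of selected rows determines the hidden layer size, which is how the architecture search is folded into the same chromosome.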
\begin{figure} \centering \includegraphics[width=\columnwidth]{state1}\\ \caption{A \HL{flowchart} describing the process of encoding the potential connection weight and activation function. First, a set of basis vectors $S$ is given in the original $n$-dimensional space. Then, a set of coefficients $b$ is generated to represent the vector $a_1$ by linearly combining the basis vectors. Next, the orthogonal complements $\{a_2,\cdots,a_n\}$ of $a_1$ are computed. Finally, all the information for computing $a_1$, for indicating whether each basis vector from $\{a_2, \cdots, a_n\}$ is selected, and for the activation function is encoded into the chromosomes that are evolved to obtain the optimal connection weight and activation function.}\label{fig_encode_transformation_matrix} \end{figure} \begin{figure*}[htp] \centering \includegraphics[width=\textwidth]{state2}\\ \caption{The \HL{flowchart} of the second stage in the proposed EUDNN. Especially, the predicted label is computed with the connection weights and activation functions for the input data. Then the loss of the classifier is formulated between the predicted label and the true label. Next, the error is back propagated and the parameter values of the connection weights are updated. }\label{fig_finetune} \end{figure*} In the employed gene encoding approach, each coefficient \HL{of} $b$ is represented with nine bits, in which the \HL{leftmost} bit denotes the sign of the coefficient. Then, one bit is used to indicate whether the basis vector $a_j$ ($j\in[2,\cdots,n]$) is selected for the connection weight. Finally, two bits are utilized to represent the activation function. In addition to the well-adopted sigmoid and hyperbolic tangent functions, the rectifier function~\cite{glorot2011deep}, which has recently been reported to have superior performance in some applications, is also considered as a candidate. As a consequence, one chromosome needs $10n+1$ bits for the $n$-dimensional input data.
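The $10n+1$ bit budget can be checked directly: $9n$ bits for the coefficients, $n-1$ bits for the basis selection, and $2$ bits for the activation function. The following sketch also decodes one 9-bit coefficient; the scaling of the 8 magnitude bits to $[0,1]$ is our assumption, as the text does not specify it:

```python
def chromosome_length(n):
    """9 bits per coefficient, one bit per candidate basis a_2..a_n,
    and 2 bits for the activation: 9n + (n-1) + 2 = 10n + 1."""
    return 9 * n + (n - 1) + 2

def decode_coefficient(bits):
    """Decode one 9-bit gene: the leftmost bit is the sign, the remaining
    8 bits a magnitude (here scaled to [0, 1]; an assumed convention)."""
    sign = -1.0 if bits[0] else 1.0
    magnitude = int(''.join(str(b) for b in bits[1:]), 2) / 255.0
    return sign * magnitude

assert chromosome_length(784) == 7841      # e.g. a 28x28 image input
assert decode_coefficient([0] + [1] * 8) == 1.0
assert decode_coefficient([1] + [1] * 8) == -1.0
```

Compare this with a direct encoding of the $n\times n$ weight matrix, which would need on the order of $n^2$ values.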
If the real-number encoding method were employed here, about eight times as much memory would be taken, which is the major reason that the proposed EUDNN employs the binary encoding method; this is \HL{a} contribution to the claimed computationally efficient gene encoding strategy. Furthermore, the linear Support Vector Machine (SVM)~\cite{cortes1995support} is employed for evaluating the quality of individuals due to its promising computational efficiency and its linear nature, which gives better discriminating power for judging whether the \HL{learned} representations are meaningful or not. \HL{Next, we will give the details of the fitness evaluation by using the SVM based on the design principle described in Subsection~\ref{section_2_1_4}. For convenience of the development, let $D_{train}=\{X_{train}, Y_{train}\}$ denote the training set, where $X_{train}$ are the data and $Y_{train}$ are the corresponding labels, and let the selected individual for fitness evaluation be denoted by $ind_{i}$. Firstly, a small fraction of data denoted by $D_{eval}=\{X_{eval}, Y_{eval}\}$ is randomly selected from $D_{train}$. Secondly, the corresponding model is decoded from the encoded individual $ind_{i}$. Thirdly, the representations (denoted by $F_{eval}$) of $X_{eval}$ are calculated based on the formulas in (\ref{equ_deep_representation}). Fourthly, $\{F_{eval}, Y_{eval}\}$ are fed to the SVM and the CCR on $X_{eval}$ is estimated. Finally, the CCR is used as the fitness of $ind_i$.} \subsection{Fine-tuning Connection Weights} \label{section3_3} \HL{To further improve the performance, an exploitation mechanism implemented by a local search strategy is incorporated into the second stage to fine-tune the parameter values in the connection weights. In this stage, the architecture is fixed with the evolved activation functions and the initialization values of the connection weights, and then a local search method is used to tune the connection weights further. Fig.~\ref{fig_finetune} shows an example of this process.
Specifically, when all the connection weights and activation functions have been optimized in the first stage, all the hidden layers are connected into a list based on their order in the first stage, with one input layer added at the head of this list. Then, the connection weights in this list are initialized with the values confirmed in the first stage. Finally, a classifier is added to the tail of this list to perform the fine-tuning process. Note here that the BP algorithm is employed for the fine-tuning. Actually, any local search algorithm can be used in the second stage. The reasons \HL{for} employing BP are largely due to two aspects: 1) the gradient of the loss function is always analytically available, and BP, which is based on the gradient, is naturally employed in most designs; 2) multiple libraries of BP have been implemented for accelerating the computation with Graphics Processing Units (GPUs), and the computational cost can be reduced remarkably, especially in the situations of processing high-dimensional data. Furthermore, when the rectifier activation function, which is not differentiable at the point $0$, is selected, the derivative value $0$ is assigned there according to the convention of the community~\cite{maas2013rectifier}.} \subsection{Discussions} \label{section3_4} In this subsection, we mainly discuss the over-fitting prevention mechanism utilized by the proposed EUDNN, and the significant differences of the proposed EUDNN from the Direct Evolutionary Feature Extraction algorithm (DEFE)~\cite{zhao2006direct}, which employs a gene encoding strategy similar to that of EUDNN. The over-fitting problem implies a poor generalization ability of models, i.e., the trained model reaches a better CCR upon the training data at the cost of a worse CCR upon the test data. Because the goal in training a classification model is to obtain a higher CCR upon the test data, the over-fitting problem should be prevented by some mechanisms.
Commonly, given a number of models which are all capable of solving a particular classification task, the model with a smaller Vapnik Chervonenkis (VC) dimension\footnote{Generally, the VC dimension can be viewed as an indicator measuring the complexity of multiple models which are capable of solving one particular task~\cite{Vapnik1997The}. The smaller the VC dimension, the simpler the corresponding model, and a simpler model has better generalization~\cite{Vapnik2010Statistical}. Commonly, the number and magnitude of the elements in the transformation matrices are positively correlated with the VC dimension.}~\cite{bengio2009learning} usually has a better generalization ability, which does not lead to an over-fitting problem. Because the number of parameters \HL{is} positively correlated with the VC dimension, and deep NN architectures generally have \HL{a} large number of parameters, the over-fitting problem easily \HL{occurs} in these models. \begin{figure}[htp] \centering \includegraphics[width=0.85\columnwidth]{overfitting}\\ \caption{Correct classification rates of training data and test data as the training process continues.}\label{fig_overfitting} \end{figure} More specifically, Fig.~\ref{fig_overfitting} illustrates a typical instance of the CCR on training data (red curve) and the CCR on test data (green curve) as the training process continues. Especially, the CCRs on both datasets grow continuously until the time $t_1$; after $t_1$, the CCR on the training data continues to increase while the CCR on the test data begins to drop, which obviously indicates the presence of an over-fitting problem. As we have claimed, the best performance of the proposed EUDNN cannot be guaranteed by the training in the first stage alone, and the second stage is introduced to help the proposed EUDNN arrive at the best performance.
To this end, it is concluded that the over-fitting problem will not occur in the first stage of the proposed EUDNN (because the first stage terminates prior to the time $t_1$, while the over-fitting problem might occur after the time $t_1$), but may occur in the second stage. Consequently, some rules need to be utilized to prevent this problem only in the second stage. Here, the ``early stop'' approach is utilized for this purpose, i.e., a group of data $D_{validate}$ is uniformly selected from $D_{train}$ as the validation data to replace the check upon the test data in Fig.~\ref{fig_overfitting}. When we first observe that the CCR on the validation data begins to decrease while the CCR on the training data is still increasing (i.e., the particular time $t_1$ is found), the fine-tuning in the second stage is terminated and the model that gives the best performance is obtained. Next, the second concern, i.e., the differences between the proposed EUDNN and the DEFE, will be discussed. It has been observed that DEFE learns 1) only linear representations and 2) only shallow representations of the input data. These two limitations mean that DEFE cannot learn meaningful representations~\cite{hinton2006reducing}. Next, the details of these conclusions are discussed. To be specific, the \HL{learned} representations $R$ of DEFE can be formulated as $R=WX$~\cite{zhao2006direct}, where $W$ is the transformation matrix (i.e., the connection weight in deep NN models) and $X$ is the input data. It is evident that there is no nonlinear transformation upon $WX$. Consequently, only linear representations would be \HL{learned} by DEFE, while in the proposed EUDNN, a list of nonlinear activation functions with different nonlinear transformation abilities \HL{is} incorporated into the evolution for performing nonlinear representation learning.
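The ``early stop'' rule described above can be sketched as follows (a minimal illustration with made-up CCR curves):

```python
def early_stop_epoch(train_ccr, val_ccr):
    """Return the first time t1 at which the validation CCR starts to
    decrease while the training CCR is still increasing; if no such
    point exists, return the last epoch."""
    for t in range(1, len(val_ccr)):
        if val_ccr[t] < val_ccr[t - 1] and train_ccr[t] > train_ccr[t - 1]:
            return t - 1  # the epoch just before the drop is t1
    return len(val_ccr) - 1

# Made-up curves: training CCR keeps rising, validation CCR peaks early.
train = [0.60, 0.70, 0.80, 0.85, 0.90, 0.95]
val   = [0.55, 0.65, 0.75, 0.78, 0.76, 0.70]
assert early_stop_epoch(train, val) == 3   # fine-tuning stops at epoch 3
```

The model saved at the returned epoch is the one kept, so later over-fitted updates are discarded.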
Furthermore, although multiple transformations like those in the proposed EUDNN could be stacked by DEFE to learn deep representations, deep linear transformations are equivalent to a single-layer linear representation. In summary, DEFE cannot be employed for learning meaningful representations due to its linear nature, while the success of deep NNs is mainly caused by the meaningful representations \HL{learned} by deep nonlinear transformations, which have been explicitly implemented by the proposed EUDNN. \section{Experiments} \label{section_4} In order to examine the performance of the proposed EUDNN, experiments on a set of image classification benchmarks against selected peer competitors are performed. In the comparisons, the chosen performance metric is the CCR on the test data. In the following, the employed benchmarks are outlined first. Then the chosen peer competitors are reviewed, and the justification \HL{for} selecting them is explained further. This is followed by descriptions of the chosen performance metric and the specifics of the parameter settings employed by the compared algorithms. Finally, the quantitative as well as the qualitative experimental results are illustrated and comprehensively analyzed.
\subsection{Benchmark Test Datasets} The benchmarks used by the compared algorithms are the handwritten digits benchmark test dataset MNIST~\cite{lecun1998gradient}, the basic MNIST dataset (MNIST-basic)~\cite{larochelle2007empirical}, a rotated version of MNIST (MNIST-rot)~\cite{larochelle2007empirical}, MNIST with random noise background (MNIST-back-rand)~\cite{larochelle2007empirical}, MNIST with random image background (MNIST-back-image)~\cite{larochelle2007empirical}, MNIST-rot with random image background (MNIST-rot-back-image)~\cite{larochelle2007empirical}, the tall and wide rectangles dataset (Rectangles)~\cite{larochelle2007empirical}, the rectangles dataset with random image background (Rectangles-image)~\cite{larochelle2007empirical}, the convex sets recognition dataset (Convex)~\cite{larochelle2007empirical}, and the gray version of the Canadian Institute for Advanced Research object recognition dataset~\cite{krizhevsky2009learning} (Cifar10-bw) over $10$ classes, i.e., airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck. \begin{figure}[htp] \centering \includegraphics[width=0.8\columnwidth]{fig_mnist_example}\\ \caption{A group of digit samples ($0-9$) from the MNIST benchmark test dataset.}\label{fig_mnist_example} \end{figure} Briefly, these benchmark test datasets are categorized into three different classes based on the object types that they \HL{intend} to recognize. The first one is about hand-written digits and covers the MNIST, MNIST-basic, MNIST-rot, MNIST-back-rand, MNIST-back-image, and MNIST-rot-back-image benchmarks. Examples from the MNIST benchmark are depicted in Fig.~\ref{fig_mnist_example} for reference. The second one concerns classifying geometric shapes, covering the Rectangles, Rectangles-image, and Convex benchmarks. The last one is to identify the natural objects in Cifar10-bw.
Different variants of the MNIST and rectangles datasets pose different difficulties to the algorithms in terms of perturbations, \HL{the small size of the training set}, and \HL{the large size of the test set}. \HL{Furthermore, the dimensions, numbers of classes, and the training and test set sizes of the chosen benchmark datasets are shown in Table~\ref{tab_dataset_config}.} \HL{ \begin{table}[!htp] \caption{\HL{The configurations of the chosen benchmark datasets.}} \centering \label{tab_dataset_config} \begin{tabular}{c|c|c|c|c} \hline \multirow{2}{*}{\HL{\textbf{Benchmark}}} &\multirow{2}{*}{\HL{\textbf{Dimension}}}&{\HL{\textbf{\# of}}}&\multicolumn{2}{c}{\HL{\textbf{Size of}}}\\ \cline{3-5} &&\HL{\textbf{classes}}&\HL{\textbf{training set}}&\HL{\textbf{test set}}\\ \hline \HL{MNIST}& \HL{$28\times28$}& \HL{10} & \HL{50,000}& \HL{10,000}\\ \hline \HL{MNIST-basic} &\HL{$28\times28$} & \HL{10}& \HL{12,000} &\HL{50,000}\\ \hline \HL{MNIST-rot} &\HL{$28\times28$} &\HL{10} & \HL{12,000} &\HL{50,000} \\ \hline \HL{MNIST-back-rand} & \HL{$28\times28$}&\HL{10} & \HL{12,000} &\HL{50,000} \\ \hline \HL{MNIST-back-image} &\HL{$28\times28$} &\HL{10} & \HL{12,000} &\HL{50,000} \\ \hline \HL{MNIST-rot-back-image} &\HL{$28\times28$} &\HL{10} & \HL{12,000} &\HL{50,000} \\ \hline \HL{Rectangles} &\HL{$28\times28$} & \HL{2}& \HL{1,200}& \HL{50,000}\\ \hline \HL{Rectangles-image} &\HL{$28\times28$} & \HL{2}&\HL{12,000} &\HL{50,000} \\ \hline \HL{Convex} & \HL{$28\times28$}& \HL{2}& \HL{8,000}&\HL{50,000} \\ \hline \HL{Cifar10-bw} &\HL{$32\times32$} &\HL{10} &\HL{50,000} & \HL{10,000}\\ \hline \end{tabular} \end{table} } \subsection{Performance Metric} Technically speaking, it is difficult to directly evaluate whether the \HL{learned} representations are meaningful or not because they are intermediate outcomes. A general practice is to feed the \HL{learned} representations to a particular classification task and then to investigate the CCR achieved by a classifier. 
Commonly, a higher CCR implies that the \HL{learned} representations are more meaningful. Because the benchmarks employed in these experiments are multi-class classification tasks, the softmax regression classifier~\cite{engel1988polytomous} is employed here to measure the corresponding CCR, according to the convention adopted in the community. It is assumed that a set of training data and their corresponding labels with $k$ distinct integer values are denoted as $\{x_1, \cdots, x_m\}$ and $\{y_1,\cdots,y_m\}$, respectively, where $x_i\in \mathcal{R}^n$ and $y_i\in \{1,\cdots, k\}$. To be specific, the label of the sample $x_i~(i\in\{1,\cdots,m\})$ is predicted by~(\ref{eq_softmax_probability}) with the softmax regression, \begin{equation} \label{eq_softmax_probability} \arg\underset{j}{\max}~~p_j(x_i)= \frac{\exp(\theta_j^Tx_i)}{\sum_{l=1}^k \exp(\theta_l^Tx_i)} \end{equation} where $\Theta=[\theta_1,\cdots, \theta_k]^T$ is obtained by minimizing \begin{equation*} \label{eq_softmax_minimizing} J(\Theta) = -\frac{1}{m} \left [ \sum_{i=1}^m\sum_{j=1}^k f(y_i, j)\log\frac{\exp(\theta_j^Tx_i)}{\sum_{l=1}^k \exp(\theta_l^Tx_i)} \right] \end{equation*} in which $f(y_i, j)=1$ if $y_i=j$, and $f(y_i, j)=0$ otherwise. \subsection{Compared Algorithms} Because the proposed EUDNN aims at evolving \emph{unsupervised deep neural networks} for learning \emph{meaningful representations}, algorithms related to evolving deep NNs (NEAT~\cite{stanley2002evolving}, HyperNEAT~\cite{stanley2007compositional}) and unsupervised deep NNs (DBNs~\cite{hinton2006fast}, and variants of stacked AEs~\cite{bengio2007greedy}) that have been discussed in Section~\ref{section_1} should all be employed as peer competitors. However, NEAT and HyperNEAT cannot be used to learn meaningful representations for the reasons discussed in Section~\ref{section_1} and further analyzed in Section~\ref{section_2}. As a result, they are excluded from the selected compared algorithms. 
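As a concrete illustration of the metric above, the prediction rule of the softmax regression in~(\ref{eq_softmax_probability}) can be sketched in a few lines of NumPy (a minimal sketch; the helper name `softmax_predict` and the toy values are our own assumptions, not the paper's code):

```python
import numpy as np

def softmax_predict(Theta, x):
    """Predict the label of sample x by arg max_j p_j(x).

    Theta: (k, n) matrix whose rows are theta_1, ..., theta_k.
    x:     (n,) feature vector.
    Returns a 1-based class label in {1, ..., k}.
    """
    logits = Theta @ x                            # theta_j^T x for every class j
    logits = logits - logits.max()                # stabilize exp() numerically
    p = np.exp(logits) / np.exp(logits).sum()     # softmax probabilities p_j(x)
    return int(np.argmax(p)) + 1                  # labels are 1-based

# Toy usage: k = 3 classes, n = 2 features.
Theta = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
x = np.array([0.2, 2.0])
predicted = softmax_predict(Theta, x)             # class with the largest theta_j^T x
```

Since the softmax is a monotone transform of the logits, the prediction reduces to the largest $\theta_j^Tx_i$; the normalization only matters when the probabilities themselves are needed, e.g., for the loss $J(\Theta)$.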
To this end, DBNs and variants of stacked AEs are employed for performing the comparison experiments. Because RBMs~\cite{smolensky1986information} and AEs~\cite{bourlard1988auto,hinton1994autoencoders,rumelhart1988learning} are the building blocks used to train DBNs and stacked AEs, respectively, these two types of algorithms are considered as the peer competitors in our experiments to compare the quality of the \HL{learned} representations against that of the proposed algorithm (i.e., we will evolve RBMs and AEs as the unsupervised deep NN models, named EUDNN/RBM and EUDNN/AE, respectively, to perform the comparisons against the considered peer competitors). Specifically, variants of AEs, i.e., the Sparse AEs (SAEs)~\cite{olshausen1997sparse}, the Denoising AEs (DAEs)~\cite{vincent2008extracting}, and the Contractive AEs (CAEs)~\cite{rifai2011contractive}, have been proposed in recent years with different regularization terms for learning meaningful representations and have obtained comparable performance in multiple tasks. As a consequence, they are also included as peer competitors in the experiments, in addition to the DBNs. \subsection{Parameter Settings} For a fair comparison, several parameters are shared between the second stage of the proposed EUDNN and the competing algorithms. As a consequence, we first give details of these generic parameter settings in this subsection, and then the particular parameter settings are introduced individually. Because the best performance of the compared algorithms often strongly depends on the particular benchmark dataset and the corresponding parameter settings, we first tune these parameters over ranges widely used in the community on the corresponding training data, and then the best performance of each compared algorithm on the test data is selected for comparison. 
\subsubsection{Learning Rate and Batch Size} The Stochastic Gradient Descent (SGD) algorithm is chosen to train the SAE, the DAE, the CAE, and the softmax regression, and its learning rate and batch size vary in $\{0.0001, 0.001, 0.01, 0.1\}$ and $\{10, 100, 200\}$, respectively, according to the community convention. \subsubsection{Number of Runs and Stop Criteria} All the compared algorithms are independently run $30$ times. In addition, a performance monitor is injected into each epoch of training the softmax regression to record the best CCR over the test dataset as the best performance of the algorithm that feeds the \HL{learned} representations to the softmax regression. \subsubsection{Unit Number and Depth} The number of units in each layer of the SAE, the DAE, the CAE, and the RBM is chosen from $200$ to $3,000$ on a logarithmic scale with an interval of $0.5$, as recommended by~\cite{hinton2010practical}, and the maximum depth is set to $5$ \HL{(this depth excludes the input layer, i.e., it is the maximum number of hidden layers)}. \subsubsection{Statistical Significance} The results measured by the selected performance metric need to be statistically compared due to the heuristic nature of the first stage of the proposed EUDNN. In these experiments, the Mann-Whitney-Wilcoxon rank-sum test~\cite{steel1997principles} with a $5\%$ significance level is employed for this purpose, according to the community convention. In addition, the sparsity of the SAE, the binary corruption level of the DAE, and the coefficient of the contractive term in the CAE are each varied over $10\%$, $30\%$, $50\%$, and $70\%$. Because of the nature of the RBM, the CD-$k$ algorithm~\cite{carreira2005contrastive} is selected as its training algorithm, and $k$ is set to $1$ based on the suggestion in~\cite{hinton2010practical}. 
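For reference, one CD-$1$ update of a binary RBM can be sketched as follows (a self-contained NumPy sketch of the standard CD-$1$ update equations; the helper name `cd1_update` and the toy layer sizes are our own assumptions, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_update(W, b, c, v0, lr=0.1):
    """One CD-1 step for a binary RBM.

    W: (n_v, n_h) weights, b: (n_v,) visible bias, c: (n_h,) hidden bias,
    v0: (n_v,) binary training vector.
    """
    # Positive phase: hidden probabilities given the data.
    h0 = sigmoid(v0 @ W + c)
    # One Gibbs step (k = 1): sample hidden, reconstruct visible, recompute hidden.
    h_sample = (rng.random(h0.shape) < h0).astype(float)
    v1 = sigmoid(W @ h_sample + b)
    h1 = sigmoid(v1 @ W + c)
    # Gradient approximation: <v h>_data - <v h>_model.
    W += lr * (np.outer(v0, h0) - np.outer(v1, h1))
    b += lr * (v0 - v1)
    c += lr * (h0 - h1)
    return W, b, c

# Toy usage: 6 visible and 4 hidden units, one training vector.
W = rng.normal(0.0, 0.01, (6, 4))
b, c = np.zeros(6), np.zeros(4)
v = np.array([1.0, 0.0, 1.0, 1.0, 0.0, 0.0])
W, b, c = cd1_update(W, b, c, v)
```

Larger $k$ would simply repeat the Gibbs step before taking the gradient; $k=1$, as used here, is the cheap approximation recommended in the cited practical guide.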
In order to speed up the proposed algorithm in the first stage, a proportion (i.e., $20\%$) of the training dataset is randomly selected in each generation for the fitness evaluation. In addition, if required, the connection weights are uniformly sampled from $[-4\sqrt{6/n_{number}},\, 4\sqrt{6/n_{number}}]$ and the biases are set to $0$, where $n_{number}$ denotes the total number of units in two adjacent layers, based on the experience suggested in~\cite{glorot2010understanding}. Because the parameter settings of the second stage of the proposed EUDNN are the same as those of the peer competitors, the settings of the evolution-related parameters in the first stage are declared next. Conveniently, one chromosome in this stage can be divided into three parts: the main-basis coefficients (Part 1), which represent the vector $a_1$ in Fig.~\ref{fig_encode_transformation_matrix}; the projected-space coefficients (Part 2), which indicate which bases are selected for the connection weight; and the coefficients (Part 3) that denote the type of activation function. Because Parts 1 and 2 strongly affect the quality of the connection weight, the crossover operation should be promoted in these two parts to strengthen the exploitative local search on top of the explorative global search. As a consequence, the one-point crossover operator is employed in Parts 1 and 2. In addition, three widely used nonlinear activation functions are considered in the proposed algorithm, and one is to be selected for the corresponding connection weight. Therefore, it is desirable that the information representing the activation function is not modified often, since it is hard to determine which one is the best. Consequently, Parts 2 and 3 are considered as one part when participating in the crossover operation. 
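The one-point crossover scheme just described can be sketched as follows (an illustrative Python sketch; the concrete chromosome layout, part names, and gene values are our own assumptions based on the description above, not the paper's exact encoding):

```python
import random

random.seed(1)

def one_point_crossover(p1, p2):
    """Exchange the tails of two equal-length chromosome parts at a random cut."""
    point = random.randrange(1, len(p1))          # cut strictly inside the part
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

# Assumed layout: Part 1 (main-basis coefficients) and Parts 2+3 treated as one
# part (projected-space coefficients plus the activation-function code), as the
# text states that Parts 2 and 3 participate in crossover together.
parent_a = {'part1': [0.1, 0.4, 0.9], 'part23': [0.3, 0.7, 2]}
parent_b = {'part1': [0.8, 0.2, 0.5], 'part23': [0.6, 0.1, 1]}

child_a, child_b = {}, {}
for key in ('part1', 'part23'):
    child_a[key], child_b[key] = one_point_crossover(parent_a[key], parent_b[key])
```

Note that one-point crossover only recombines existing gene values, so the multiset of genes is conserved between parents and children; new values enter the population only through mutation.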
It is noted here that, when the value in Part 3 is invalid, a random one is chosen to reset it. \HL{Note that the polynomial mutation~\cite{deb2001multi} is used as the mutation operator (the distribution index is set to $20$). In addition, the population size is set to $50$. The crossover and mutation probabilities in the proposed algorithm are both set following the community convention (i.e., $0.9$ for crossover and $0.1$ for mutation). A proportion of $10\%$ is randomly selected from the training set for the fitness evaluation.} The code of the proposed EUDNN is available upon request from the first author. \subsection{Experimental Results} \label{section_quantitative_experiments} Based on the motivation of our design, the proposed EUDNN 1) employs an evolutionary algorithm and a local search strategy to ensure that the representations \HL{learned} by deep NNs are meaningful, 2) employs the evolutionary approach in the first stage to help the deep NNs find optimal architectures and good initial weights, which provide a better starting position for the second stage, and 3) employs the local search strategy in the second stage to improve the intended performance further. Consequently, a series of experiments is carefully crafted to evaluate the performance of the proposed design. \subsubsection{Performance of the Proposed Algorithm} In order to quantify whether the representations \HL{learned} by the proposed EUDNN are meaningful, a series of experiments is designed and comparisons are performed. Specifically, EUDNN/AE and EUDNN/RBM are two implementations of the proposed algorithm over the unsupervised neural network models (i.e., AEs and RBMs, respectively). They are used to learn representations together with the selected peer competitors, employing the configurations introduced above. 
Next, the softmax regression metric is employed to measure whether the \HL{learned} representations improve the associated classification tasks through the CCR, which in turn indicates whether the \HL{learned} representations are meaningful. \begin{table*}[!htp] \caption{The correct classification rate of the proposed EUDNN (EUDNN/AE and EUDNN/RBM) upon the MNIST, MNIST-basic, MNIST-rot, MNIST-back-rand, MNIST-back-image, MNIST-rot-back-image, Rectangles, Rectangles-image, Convex, and Cifar10-bw benchmarks against the stacked denoising auto-encoder (DAE), stacked contractive auto-encoder (CAE), stacked sparse auto-encoder (SAE), and the deep belief network (DBN). Best mean values are highlighted in \HL{boldface}. The symbols ``+,'' ``-,'' and ``='' denote whether the results of the proposed algorithm are statistically better than, worse than, or equal to those of the corresponding peer competitors, respectively, under the employed rank-sum test.} \centering \label{tab_comparison_results} \begin{tabular}{c|c|c|c|c|c|c} \hline \multirow{2}{*}{\textbf{Benchmark}}&\multicolumn{2}{c|}{\textbf{EUDNN}}&\multirow{2}{*}{\textbf{DAE}}&\multirow{2}{*}{\textbf{CAE}}&\multirow{2}{*}{\textbf{SAE}}&\multirow{2}{*}{\textbf{DBN}}\\ \cline{2-3} &\textbf{AE}&\textbf{RBM}&&&&\\ \hline MNIST&0.9878(0.00751)&\textbf{0.9885(0.00255)}&0.9820(0.00506)(+)&0.9843(0.00699)(+)&0.9832(0.00891)(+)&0.9771(0.00959)(+)\\ \hline MNIST-basic&0.9674(0.00616)&0.9633(0.00473)&0.9580(0.00352)(+)&0.9635(0.00831)(+)&\textbf{0.9776(0.00585)(-)}&0.9658(0.00550)(+)\\ \hline MNIST-rot&\textbf{0.7952(0.00917)}&0.7549(0.00286)&0.7274(0.00757)(+)&0.7706(0.00754)(+)&0.7852(0.00380)(+)&0.7639(0.00568)(+)\\ \hline MNIST-back-rand&0.8843(0.00076)&0.8386(0.00054)&0.7725(0.00531)(+)&0.5741(0.00779)(+)&\textbf{0.8851(0.00934)(=)}&0.8221(0.00130)(+)\\ \hline MNIST-back-image&0.4325(0.00569)&\textbf{0.4830(0.00469)}&0.4022(0.00012)(+)&0.4010(0.00337)(+)&0.4638(0.00162)(+)&0.4587(0.00794)(+)\\ \hline 
MNIST-rot-back-image&\textbf{0.8925(0.00906)}&0.8879(0.00815)&0.8691(0.00127)(+)&0.6574(0.00913)(+)&0.8733(0.00632)(+)&0.8830(0.00098)(=)\\ \hline Rectangles&0.9627(0.00311)&\textbf{0.9681(0.00829)}&0.9232(0.00166)(+)&0.6275(0.00602)(+)&0.9408(0.00263)(+)&0.9622(0.00154)(=)\\ \hline Rectangles-image&0.7521(0.00689)&0.7716(0.00048)&0.7598(0.00451)(+)&\textbf{0.7810(0.00784)(=)}&0.7725(0.00002)(-)&0.7628(0.00913)(+)\\ \hline Convex&\textbf{0.8113(0.00052)}&0.8085(0.00826)&0.7930(0.00538)(+)&0.8016(0.00996)(+)&0.8053(0.00878)(+)&0.7895(0.00443)(+)\\ \hline Cifar10-bw&\textbf{0.4798(0.00107)}&0.4331(0.00962)&0.4309(0.00005)(+)&0.4860(0.00775)(+)&0.4423(0.00817)(+)&0.4598(0.00869)(+)\\ \hline &\multicolumn{2}{c|}{+/-/=}&10/0/0&9/0/1&7/2/1&8/0/2\\ \hline \end{tabular} \end{table*} Particularly, the mean values and standard deviations of the CCR obtained by the compared algorithms over $30$ independent runs are listed in Table~\ref{tab_comparison_results}, in which the best results over the same benchmark are highlighted in \HL{boldface}. In addition, the symbols ``+,'' ``-,'' and ``='' denote whether the CCR of the proposed algorithm upon the corresponding benchmark is statistically better than, worse than, or equal to that of the associated peer competitor, respectively, under the employed rank-sum test\footnote{To perform this statistical test, we first select the better CCR generated by EUDNN/AE and EUDNN/RBM on the same benchmark; the selected results are then used in the rank-sum test.}. Furthermore, the numbers of times over the considered benchmarks that the proposed EUDNN is better than, worse than, and equal to the corresponding peer competitor are summarized in the last row of Table~\ref{tab_comparison_results}. 
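The ``+/-/='' verdicts above can be reproduced with a few lines of code (a self-contained sketch of the two-sided Mann-Whitney-Wilcoxon rank-sum test using the normal approximation, which is standard for samples of $30$ runs; the CCR values below are invented for illustration, not taken from the table):

```python
from statistics import NormalDist

def rank_sum_test(a, b):
    """Two-sided Mann-Whitney-Wilcoxon rank-sum test (normal approximation).

    Returns the U statistic of sample `a` and the two-sided p-value.
    """
    data = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    n, rank_sum_a, i = len(data), 0.0, 0
    while i < n:                                  # assign midranks to ties
        j = i
        while j < n and data[j][0] == data[i][0]:
            j += 1
        midrank = (i + 1 + j) / 2.0
        rank_sum_a += midrank * sum(1 for k in range(i, j) if data[k][1] == 0)
        i = j
    n1, n2 = len(a), len(b)
    u = rank_sum_a - n1 * (n1 + 1) / 2.0
    mu = n1 * n2 / 2.0
    sigma = (n1 * n2 * (n1 + n2 + 1) / 12.0) ** 0.5
    z = (u - mu) / sigma
    return u, 2.0 * (1.0 - NormalDist().cdf(abs(z)))

# Invented CCR samples for two algorithms over independent runs.
ccr_a = [0.980, 0.975, 0.990, 0.965, 0.985]
ccr_b = [0.900, 0.910, 0.890, 0.920, 0.905]
u, p = rank_sum_test(ccr_a, ccr_b)
verdict = '+' if p < 0.05 else '='   # '+' here: a is statistically better
```

In practice a library routine (e.g., SciPy's `mannwhitneyu`) with tie correction would be used; the hand-rolled version above only illustrates where the $5\%$ significance threshold enters.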
\HL{In Table~\ref{tab_best_performance_config}, the first column shows the names of the chosen benchmark datasets, the second column provides the corresponding best CCRs obtained, and the third column presents the numbers of neurons of the deep models (excluding the classifier layer) with which the best CCRs are reached on the corresponding benchmark datasets. As claimed in Subsection IV-D, the maximum number of building blocks investigated in this paper is five; therefore, the number of layers (the input layer plus the hidden layers) shown in Table~\ref{tab_best_performance_config} for each benchmark dataset does not exceed six. Taking the first row of Table~\ref{tab_best_performance_config} as an example, the best CCR of $98.85\%$ on the MNIST benchmark dataset is achieved with only four building blocks, where the input layer has $784$ neurons and the hidden layers have $400$, $202$, $106$, and $88$ neurons, respectively.} It is clearly shown in Table~\ref{tab_comparison_results}\footnote{In this paper, the statistical results are generated by the Mann-Whitney-Wilcoxon rank-sum test [67] with a $5\%$ significance level.} that the proposed EUDNN/AE obtains the best mean values upon the MNIST-rot, the MNIST-rot-back-image, the Convex, and the Cifar10-bw benchmarks, and the best rank-sum results upon the MNIST-rot, the Convex, and the Cifar10-bw benchmarks. Moreover, the proposed EUDNN/RBM wins both the best mean values and the best rank-sum results upon the MNIST and the MNIST-back-image benchmarks. Although the best result of the proposed EUDNN (obtained by the EUDNN/AE) over the MNIST-basic benchmark is slightly worse than that of the SAE, which is the winner in both the best mean value and the rank-sum results, the EUDNN/AE outperforms all the other peer competitors. 
Furthermore, the SAE obtains the best mean values upon the MNIST-basic and the MNIST-back-rand benchmarks, but the best result of the proposed algorithm (obtained by the EUDNN/AE) is statistically equal to that of the SAE upon the MNIST-back-rand benchmark and also outperforms the other competing algorithms. Upon the Rectangles-image benchmark, the best result of the proposed algorithm (obtained by the EUDNN/RBM) is worse than those of the CAE and the SAE, although the EUDNN/RBM and the CAE have statistically equal results. In addition, the best results of the proposed algorithm upon the MNIST-rot-back-image (obtained by the EUDNN/AE) and the Rectangles (obtained by the EUDNN/RBM) benchmarks are both statistically equivalent to those of the DBN, while the best mean values upon these two benchmarks are obtained by the EUDNN/AE and the EUDNN/RBM, respectively. Note here that MNIST is a widely used classification benchmark for quantifying the performance of deep learning models, and the best results are frequently obtained by supervised models, which require sufficient labeled training data during their training phases. To the best of our knowledge, the CCR of $98.85\%$ obtained by the proposed algorithm (EUDNN/RBM), which is an unsupervised approach, is a very promising result among unsupervised deep learning models. In summary, the proposed algorithm wins $34$ of the $40$ comparisons against the selected peer competitors, which reveals the superior performance of the proposed algorithm in learning \emph{meaningful representations} with \emph{unsupervised neural network models}. 
\subsubsection{Performance Analysis Regarding the First Stage} Since we have claimed that the first stage of the proposed algorithm helps the unsupervised NN-based models learn optimal architectures and \HL{better-initialized} parameter values, component-wise experiments over the optimal architectures and the initialized parameter values should be performed to investigate their respective effects and justify our design. However\HL{,} the initialized parameter values depend on the architectures, which makes it difficult to design an experiment that varies only the architecture configurations to investigate how the \HL{learned} architectures alone affect the performance. Hence, the performance regarding the initialized parameter values is mainly investigated here. To this end, we first record the architecture configurations (see Table~\ref{tab_best_performance_config}) with which the proposed algorithm achieves the best mean values of EUDNN/AE and EUDNN/RBM upon each benchmark in Table~\ref{tab_comparison_results}. Then the experiments are re-performed by the peer competitors with the recorded architecture configurations and randomly initialized parameter values. Finally, the \HL{learned} representations are fed to the considered performance metric to measure whether these representations are meaningful. Specifically, the experimental results are depicted in Fig.~\ref{fig_same_architecture}, in which the vertical axis denotes the CCR while A-J on the horizontal axis represent the benchmarks MNIST, MNIST-basic, MNIST-rot, MNIST-back-rand, MNIST-back-image, MNIST-rot-back-image, Rectangles, Rectangles-image, Convex, and Cifar10-bw, respectively. It is shown in Fig.~\ref{fig_same_architecture} that most of the peer competitors employing the architecture configurations listed in Table~\ref{tab_best_performance_config} obtain a worse CCR upon the considered benchmarks than the proposed algorithm. 
Specifically, the proposed algorithm achieves the best CCR upon the MNIST, MNIST-rot, MNIST-back-image, MNIST-rot-back-image, Convex, and Cifar10-bw benchmarks, which is consistent with the findings listed in Table~\ref{tab_comparison_results}. In addition, with these architecture configurations, the proposed algorithm wins the best CCR upon the MNIST-basic and MNIST-back-rand benchmarks as well. Except for the proposed algorithm, in which the initial parameter values are set by the proposed evolutionary approach, all the results illustrated in Fig.~\ref{fig_same_architecture} are obtained by the compared algorithms with the same architecture configurations and commonly used parameter initialization methods for the second stage. Since the performance of local search strategies strongly relies on the starting position, it is reasonable to conclude that the evolutionary scheme employed in the first stage of the proposed algorithm has substantially helped the \HL{learned} representations to be meaningful. 
\begin{table}[!htp] \caption{The best correct classification rate (CCR) of the proposed algorithm upon the MNIST, MNIST-basic, MNIST-rot, MNIST-back-rand, MNIST-back-image, MNIST-rot-back-image, Rectangles, Rectangles-image, Convex, and Cifar10-bw benchmarks and the corresponding architecture configurations.} \centering \label{tab_best_performance_config} \begin{tabular}{c|c|c} \hline \textbf{Benchmark}&\textbf{Best CCR}&\textbf{Architecture configurations}\\ \hline MNIST&0.9885&784, 400, 202, 106, 88\\ \hline MNIST-basic&0.9674&784, 400, 211, 120\\ \hline MNIST-rot&0.7952&784, 400, 233, 133, 100, 81\\ \hline MNIST-back-rand&0.8843&784, 397, 202, 123\\ \hline MNIST-back-image&0.4830& 784, 386, 191, 1088, 100\\ \hline MNIST-rot-back-image& 0.8925&784, 378, 205, 106\\ \hline Rectangles&0.9681&784, 397, 205, 113, 100, 75\\ \hline Rectangles-image&0.7716& 784, 402, 214, 122, 89\\ \hline Convex&0.8113& 784, 394, 200, 110, 55, 49\\ \hline Cifar10-bw&0.4798& 1024, 502, 253, 141, 130\\ \hline \end{tabular} \end{table} \begin{figure}[!htp] \centering \includegraphics[width=0.8\columnwidth]{fig_same_structure}\\ \caption{The performance of the proposed algorithm against the DAE, CAE, SAE, and DBN with the configurations on which the proposed algorithm obtains the best correct classification rates over the benchmarks, measured by softmax regression. In particular, A-J denote the benchmarks MNIST, MNIST-basic, MNIST-rot, MNIST-back-rand, MNIST-back-image, MNIST-rot-back-image, Rectangles, Rectangles-image, Convex, and Cifar10-bw, respectively.}\label{fig_same_architecture} \end{figure} \subsubsection{Performance Analysis Regarding the Second Stage} In this experiment, we mainly investigate whether the local search strategy employed in the second stage improves the overall performance of the proposed algorithm compared to using only the evolutionary method of the first stage. 
For this purpose, we first take the best CCR obtained by the proposed algorithm from Table~\ref{tab_comparison_results}, in which the results of the proposed algorithm are collectively achieved by the evolutionary method employed in the first stage and the local search strategy employed in the second stage. Then we select the corresponding results obtained without the local search strategy (i.e., the results obtained by the proposed algorithm during the first stage only). Finally, these results are illustrated in Fig.~\ref{fig_without_second_stage} for quantitative comparison. Specifically, in Fig.~\ref{fig_without_second_stage} the vertical axis denotes the CCR, while A-J on the horizontal axis represent the benchmarks MNIST, MNIST-basic, MNIST-rot, MNIST-back-rand, MNIST-back-image, MNIST-rot-back-image, Rectangles, Rectangles-image, Convex, and Cifar10-bw, respectively; the blue bars denote the results obtained by the proposed algorithm without the second stage, while the red bars refer to those with the second stage. It is clearly shown in Fig.~\ref{fig_without_second_stage} that, over all the considered benchmarks, the second stage of the proposed EUDNN improves the performance compared to employing the first stage only. In particular, the CCR is significantly improved by about $20\%$ upon the MNIST-rot, MNIST-back-rand, MNIST-back-image, MNIST-rot-back-image, and Cifar10-bw benchmarks and by $12.83\%$ \HL{on} the MNIST benchmark. In summary, it is concluded from these experimental results that the local search strategy utilized in the second stage further improves the performance of the proposed algorithm, which promotes the \HL{learned} representations to be meaningful and confirms the motivation of this design. 
\begin{figure} \centering \includegraphics[width=0.8\columnwidth]{fig_without_bp}\\ \caption{Correct classification rate (CCR) comparisons of the proposed algorithm without (blue bars) and with (red bars) the second stage upon the MNIST, MNIST-basic, MNIST-rot, MNIST-back-rand, MNIST-back-image, MNIST-rot-back-image, Rectangles, Rectangles-image, Convex, and Cifar10-bw benchmarks, which are denoted by A-J, respectively.}\label{fig_without_second_stage} \end{figure} \subsection{Visualizations of \HL{Learned} Representations} \begin{figure*}[htp] \begin{center} \subfloat[]{\includegraphics[width=0.43\columnwidth]{visualization_layer1}\label{fig_visualization_layer1}} \hfil \subfloat[]{\includegraphics[width=0.43\columnwidth]{visualization_layer2}\label{fig_visualization_layer2}} \hfil \subfloat[]{\includegraphics[width=0.43\columnwidth]{visualization_layer3}\label{fig_visualization_layer3}} \caption{Visualizations of the proposed algorithm over the MNIST dataset with depths $1$ (Fig.~\ref{fig_visualization_layer1}), $2$ (Fig.~\ref{fig_visualization_layer2}), and $3$ (Fig.~\ref{fig_visualization_layer3}) by the activation maximization method.} \label{fig_visualization} \end{center} \end{figure*} In Subsection~\ref{section_quantitative_experiments}, a series of quantitative experiments has been presented to highlight the performance of the proposed algorithm in learning meaningful representations with unsupervised deep NN-based models. Here, a qualitative experiment is provided for comprehensively understanding what representations are \HL{learned} by the proposed algorithm via visualizations, which is a common approach employed by related works~\cite{bengio2009learning,vincent2008extracting,rifai2011contractive,sunlearning,sun2015explicit} to intuitively investigate the learned representations. 
For this purpose, the activation maximization approach~\cite{erhan2009visualizing} is utilized to visualize the \HL{learned} representations of the proposed algorithm over the MNIST dataset, and $100$ randomly selected patch visualizations are illustrated\footnote{Because visualizations of representations \HL{learned} at depths larger than one are difficult to interpret, and those at depths larger than three have no reference for comparison, only representations at depths $1$, $2$, and $3$ are visualized here.} in Fig.~\ref{fig_visualization}. Furthermore, SGD is employed during the optimization of the activation maximization with $10,000$ iterations and a fixed learning rate of $0.1$. To be specific, Fig.~\ref{fig_visualization_layer1} shows the \HL{learned} representations at depth $1$, whose visualization is commonly describable~\cite{erhan2009visualizing}. It is clear in Fig.~\ref{fig_visualization_layer1} that strokes are \HL{learned} in most patches and that a part of the representations is similar to those of the RBM~\cite{erhan2009visualizing}, which can be viewed as evidence of the effectiveness of the proposed algorithm, because similar representations over the MNIST dataset have been reported in multiple \HL{previous works}~\cite{sunlearning,sun2015explicit}. The visualizations of the representations at depths $2$ and $3$ are depicted in Figs.~\ref{fig_visualization_layer2} and~\ref{fig_visualization_layer3}, respectively. These representations are difficult to interpret intuitively due to their high-level hierarchical nature~\cite{erhan2009visualizing}. Nevertheless, by comparing them with the experiments in~\cite{erhan2009visualizing}, it can still be concluded that the proposed algorithm has \HL{learned} meaningful representations, since the representations here resemble those of the DAE to some extent. 
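The activation-maximization loop used for these visualizations can be sketched as follows (a toy NumPy sketch with the SGD settings stated above, i.e., $10{,}000$ iterations and a fixed learning rate of $0.1$; the single-neuron model and its weights are invented for illustration and merely stand in for a unit of the trained network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "neuron": activation a(x) = tanh(w^T x). In the real experiment, w would
# come from the trained EUDNN and x would be a 28x28 input image.
w = rng.normal(size=16)
x = rng.normal(scale=0.01, size=16)           # start from a near-zero "image"

for _ in range(10_000):
    a = np.tanh(w @ x)
    grad = (1.0 - a ** 2) * w                 # d a(x) / d x
    x += 0.1 * grad                           # gradient *ascent* on the activation
    x /= max(np.linalg.norm(x), 1.0)          # keep the input inside the unit ball

# x now approximates the bounded input that maximally activates the neuron,
# i.e., the patch that would be rendered in the visualization.
```

Without the norm constraint the ascent would diverge, since the activation grows monotonically along $w$; bounding the input is what makes the maximizing pattern (here, the direction of $w$) well defined.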
\HL{Note that multiple learned features shown in Fig.~\ref{fig_visualization_layer1} seem to be random. The reason is that not all the neurons in the corresponding hidden layer have learned meaningful features. Specifically, the visualized features come from 100 neurons randomly selected from the 313,600 (this number can be calculated from Table~\ref{tab_best_performance_config}), and it is not necessary that all the 313,600 neurons have learned meaningful features.} In summary, these visualizations give a qualitative observation highlighting that meaningful representations have been effectively \HL{learned} by the proposed algorithm. \section{Conclusion} \label{section_5} In order to warrant that the representations \HL{learned} by \emph{unsupervised} deep neural networks are meaningful, the existing approaches need optimal combinations of hyper-parameters, appropriate parameter values, and sufficient labeled training data. These approaches generally employ the exhaustive grid search method to directly optimize hyper-parameters due to the unavailability of gradient information, which incurs an unaffordable computational complexity that grows by an order of magnitude with each additional hyper-parameter. Furthermore, the gradient-based training algorithms in these existing approaches are \HL{easily} trapped in local minima, which prevents them from guaranteeing the best performance. In addition, in the current era of Big Data, the volume of labeled data is limited, and obtaining sufficient data with labels is expensive, if not impossible. To address these issues, we have proposed an evolving unsupervised deep neural networks method that heuristically searches for the best hyper-parameter settings and the global minima to learn meaningful representations without requiring sufficient labeled data. Specifically, the proposed algorithm is composed of two stages. 
In the first stage, all the information regarding hyper-parameter and parameter settings is encoded into the individual chromosomes, and the best one is selected after going through a series of crossover, mutation, and selection operations. Furthermore, the activation functions that provide the nonlinear ability of the learning algorithm are also incorporated into the individual chromosomes and evolved \HL{toward} promising performance. In addition, the orthogonal complementary techniques are employed in the proposed algorithm to reduce the computational complexity of effectively learning the deep representations. Specifically, only a limited amount of labeled data is needed in the proposed algorithm to direct the search toward meaningful representations. To improve the performance further, the second stage introduces a local search strategy that complements the first stage with exploitative search ability, training the model with the architecture and the activation functions optimized in the first stage. These two stages collectively enable the proposed algorithm to effectively learn meaningful representations with unsupervised deep neural network-based models. To evaluate the meaningfulness of the \HL{learned} representations, a series of experiments is conducted against peer competitors on classification tasks over multiple benchmarks. The results measured by the softmax regression show the considerable competitiveness of the proposed algorithm in learning meaningful representations. In the near future, we will place more focus on efficient encoding methods as well as ways of measuring the quality of the representations during evolution \HL{on} larger-scale and higher-dimensional data. In addition, we will also investigate how to effectively evolve deep supervised neural networks, such as CNNs. \ifCLASSOPTIONcaptionsoff \newpage \fi
\section{Introduction} There is growing evidence for the co-evolution between central supermassive black holes and their host galaxies \citep{Kormendy:13}, including the strong correlation between the mass of the black hole and the luminosity, stellar velocity dispersion, or stellar mass of the galaxy spheroid \citep{Magorrian:98, Tremaine:02, Haring:04, Gultekin:09}. It is generally believed that AGN feedback plays an important role in the evolution of galaxies and results in the co-evolution between black hole growth and galaxy evolution \citep{Fabian:12, Kormendy:13}. Over the past decade many theoretical studies addressing AGN feedback in galaxy formation and evolution have been performed, especially using the approach of hydrodynamical numerical simulation \citep[e.g.][]{DiMatteo:05, Springel:05, Sijacki:07, Ciotti:07, Booth:09, Ciotti:10, Ciotti:17, Ostriker:10, Debuhr:11, Hirschmann:14, Gan:14, Eisenreich:17, Weinberger:17a}. The most serious challenge of these studies is the large dynamical range: a proper simulation involving both the large-scale structure of the universe and the black hole accretion scales would require a spatial range of over ten orders of magnitude, from the black hole radius ($R_s$) of $\sim 10^{-5}\,{\rm pc}$ for a black hole of mass $10^8M_{\odot}$, up to a scale of hundreds of Mpc. This is still technically infeasible even with modern supercomputers. Consequently, when a study focuses mainly on large scales, such as galactic or larger scales, as is the case for most AGN feedback works so far, the scale of black hole accretion cannot be resolved and so-called sub-grid models have to be used. A ``sub-grid'' model means that some simplifications and approximations are adopted when describing the black hole accretion and its outputs. The first issue a ``sub-grid'' model needs to deal with is the determination of the mass accretion rate of the central black hole.
The exact determination of the mass accretion rate is crucial for the study of AGN feedback since it determines the strength of the AGN outputs such as radiation, jet, and wind. Most works adopt the Bondi solution \citep{Bondi:52, Frank:02} to calculate the mass accretion rate (e.g., \citealt{Springel:05, Booth:09, McCarthy:10, McCarthy:17, Choi:12, Choi:15, LeBrun:14, Schaye:15}; see the recent review by \citealt{Negri:17}). But this approach suffers from the following problems. First, the accreting gas is usually turbulent and inhomogeneous \citep[e.g.,][]{Gaspari:13}. Second, the Bondi solution neglects the angular momentum of the gas \citep{Hopkins:11, Tremmel:17}. Perhaps the most serious problem is that, since the Bondi radius cannot be resolved, some form of extrapolation of the density and temperature has to be used. This is in practice achieved by introducing a constant ``accretion rate boost parameter $\alpha$'' with a typical value of 100 \citep[e.g.,][]{Springel:05}. The underlying idea is that at large radii the density may be underestimated and the temperature overestimated compared to the values at the Bondi radius, so the accretion rate would be underestimated. Obviously, the value of $\alpha$ carries very large uncertainties; in fact, different works choose quite different values, from $\alpha\sim 1$ to $\sim 300$ \citep[e.g.,][]{Booth:09}. Most recently, taking the AGN feedback in an isolated galaxy as an example, \citet{Negri:17} carefully studied how well the Bondi approximation describes the mass accretion rate. They performed many runs with different resolutions using the Bondi approximation, and compared the results with a high-resolution run which resolves the Bondi radius well and thus determines the accretion rate precisely. They find that the approximated Bondi formalism can lead to both over- and underestimates of the black hole growth, depending on the resolution and on how the variables entering the formalism are calculated.
So how to determine the mass accretion rate of the central AGN is still an unsolved problem. In this paper, we focus on another aspect of the ``sub-grid'' model. That is, for a given mass accretion rate, what exactly are the outputs, i.e., radiation, wind, and jet, from the AGN? In many large-scale cosmological simulation works, once the mass accretion rate is estimated, the total energy output of the accretion flow is computed and some fraction of this energy is simply assumed to be converted into thermal energy of the surrounding ISM \citep[e.g.,][]{Booth:09}, except in some recent works in which the importance of mechanical feedback by wind has been recognized and taken into account \citep[e.g.,][]{Choi:15, Weinberger:17a, Weinberger:17b, Gaspari:17}. In reality, however, the output of black hole accretion is much more complicated. As we will discuss in detail in \S\ref{sec:agnphysics}, there are two accretion modes, namely cold and hot, and in each mode the three types of outputs are quite different. In the best cases, the two different accretion (feedback) modes are discriminated, but the most up-to-date accretion physics describing their outputs in each mode has not been properly taken into account. This is especially the case for the hot accretion mode, where much progress has been made in recent years. For example, when calculating the Compton heating, usually the same Compton temperature is used for both accretion modes. This is not correct because the typical spectrum emitted in the two modes is different (\citealt{Xie:17}; see also \S\ref{sec:agnphysics} below).
The properties of wind in the two modes are also often described in the same parameterized way, whereas in the last few years the properties of wind launched from the hot accretion flow have been intensively studied and are now well understood, and they turn out to be very different from those of the wind from cold accretion disks (see \S\ref{sec:agnphysics} below for details). By performing two-dimensional hydrodynamical numerical simulations, in this paper we study the evolution of an isolated elliptical galaxy. The inner boundary is chosen to resolve the Bondi radius so that the mass accretion rate can be calculated precisely. We consider two accretion modes, and in each mode we consider both the radiation and the wind. Jets are neglected, as we will explain later in this paper. Heating and cooling, star formation, and Type Ia and Type II supernovae are included in the simulation. These features are the same as in some previous works \citep[e.g.,][]{Novak:11, Ciotti:12, Gan:14, Negri:17, Ciotti:17}. However, different from these previous works, in this paper we adopt the most recent progress in AGN physics, which gives correct descriptions of radiation and wind. As we will see in \S\ref{sec:agnphysics}, this progress lies especially in the regime of hot accretion flows, but also in the description of the wind from cold accretion disks. The paper is organized as follows. In \S\ref{sec:agnphysics}, we present the most up-to-date AGN physics we use in the simulation. The other physics of the model and the details of the model setup are described in \S\ref{sec:Model}. The results of the simulation, namely the effects of the AGN feedback on the galaxy evolution, are described in detail in \S\ref{sec:Results}. We will also compare our results with those obtained in previous works to investigate the effects of the AGN physics. Attention will also be paid to the respective roles of radiative and mechanical feedback. The last section is devoted to a summary and conclusion.
As the first paper of a series in this project, in this work we assume that the specific angular momentum of the gas in the galaxy is low. In a following paper \citep{Yoon:18}, we will investigate the effect of AGN feedback on the evolution of elliptical galaxies with a high specific angular momentum. \section{The AGN physics} \label{sec:agnphysics} As in previous works, the feedback effects are taken into account by injecting radiation and wind launched by the AGN in the innermost grids of the simulation domain. The subsequent interaction of radiation and wind with the ISM in the galaxy is self-consistently calculated. In this section, we describe in detail how the properties of radiation and wind are calculated for a given accretion rate. This AGN physics is also often called ``feedback physics'' or ``sub-grid physics'' in the literature. We emphasize that the most recent progress in the field of AGN studies will be taken into account. \subsection{The accretion rate and two accretion modes} Since our simulation can resolve the Bondi radius, we can calculate the accretion rate at that radius directly. Note that we should not put an additional ``Eddington limit'' constraint on the mass accretion rate, as is done in some papers. The Eddington limit applies only to purely spherical accretion, which is almost always an oversimplification when we consider the accretion flow close to the black hole. Once the accretion flow has some angular momentum, the accretion rate can take any value. Then our question is, for a given accretion rate, what is the output of the AGN, i.e., what are the properties of the emitted radiation and wind? After several decades of effort, tremendous progress has been made in the theory of black hole accretion \citep[see reviews by][]{Pringle:81, Frank:02, Blaes:14, Yuan:14}.
We know that, according to the temperature of the accretion flow, there are two series of solutions, i.e., cold (e.g., the standard thin disk; \citealt{Shakura:73}) and hot (e.g., the advection-dominated accretion flow; \citealt{Narayan:95}). Mathematically, these two series of solutions have a large overlap in terms of the corresponding accretion rate. In other words, for some accretion rates both the cold and the hot accretion solutions are available \citep{Narayan:95, Yuan:14}. Fortunately, such a theoretical degeneracy is broken by observations of black hole X-ray binaries. Black hole X-ray binaries come in two main ``states'' -- soft and hard states -- which are described by the standard thin disk and the hot accretion flow, respectively. During the decay phase of an outburst of a black hole X-ray binary, the luminosity decreases with time. It is interesting to note that the source always transitions from the soft to the hard state once the luminosity passes through a critical luminosity \begin{equation} L_{\rm c}\approx 2\% L_{\rm Edd} \label{criticall} \end{equation} \citep{McClintock:06}. In other words, we never observe a soft state with $L_{\rm BH} \la 2\%L_{\rm Edd}$\footnote{This is correct only to zeroth order. Here we do not discuss the complication due to ``hysteresis'' in the hard-to-soft state transition.}. So this ``critical'' luminosity is a boundary set by nature between the cold and hot accretion flows. The situation is almost the same in the case of AGNs, since the accretion physics does not depend on the mass of the black hole. For AGNs, we now know that luminous AGNs such as quasars correspond to the soft state and are described by the cold accretion flow. In this accretion mode, the accretion flow produces radiation and wind but no jet\footnote{The absence of a jet in the cold accretion mode is based on the observations of the soft state of black hole X-ray binaries.
It is widely believed that a standard thin disk is operating in this state, but no jets are observed. However, we do observe strong radio emission from radio-loud quasars, in which we also believe a cold accretion disk is operating. How to understand radio-loud quasars is still an open question.}. Low-luminosity AGNs correspond to the hard state and are described by hot accretion flows \citep{Ho:08, Yuan:14}. In this accretion mode, the accretion flow produces three kinds of output, namely radiation, wind, and jet \citep{Yuan:14}. The output from the two modes of accretion is quite different, and this will obviously lead to different feedback effects. In the literature of AGN feedback, the corresponding feedback modes are called the quasar (or radiative) and radio (or kinetic, or maintenance, or jet) modes, respectively. The names of the two feedback modes are not only diverse, but also confusing. They were invented perhaps to emphasize the dominance of some form of AGN output in an accretion mode, for example the radiation in the cold accretion mode and the jet in the hot accretion mode. However, as we can see from \S\ref{subsec:comparison}, it is not obvious why we can neglect the other forms of output in a given accretion mode, e.g., the wind output in the cold accretion mode and the radiation output in the hot accretion mode, at least when we compare the energy and momentum fluxes of the various AGN outputs (wind and radiation). The situation can even be the opposite of what these names suggest if we consider the feedback effects. For example, in the quasar mode, at least in terms of controlling the black hole growth, wind feedback is much more important than radiation (\S\ref{subsec:BHgrowth})\footnote{In the present work, the role of dust is not considered.
However, the inclusion of dust in the simulation can significantly increase the coupling between radiation and gas, so the potential role of radiation in AGN feedback, including in controlling the black hole growth, would be significantly enhanced \citep{Novak:12, Ishibashi:15, Bieri:17, Costa:17}.}. In the radio mode, on the other hand, when the accretion rate is low, radiative feedback seems to be at least as important as wind in controlling the accretion rate of the black hole (\S\ref{subsec:lightcurve}). Therefore, in this paper, we suggest simply following the names of the black hole accretion modes and replacing these complicated feedback names by ``cold feedback mode'' and ``hot feedback mode'', representing the feedback occurring in the two accretion modes, respectively. The mass accretion rate corresponding to the critical luminosity $L_{\rm c}$ is: \begin{equation} \dot{M}_{\rm c}\approx \frac{L_{\rm c}}{\epsilon_{\rm EM,cold}c^2}. \label{criticalrate} \end{equation} Here $\epsilon_{\rm EM,cold}$ is the radiative efficiency of a cold accretion disk. Note that this accretion rate is the rate in the innermost region of the accretion disk, where most of the radiation comes from. Since winds exist in a cold disk, the accretion rate at the Bondi radius must be larger than this rate. For our description of the wind launched from a cold disk (refer to \S\ref{subsubsec:colddiskwind}), the ratio of the accretion rate close to the black hole to the wind mass flux is a weak function of the accretion rate. For typical parameters of our model, the ratio is close to unity, i.e., $\dot{M}_{\rm BH}\approx \dot{M}_{\rm W}$. So the accretion rate at the innermost radius of the simulation domain ($r_{\rm in}$) is only twice that close to the black hole horizon. Therefore, in our simulation, we compare the accretion rate at $r_{\rm in}$ (calculated by Eq.~\ref{mdotbondi}) with $\dot{M}_{\rm c}$ in Eq.~(\ref{criticalrate}) to decide which accretion mode the accretion flow should follow.
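As a concrete illustration of this mode-selection threshold, the critical accretion rate of Eq.~(\ref{criticalrate}) can be evaluated directly. The following Python sketch is ours, not the authors' code; the constants and function names are illustrative, using $L_{\rm c}=2\%\,L_{\rm Edd}$ and $\epsilon_{\rm EM,cold}=0.1$:

```python
# Illustrative sketch (not the paper's code): the critical accretion rate
# separating the cold and hot modes, M_dot_c = L_c / (eps_cold * c^2),
# with L_c = 0.02 L_Edd.  Constants in cgs units.

C = 2.998e10               # speed of light [cm/s]
M_SUN = 1.989e33           # solar mass [g]
YR = 3.156e7               # one year [s]

def eddington_luminosity(m_bh_msun):
    """L_Edd ~ 1.26e38 erg/s per solar mass of black hole."""
    return 1.26e38 * m_bh_msun

def critical_accretion_rate(m_bh_msun, eps_cold=0.1):
    """Accretion rate corresponding to L_c = 2% L_Edd [g/s]."""
    l_c = 0.02 * eddington_luminosity(m_bh_msun)
    return l_c / (eps_cold * C**2)

if __name__ == "__main__":
    mdot_c = critical_accretion_rate(1e9)   # for a 1e9 M_sun black hole
    print(f"M_dot_c = {mdot_c / M_SUN * YR:.2f} M_sun/yr")
```

For a $10^9\,M_\odot$ black hole this gives a threshold of roughly $0.4\,M_\odot\,{\rm yr}^{-1}$; accretion rates above it select the cold mode, rates below it the hot mode.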
In the next two subsections, we describe the output from the cold and hot accretion modes. In this work, we neglect the effect of the jet because we assume that, for the feedback study of a single galaxy as in the present work, the jet simply pierces through the galaxy and has negligible interaction with it since it is well collimated, although jets should be important for the evolution of large-scale structures such as galaxy clusters. We note, however, that there are debates on this topic (e.g., see \citet{Gaibler:12} for a different opinion), so it will be necessary to examine this assumption in future works. \subsection{Cold accretion (feedback) mode} \label{subsec:coldmode} \subsubsection{The accretion within the Bondi radius} Our simulation can resolve the Bondi radius, which is the outer boundary of the accretion flow. Within the Bondi radius, however, the accretion onto the black hole still cannot be resolved and must be treated as ``sub-grid'' physics. The dynamics of the accreting gas within the Bondi radius is likely complicated and is a good topic for future research. In this work, we assume the following simple scenario. Since the specific angular momentum of the gas is low, the gas first falls freely until a small accretion disk is formed, with the size of the circularization radius. Note that the disk should gradually grow with time because of the angular momentum transport in the accretion flow \citep{Bu:14}. Wind will be launched from the disk and will affect the black hole accretion rate. In the following we estimate the black hole accretion rate according to the above scenario. The approach is largely adopted from \citet{Ciotti:12}, but with some modifications. The mass accretion rate at the innermost radial grid ($\dot{M}(r_{\rm in})$) of our simulation is calculated by Eq.~(\ref{mdotbondi}).
The infall timescale from $r_{\rm in}$ is calculated by \begin{equation} \tau_{\rm ff} = \frac{r_{\rm in}}{v_{\rm ff}}, ~~~ v_{\rm ff} \equiv \left( \frac{2\,G M_{\rm BH}}{r_{\rm in}} \right)^{1/2} , \end{equation} where $\tau_{\rm ff}$ and $v_{\rm ff}$ are the free-fall timescale and velocity at the innermost grid $r_{\rm in}$. The effective accretion rate at which gas feeds the small accretion disk is then obtained from \begin{equation} \frac{d\,\dot{M}^{\rm eff}}{d\,t} = \frac{\dot{M}(r_{\rm in}) - \dot{M}^{\rm eff}}{\tau_{\rm ff}}. \end{equation} This equation implies that when $\dot{M}(r_{\rm in})$ drops to zero, the small accretion disk experiences a fueling that declines exponentially with time. Given our fiducial numerical set-up (i.e., $r_{\rm in}=2.5$ pc and $M_{\rm BH,init}\sim 10^{9}\,M_{\odot}$), the free-fall time is $\tau_{\rm ff}\sim 10^{3}$ yr. Once the gas reaches the circularization radius, $R_{\rm cir}$, it forms an accretion disk and fuels the black hole on the accretion (viscous) timescale. With the computed total mass of the gas in the disk, $M_{\rm dg}$, the mass inflow rate at $R_{\rm cir}$ in the disk can be estimated as \begin{equation}\label{eq:disk} \dot{M}_{\rm d,inflow} = \frac{M_{\rm dg}}{\tau_{\rm vis}}, \end{equation} where $\tau_{\rm vis}$ is the instantaneous viscous timescale at $R_{\rm cir}$, as described in \citet{Kato:08}, \begin{equation} \tau_{\rm vis}\approx 1.2\times 10^{6}\, {\rm yr} \left(\frac{\alpha}{0.1}\right)^{-1} \left( \frac{R_{\rm cir}}{100\,r_{\rm s}}\right)^{7/2} \left( \frac{M_{\rm BH}}{10^{9}\,M_{\odot}}\right), \label{eq:viscous} \end{equation} where we set the viscosity parameter to $\alpha=0.1$. We note that in the simulation we treat $R_{\rm cir}$ as a free parameter and adopt the value $R_{\rm cir} = 100\, r_{\rm s}$, where $r_{\rm s} \equiv 2G M_{\rm BH}/{c^{2}}$ is the Schwarzschild radius. In this way, the growth of $R_{\rm cir}$ with time is absorbed into this parameter.
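The fueling delay described above, i.e., the relaxation equation for $\dot{M}^{\rm eff}$ together with the viscous timescale of Eq.~(\ref{eq:viscous}), can be sketched numerically as follows. This is a hypothetical illustration in Python, not the actual simulation code; the time step and parameter values are ours:

```python
# Sketch of the fueling delay: d(Mdot_eff)/dt = (Mdot(r_in) - Mdot_eff)/tau_ff,
# integrated with a forward-Euler step, plus the viscous timescale of
# Eq. (eq:viscous).  Units: years and M_sun/yr; all values illustrative.

def tau_vis_yr(m_bh_msun, r_cir_rs=100.0, alpha=0.1):
    """Viscous timescale at R_cir (Eq. eq:viscous) in years."""
    return 1.2e6 * (alpha / 0.1) ** -1 * (r_cir_rs / 100.0) ** 3.5 \
        * (m_bh_msun / 1e9)

def step_mdot_eff(mdot_eff, mdot_rin, tau_ff_yr, dt_yr):
    """One Euler step of the relaxation equation for the effective rate."""
    return mdot_eff + (mdot_rin - mdot_eff) / tau_ff_yr * dt_yr

# If the supply Mdot(r_in) drops to zero, Mdot_eff decays roughly
# exponentially on the free-fall timescale (~1e3 yr for the fiducial setup):
mdot = 1.0                      # M_sun/yr, initial effective rate
for _ in range(1000):
    mdot = step_mdot_eff(mdot, 0.0, tau_ff_yr=1e3, dt_yr=10.0)
```

After $10^4$ yr, i.e., ten free-fall times, the effective fueling rate has decayed by a factor of roughly $e^{-10}$, illustrating the exponential decline noted in the text.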
While the gas approaches the central black hole via the accretion disk, a fraction of the gas is ejected in the form of a wind. So the final black hole mass accretion rate can be obtained from \begin{equation} \dot{M}_{\rm BH} = \dot{M}_{\rm d,inflow} - \dot{M}_{\rm W,C}, \label{eq:BHaccretionrate} \end{equation} where $\dot{M}_{\rm W,C}$ is the wind mass flux from the cold accretion disk (see Eq.~(\ref{eq:coldwindmass})). We will discuss the wind model in \S\ref{subsubsec:colddiskwind}. The total disk gas mass, $M_{\rm dg}$, evolves as \begin{equation}\label{eq:dg} \frac{d\,M_{\rm dg}}{d\,t} = \dot{M}^{\rm eff} - \dot{M}_{\rm BH} - \dot{M}_{\rm W,C}. \end{equation} Combining Eqs.~(\ref{eq:disk})--(\ref{eq:dg}), we can estimate the black hole accretion rate $\dot{M}_{\rm BH}$. We note that if the disk is in the hot accretion mode, the viscous timescale is many orders of magnitude shorter than that in the cold mode. So both $\tau_{\rm ff}$ and $\tau_{\rm vis}$ are very small, and we simply ignore the above-mentioned time-lag effects. \subsubsection{Wind} \label{subsubsec:colddiskwind} A cold accretion disk can produce strong winds, which have been widely observed in luminous AGNs \citep[e.g.,][]{Crenshaw:03, Tombesi:10, Tombesi:14, King:15, Liu:15} and black hole X-ray binaries (e.g., \citealt{Neilsen:12, Homan:16}; see the review by \citealt{DiazTrigo:16}). Three mechanisms have been proposed to explain the production of wind, namely thermal, radiative, and magnetic driving. Debates still exist over which one is the dominant mechanism under various conditions. In principle, by performing numerical simulations we could combine all three mechanisms to calculate the wind properties. Unfortunately, this is still infeasible due to the technical difficulties of globally simulating a thin disk. Therefore we obtain the properties of the wind used in the current work from observations.
Some parameters of the wind have been measured, such as the velocity, although the values are diverse. Recently, \citet{Gofford:15} analysed a sample of 51 {\it Suzaku}-observed AGNs and presented the properties of the wind as a function of the bolometric luminosity of the AGN. Their results indicate that more luminous AGNs tend to harbour faster and more energetic winds. In this paper, we adopt the fitted formulas presented in that work to describe the wind. The mass, momentum, and energy fluxes of the winds are described by\footnote{Note that in Table 4 of \citet{Gofford:15}, the value of the power index in the mass flux fitting formula is 0.9. The value of 0.85 we adopt here is within the uncertainty of their value and is more consistent with the fitted line in their plots.}, \begin{equation}\label{eq:coldwindmass} \dot{M}_{\rm W,C} = 0.28\, \,\left( \frac{L_{\rm BH}}{10^{45}\,\rm erg\,s^{-1}} \right)^{0.85}M_{\odot}\,{\rm yr^{-1}}, \end{equation} \begin{equation}\label{eq:coldwindmom} \dot{P}_{\rm W,C} = \dot{M}_{\rm W,C}\,v_{\rm W,C}, \end{equation} \begin{equation}\label{eq:coldwindenergy} \dot{E}_{\rm W,C} = \frac{1}{2}\, \dot{P}_{\rm W,C}\, v_{\rm W,C}. \end{equation} Here $L_{\rm BH}$ is the bolometric luminosity of the AGN. The velocity of the wind is described by \begin{equation} v_{\rm W,C} = 2.5\times10^{4}\, \,\left( \frac{L_{\rm BH}}{10^{45}\,\rm erg\,s^{-1}} \right)^{0.4}{\rm km\,s^{-1}}. \label{coldwindvelocity} \end{equation} An upper limit of $10^5{\rm km\,s^{-1}}$ is adopted, since observations indicate a saturation of the wind velocity at this value \citep{Gofford:15}. In the literature, the observed values of the wind velocity are quite diverse, from the very large values of ultra-fast outflows to the small values of molecular outflows \citep[e.g.,][]{Crenshaw:03, Blustin:07, Hamann:08, Tombesi:10}. This is likely because the winds are detected at different distances from the black hole: the wind velocity is larger at smaller distances.
The diversity of the wind mass flux is possibly also due to the different detection locations. Specifically, the mass flux of the wind increases with distance because of mass entrainment of the interstellar medium during the outward propagation of the wind. In the observations of \citet{Gofford:15}, the winds are detected at a distance of $\sim 10^{2-4} r_s$ from the black hole. The innermost grid in our simulation is 2.5 pc (refer to \S\ref{simulationsetup}), which corresponds to $\sim 10^4r_s$ for a typical black hole mass of $2\times 10^9M_{\odot}$ in our simulations. So our adoption of the observational results of \citet{Gofford:15} is justified. Eq.~(\ref{eq:coldwindmass}) can be rewritten as \begin{equation} \dot{M}_{\rm W,C} = 0.28 \,\left( \frac{L_{\rm BH}}{L_{\rm Edd}}\right)^{0.85} \left( \frac{L_{\rm Edd}}{10^{45}\,\rm erg\,s^{-1}} \right)^{0.85}\, M_{\odot}\,{\rm yr^{-1}}. \end{equation} For an Eddington ratio of $l \equiv L_{\rm BH} / L_{\rm Edd} = 0.1$ and a black hole mass of $M_{\rm BH}=10^{9}\,M_{\odot}$, the mass flux of the wind is $\dot{M}_{\rm W,C}\approx 2.5\, {M}_{\odot} \,{\rm yr^{-1}}$. This is quite similar to the mass accretion rate of the black hole, which is $\dot{M}_{\rm BH}=0.1\dot{M}_{\rm Edd}\approx 2\, {M}_{\odot}\,{\rm yr^{-1}}$. From Eq.~(\ref{eq:coldwindenergy}), the power of the wind is $\dot{E}_{\rm W,C} = 0.02\, \dot{M}_{\rm BH}\, c^{2}$. This is five times smaller than the bolometric luminosity, which is $L_{\rm BH}=0.1\dot{M}_{\rm BH}c^2$. The above settings of the wind properties are different from previous works \citep[e.g.][]{Novak:11,Gan:14,Ciotti:17}. In those works, the wind velocity is usually set to a constant value of $v_{\rm W} = 10^{4} \, {\rm km\,s^{-1}}$, while the mass flux or the power of the wind is usually described by a parameter called the mechanical efficiency ($\epsilon_{\rm W}$), defined as the fraction of the total accretion power carried by the mechanical power of the wind.
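As a sanity check on the numbers quoted above, the cold-mode wind prescription of Eqs.~(\ref{eq:coldwindmass})--(\ref{coldwindvelocity}) can be evaluated directly. The Python fragment below is an illustrative sketch of ours (cgs constants, function names ours), not the simulation code:

```python
# Sketch of the cold-mode wind prescription (Eqs. coldwindmass, coldwindmom,
# coldwindenergy, coldwindvelocity); illustrative implementation, L_BH in erg/s.

M_SUN = 1.989e33   # solar mass [g]
YR = 3.156e7       # one year [s]

def cold_wind(l_bh):
    """Return (mass flux [M_sun/yr], velocity [km/s], kinetic power [erg/s])."""
    x = l_bh / 1e45
    mdot_w = 0.28 * x ** 0.85                      # Eq. (coldwindmass)
    v_w = min(2.5e4 * x ** 0.4, 1e5)               # Eq. (coldwindvelocity), capped
    p_w = (mdot_w * M_SUN / YR) * (v_w * 1e5)      # momentum flux [g cm/s^2]
    e_w = 0.5 * p_w * (v_w * 1e5)                  # energy flux [erg/s]
    return mdot_w, v_w, e_w

# The text's example: l = 0.1 and M_BH = 1e9 M_sun, i.e. L_BH ~ 1.26e46 erg/s
mdot_w, v_w, e_w = cold_wind(0.1 * 1.26e47)
```

This reproduces the quoted $\dot{M}_{\rm W,C}\approx 2.5\,M_\odot\,{\rm yr}^{-1}$, and the resulting wind power is a few times $10^{45}\,{\rm erg\,s^{-1}}$, consistent with $\dot{E}_{\rm W,C}\sim 0.02\,\dot{M}_{\rm BH}c^2$ given the approximations in the text.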
The value of $\epsilon_W$ is highly uncertain; its scaling with luminosity is qualitatively similar to, but quantitatively different from, what we have introduced above. Typically, the wind power described by the above formulas is much stronger than that adopted in previous works. The next question is the spatial (angular) distribution of the mass flux of the wind. In reality, most of the mass flux may be concentrated close to the surface of the accretion disk. Since our galaxy model is almost spherically symmetric, and since the relative orientation of the accretion disk and the galactic disk is random \citep{Schmitt:01}, the exact description of the distribution of the wind flux may not be so important, so we simply adopt the same description as in previous works \citep[e.g.,][]{Novak:11, Gan:14, Ciotti:17}. According to this description, the mass flux of the wind $\propto {\rm cos}^2(\theta)$. Thus, the half-opening angle enclosing half of the mechanical energy is $\approx 45^{\circ}$, and most of the flux concentrates close to the rotation axis of the accretion disk. \subsubsection{Radiation} Since the mass flux of the wind is similar to the mass accretion rate of the black hole, the radiation output from the thin disk can be approximated as \begin{equation} L_{\rm BH}=\epsilon_{\rm EM,cold} \dot{M}_{\rm BH}c^2. \label{coldflowrad} \end{equation} Here $\dot{M}_{\rm BH}$ is the accretion rate close to the black hole calculated by Eq.~(\ref{eq:BHaccretionrate}). For a cold thin disk, the radiative efficiency $\epsilon_{\rm EM,cold}$ is only a function of the black hole spin. In this work we usually set \begin{equation} \epsilon_{\rm EM,cold}=0.1, \label{coldfloweff} \end{equation} which means that we assume the black hole is moderately spinning \citep{Wu:13}. To examine the effect of stronger radiation, sometimes we also consider the case of $\epsilon_{\rm EM,cold}\approx 0.3$, which corresponds to a rapidly spinning black hole.
The emitted spectrum in the cold accretion mode is represented by the observed multi-waveband spectrum of quasars. The radiation carries energy and momentum. It heats the ISM via photoionization and Compton scattering, and it is also able to push the gas via electron scattering and photoionization. The radiative heating and cooling we consider in this work are computed using the formulae presented in \citet{Sazonov:05}, which describe the net heating and cooling per unit volume of gas in photoionization equilibrium. They include Compton heating/cooling, bremsstrahlung cooling, photoionization, and line and recombination cooling. The Compton heating/cooling is usually calculated in terms of the ``Compton temperature'', \begin{equation} H_{\rm Compton} = 4.1 \times 10^{-35}n^2\xi (T_C-T)~{\rm erg~cm^{-3} s^{-1}}, \label{comptonheating} \end{equation} where $\xi=4\pi F/n$ is the ionization parameter, $F$ is the flux of photoionizing photons, $n$ is the number density of the ISM, $T$ is the temperature of the ISM, and $T_C$ is the Compton temperature of the photons, which physically corresponds to the energy-weighted average photon energy of the radiation emitted by the AGN. For the cold feedback mode, it can be calculated from the observed spectrum of quasars \citep{Sazonov:04}. The result is \begin{equation} T_{\rm C,cold} = 2\times 10^7{\rm K}. \label{coldtemperature} \end{equation} \subsection{Hot accretion (feedback) mode} Compared with the cold mode, the hot accretion mode corresponds to lower accretion rates \citep{Yuan:14}, so both the radiation and the wind emitted in this mode are weaker than those in the cold mode, as shown by Fig.~\ref{fig:windradcomp}. But the hot-mode feedback is still potentially very important, because the central AGN resides in the hot mode for a much longer time than in the cold mode (e.g., \citealt{Greene:07}; Figs.~\ref{fig:ldot} \& \ref{fig:dutycycle} of the present work).
So we expect the feedback in the hot mode to have an important cumulative effect. \subsubsection{Geometry configuration of the accretion flow} Before we discuss the radiation and wind in the hot accretion mode, we first need to discuss the geometry of the accretion flow when the accretion rate at the Bondi radius is smaller than $\dot{M}_{\rm c}$ (Eq.~(\ref{criticalrate})). This issue has been well studied in the cases of the hard state of black hole X-ray binaries and low-luminosity AGNs \citep[see][for a review]{Yuan:14}. The result is that at large radii the accretion flow is in the form of a thin disk, but at a certain radius, $r_{\rm tr}$, the thin disk is truncated and transitions into a hot accretion flow. The value of the transition radius $r_{\rm tr}$ can be described by (\citealt{Liu:99, Manmoto:00, Gu:00, Yuan:04}; see the review by \citealt{Yuan:14}) \begin{equation} r_{\rm tr} = 3r_s\left[\frac{2\times 10^{-2}\dot{M}_{\rm Edd}}{\dot{M}(r_{\rm in})}\right]^2. \label{transitionradius} \end{equation} Here $\dot{M}(r_{\rm in})$ is the accretion rate at the innermost grid $r_{\rm in}$ of the simulation, calculated by Eq.~(\ref{mdotbondi}). The transition radius will be small if $\dot{M}(r_{\rm in})$ is large. The largest value of $r_{\rm tr}$ is set to be $r_{\rm in}$. \subsubsection{Wind} \label{subsubsec:hotwind} The status of the study of winds from hot accretion flows is quite different from the case of cold accretion disks. Here the observational data are much scarcer than for the cold disk, mainly because the gas in the wind from a hot accretion flow is very hot and thus generally fully ionized, so it is very difficult to detect by the usual absorption-line spectroscopy.
Still, in recent years we have gradually accumulated more and more observational evidence for winds from low-luminosity sources in which we believe a hot accretion flow is operating, including low-luminosity AGNs and radio galaxies \citep[e.g.,][]{Crenshaw:12, Tombesi:10, Tombesi:14, Cheung:16}, the supermassive black hole in our Galactic center, Sgr A* \citep{Wang:13}, and the hard state of black hole X-ray binaries \citep[e.g.,][]{Homan:16}. In contrast to the rarity of observational results, we have a much better theoretical understanding of the wind launched from hot accretion flows, mainly thanks to the rapid development of numerical simulations in recent years \citep{Yuan:12a, Narayan:12, Li:13, Yuan:15, Bu:16}. This is also partly because radiation is in general dynamically unimportant in hot accretion flows, and technically it is easy to simulate a hot accretion flow since it is geometrically thick. In particular, the detailed properties of wind from a hot accretion flow have been carefully studied in \citet{Yuan:15} based on three-dimensional general relativistic MHD simulation data. In that work, to discriminate the turbulent outflow from the real wind, a ``virtual particle trajectory'' approach was proposed. Compared to the streamline approach often used in the literature, this new approach gives a much more reliable calculation of the mass flux of the wind. In the present work, we will use the results obtained by \citet{Yuan:15}, and we briefly summarize the most relevant ones as follows. We assume that the mass accretion rate at the innermost radius of the simulation domain, $\dot{M}(r_{\rm in})$, is roughly equal to the accretion rate at the outer boundary of the hot accretion flow, $\dot{M}(r_{\rm tr})$. The outer truncated thin disk should also be able to produce wind, but in this paper we neglect this part of the wind.
In this case, the accretion rate close to the black hole horizon, which determines the emitted luminosity, is described by \citep{Yuan:15}: \begin{equation} \dot{M}_{\rm BH,hot}=\dot{M}(r_{\rm tr})\left(\frac{3r_s}{r_{\rm tr}}\right)^{0.5}\approx \dot{M}(r_{\rm in})\left(\frac{3r_s}{r_{\rm tr}}\right)^{0.5}. \label{hotaccretionrate} \end{equation} So the mass flux of wind launched from the hot accretion flow is described by \begin{equation} \dot{M}_{\rm W,H}\approx \dot{M}(r_{\rm in})\left[1-\left(\frac{3r_s}{r_{\rm tr}}\right)^{0.5}\right]. \label{hotwindflux} \end{equation} Compared to jets, the opening angle of wind is much larger (refer to Fig. 1 in \citealt{Yuan:15}), which makes the interaction between wind and ISM very efficient. According to the detailed analysis by \citet{Yuan:15} (refer to their Fig. 3), the mass flux of wind is distributed within $\theta\sim 30^{\circ}-70^{\circ}$ and $\theta\sim 110^{\circ}-150^{\circ}$ above and below the equatorial plane, respectively. Since the hot accretion flow occupies roughly $\theta\sim 70^{\circ}-110^{\circ}$, such a distribution implies that the wind is along the surface of the hot accretion flow. Within the above-mentioned two ranges of $\theta$ angle, the mass flux of wind is assumed to be independent of $\theta$. The speed of the wind as it leaves the transition radius $r_{\rm tr}$ is $v_{\rm W,H}\approx 0.2 v_{\rm K}(r_{\rm tr})$ (refer to Eq. (7) in \citealt{Yuan:15}). Here $v_{\rm K}(r_{\rm tr})$ is the Keplerian velocity at the transition radius $r_{\rm tr}$. When the wind propagates outward, the gravitational force decelerates it, while the gas pressure gradient and magnetic forces accelerate it. The overall result is that the poloidal velocity of the wind remains roughly constant, and its value can be approximated as (\citealt{Yuan:15}, Eq. (8); Cui, Yuan \& Li 2018, in preparation): \begin{equation} v_{\rm W,H}\approx (0.2-0.4) v_{\rm K}(r_{\rm tr}).
\label{windvelocity} \end{equation} In this paper we adopt $v_{\rm W,H}=0.2 v_{\rm K}(r_{\rm tr})$. The fluxes of energy and momentum of wind are described by \begin{equation} \dot{E}_{\rm W,H}=\frac{1}{2}\dot{M}_{\rm W,H} v_{\rm W,H}^2, \label{windpower} \end{equation} \begin{equation} \dot{P}_{\rm W,H}=\frac{2\dot{E}_{\rm W,H}}{v_{\rm W,H}}. \label{windmomentum} \end{equation} The properties of wind described above are quite different from those adopted in previous works \citep[e.g.,][]{Novak:11, Gan:14, Ciotti:17}. In these previous works, the velocity of wind is usually set to a value with reference to the observations of wind in the cold mode. The mass flux or the power of wind in the hot feedback mode are again determined by the ``mechanical efficiency'' parameter $\epsilon_W$. Its value is now much more uncertain, because observational constraints on wind in the hot mode are poor. So usually in these works the wind parameters and their scaling with luminosity are assumed to be the same as those in the cold mode. Consequently, we find that the wind power we adopt in the present work is much stronger than that in previous works. This causes significant differences in the simulation results, as we will describe later in this paper. \subsubsection{Radiation} The radiation output from hot accretion flows has been well studied \citep[see][for a review]{Yuan:14}. Different from a thin disk, the radiative efficiency depends not only on the black hole spin but, more importantly, on the accretion rate. The radiative efficiency as a function of accretion rate has been investigated by \citet{Xie:12}. They fit their numerically calculated efficiencies with a piecewise power-law function of accretion rate, \begin{equation} \epsilon_{\rm EM,hot}(\dot{M}_{\rm BH})=\epsilon_0\left(\frac{\dot{M}_{\rm BH}}{0.1L_{\rm Edd}/c^2}\right)^a, \label{radefficiency} \end{equation} where the values of $\epsilon_0$ and $a$ are given in Table 1 of \citet{Xie:12}.
For convenience, we copy their results here\footnote{The results depend also on a parameter of the hot accretion flow model $\delta$. In this work we adopt $\delta=0.1$.}, \begin{eqnarray} (\epsilon_0, a) &=& \left\{ \begin{array}{ll} (0.2,0.59), & \dot{M}_{\rm BH}/\dot{M}_{\rm Edd}\lesssim 9.4\times 10^{-5} \\ (0.045,0.27), & 9.4\times 10^{-5} \lesssim \dot{M}_{\rm BH}/\dot{M}_{\rm Edd} \lesssim 5\times 10^{-3} \\ (0.88,4.53), & 5\times 10^{-3}\lesssim \dot{M}_{\rm BH}/\dot{M}_{\rm Edd} \lesssim 6.6\times 10^{-3} \\ (0.1,0), & 6.6\times 10^{-3}\lesssim \dot{M}_{\rm BH}/\dot{M}_{\rm Edd} \lesssim 2\times 10^{-2} \end{array} \right. \label{efficiencyfit} \end{eqnarray} Here $\dot{M}_{\rm Edd}\equiv 10 L_{\rm Edd}/c^2$ is the Eddington accretion rate. Note that the calculation of \citet{Xie:12} is for a Schwarzschild black hole. Since in the present paper we assume that the black hole is moderately spinning, we multiply $\epsilon_0$ in \citet{Xie:12} by a factor of $0.1/0.057$. As shown by \citet{Xie:12}, the radiative efficiency (and of course also the luminosity) increases quickly with the accretion rate, and finally becomes almost equal to the efficiency of a thin disk at the highest accretion rate of the hot accretion flow, i.e., the ``boundary accretion rate'' between the hot and cold modes, as shown by Fig. \ref{fig:windradcomp}. The spectrum emitted from a hot accretion flow is quite different from that from a cold thin disk, e.g., the lack of the big blue bump which is present in the typical spectrum of quasars \citep{Ho:99, Ho:08}. For a given luminosity, the spectrum from a hot accretion flow will have more hard photons compared to that from a cold disk. This makes the radiative heating of the ISM via Compton scattering in the hot mode more effective than in the cold mode for the same luminosity.
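The hot-mode prescriptions above, Eqs. (\ref{transitionradius}) and (\ref{hotaccretionrate})--(\ref{efficiencyfit}), can be collected into a short numerical sketch. This is a minimal Python sketch, not the simulation code: the function names, the convention that all accretion rates are in units of $\dot{M}_{\rm Edd}=10L_{\rm Edd}/c^2$ (so that $0.1L_{\rm Edd}/c^2=10^{-2}\dot{M}_{\rm Edd}$), and the cap of $r_{\rm tr}$ at $r_{\rm in}$ follow our reading of the text.

```python
def transition_radius(mdot_in, r_s, r_in):
    """Eq. (transitionradius); mdot_in in units of Mdot_Edd.
    The text sets the largest allowed value of r_tr to r_in."""
    r_tr = 3.0 * r_s * (2.0e-2 / mdot_in) ** 2
    return min(r_tr, r_in)

def hot_wind_fluxes(mdot_in, r_tr, r_s, v_k_tr):
    """Eqs. (hotaccretionrate)-(windmomentum); the paper adopts
    v_W,H = 0.2 v_K(r_tr)."""
    frac = (3.0 * r_s / r_tr) ** 0.5
    mdot_bh = mdot_in * frac            # reaches the horizon
    mdot_w = mdot_in * (1.0 - frac)     # carried away by the wind
    v_w = 0.2 * v_k_tr
    e_dot = 0.5 * mdot_w * v_w ** 2     # Eq. (windpower)
    p_dot = 2.0 * e_dot / v_w           # Eq. (windmomentum), = mdot_w * v_w
    return mdot_bh, mdot_w, e_dot, p_dot

def hot_rad_efficiency(mdot_bh, spin_factor=0.1 / 0.057):
    """Piecewise fit of Xie & Yuan (2012), Eq. (efficiencyfit);
    mdot_bh in units of Mdot_Edd. eps_0 is rescaled by 0.1/0.057
    for a moderately spinning black hole, as in the text."""
    if mdot_bh < 9.4e-5:
        eps0, a = 0.2, 0.59
    elif mdot_bh < 5.0e-3:
        eps0, a = 0.045, 0.27
    elif mdot_bh < 6.6e-3:
        eps0, a = 0.88, 4.53
    else:
        eps0, a = 0.1, 0.0
    return spin_factor * eps0 * (mdot_bh / 1.0e-2) ** a
```

For example, $\dot{M}(r_{\rm in})=10^{-2}\,\dot{M}_{\rm Edd}$ gives $r_{\rm tr}=12\,r_s$, so half of the inflowing mass reaches the horizon and the other half is lost to the wind.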
\citet{Xie:17} recently studied this problem in detail and calculated the Compton temperature based on the spectral energy distribution of low-luminosity AGNs compiled from the literature. The result is \begin{eqnarray} T_{\rm C,hot} &=& \left\{ \begin{array}{ll} 10^{8}\,{\rm K}, & 10^{-3} \lesssim L_{\rm BH}/L_{\rm Edd} \lesssim 0.02 \\ 5\times 10^{7}\,{\rm K}, ~~& L_{\rm BH}/L_{\rm Edd} \lesssim 10^{-3} \end{array} \right. \label{hottemperature} \end{eqnarray} This is several times higher than $T_{\rm C,cold}$. This value is smaller than that adopted in \citet{Gan:14}, in which they adopt $T_{\rm C,hot}=10^9\,{\rm K}$ based on a simple estimate. \subsection{Comparison of energy and momentum fluxes of wind and radiation in cold and hot modes} \label{subsec:comparison} \begin{figure*}[!htbp] \begin{center}$ \begin{array}{cc} \includegraphics[width=0.45\textwidth]{powercomp.pdf} & \includegraphics[width=0.45\textwidth]{momfluxcomp.pdf} \end{array}$ \end{center} \caption{The comparison of the power (left) and momentum flux (right) of wind (orange lines) and radiation (blue lines) from hot (solid lines) and cold (dashed lines) accretion modes of AGNs. For comparison, the power and momentum flux of radiation and wind from \citet{Gan:14} are also shown by the dot-dashed lines. } \label{fig:windradcomp} \end{figure*} We now compare the fluxes of energy and momentum of wind and radiation in both cold and hot feedback modes. For the cold feedback mode, the energy and momentum fluxes of radiation can be obtained by Eq. (\ref{coldflowrad}) and $L_{\rm BH}/c$, respectively. The energy and momentum fluxes of wind can be calculated from eqs. (\ref{eq:coldwindenergy}) and (\ref{eq:coldwindmom}). For the hot feedback mode, the energy and momentum fluxes of wind can be calculated from eqs. (\ref{windpower}) \& (\ref{windmomentum}). The wind velocity $v_{\rm W,H}$ can be obtained from eq. (\ref{windvelocity}) while the value of the transition radius $r_{\rm tr}$ can be obtained by combining eqs.
(\ref{transitionradius}) \& (\ref{mdotbondi}). The energy and momentum fluxes of radiation are $L_{\rm BH}=\epsilon_{\rm EM,hot}\dot{M}_{\rm BH}c^2$ and $L_{\rm BH}/c$, respectively. The value of $\epsilon_{\rm EM,hot}$ can be calculated from eqs. (\ref{radefficiency}) \& (\ref{efficiencyfit}). The results of such a comparison are shown in Fig. \ref{fig:windradcomp}. The left and right panels denote the power and momentum fluxes, respectively. In the hot mode, it is somewhat surprising to note that the power of radiation is larger than that of wind. This is because radiation comes from the innermost region of the accretion flow where gravitational energy release is the largest, while wind production is suppressed in that region due to the suppression of turbulence \citep{Yuan:15}. But on the other hand, the momentum flux of wind is in general larger than that of radiation. This is consistent with the fact that wind production in the hot accretion flow is not driven by radiation, but by the combination of magnetic and thermal mechanisms \citep{Yuan:15}. It is interesting to note that in the cold accretion mode, the momentum flux of wind is also significantly larger than that of radiation. Generally we think the wind in the cold mode is mainly driven by radiation. This result indicates that magnetic fields likely also play an important role. The large momentum flux of wind in both the hot and cold modes suggests that wind will be important in pushing the gas surrounding the AGN away. This is confirmed by detailed simulations, as we will describe later in this work. In order to see clearly the differences between our current descriptions of radiation and wind and those in previous works, we have also shown the AGN model from \citet{Gan:14} in Fig.~\ref{fig:windradcomp}. We can see from the figure that in the hot mode, both the radiation and wind are much stronger in the present work than in \citet{Gan:14} in terms of both power and momentum flux.
In the cold mode, the wind adopted in the current work is also significantly stronger than in \citet{Gan:14}, but the radiation remains almost unchanged. We note that because of the intrinsic difference between radiation and wind, especially when they interact with the ISM, we should not judge which one is more important in the feedback simply from the magnitudes of their power and momentum fluxes. This is because the cross sections of photon-particle and particle-particle interactions differ by orders of magnitude, so it will take very different distances for wind and radiation to convert their energy and momentum to the ISM. Let us now estimate their ``typical length-scale of feedback'', $l_{\rm rad}$ and $l_{\rm wind}$, which we define as the distance within which they can transport a significant fraction of their momentum or power to the ISM. This is basically the mean free path of photons and wind particles. Note that these are only lower limits of the spatial range within which radiation or wind can affect the ISM. The real range should be much larger, especially for wind. For radiation, $l_{\rm rad}$ is the distance where the scattering optical depth is equal to one, i.e., $\sigma_{T}\rho l_{\rm rad}/m_p=1$, where $m_p$ is the proton mass and $\sigma_T=6.65\times 10^{-25} {\rm cm}^2$ is the Thomson electron scattering cross section. So we have \begin{equation} l_{\rm rad}\sim \frac{m_p}{\sigma_T\rho}\sim 10 {\rho}_{-24}^{-1}~{\rm kpc}. \label{radlength} \end{equation} Here $\rho_{-24}\equiv \rho/(10^{-24}\,{\rm g\,cm^{-3}})$; the typical mass density in the central region of the galaxy is assumed to be $10^{-24}\,{\rm g\,cm^{-3}}$ (refer to the right panel of Fig. \ref{fig:agnoutburst}). We only consider Thomson electron scattering when estimating the value of $l_{\rm rad}$. Its value will be smaller when line absorption is taken into account.
For the interaction between wind and ISM, the cross section due to Coulomb collisions is $\sigma_C\sim \pi r_e^2\sim \pi e^4/k^2T^2\sim 10^{-4}T^{-2}{\rm cm}^{2}$, with $e$ being the electron charge, $k$ the Boltzmann constant, and $T$ the typical temperature of the ISM \citep[e.g.,][]{Ogilvie:16}. So we have \begin{equation} l_{\rm wind}\sim \frac{m_p}{\sigma_C \rho}\sim 0.5{\rho}_{-24}^{-1}T_{7}^2~{\rm pc}. \label{windlength} \end{equation} Here $T_7\equiv T/10^7\,{\rm K}$. The typical length scale of feedback for wind is much smaller than that for radiation for typical parameters of our problem. This is because the Coulomb cross section is orders of magnitude larger than the Thomson scattering cross section, $\sigma_C\gg\sigma_T$. This result indicates that wind can more easily deposit its energy and momentum to the ISM than radiation. So we expect that wind should in general be more effective in controlling the mass accretion rate of the black hole than radiation, since the accretion rate is determined mainly by the properties of the gas very close to the black hole. This suggests we should pay particular attention to the role of wind in the hot feedback mode, because as we will see from Fig. \ref{fig:ldot}, the AGN spends most of its time in the hot mode. In fact, the important role of wind in the hot feedback mode has begun to be gradually recognized by researchers. One such example is the recent work by \citet{Weinberger:17a}. In this important work, they invoke wind from the hot accretion flow to achieve a sufficiently rapid reddening of moderately massive galaxies without expelling too many baryons. But radiative feedback potentially has its ``advantage'' compared to the feedback by wind. Most importantly, radiation is more powerful than wind by a factor of a few in the cold mode, as we can see from Fig. \ref{fig:windradcomp}.
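The Coulomb cross section and the wind stopping length of Eq. (\ref{windlength}) can be checked with a few lines. This is a rough Python sketch with rounded constants; the helper names are ours.

```python
SIGMA_T = 6.65e-25   # Thomson cross section [cm^2]
M_P = 1.67e-24       # proton mass [g]
PC = 3.086e18        # parsec [cm]

def coulomb_cross_section(T):
    """Order-of-magnitude Coulomb cross section, sigma_C ~ 1e-4 T^-2 cm^2."""
    return 1.0e-4 * T ** -2

def l_wind_pc(rho, T):
    """Wind-particle stopping length l_wind ~ m_p/(sigma_C rho),
    Eq. (windlength), returned in parsecs."""
    return M_P / (coulomb_cross_section(T) * rho) / PC
```

For $\rho_{-24}=1$ and $T=10^7$ K this gives $\sigma_C\sim 10^{-18}\,{\rm cm}^2\gg\sigma_T$ and $l_{\rm wind}\approx 0.5$ pc, as quoted above.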
However, how efficiently the radiation can deposit its energy to the ISM in the galaxy, or what fraction of the radiation energy can be converted to the ISM, depends also on the optical depth of the galaxy. In our current model, from the right panel of Fig. \ref{fig:agnoutburst}, the spatially-averaged mass density is $\sim (10^{-25}-10^{-26}){\rm g}{\rm cm}^{-3}$, so the scattering optical depth of the whole galaxy is \begin{equation} \tau\sim \frac{\sigma_T}{m_p}\,\rho\, l_{\rm galaxy}\sim (0.01-0.1). \end{equation} This implies that the radiation from the AGN can only deposit $\sim (1-10)\%$ of its energy to the ISM of the host galaxy. This suggests that radiation feedback may be less important compared to wind in our present work. As we will see, this is confirmed by simulations in the present work. We would like to point out two caveats here. One is that in the present paper we only consider an isolated galaxy. If we take into account external gas supply, or more generally if we consider a more gas-rich galaxy, the density of the ISM will increase, so the optical depth of the galaxy will become much larger, and therefore radiative feedback will become more important. Another caveat is that in the current work, we ignore the effect of dust in the ISM. If dust were included, the opacity could be orders of magnitude larger than the electron-scattering opacity \citep{Novak:12}, thus a much larger portion of radiation could be deposited to the ISM. One characteristic output of LLAGNs in the hot accretion mode is jets \citep{Yuan:14}. The comparison between the jet power and the radiative power has been done in the case of the hard state of black hole X-ray binaries based on observational data \citep{Fender:03}. It was found that the radiation power is larger than the jet power when the luminosity is not too low, $L_{\rm BH}\ga (10^{-4}-10^{-3})L_{\rm Edd}$. On the theoretical side, \citet{Yuan:15} have compared the power and momentum flux between jet and wind.
However, since that work deals with a non-spinning black hole, only the disk-jet, which is powered by the rotation of the accretion flow rather than the black hole spin, is considered; the Blandford-Znajek jet is not. In that case, it is found that both the power and the momentum flux of wind are much larger than those of the jet. In the case of a rapidly spinning black hole, it is expected that the jet power will be much larger. It is unclear how the comparison between jet and wind (and radiation) will change, and this requires future investigation (Yuan et al. in preparation). In addition to the direct comparison of their power, another important factor when comparing their importance for feedback is their coupling efficiency with the ISM. As we argue above, it appears that only a small fraction of the radiation power is converted to the ISM. It is desirable to study the case of the jet. \section{Model}\label{sec:Model} In this section, we discuss the other aspects of the model, such as the galaxy model, the treatment of stellar evolution, and the hydrodynamical equations. These are the same as in \citet{Gan:14}. For completeness, here we briefly describe them as follows. \subsection{Model of galaxy and stellar evolution } Our galaxy model represents an isolated elliptical galaxy. The gravitational potential consists of the contributions of a dark matter halo, a stellar spheroid embedded in it, plus a central black hole. They dominate the gravity beyond 10 ${\rm kpc}$, $0.1-10~{\rm kpc}$, and within $0.1 ~{\rm kpc}$, respectively. The self-gravity of the ISM is ignored in our simulation. For consistency with previous works and ease of comparison, and for simplicity, we assume that the galaxy evolves in isolation without any external fuel source, either from accretion from the intergalactic medium or from acquisition by mergers.
Following previous works, we also neglect the initial ISM in our work, and all the gas fueling the black hole comes from stellar evolution, including stellar winds and supernovae. This is an important caveat of our present work. In this case, some results shown are simply for qualitative illustration, not rigorous comparison with observations. In the future we will include the effects of external gas supply and mergers. The calculation of the stellar evolution in our simulation follows the description presented in \citet{Ciotti:12}. In fact, over a cosmological time span, the total stellar wind injected can reach $20\%-30\%$ of the total initial stellar mass, which is two orders of magnitude larger than the black hole mass. Both stellar winds and supernova explosions provide mass and energy to the galaxy, and these effects are taken into account in our simulations. This gas, when it cools due to radiation, will form stars. Some newly formed massive stars evolve quickly and explode via Type II supernovae. These processes are considered in the simulation and their calculations are described in \S~\ref{subsec:starformation}. The stellar distribution is described by the Jaffe profile \citep{Jaffe:83}, \begin{equation} \rho_{\star} = \frac{M_{\star}\,r_{\star}}{4\pi r^{2} (r_{\star}+r)^{2}}, \label{stellardis} \end{equation} where $M_{\star}$ is the total stellar mass, and $r_{\star}$ is the scale length of the galaxy, which corresponds to the projected half-mass radius (i.e., effective radius) of $R_{e} = 0.7447\,r_{\star} = 6.9$ kpc \citep{Ciotti:09}. The density profile of the dark matter halo is set so that the total mass profile decreases as $r^{-2}$, as observed \citep[e.g.,][]{Rusin:05, Czoske:08, Dye:08}. The values of model parameters are chosen so that the galaxy obeys the edge-on view of the fundamental plane \citep{Djorgovski:87} and the Faber-Jackson relation \citep{Faber:76}.
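As an illustration, Eq. (\ref{stellardis}) and the enclosed mass it implies, $M_\star(<r)=M_\star\,r/(r+r_\star)$, can be written as follows (a minimal Python sketch; the helper names are ours and the enclosed-mass formula follows from integrating the profile over shells):

```python
import math

def jaffe_density(r, m_star, r_star):
    """Stellar density of the Jaffe (1983) profile, Eq. (stellardis)."""
    return m_star * r_star / (4.0 * math.pi * r ** 2 * (r_star + r) ** 2)

def jaffe_enclosed_mass(r, m_star, r_star):
    """Integrating Eq. (stellardis) over shells gives M(<r) = M* r / (r + r*)."""
    return m_star * r / (r + r_star)
```

In three dimensions half of the stellar mass lies inside $r=r_\star$, while the projected half-mass radius is $R_e=0.7447\,r_\star$, as quoted above.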
The total stellar mass is $M_\star=3\times 10^{11}M_{\odot}$, the velocity dispersion is set to be $\sigma = 260\,{\rm km\,s^{-1}}$, and the stellar mass-to-light ratio is $M_{\star}/L_{\rm B}=5.8$, where the total $B$-band luminosity is $L_{\rm B} = 5\times 10^{10} \,L_{\rm B\odot}$. The initial black hole mass is determined according to the correlation between the black hole mass and galaxy mass \citep[e.g.,][]{Magorrian:98, Kormendy:13}. In this paper, we adopt the updated correlation given in \citet{Kormendy:13}, which gives an initial black hole mass of $M_{\rm BH}=6\times 10^{-3}M_\star$ for $M_{\star}=3\times 10^{11}M_{\odot}$. But simply to examine the effect of the black hole mass, we sometimes also run models using the ``old'' \citep{Magorrian:98} correlation, which gives $M_{\rm BH}=10^{-3}M_\star$, for comparison purposes. Most of the gas is provided by stellar evolution in our work. So the initial angular momentum of gas ejected from the stars is determined by the stellar rotation. In this paper, we assume that the stars rotate slowly. The rotation profile is described by \citet{Novak:11}, \begin{equation} \frac{1}{v_{\phi}(R)} = \frac{d}{\sigma_{0}\,R} + \frac{1}{f\sigma_{0}} + \frac{R}{j}, \end{equation} where $v_{\phi}$ is the rotation velocity, $R$ is the distance to the $z$-axis, and $\sigma_{0}$ is the central one-dimensional line-of-sight velocity dispersion for the galaxy model. Here, $d,f,$ and $j$ are parameters that control the angular momentum profile. In this model, the stars rotate as a solid body at $R<d$ and with constant specific angular momentum $j$ at larger radii. We adjust the parameters to avoid forming a rotationally supported gas disk inside the innermost grid cell of our simulation domain. In the companion paper \citep{Yoon:18}, we will consider the high angular momentum case.
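The rotation law above and its three regimes can be sketched in a few lines (Python; a minimal illustration, with all quantities in arbitrary consistent units):

```python
def v_phi(R, sigma0, d, f, j):
    """Stellar rotation profile of Novak et al. (2011):
    1/v_phi = d/(sigma0 R) + 1/(f sigma0) + R/j.
    Solid-body rotation (v_phi ~ sigma0 R / d) at small R, a flat part
    near f*sigma0 at intermediate R, and constant specific angular
    momentum (v_phi ~ j/R) at large R."""
    return 1.0 / (d / (sigma0 * R) + 1.0 / (f * sigma0) + R / j)
```

Whichever of the three terms dominates the sum sets the local behavior, which is why the profile interpolates smoothly between the solid-body and constant-$j$ regimes.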
\subsection{Energy and momentum interaction between radiation and ISM} \label{subsec:radiativefeedback} The radiation emitted from the central AGN will heat or cool the ISM, and also exerts a radiation force on the ISM. We calculate the radiative heating and cooling based on the formulae presented in \citet{Sazonov:05}, which describe the net heating or cooling rate per unit volume of a cosmic plasma in photoionization equilibrium. The processes considered include Compton heating and cooling, bremsstrahlung loss, photoionization, line and recombination heating and cooling. In particular, the calculation of the Compton heating or cooling is described by eq. (\ref{comptonheating}) in terms of the Compton temperature. As we emphasize in \S~\ref{sec:agnphysics}, since the typical SEDs emitted in the cold and hot accretion (feedback) modes are very different, the corresponding Compton temperature in the hot mode is several times higher than that in the cold mode (refer to eqs. (\ref{coldtemperature}) and (\ref{hottemperature})). Some simplifications are adopted. We neglect the effect of dust. Radiative transfer is also treated, but in an approximate way, by assuming the flow is optically thin. For the momentum interaction, i.e., the radiation force, we follow \citet{Novak:11} and consider both the electron scattering and the absorption of photons by atomic lines. The former is described by \begin{equation} (\nabla p_{\rm rad})_{\rm es}=-\frac{\rho \kappa_{\rm es}}{c}\frac{L_{\rm BH}}{4\pi r^2}. \end{equation} Here $\kappa_{\rm es}=0.35~{\rm cm}^2~{\rm g}^{-1}$ is the electron scattering opacity. The latter is described by \begin{equation} (\nabla p_{\rm rad})_{\rm photo}=\frac{H}{c}. \end{equation} Here $H$ is the radiative heating rate per unit volume. \subsection{Star formation} \label{subsec:starformation} Star formation is implemented by subtracting mass, momentum and energy from the grid (see \citealt{Novak:11} for details).
The star formation rate per unit volume is determined by \begin{equation} \dot{\rho}_{\rm SF} = \frac{\eta_{\rm SF}\,\rho}{\tau_{\rm SF}}, \end{equation} where we adopt an SF efficiency of $\eta_{\rm SF}=0.1$, and the SF time scale, $\tau_{\rm SF}$, is \begin{equation} \tau_{\rm SF} = \max(\tau_{\rm cool},\tau_{\rm dyn}), \end{equation} where the cooling time scale, $\tau_{\rm cool}$, and the dynamical time scale, $\tau_{\rm dyn}$, are \begin{equation} \tau_{\rm cool} = \frac{E}{C},~~ \tau_{\rm dyn}=\min(\tau_{\rm ff},\tau_{\rm rot}) \end{equation} with \begin{equation} \tau_{\rm ff} = \sqrt{\frac{3\,\pi}{32G\rho}},~~ \tau_{\rm rot}=\sqrt{r\left(\frac{\partial\,\Phi(r)}{\partial\,r}\right)^{-1}}, \end{equation} where $E$ is the internal energy density, $C$ is the cooling rate per unit volume, and $\Phi(r)$ is the gravitational potential at a given radius. The corresponding loss rates of energy and momentum due to star formation are \begin{equation} \dot{E}_{\rm SF}=\frac{\eta_{\rm SF}E}{\tau_{\rm SF}}, ~~~~\dot{\mathbf{m}}_{\rm SF}=\frac{\eta_{\rm SF}\mathbf{m}}{\tau_{\rm SF}}=\dot{\rho}_{\rm SF} \mathbf{v}. \end{equation} Here $\mathbf{m}$ is the momentum density of the ISM and $\mathbf{v}$ is the velocity vector of the ISM. On the other hand, among the newly formed stars, there is a population of massive stars. Massive stars have relatively short lifetimes and finally explode as Type II supernovae (SNe II). They then eject mass and energy into the ISM. This has also been considered in our simulation. We note that there is a caveat in our simulations that we do not take into account the migration of stars; instead, stars keep their locations at all times.
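The star-formation prescription above can be summarized as follows (a minimal Python sketch in cgs units; note that we read $\tau_{\rm rot}$ as the dimensionally consistent $\sqrt{r/(\partial\Phi/\partial r)}$, i.e. of order $r/v_{\rm circ}$, and the function names are ours):

```python
import math

G = 6.674e-8  # gravitational constant [cgs]

def sf_timescale(E, C, rho, r, dphi_dr):
    """tau_SF = max(tau_cool, tau_dyn), with tau_dyn = min(tau_ff, tau_rot)."""
    tau_cool = E / C                                      # cooling time
    tau_ff = math.sqrt(3.0 * math.pi / (32.0 * G * rho))  # free-fall time
    tau_rot = math.sqrt(r / dphi_dr)                      # ~ r / v_circ
    return max(tau_cool, min(tau_ff, tau_rot))

def sf_sink_rates(rho, E, m_vec, tau_sf, eta_sf=0.1):
    """Mass, energy and momentum removed per unit time and volume."""
    rho_dot = eta_sf * rho / tau_sf
    E_dot = eta_sf * E / tau_sf
    m_dot = [eta_sf * m / tau_sf for m in m_vec]  # = rho_dot * v
    return rho_dot, E_dot, m_dot
```

Taking the maximum of the cooling and dynamical times means star formation proceeds on whichever process is the bottleneck for the local gas.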
\subsection{Hydrodynamics} The evolution of the galactic gas flow, given all the above physical processes including star formation and AGN feedback, is described by the following time-dependent Eulerian equations for mass, momentum, and energy conservation \citep[e.g.,][]{Ciotti:12}: \begin{equation} \frac{\partial \rho}{\partial t} + \nabla \cdot \left( \rho \mathbf{v} \right) = \alpha\,\rho_{\star} + \dot{\rho}_{II}-\dot{\rho}_{\star}^{+}, \end{equation} \begin{equation} \frac{\partial \mathbf{m}}{\partial t} + \nabla \cdot \left( \mathbf{m v} \right) = -\nabla p_{\rm gas} + \rho\mathbf{g} -\nabla p_{\rm rad} - \dot{\mathbf{m}}_{\star}^{+}, \end{equation} \begin{equation} \frac{\partial E}{\partial t} + \nabla \cdot \left( E \mathbf{v} \right) = -p_{\rm gas}\nabla \cdot \mathbf{v} + H - C + \dot{E}_{S} + \dot{E}_{I} + \dot{E}_{II} - \dot{E}_{\star}^{+}, \end{equation} where $\rho,\,\mathbf{m},$ and $E$ are the gas mass, momentum and internal energy per unit volume, respectively. $\mathbf {v}$ is the velocity, $p_{\rm gas}= (\gamma -1)E$ is the gas pressure, the adiabatic index $\gamma = 5/3$, and $\mathbf {g}$ is the gravitational field of the galaxy (i.e., stars, dark matter, plus the time-dependent contribution of the growing central SMBH). $\alpha\,\rho_{\star}$ is the mass source from the stellar evolution, and $\dot{E}_{\rm S}$ corresponds to the thermalization of the stellar wind due to stellar velocity dispersion, as the ejected gas collides with the mass lost from other stars and/or with the ambient gas \citep{Parriott:08}. This process provides heat to the ISM at a rate $\dot{E}_{\rm S} = \frac{1}{2}\,\alpha\,\rho_{\star}\, {\rm Tr}(\sigma^{2})$, where $\sigma$ is the isotropic one-dimensional stellar velocity dispersion without the contribution of the central black hole \citep{Ciotti:09}.
\begin{equation} \sigma^{2}(s) = \sigma_{0}^{2}(1+s)^{2}s^{2} \left[ 6 \ln\left( \frac{1+s}{s} \right) + \frac{1-3s-6s^{2}}{s^{2}(1+s)} \right], \end{equation} where $s\equiv r/r_{\star}$. The term $\dot{\rho}_{II}$ in the mass equation denotes the mass return from SNe II, while $\dot{\rho}^+_*$, $\dot{\mathbf{m}}^+_*$ and $\dot{E}^+_*$ denote the sink terms of mass, momentum, and energy due to star formation, respectively. In the energy equation, $\dot{E}_I$ and $\dot{E}_{II}$ are the feedback rates of energy from SNe I and SNe II, respectively. Finally, $H$ and $C$ denote the radiative heating and cooling rates (\S~\ref{subsec:radiativefeedback}). In total we thus have three different heating mechanisms, i.e., AGN heating, stellar heating, and supernova heating. It is then an interesting question which one dominates over the others. This is the topic of another paper \citep{Li:18}. We find that the answer depends on the region in the galaxy, the time, and the properties of the galaxy. Roughly speaking, stellar heating processes ($\dot{E}_{S},\, \dot{E}_{I},\,{\rm and}\, \dot{E}_{II}$) likely dominate over the AGN heating in the galactic outskirts, while supernova heating is more important than stellar heating. \subsection{Simulation Setup} \label{simulationsetup} We employ the parallel ZEUS-MP/2 code \citep{Hayes:06}, using two-dimensional axisymmetric spherical coordinates ($r,\theta, \phi$). Following \citet{Novak:11}, in the $\theta$ direction the mesh is divided homogeneously into 30 cells; in the radial direction, covering the radial range of 2.5 pc -- 250 kpc, we use a logarithmic mesh with 120 cells. A small range of $\theta$ around the axis is excluded to avoid the singularity there. With such a grid, the finest resolution is at the innermost cell, which is $\sim$ 0.3 pc. Such a configuration is essential since the innermost region is the place where radiation and wind from the AGN originate, and thus most important.
In particular, the inner boundary radius is chosen to resolve the Bondi radius. For gas with sound speed $c_s$, the Bondi radius is estimated to be \begin{equation} r_B=\frac{GM_{\rm BH}}{c_s^2}. \end{equation} In the general case, the accretion flow is not homogeneous but a mixture of cold clumps and hot gas. The highest temperature the hot-phase gas can reach can be roughly estimated by the Compton temperature $T_{\rm C}$. In the hot feedback mode, we have $T_{\rm C,hot}\approx 10^8{\rm K}$. Considering a typical black hole mass of $M_{\rm BH}=2\times 10^9M_{\odot}$ (refer to Fig. \ref{fig:bhmass}), we have $r_B=6 {\rm pc}$, which is larger than the radius of the inner boundary of our simulation domain (2.5 pc). For the cold mode, the Bondi radius will be even larger and thus more easily resolved. The accretion rate at the innermost radius of the simulation $r_{\rm in}$ is calculated by \begin{equation} \dot{M}(r_{\rm in}) = -2\pi r_{\rm in}^2\int^{\pi}_0 \rho(r_{\rm in},\theta) ~ {\rm min} \left[v_{r}(r_{\rm in},\theta),0\right] \sin{\theta}\,d\theta. \label{mdotbondi} \end{equation} Note that both the hot and cold phases of the gas are included in the above calculation. Such a calculation of the accretion rate is obviously much more precise than that given by the Bondi accretion rate formula often adopted in the literature, which assumes accretion of single-phase, non-rotating gas (see \citealt{Negri:17} for a summary of the problems of using the simple Bondi accretion rate formula; see also \citealt{Gaspari:18} for the discussion of ``chaotic cold accretion''.). As for the boundary conditions, at the inner and outer radial boundaries we use the standard ``outflow boundary condition'' in the ZEUS code (see \citealt{Stone:92} for more details), so that the gas is free to flow in and out at the boundary. In the $\theta$ direction, a ``reflecting boundary condition'' is set at each pole.
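A direct numerical transcription of Eq. (\ref{mdotbondi}) might look as follows (a Python sketch; the trapezoidal quadrature and the sign convention that inflow counts as positive are our choices, and the density and velocity profiles are passed in as callables):

```python
import math

def mdot_inflow(r_in, rho_of_theta, vr_of_theta, n=2000):
    """Accretion rate through r_in, Eq. (mdotbondi): only gas with
    v_r < 0 (inflow) contributes.  The overall sign is chosen so
    that net inflow yields a positive rate."""
    def f(th):
        return rho_of_theta(th) * min(vr_of_theta(th), 0.0) * math.sin(th)
    h = math.pi / n
    s = 0.5 * (f(0.0) + f(math.pi)) + sum(f(i * h) for i in range(1, n))
    return -2.0 * math.pi * r_in ** 2 * s * h
```

For uniform density and pure radial inflow at unit speed this reduces to $4\pi r_{\rm in}^2\rho v$, as expected for spherical inflow, while purely outflowing gas contributes nothing.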
A temperature floor of $10^4{\rm K}$ is adopted in the cooling functions, since the gas cannot reach these low temperatures by radiative cooling alone \citep{Sazonov:05, Novak:11}. \begin{deluxetable*}{cccccccc}[!htbp] \tablecolumns{8}\tabletypesize{\scriptsize}\tablewidth{0pt} \tablecaption{Description of the simulations \label{tab:model}} \tablehead{ \colhead{model} & $M_{\rm BH}(M_{\odot})$ & $\epsilon_{\rm EM,max}$ & \colhead{Mechanical} & \colhead{Radiative} & \colhead{$M_{\rm BH,final} (M_{\odot})$} & \colhead{$\Delta M_{\rm \star,final} (M_{\odot})$} & \colhead{duty cycle (\%)} \\ \colhead{} & \colhead{} & \colhead{} & \colhead{Feedback} & \colhead{Feedback} & \colhead{} & \colhead{} & \colhead{}} \startdata fullFB & $1.8\times 10^9$ & 0.1 & o & o & $2.1\times 10^9$ & $6.5\times 10^9$ & $2.4\times10^{-2}$ \\ windFB & $1.8\times 10^9$ & 0.1 & o & x & $2\times 10^9$ & $6.6\times 10^9$ & $2.6\times10^{-2}$ \\ radFB & $1.8\times 10^9$ & 0.1 & x & o & $1.8\times 10^{10}$ & $1\times 10^{10}$ & 7.4 \\ noFB & $1.8\times 10^9$ & 0.1 & x & x & $1.3\times 10^{10}$ & $1.2\times 10^{10}$ & -- \\ fullFBem03 & $1.8\times 10^9$ & 0.3 & o & o & $1.9\times 10^{9}$ & $5.8\times 10^{9}$ & $9.9\times10^{-3}$ \\ windFBem03 & $1.8\times 10^9$ & 0.3 & o & x & $1.8\times 10^{9}$ & $7.2\times 10^{9}$ & $9.8\times10^{-3}$ \\ fullFBmag & $3\times 10^8$ & 0.1 & o & o & $3.3\times 10^{8}$ & $8.6\times 10^{9}$ & $3.8\times10^{-2}$ \\ \enddata \end{deluxetable*} ~~~~~ \section{Results} \label{sec:Results} \begin{figure*}[!htbp] \begin{center}$ \begin{array}{cc} \includegraphics[width=0.45\textwidth]{c1_mag_zoom1.pdf} & \includegraphics[width=0.45\textwidth]{c1_mag_zoom2.pdf} \end{array}$ \end{center} \caption{Spatial distribution of density (upper), temperature (middle), and radial velocity (bottom) at three different times in correspondence with an outburst of the AGN. The left column is for an outburst occurring in the cold feedback mode. 
The left, middle, and right plots correspond to $t=1.516$ Gyr (immediately before the outburst; density and temperature are both quite smooth in the central region of the galaxy), 1.525 Gyr (close to the peak of the outburst; two outflowing low-density wind regions are evident in the two polar directions, up to 100 pc; dense and low-temperature gas around the equatorial plane is inflowing to fuel the black hole), and 1.54 Gyr (just after the outburst; the density and temperature become smooth again in the central region of the galaxy, but, different from the epoch before the outburst, the gas in most of the central region of the galaxy is still outflowing), respectively. The right column is for an outburst occurring in the hot feedback mode. The left, middle, and right plots correspond to $t=1.815$ Gyr (immediately before the outburst; in the central region of the galaxy density and temperature are less smooth compared to the cold-mode outburst in the left column, and more gas is outflowing), 1.82 Gyr (close to the peak of the outburst; compared to the cold-mode outburst in the left column, two outflowing wind regions are also evident but weaker; around the equatorial plane, the gas is also inflowing to fuel the black hole, but the gas is less dense and its temperature is higher), and 1.83 Gyr (just after the outburst; compared to the cold-mode outburst in the left column, the density and temperature also become smooth again in the central region of the galaxy, but the gas is mainly inflowing), respectively.\\ } \label{fig:agnoutburst} \end{figure*} In this work, we consider both the cold and hot feedback modes, and in each mode feedback by both radiation and wind is taken into account. In order to understand the respective roles of radiation and wind, we carry out four runs: one with both mechanical and radiative feedback (fullFB), one with only mechanical feedback (windFB), one with only radiative feedback (radFB), and one with no feedback (noFB).
In addition, we also perform a run with a higher radiative efficiency of $\epsilon_{\rm EM}=0.3$ (fullFBem03), which corresponds to the case of a rapidly spinning black hole. A model with a smaller initial black hole mass, based on the \citet{Magorrian:98} correlation, is also calculated for comparison and denoted ``fullFBmag''. All these models are listed in Table~\ref{tab:model}. The final values of the black hole mass, the accumulated mass of new stars, and the duty cycle (the ratio of the duration spent in the cold mode to the total duration of AGN activity) are also given in the table. In order to investigate the effects of different AGN physics, we will compare our results with relevant previous works by \citet{Gan:14} and \citet{Ciotti:17}. The model frameworks of these two works are very similar to that of the present paper, except for the AGN physics in both the cold and hot feedback modes. We specifically choose to compare our results with the model ``B05v'' in \citet{Gan:14}. \subsection{Overview of the Evolution} \label{overallscenario} Our simulation starts when the stellar population has an age of 2 Gyr. If there were no AGN feedback, the galaxy would evolve smoothly. When AGN feedback is included, the overall evolution of the galaxy is similar to previous works \citep[e.g.,][]{Novak:11, Gan:14, Ciotti:17}. In this case, on the one hand, the radiation and wind from the AGN will interact with the gas in the galaxy and change its properties, especially the spatial distributions of density and temperature, as we will see from Fig. \ref{fig:agnoutburst}. The changes in density and temperature will in turn alter the star formation and the whole evolution of the galaxy. On the other hand, the change in the properties of the gas will also affect the fueling and activity of the AGN and the black hole growth. In particular, the activity of the AGN will strongly fluctuate, as we will explain in the following paragraphs. This results in the duty cycle of the AGN.
In the following subsections, we will discuss these issues one by one. In this subsection, we focus on introducing the general scenario of the evolution of AGN activity and the feedback effects on the gas in the galaxy. To this aim, we have drawn Fig. \ref{fig:agnoutburst}. There are two columns in this figure, corresponding to an outburst occurring in the cold mode (left) and in the hot mode (right), respectively. In each column, from top to bottom, we show the evolution of density, temperature, and radial velocity of the gas in the galaxy before, during, and after the outburst. The three plots in the left column correspond to $t=1.516$ Gyr (immediately before the outburst), 1.525 Gyr (close to the peak of the outburst), and 1.54 Gyr (just after the outburst), respectively. The maximum accretion rate in this interval can reach 0.41$\dot{M}_{\rm Edd}$. Before the outburst, since the accretion rate of the AGN is low, the radiation and wind from the central AGN are weak, thus the galaxy is hardly disturbed. So we can see from the figure that, in the central region of the galaxy, $r\la 100$ pc, the spatial distributions of both density and temperature of the gas are quite smooth, and the gas is all inflowing toward the black hole. This gas cools by radiation, so its density becomes higher. We can see from the figure that there are many cold dense clumps and filaments outside of $\sim 100$ pc. They are formed by thermal instability and Rayleigh-Taylor instability of the gas. They are obviously ideal sites for star formation. The fall of these clumps will significantly increase the accretion rate of the black hole and cause the AGN to enter the outburst phase on a timescale of $\sim 100~{\rm pc}/v_{\rm ff}(100{\rm pc})\sim 1 {\rm Myr}$. Here $v_{\rm ff}(100{\rm pc}) $ is the free-fall velocity at 100 pc. During the outburst, the accretion rate is much higher, so the radiation and wind from the central AGN become much stronger.
Consequently, as shown by the plots in the middle column, two low-density and high-temperature outflowing regions are quite evident in the polar directions. These are clearly driven by the wind. The temperature of the wind region is as high as $\sim 10^9$ K; such a high temperature is reached because the kinetic energy of the wind is converted into thermal energy. We can see from the figure that the wind region extends up to $\sim 100$ pc. This figure is only a snapshot; the wind actually reaches much farther, $\sim 20$ kpc, as we will discuss later. Star formation in the wind region will be strongly suppressed. Close to the equatorial plane of the galaxy, there are many high-density and low-temperature gas clouds, which are partly formed by compression exerted by the wind. This region is ideal for star formation. This gas is fueling the black hole and causes the high accretion rate of the AGN. The strong mechanical feedback by wind and radiative feedback by radiation will strongly suppress the accretion rate of the AGN, and thus the outburst quickly decays. The decaying phase is shown by the right plots. We can see that again the spatial distributions of density and temperature of the gas in the central region of the galaxy become smooth, similar to the phase before the outburst. However, different from that phase, the gas in most of the region within several hundred pc is outflowing. This is due to the AGN feedback. The three plots in the right column correspond to $t=1.815$ Gyr (immediately before the outburst), 1.82 Gyr (close to the peak of the outburst), and 1.83 Gyr (just after the outburst), respectively. The minimum and maximum accretion rates in this time interval are $ 10^{-4} \dot{M}_{\rm Edd}$ and $10^{-2} \dot{M}_{\rm Edd}$, respectively. Before the outburst, the spatial distributions of density and temperature are also smooth, although not as smooth as in the left column.
We see from the radial velocity plot that the gas in the central region is outflowing. This is because winds exist in the hot accretion mode. This also explains why the calculated accretion rate is so low although the density is relatively high. As time elapses, the gas becomes cooler due to radiation, so the accretion rate increases and the AGN enters the outburst phase (middle plots). Similar to the case of the left column, we can also clearly see the two low-density and high-temperature outflowing regions, which are obviously driven by the wind in the hot mode. The difference is that now the wind regions are less obvious in the figure. This is of course because the accretion rate is much lower, so the wind is weaker in the hot mode. Another difference between this plot and that in the left column is that the temperature of the gas around the equatorial plane is now higher, $\sim 10^8$ K. Such a temperature is also close to the Compton temperature in the hot accretion mode (refer to eq. (\ref{hottemperature})). This is likely because of Compton heating. The decaying phase is shown by the right plots. Compared to the cold-mode outburst in the left column, the density and temperature also become smooth again in the central region of the galaxy. Similar to the left column, the gas within several tens of pc is also outflowing; but in this case, the outflowing velocity becomes smaller and the outflowing region also shrinks. \subsection{Light Curve of AGN Luminosity} \label{subsec:lightcurve} \begin{figure*} \centering \includegraphics[width=0.45\textwidth]{lightcurves.pdf} \includegraphics[width=0.45\textwidth]{lightcurves_zoom.pdf} \vspace{0.1cm} \caption{Light curves of AGN luminosity as a function of time for various models. The left panel is for the whole simulation time, while the right panel zooms in on the interval between 2.93 and 3.03 Gyr.
The data dump time intervals are 2.5 Myr (left panel) and 0.1 Myr (right panel), respectively.\\~~~~~} \label{fig:ldot} \end{figure*} \begin{figure}[!htbp] \centering \includegraphics[width=0.5\textwidth]{AGNduration_FullFB.pdf} \caption{The duration (or lifetime) of AGN outbursts as a function of evolution time for the fullFB model. } \label{fig:agnlifetime} \end{figure} Fig.~\ref{fig:ldot} shows the light curves of the central AGN for each model. For comparison, the ``fullFBmag'' and ``B05v'' models are also included. The left panel is for the whole simulation time, while the right one zooms in on the interval between 2.93 and 3.03 Gyr. Note that, for clarity, when drawing the light curves in the left panel we sample the data with a relatively large time interval between adjacent points; as a result, some outbursts are filtered out. For the right panel we use the exact simulation data. We see that the light curve of the AGN for the noFB model is featureless; once AGN feedback is included in the model, as we explain in the next subsection, the AGN luminosity strongly fluctuates. From the right zoom-in panel for the fullFB model, we can see that the AGN spends most of its time in the low-luminosity phase, with a typical $L_{\rm BH}\sim 10^{-4}L_{\rm Edd}$. We will discuss the AGN duty cycle in detail in \S\ref{dutycycle}. We can see from the figure that the variability amplitudes for the radFB and windFB models are roughly similar, and both of them are similar to the fullFB model. This result indicates that both the mechanical feedback by wind and the radiative feedback by radiation can cause similar amplitudes of AGN variability. However, there is also an important difference. In the time-averaged sense, especially from the right zoom-in plot, we can see that the AGN luminosity in the radFB model is $\sim 10^{-2} L_{\rm Edd}$, almost two orders of magnitude larger than that in the windFB model, which is $\sim 10^{-4}L_{\rm Edd}$.
The main reason for such a big difference is the difference in the ``typical length scale of feedback''. The length scale for wind (eq. \ref{windlength}) is several orders of magnitude shorter than that for radiation (eq. \ref{radlength}). Therefore, wind can efficiently deposit its momentum and energy into the ISM in a small volume around the black hole, and thus significantly reduce its mass accretion rate. Radiation can only deposit its energy and momentum into the ISM over a much larger scale, and thus is not efficient in reducing the accretion rate. In addition, as shown by Fig. \ref{fig:windradcomp}, in the cold mode, the momentum flux of wind is larger than that of radiation. This means the wind can more effectively push the gas away from the black hole to reduce the accretion rate. We can see that in the fullFB model, the ``baseline'' AGN luminosity is very similar to that of the windFB model. This indicates that the mass accretion rate of the black hole is controlled by the wind feedback rather than by the radiation. This is what we expect from our analysis presented in \S\ref{subsec:comparison}. This also explains why the growth of the black hole mass in the windFB model is much smaller than that in the radFB model, as we will discuss in \S\ref{subsec:BHgrowth}. However, by comparing the right zoom-in plots of the fullFB and windFB models, we can see that their light curves are still different, with more outbursts in the fullFB model than in the windFB model. This indicates that feedback by radiation and wind may couple together and neither of them can be neglected. The AGN variability amplitude in both the windFB and radFB models suddenly becomes much smaller after $\sim 8$ Gyr. In addition, different from the epoch before 8 Gyr, during which the AGN oscillates between the cold and hot accretion modes, after 8 Gyr the AGN always stays in the low-luminosity hot accretion mode. From Fig.
\ref{fig:agnoutburst}, we see that the outbursts of the AGN are caused by the accretion of dense gas such as cold clumps. The radiation and especially the wind from the AGN are very conducive to the formation of such clumps, since they can perturb the ISM and make its density distribution highly inhomogeneous. Obviously, such perturbations are strongest in the cold accretion mode. This argument also explains why the AGN can reach luminosities as low as $\sim 10^{-9}L_{\rm Edd}$ before $\sim 8$ Gyr, which is also because of the strong interaction between the AGN and the ISM. Because the mass-loss rate from the stellar evolution gradually decays with time and because of the mass lost in the galaxy wind, the gas available in the galaxy to fuel the black hole decreases with time. This is verified by the gradual decrease of the light curve of the noFB model. Consequently, after $\sim$ 8 Gyr, the AGN can no longer reach the cold accretion mode, thus the perturbation to the ISM becomes much weaker and the clumps are rarely formed. This explains the disappearance of the outbursts after $\sim$ 8 Gyr in both the windFB and radFB models. For the fullFB model, however, we can still find a few outbursts after 8 Gyr. This is because in this model we have both radiative and mechanical feedback, and thus the perturbation to the ISM is stronger compared to the radFB and windFB models. Now let us focus on the late epoch of the windFB and radFB models. We can see from the figure that when $t\ga 8$ Gyr, the AGN luminosity $L_{\rm BH}\la 10^{-3}L_{\rm Edd}$, thus the AGN always stays in the hot accretion mode for both models. We can see some variability of the AGN in both light curves. The variability amplitude in both cases is small. The main reason is that the radiation and wind are very weak at such a low luminosity.
Another reason is that the gas temperature may be high and the density low, so that the typical interaction length scales for radiation and wind become very large and the interaction with the ISM is not so efficient. The presence of variability in the two models indicates that both wind and radiation have some feedback effects even when they are very weak, at least in terms of modulating the accretion rate. But their respective mechanisms may be different. From Fig. \ref{fig:windradcomp}, we can see that the momentum flux of wind is larger than that of radiation, but the power of radiation is larger than that of wind. So wind feedback may play its role by momentum interaction, while radiation acts by energy interaction (radiative heating). The amplitudes of variability in the two models are similar, which suggests that the importance of wind and radiation may be similar. It will be an important project to study systematically the importance of feedback by wind and radiation in the hot mode. Now let us compare the fullFB model with the fullFBmag model. The only difference between these two models is the initial black hole mass. The most significant difference in their light curves is that in the fullFBmag model, the AGN stays in the high-luminosity outburst phase for a longer duration than in the fullFB model. This is explained as follows. Remember that the AGN mass accretion rate is controlled by the wind feedback. When the wind is stronger, the accretion rate is more strongly reduced, thus it is harder for the AGN to recover to the high-luminosity outburst phase. When the black hole mass is higher, its accretion rate will be higher, and thus the bolometric luminosity will be higher. From eqs. (\ref{eq:coldwindmass}) and (\ref{coldwindvelocity}), both the mass flux and velocity of wind are proportional to the bolometric luminosity. So a heavier black hole has a stronger wind, and thus the duration for the AGN to stay in the high-luminosity outburst phase is shorter.
The fullFBmag and B05v models have exactly the same initial black hole mass, but the AGN physics adopted in the two models is quite different. Such a difference produces very different AGN light curves. From the right zoom-in plots, we can see that the typical AGN luminosity of the fullFBmag model is $10^{-4}-10^{-5}L_{\rm Edd}$; while for the B05v model it is more than two orders of magnitude higher, $\sim 10^{-2}L_{\rm Edd}$. The main physical reason for this difference is that the winds adopted in the current paper in both the cold and hot feedback modes are much stronger than those in \citet{Gan:14}. A minor reason is that the high value of $T_{\rm C,hot}$ adopted in \citet{Gan:14} makes the temperature of the gas surrounding the black hole as high as $10^9~{\rm K}$, so $l_{\rm wind}$ becomes much larger, $\sim 5{\rm kpc}$, thus the mechanical feedback becomes much less effective. The less powerful wind and its low feedback efficiency in the B05v model result in the AGN variability being dominated by radiation instead of wind. This is confirmed by the rather similar pattern of light curves between the radFB and B05v models (refer to the right zoom-in plots). In contrast to the fullFBmag and fullFB models, the B05v model predicts that the AGN will spend a high fraction of its time in the high-luminosity phase. This is not consistent with observations. This indicates the importance of adopting the correct AGN physics. Another important consequence of changing the AGN physics is the effect on the typical AGN lifetime (or duration). The AGN lifetime for the fullFB model is shown in Fig. \ref{fig:agnlifetime}. The typical lifetime of the fullFB model is $\sim 10^5{\rm yr}$. We define the ``on'' and ``off'' of the AGN by comparing its luminosity with the baseline luminosity of the fullFB model, which is $\sim 10^{-4}L_{\rm Edd}$ according to the zoom-in plot of fullFB in Fig. \ref{fig:ldot}.
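The bookkeeping behind this on/off definition can be illustrated with a short sketch: threshold a sampled light curve at a value above the baseline, then measure the duty cycle and the contiguous ``on'' durations. The synthetic light curve and the choice of a threshold ten times the $\sim 10^{-4}L_{\rm Edd}$ baseline below are assumptions of this illustration, not the simulation data:

```python
import numpy as np

def duty_cycle_and_durations(t, lum, threshold):
    """Fraction of time spent above `threshold`, plus the durations of
    the contiguous 'on' (outburst) intervals of a sampled light curve."""
    on = lum > threshold
    dt = np.diff(t)
    duty = np.sum(dt[on[:-1]]) / (t[-1] - t[0])
    durations, current = [], 0.0
    for i in range(len(dt)):        # walk the intervals between samples
        if on[i]:
            current += dt[i]
        elif current > 0.0:
            durations.append(current)
            current = 0.0
    if current > 0.0:
        durations.append(current)
    return duty, durations

# Synthetic light curve (hypothetical, for illustration only):
# a 1e-4 L_Edd baseline with two short outbursts.
t = np.linspace(0.0, 10.0, 10001)            # Myr
lum = np.full_like(t, 1e-4)                  # in units of L_Edd
lum[(t > 2.0) & (t < 2.1)] = 1e-1            # ~0.1 Myr outburst
lum[(t > 7.0) & (t < 7.2)] = 1e-2            # ~0.2 Myr outburst
duty, durations = duty_cycle_and_durations(t, lum, threshold=1e-3)
print(f"duty cycle = {duty:.3f}")            # 'on' fraction, ~0.03 here
print("outburst durations (Myr):", [round(d, 2) for d in durations])
```

The same thresholding applied to the actual light curve of Fig. \ref{fig:ldot} yields the outburst durations plotted in Fig. \ref{fig:agnlifetime}.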
As a comparison, the AGN lifetime of the B05v model in \citet{Gan:14} is roughly $\sim 10^7 {\rm yr}$ (refer to their Fig. 2 and the right panel of Fig. \ref{fig:ldot}). So with the new AGN physics adopted in this paper, the AGN lifetime becomes much shorter. This new value is consistent with observations \citep[e.g.,][]{Martini:03,Keel:12,Schawinski:15}. For example, based on the time lag between an AGN switching on and the time the AGN requires to photoionize a large fraction of the host galaxy, \citet{Schawinski:15} estimate that the AGN typically lasts $\sim 10^5{\rm yr}$. \subsection{Mass Growth of the Black Hole} \label{subsec:BHgrowth} \begin{figure*}[!htbp] \begin{center}$ \begin{array}{cc} \includegraphics[width=0.53\textwidth]{mbh_growth.pdf} & \includegraphics[width=0.45\textwidth]{mbh_growth_mag.pdf} \end{array}$ \end{center} \caption{Evolution of black hole mass for various models. In the left panel, the upper plot shows the black hole mass growth for the models with an initial black hole mass of $1.8\times10^{9} \, M_{\odot}$, and the bottom plot shows the black hole mass growth for the fullFB, windFB, and fullFBem03 models in detail on a linear scale. The right panel is for the two models with an initial black hole mass of $3\times10^{8}\,M_{\odot}$.} \label{fig:bhmass} \end{figure*} \begin{figure}[!htbp] \centering \includegraphics[width=0.5\textwidth]{den_bndry_c1_norad.pdf} \caption{Time evolution of the gas density at the inner boundary of the simulation for two models. The horizontal dashed lines indicate the time-averaged density. } \label{fig:denin} \end{figure} Fig.~\ref{fig:bhmass} shows the evolution of black hole mass for each model. Remember that the initial mass of the black hole is set to be $M_{\rm BH,init} = 6\times 10^{-3}\, M_{\star,\rm init}=1.8\times 10^9M_{\odot}$ \citep{Kormendy:13}.
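Since luminosities throughout this section are quoted in Eddington units, it may help to convert them to absolute units. The sketch below simply applies the standard textbook formula $L_{\rm Edd}=4\pi GMm_pc/\sigma_T$ (with the usual electron-scattering opacity) to the initial black hole mass; it is a convenience calculation, not part of the simulation code:

```python
import math

# CGS constants
G, m_p, c, sigma_T = 6.674e-8, 1.673e-24, 2.998e10, 6.652e-25
M_sun = 1.989e33

def L_edd(M_grams):
    """Eddington luminosity (erg/s) for a black hole of mass M in grams."""
    return 4.0 * math.pi * G * M_grams * m_p * c / sigma_T

L1 = L_edd(M_sun)                    # ~1.26e38 erg/s per solar mass
L_init = L_edd(1.8e9 * M_sun)        # for M_BH,init = 1.8e9 Msun
print(f"L_Edd(1 Msun) = {L1:.2e} erg/s")
print(f"L_Edd(M_BH,init) = {L_init:.2e} erg/s")
print(f"baseline 1e-4 L_Edd = {1e-4 * L_init:.2e} erg/s")
```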
For the noFB model, at the end of the evolution the black hole mass reaches over $10^{10} M_{\odot}$, which is obviously too large compared to observations. Since there is no AGN feedback, the gas keeps accreting onto the black hole with little disturbance. In such a situation, essentially only star formation can deplete gas in the galaxy and reduce the black hole accretion rate, and this is not very efficient. This is why the black hole can grow to a very large mass. In the radFB model, where only radiative feedback is considered, the black hole mass becomes even slightly larger than in the noFB case. This is surprising at first sight, since one may think the energy input to the gas by AGN radiation should reduce the accretion rate. While this effect must exist, another effect seems to be more important. That is, when the AGN radiation is included, the star formation is suppressed to some degree, mainly due to the radiative heating of the ISM (refer to Fig. \ref{fig:newst}). Consequently, there will be more gas left and falling onto the inner region of the galaxy to feed the black hole, so the accretion rate is higher. The growth of the black hole in the windFB model is substantially suppressed compared to the noFB model. In fact, the final black hole mass in the windFB model is $2\times 10^9M_{\odot}$, only slightly larger than its initial value. This indicates the high efficiency and dominant role of the wind feedback in suppressing the mass accretion rate. The reason has been explained in detail in \S\ref{subsec:lightcurve}. Such a result is in good agreement with \citet{Gan:14} and \citet{Ciotti:17}. However, it is apparently surprising to see from the figure that the black hole mass in the fullFB model is slightly larger than that in the windFB model. One would expect that with the inclusion of radiation, more energy is deposited in the ISM, so the accretion rate in the fullFB model should be smaller than in the windFB model.
The reason for the smaller black hole mass in the windFB model is exactly the same as what we proposed above to explain why the black hole mass is larger in the radFB model compared to the noFB model. That is, when radiation is included, star formation becomes weaker, thus more gas is left to fuel the black hole. This is confirmed by Fig. \ref{fig:denin}, which shows the density of the gas at the inner boundary of the simulation for the fullFB and windFB models. From this figure we can see that the average gas density in the windFB model is smaller than that in the fullFB model. However, we would caution readers that such a trend of black hole mass change with the feedback models may not be a universal result. In fact, \citet{Ciotti:17} have found both positive and negative changes of black hole mass, depending on the mass of the galaxies. This is likely because the physics involved is highly non-linear and complicated. In the model with higher radiative efficiency (fullFBem03), the mass growth of the black hole is further suppressed compared with the fullFB model. We think this is not because of the stronger radiative heating due to the higher luminosity, but because of the stronger wind in the cold feedback mode, since the wind strength is proportional to the luminosity (eqs. (\ref{eq:coldwindmass}) \& (\ref{coldwindvelocity})). To check this speculation, we have run another model named ``windFBem03'' (refer to Table~\ref{tab:model}). The mass growth of the black hole for this model is shown in Figure~\ref{fig:bhmass}. Both the ``windFB'' and ``windFBem03'' models include only wind feedback (i.e., no radiation), and the only difference between them is the radiative efficiency. The result shows that the final black hole mass in windFBem03 is smaller than in windFB. The right plot of Fig. \ref{fig:bhmass} shows the growth of black hole mass for the fullFBmag and B05v models. Similar to the fullFB model, the growth of mass in the fullFBmag model is also very small.
On the other hand, the growth of black hole mass in the B05v model, which has the same initial black hole mass as, but different AGN physics from, the fullFBmag model, is nearly ten times larger. This indicates that the growth of black hole mass is mainly controlled by the AGN physics instead of the black hole mass. An interesting question is whether the model can explain the observed correlation between black hole mass and the total mass of stars for elliptical galaxies \citep[e.g.,][]{McConnell:13}. \citet{Ciotti:17} investigated this problem based on their model and found that the model can explain the observation quite well within the uncertainties. Since the AGN feedback physics adopted in this paper is different from \citet{Ciotti:17}, it is necessary to investigate this problem again based on our model. We plan to pursue this problem in our future work. \subsection{Star Formation} Fig.~\ref{fig:dnewst} shows the density of newly born stars accumulated by the end of our simulation. In general, stars form copiously in the cold shells and filaments. The star formation rate is highest in the central region where the density is highest, and is quite spherically symmetric due to the low angular momentum of the gas in the galaxy. \begin{figure}[!htbp] \centering \includegraphics[width=0.4\textwidth]{show_c1_dnewst.pdf} \caption{Time-integrated density of newly born stars at the end of the run for the fullFB model. \\~~~~} \label{fig:dnewst} \end{figure} \begin{figure*}[!htbp] \begin{center}$ \begin{array}{ccc} \includegraphics[width=0.32\textwidth]{newst_c1.pdf} & \includegraphics[width=0.33\textwidth]{newst_c1_dens.pdf} & \includegraphics[width=0.32\textwidth]{newst_cum_c1.pdf} \end{array}$ \end{center} \caption{Effects of AGN feedback on star formation for various models. Left panel: Time-integrated mass of newly born stars at a given radius. Note that the peaks of each curve are simply due to a geometric effect; see the text for details.
Middle panel: Time-integrated mass of newly born stars at a given radius per unit volume. Right panel: Enclosed mass of the newly born stars within a given radius at the end of the simulation.\\~~~~} \label{fig:newst} \end{figure*} \begin{figure}[!htbp] \centering \includegraphics[width=0.5\textwidth]{sSFR_c1.pdf} \caption{The specific star formation rate over time for various models. \\~~~~} \label{fig:sSFR} \end{figure} Fig.~\ref{fig:newst} shows the time- and $\theta$-integrated total mass of newly born stars in our simulation. The left panel shows the mass in each radial grid cell $\Delta r_i$ (note that $\Delta r_i\propto r$) as a function of radius, while the right panel shows the enclosed mass of the newly born stars within a given radius. In the left panel, the peak of each curve appears at $r\sim 10-30$ kpc. We emphasize that this peak is only an apparent effect and does not mean strong star formation at this radius. It arises because the total mass of new stars is integrated within a radial grid cell $\Delta r_i$ while $\Delta r_i \propto r$, thus the volume of each cell $\propto r^3$. The radius of the peak, i.e., $r\sim 10-30$ kpc, corresponds to the length scale of the galaxy $r_*$ (eq. (\ref{stellardis})) in the stellar distribution. Beyond this radius the stellar number density sharply decreases, thus there is very little gas from stellar winds for the formation of stars. If we normalize the mass of new stars by the volume, the peak disappears, as shown by the middle panel of Fig.~\ref{fig:newst}. Now let us see some details of the effects of AGN feedback on star formation. The black dashed line in the left panel denotes the noFB model; its peak appears at $\sim 10$ kpc, and there is a rapid increase toward the innermost region because of the increase of density there. The radFB model, denoted by the green line, peaks at almost the same radius as the black dashed line.
In the region $r\la$ 600 pc, star formation is significantly suppressed compared with the noFB model. This decrease is perhaps caused by radiative heating. This radius ($\sim$ 600 pc) is roughly equal to the typical length scale of radiative feedback $l_{\rm rad}$ if $\rho\sim 10^{-23} {\rm g~cm^{-3}}$ (eq.~(\ref{radlength})). Note that here we adopt a larger density than that shown in Fig. \ref{fig:agnoutburst}; this is because, on the one hand, the density in the radFB model should be higher than that in the fullFB model, since in radFB star formation is weaker and there is no AGN wind blowing gas out. On the other hand, the density varies with time, and we should give more weight to the high-density epochs since star formation is easier then. Beyond 600 pc, the radiation power has been used up, thus radiative heating is very weak. In this region, we can see some enhancement of the star formation compared to the noFB model. This is because the radiation force pushes the gas from within $\sim 600$ pc to this region. For the windFB model, we can see from the left plot that, compared to the noFB and also the radFB models, star formation is strongly suppressed all the way up to $r\sim 20$ kpc. The peak of the curve also moves outward. The suppression of star formation is obviously due to the momentum feedback of the wind, i.e., winds push the gas away from the central region to beyond $\sim 20$ kpc, thus the gas density and consequently the star formation are significantly reduced. This is consistent with the suppression of the accretion rate by wind, as we have analyzed in \S\ref{subsec:lightcurve}. In the region $r\ga 20$ kpc, this gas accumulates, so star formation is significantly enhanced. Our simulation result that the wind can reach a distance as far as $\sim 20$ kpc is fully consistent with the Gemini Integral Field Unit observations of a sample of radio-quiet quasars \citep[e.g.,][]{Liu:13a}. The solid blue line in the figure denotes the fullFB model.
It is quite similar to the windFB model, which indicates that wind feedback is dominant in suppressing star formation. In the region of $2-20$ kpc, star formation in the fullFB model is slightly stronger than that in the windFB model. The reason is the momentum feedback of radiation. Radiation can push the gas away from the region within 2 kpc to this spatial range. This radius (i.e., 2 kpc) is roughly equal to the typical length scale of radiative feedback $l_{\rm rad}$ if $\rho\sim 10^{-24} {\rm g~cm^{-3}}$ (eq.~(\ref{radlength})). Here $10^{-24} {\rm g~cm^{-3}}$ is assumed to be the typical density of the ISM. It is lower than the density adopted above for the radFB model, since both wind and radiation are present in the fullFB model. Comparing the fullFB model with the noFB model, we can see from the left plot of Fig. \ref{fig:newst} that star formation is suppressed in the inner region of the galaxy, at $r\la 15$ kpc. This is usually called ``negative feedback''. In the outer region, at $r\ga 15$ kpc, star formation is enhanced. This is the so-called ``positive feedback''. We note that the positive feedback effect is consistent with the theoretical result by \citet{Liu:13b} and observations by \citet{Cresci:15}, which argued that star formation can be enhanced locally ahead of the AGN wind, where gas is compressed. Although whether the AGN feedback is positive or negative depends on the location within the galaxy, the overall effect of AGN feedback on star formation in the whole galaxy is negative, as shown by the right plot of Fig. \ref{fig:newst}. This result is different from that of \citet{Ciotti:17}, who find that the total effect on star formation is positive. The discrepancy must be attributed to the difference of the AGN physics adopted in the two works. In the literature, many observational papers study AGN feedback by investigating the correlation between star formation and AGN activity (see reviews by \citealt{Harrison:17} and \citealt{Xue:17}).
From our simulation, we see that the effect of AGN feedback on star formation is very complicated. As stated above, in the time-averaged sense, the effect can be positive or negative, depending on the location within the galaxy. In addition to this spatial complication, there are also temporal complications. AGN activity varies on a timescale of $\tau_{\rm AGN} \lesssim 1$ Myr, which is orders of magnitude shorter than the typical timescale of star formation episodes ($\tau_{\rm SF} \gtrsim 100$ Myr) \citep{Harrison:17}. For the whole galaxy, star formation is sometimes enhanced and sometimes suppressed. To illustrate this point, Fig.~\ref{fig:sSFR} shows the specific star formation rate (sSFR) over the galactic evolution, i.e., the star formation rate normalized by the stellar mass of the galaxy, a quantity widely used in the literature. We can see from the figure that the variability timescale of the sSFR is much longer than that of the AGN, as we explain above. In general, the sSFR in the models with AGN feedback is suppressed compared to the noFB model, but occasionally the sSFR with AGN feedback can also be enhanced. Another important feature is that the curve of the windFB model is not synchronous with that of the fullFB model. While their general patterns are similar, there is an obvious offset between them, and the ``amplitudes'' of the fullFB model are also larger. This indicates that, although in the time-integrated sense wind seems to be much more important than radiation in suppressing star formation, as we see from Fig. \ref{fig:newst}, radiation definitely also plays a very important role. The wind and radiation likely couple together in affecting star formation. Finally, we note that for the windFB and fullFB models, the sSFR sharply decreases at $t\ga 8$ Gyr and $t\ga 10$ Gyr, respectively. These sharp decreases correspond to the sudden change in the amplitudes of the AGN light curves at the same times for the two models shown in Fig. \ref{fig:ldot}.
As we have explained in that section, in such a case there are very few high-density, low-temperature clumps in the ISM, and thus star formation becomes very weak. \subsection{AGN Duty Cycle} \label{dutycycle} \begin{figure*}[!htbp] \begin{center}$ \begin{array}{cc} \includegraphics[width=0.5\textwidth]{duty_t_c1_below.pdf} & \includegraphics[width=0.5\textwidth]{duty_t_c1_above.pdf} \end{array}$ \end{center} \caption{Percentage of the total simulation time spent below (left) and above (right) the values of the Eddington ratio of the central AGN for various models. In the left panel, vertical dotted lines indicate the Eddington ratio below which each model spends 80\% of the total time. The solid lines are the values for the entire time, and the dashed lines in the right panel are the values for the last 2 Gyr, which can be compared to observations [square: \citet{Ho:09}; circle: \citet{Greene:07}; upward-pointing triangles: \citet{Kauffmann:09}; downward-pointing triangles: \citet{Heckman:04}; star: \citet{Steidel:03}].\\ ~~~~} \label{fig:dutycycle} \end{figure*} \begin{figure}[!htbp] \centering \includegraphics[width=0.5\textwidth]{duty_L_c1_above.pdf} \caption{Percentage of the total energy emitted above the values of the Eddington ratio. The horizontal dotted lines represent the portion of emitted energy above an Eddington ratio of 0.02. \\~~~~} \label{fig:duty_L} \end{figure} Fig.~\ref{fig:dutycycle} shows the percentage of the total simulation time spent below (left panel) and above (right panel) a given Eddington ratio. In the left panel, the vertical dotted lines indicate the Eddington ratio below which the AGN spends 80\% of its time. Comparing the various models, we can see that from radFB to fullFB (windFB is similar), as the AGN feedback ``strength'' increases, this Eddington ratio becomes smaller. Among all models, the ratio is smallest for the fullFBmag model. This is consistent with that shown in the right panel of Fig.
\ref{fig:ldot}, in which the fullFBmag model has the lowest typical Eddington ratio. For the fullFB model, the AGN spends over 80\% of its evolution time with Eddington ratios below $2 \times 10^{-4}$, i.e., it spends most of its time in the hot accretion (feedback) mode. This result suggests the potential importance of feedback effects by low-luminosity AGNs. In the right panel of Fig.~\ref{fig:dutycycle}, we compare the simulation results with observations. The solid and dashed lines represent the time spent above the given Eddington ratios for the entire evolution time and for the last 2 Gyr, respectively. The observational data points are compiled from low-redshift sources, so they are suitable for comparison with the dashed lines. For the blue dashed line, which denotes the fullFB model in the late epoch, the AGN spends very little of its time at luminosities $L_{\rm BH}/L_{\rm Edd}\ga 1\%$. This can also be roughly seen from the left panel of Fig. \ref{fig:ldot}. This result is consistent with observations \citep[e.g.,][]{Steidel:03, Heckman:04, Greene:07, Ho:09, Kauffmann:09}. It is believed that AGNs spend most of their time in the low-luminosity AGN phase, but emit most of their energy during their high-luminosity AGN phase \citep{Soltan:82, Yu:02, Kollmeier:06}. To examine this issue with our simulations, Fig.~\ref{fig:duty_L} shows the percentage of the total energy emitted above given values of the Eddington ratio for the various models. Specifically, the horizontal dotted lines mark the fraction of emitted energy with the Eddington ratio above 0.02. For the fullFB model, the AGN emits $6\%$ of the total energy at Eddington ratios above 0.02. This percentage is much lower than what observations seem to suggest. There are two main reasons for this discrepancy. One is that our simulations begin at 2 Gyr; before 2 Gyr, the AGN activity should be much stronger.
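To make the two duty-cycle statistics concrete, the fraction of time an AGN spends below a given Eddington ratio and the fraction of energy it emits above that ratio can be computed from a sampled light curve as in the following minimal Python sketch. The function name, the toy light curve, and all parameter values here are illustrative assumptions, not taken from the simulation code.

```python
import numpy as np

def duty_cycle_stats(t, edd_ratio, l_edd, threshold):
    """Fraction of time below `threshold` and fraction of energy
    emitted above it, from a sampled light curve.

    t         : sample times (uniform or not, same units throughout)
    edd_ratio : L_BH / L_Edd at each sample
    l_edd     : Eddington luminosity (scalar)
    """
    dt = np.diff(t)
    # associate each interval with its left sample (simple Riemann sum)
    ratios = np.asarray(edd_ratio)[:-1]
    lum = ratios * l_edd
    total_time = dt.sum()
    total_energy = (lum * dt).sum()
    frac_time_below = dt[ratios < threshold].sum() / total_time
    frac_energy_above = (lum * dt)[ratios >= threshold].sum() / total_energy
    return frac_time_below, frac_energy_above

# toy light curve: mostly quiescent, with one brief outburst
t = np.linspace(0.0, 10.0, 1001)          # e.g. Gyr
edd = np.full_like(t, 1e-4)
edd[500:510] = 0.1                        # short high-Eddington episode
f_t, f_e = duty_cycle_stats(t, edd, l_edd=1.3e46, threshold=0.02)
# nearly all of the *time* is spent below 0.02,
# yet most of the *energy* is emitted above it
```

This toy example reproduces the qualitative point of the Soltan-style argument: the quiescent phase dominates the time budget while the burst dominates the energy budget.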
Another reason is that the present paper focuses only on an isolated galaxy. The percentage of energy emitted is expected to increase significantly if we also include the cosmic accretion of cold gas from outside the galaxy. We note that for the fullFBmag model, the AGN emits $50\%$ of the total energy at Eddington ratios above 0.02. This is much closer to the observations than model fullFB. There are two reasons for such a discrepancy between ``fullFB'' and ``fullFBmag''. One is that when the black hole mass is smaller, as in fullFBmag, the AGN becomes weaker; it is then more difficult to push the gas surrounding the black hole away, so the mass accretion rate is in general larger. Another reason is that the critical luminosity (eq. (\ref{criticall})) separating the hot and cold modes is proportional to the black hole mass. When the black hole mass is smaller, it is easier for the AGN to switch from the hot to the cold accretion mode. These two reasons make the AGN with a smaller black hole mass spend more time above a given Eddington ratio, as shown by the right plot of Fig. \ref{fig:dutycycle}, and consequently emit more energy above a given Eddington ratio. Since the two models fullFBmag and B05v have the same black hole mass, they are similar in Fig. \ref{fig:duty_L}. However, these two models are quite different in Fig. \ref{fig:dutycycle}: the fullFBmag model spends much less time above a given Eddington ratio than model B05v. This discrepancy is caused by the difference in AGN physics. Specifically, the wind in the fullFBmag model is much stronger than in B05v, so the black hole accretion rate in fullFBmag is generally much smaller.
\subsection{X-ray Properties of the Gas} \begin{figure}[!htbp] \centering \includegraphics[width=0.5\textwidth]{xlum_c1.pdf} \caption{The evolution of the X-ray luminosity of the galaxy in the 0.3-8 keV band for various models.\\~~~~} \label{fig:xlum} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[width=0.5\textwidth]{xlum_comp_noEisen.pdf} \caption{Comparison of the X-ray luminosity of the hot gas in the 0.5-2 keV band between our simulations and the observations by \citet{Anderson:15}. The black dots with error bars are the observational data, while the orange and blue segments denote the simulation results of the noFB and fullFB models.} \label{fig:xlumcomp} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[width=0.5\textwidth]{xsurf_c1_mag_aci.pdf} \caption{X-ray surface brightness in the 0.3-8 keV band for the fullFB model.} \label{fig:xsurf} \end{figure} The X-ray luminosity of the galactic hot gas, which is mainly produced by bremsstrahlung, is observable for active galaxies \citep[and references therein]{Brandt:15}. It is calculated over the energy range of 0.3-8 keV (the {\it Chandra} sensitive band) as \begin{equation} L_{\rm X} = 4\,\pi \int^{\infty}_{0} \, \varepsilon \left( r \right) \,r^{2}\,dr, \end{equation} where the emissivity is given by $\varepsilon (r)=n_{e}(r)n_{\rm H}(r)\Lambda \left[T (r) \right]$, with $n_{e}$ and $n_{\rm H}$ the number densities of electrons and hydrogen, and $\Lambda \left( T \right)$ the cooling function. We fix the metallicity to the solar abundance. The cooling function is calculated by the spectral fitting package XSPEC\footnote{\url{http://heasarc.gsfc.nasa.gov/docs/xanadu/xspec/}} (spectral model APEC) under the assumption of collisional ionization equilibrium \citep{Smith:01}, and the volume integrals are performed over the whole computational mesh. Fig.~\ref{fig:xlum} shows the evolution of the X-ray luminosity, $L_{\rm X}$, for the various models.
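As a concrete illustration of how such a volume integral is evaluated numerically, the following minimal Python sketch computes $L_{\rm X}=4\pi\int\varepsilon(r)\,r^2\,dr$ with the trapezoid rule on a radial grid. The density profile and the constant cooling function are toy assumptions, chosen so that the result can be checked analytically; the actual calculation uses XSPEC/APEC cooling curves and the full simulation mesh.

```python
import numpy as np

def xray_luminosity(r, n_e, n_h, cooling):
    """L_X = 4*pi * integral of eps(r) r^2 dr, with the emissivity
    eps = n_e * n_H * Lambda(T), evaluated by the trapezoid rule."""
    integrand = n_e * n_h * cooling * r**2
    return 4.0 * np.pi * np.sum(
        0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))

# analytic check: with n_e = n_H = r0/r (r0 = 1) and Lambda = const = 1,
# the integrand eps * r^2 is constant, so
# L_X = 4*pi * (r_out - r_in) = 4*pi for r in [1, 2]
r = np.linspace(1.0, 2.0, 101)               # arbitrary units
lx = xray_luminosity(r, 1.0 / r, 1.0 / r, np.ones_like(r))
```

The trapezoid rule is exact here because the toy integrand is constant; for the real emissivity profile the grid resolution sets the accuracy.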
For the models with AGN feedback, $L_{\rm X}$ oscillates in phase with the nuclear luminosity, which is consistent with \citet{Pellegrini:12}: radiative and mechanical feedback change the density and temperature of the gas in the galactic center, where most of $L_{\rm X}$ is emitted. On the contrary, in the noFB model, the light curve of $L_{\rm X}$ is quite smooth and monotonically decreases as a consequence of gas depletion due to star formation. The average value of the X-ray luminosity for the various models follows this order: noFB $>$ radFB $>$ fullFB (windFB) $>$ fullFBmag. This sequence is consistent with the light curves shown for the various models in Fig. \ref{fig:ldot}. It is interesting to note that the pattern, or shape, of the X-ray luminosity curve for the windFB and fullFB models is very similar to that of the sSFR shown in Fig. \ref{fig:sSFR}. This is reasonable, since both the sSFR and the X-ray luminosity depend on the local properties of the gas on large scales. \citet{Anderson:15} have stacked X-ray emission from the {\it ROSAT} All-Sky Survey in the 0.5-2 keV band and derived a power-law relationship between the X-ray luminosity and the stellar mass of the central galaxy. To compare with this observation, we have calculated the X-ray luminosity of the hot gas in the 0.5-2 keV band. The result is shown in Fig. \ref{fig:xlumcomp}. Since the X-ray luminosity is variable, we show in the figure the full range of luminosity. We can see from the figure that the noFB model predicts a higher luminosity than the fullFB model, as we expect, and both predictions are marginally consistent with the observations.
However, considering that our current work focuses only on an isolated galaxy and does not take into account the gas supply from outside the galaxy, while the X-ray luminosity values taken from \citet{Anderson:15} are for central galaxies, the fullFB model is more likely to be consistent with observations, whereas the noFB model will likely over-predict the luminosity once we properly account for the additional external gas supply. As a comparison, \citet{Eisenreich:17} recently also compared their numerical simulation results with the observations of \citet{Anderson:15}. They find that the simulated value is a factor of a few too high. They speculated that the reason may lie in the initial conditions of their simulation, but another possible reason is the AGN physics they adopt. To calculate the X-ray surface brightness, we first generate three-dimensional data from our two-dimensional numerical simulation data by assuming axisymmetry, and then integrate the emissivity along the line of sight. Fig.~\ref{fig:xsurf} shows the X-ray surface brightness for the fullFB model during an AGN outburst. We can see from the figure that an X-ray cavity forms when AGN feedback expels and heats the gas in the galactic center, and such a cavity looks very similar to those observed (e.g., \citealt{Fabian:12} and references therein). We have also checked the windFB and radFB models and found that the cavity is formed almost entirely by the interaction between the wind and the ISM, not by radiative heating of the ISM. We note that in the literature the observed X-ray cavities are usually thought to be formed by the interaction between jets and the ISM \citep{Guo:16,Guo:18}. The reason may be that jets are very common while winds are harder to detect. However, we would like to emphasize that non-detection does not mean non-existence.
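The line-of-sight projection described above can be sketched as follows: under axisymmetry, the two-dimensional emissivity $\varepsilon(r,\theta)$ stands in for the full three-dimensional field, and the surface brightness at a projected position is the integral of $\varepsilon$ along the viewing direction. The Python function below is a minimal illustration of this step; the function name, grids, and the Gaussian test emissivity are our assumptions, not the code used to produce the figure.

```python
import numpy as np

def surface_brightness(eps, X, Z, y_max=10.0, n=401):
    """Integrate an axisymmetric emissivity eps(r, theta) along the
    line of sight (taken here as the y axis) at projected position
    (X, Z); axisymmetry supplies the third dimension for free."""
    y = np.linspace(-y_max, y_max, n)
    r = np.sqrt(X**2 + y**2 + Z**2)
    # polar angle measured from the symmetry (z) axis; guard r = 0
    theta = np.arccos(np.clip(Z / np.maximum(r, 1e-30), -1.0, 1.0))
    integrand = eps(r, theta)
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(y))

# check against a spherically symmetric Gaussian emissivity:
# at (X, Z) = (0, 0) the integral is int exp(-y^2) dy = sqrt(pi)
sb = surface_brightness(lambda r, th: np.exp(-r**2), X=0.0, Z=0.0)
```

For a smooth emissivity that decays within the integration window, the trapezoid rule used here is highly accurate even on a modest grid.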
We now know that jets are an indispensable ingredient of hot accretion flows \citep{Yuan:14}, and for hot accretion flows we now have compelling evidence for the existence of winds, from both theoretical \citep{Yuan:12a,Narayan:12,Yuan:14,Yuan:15} and observational (see references in \S\ref{subsubsec:hotwind}) perspectives. In other words, whenever we observe a jet, a wind must also exist. So the formation of the X-ray cavity shown in Fig. \ref{fig:xsurf} suggests that it is worthwhile to investigate the possibility that the cavities usually observed are formed by winds. In fact, winds launched from hot accretion flows have been used to successfully explain the formation of the Fermi bubbles observed in the Galaxy \citep{Mou:14, Mou:15}. Compared with other models of the Fermi bubbles \citep[e.g.,][]{Guo:12}, this ``wind'' model has the following advantages: 1) the main parameters of the model, such as the mass flux and velocity of the wind, are well constrained by small-scale MHD numerical simulations \citep{Yuan:15}, and thus the model has much less freedom; 2) the model can successfully explain some observations of the Fermi bubbles that are hard to explain with other models (see \citealt{Mou:15} for details). \section{Summary and Conclusions} In this paper, by performing two-dimensional high-resolution hydrodynamical numerical simulations, we have investigated the effects of AGN feedback on the evolution of its host galaxy. The galaxy is an isolated elliptical galaxy, and we assume in this work that the specific angular momentum of the gas is low. Physical processes such as star formation and Type Ia and Type II supernovae are taken into account. The inner boundary of the simulation is chosen so that the Bondi radius is resolved, which is crucial for the precise determination of the mass accretion rate of the AGN. According to the theory of black hole accretion, black hole accretion has two intrinsically different modes, a cold one and a hot one.
They have quite different radiation and wind outputs and thus naturally correspond to two different feedback modes. They have been carefully discriminated and taken into account in the present work. We consider the feedback effects of both radiation and wind in each mode. The two feedback modes have quite diverse names in the literature, e.g., the quasar (or radiative) mode and the radio (or kinetic or maintenance) mode. Our present work indicates that these names are not only diverse, but sometimes also misleading. For example, in terms of regulating the accretion rate of the black hole and star formation in the galaxy, feedback by wind is always much more important than radiation. So we suggest simply following the names of the black hole accretion modes and calling them the ``cold feedback mode'' and the ``hot feedback mode''. The most important distinctive feature of the present work is that we adopt the most up-to-date AGN physics. This is especially the case for the radiation and wind from hot accretion flows, for which much progress has been made in recent years but has not been taken into account in most AGN feedback works \citep[see a recent review by][]{Yuan:14}. It also includes more precise descriptions of the wind in the cold mode, obtained from recent new observations. This updated AGN physics is summarized in the present paper. For the feedback effects, we have investigated the light curve of the AGN, the black hole growth, star formation, the AGN duty cycle, and the surface brightness of the galaxy. We have compared these results with previous works which have a very similar model framework to ours but different AGN physics. Significantly different results have been found in almost every aspect mentioned above. This indicates the crucial importance of adopting correct AGN physics. The main results obtained in this paper are summarized below.
\begin{itemize} \item We have compared the energy and momentum fluxes of wind and radiation in the two feedback modes (Fig. \ref{fig:windradcomp}). Roughly speaking, in both modes the power of radiation is larger than that of the wind, while the opposite holds for the momentum flux. However, the magnitude of the energy or momentum flux is not the only factor determining which component, radiation or wind, is more important in the feedback. This is because the cross section of photon-particle interaction is orders of magnitude smaller than that of particle-particle interaction. For typical parameters of our problem, we find that wind can deposit its momentum within a very small ``typical length scale of wind feedback'', $l_{\rm wind}\sim 0.5$ pc (Eq. \ref{windlength}); while the ``typical length scale of radiation feedback'' is much larger, $l_{\rm rad}\sim 10$ kpc (Eq. \ref{radlength}). Consequently, in our model the accretion rate and the mass growth of the black hole are mainly suppressed by wind rather than radiation (compare the zoomed-in plots of windFB, fullFB, and radFB in Fig. \ref{fig:ldot}; and windFB, fullFB, and radFB in Fig. \ref{fig:bhmass}). Such a result is in good agreement with \citet{Gan:14} and \citet{Ciotti:17}. But we note there are two caveats here. One is that in the present paper we consider an isolated galaxy without external gas supply. The other is that we do not consider dust. If these two factors were included, radiative feedback should become relatively more important. \item One characteristic consequence of including AGN feedback in the galaxy evolution model is that the AGN activity becomes strongly variable (Fig. \ref{fig:ldot}). The reason is the interaction between wind \& radiation and the ISM (\S\ref{overallscenario}). Both radiation and wind can cause the variability of the AGN, but their mechanisms are different: wind acts through momentum interaction, while radiation acts through radiative heating.
Because the typical length scale of wind feedback is much shorter than that of radiation, the accretion rate of the AGN is reduced to a much lower time-averaged value in the windFB model than in the radFB model (right panel of Fig. \ref{fig:ldot}). But radiation also plays an important role, possibly by coupling with the wind feedback. Comparison with previous works which have different AGN physics indicates that stronger wind in the AGN physics results in a time-averaged weaker AGN (compare the zoomed-in plots of fullFB and B05v in Fig. \ref{fig:ldot}). \item The typical lifetime of the AGN obtained in our simulation is $\sim 10^5$ yr, fully consistent with the most recent observations (Fig. \ref{fig:agnlifetime}). As a comparison, previous works with different AGN physics, such as \citet{Gan:14}, obtain a much longer AGN lifetime. \item Both radiation and wind can suppress star formation in the galaxy. In our model, radiation can affect the region of ``unity optical depth'', $\sim 1$ kpc, while wind can affect a much larger region, $\la 20$ kpc (left plot of Fig. \ref{fig:newst}). Not only is the affected scale different; wind also suppresses star formation much more strongly than radiation (left plot of Fig. \ref{fig:newst}). Beyond $20~{\rm kpc}$, star formation is enhanced because of the accumulation of gas pushed out by the wind. Overall, we find that the time-integrated total effect of AGN feedback on star formation is negative (right panel of Fig. \ref{fig:newst}). But we would like to emphasize that whether the feedback suppresses or enhances star formation depends not only on the spatial location but also on time. The spatially integrated specific star formation rate (sSFR) as a function of time shows two important features. One is that, depending on the evolution time, the sSFR can be enhanced or suppressed compared to the model without AGN feedback.
The second is that radiation also plays a very important role in affecting star formation (compare the windFB and fullFB models in Fig. \ref{fig:sSFR}). \item When all the feedback mechanisms are considered, we find that the AGN spends over 80\% of its time with an Eddington ratio below $2\times 10^{-4}$, i.e., in the very low-luminosity regime (left panel of Fig. \ref{fig:dutycycle}). This suggests the importance of considering the hot-mode feedback in galaxy evolution. We have also compared the simulated percentage of the last 2 Gyr spent above a given Eddington ratio with observations. We find that the AGN spends very little of its time in the range $L_{\rm BH}/L_{\rm Edd}\ga 1\%$, consistent with observations (right panel of Fig. \ref{fig:dutycycle}). We have also calculated the percentage of the total energy emitted above a given Eddington ratio. For our fullFB model, it is only $6$\% at Eddington ratios above 0.02. This value is lower than commonly believed. An important reason is that we assume an isolated galaxy and have not taken into account the external gas supply in our simulations. \item We have calculated the X-ray luminosity of the hot gas in the galaxy in the 0.5-2 keV band and compared the result with observations. The models both with and without AGN feedback are consistent with the observations, given their large error bars (Fig. \ref{fig:xlumcomp}). But once we include external gas supply in the future, the noFB model may overpredict the observed value. \item The X-ray surface brightness for the fullFB model during an AGN outburst has been calculated (Fig. \ref{fig:xsurf}). An X-ray cavity surrounding the AGN is evident, which is formed by the interaction between the wind and the ISM. It looks very similar to the X-ray cavities observed in galaxy clusters \citep[e.g.,][]{Fabian:12}. These X-ray cavities are usually thought to be formed by jets, so this result suggests considering the possibility that they are formed by winds.
\end{itemize} \section*{Acknowledgements} We thank the referee for his/her careful reading and constructive comments, which have significantly improved our paper. We are grateful to Jerry Ostriker and Luca Ciotti, who kindly sent us the early version of the code in which many basic physical processes involved in the modeling are implemented. This work is supported in part by the National Key Research and Development Program of China (grants 2016YFA0400702 and 2016YFA0400704), the Natural Science Foundation of China (grants 11573051, 11633006, 11650110427, 11661161012, 11303008, 11473002), and the Key Research Program of Frontier Sciences of CAS (grants QYZDJSSW-SYS008 and QYZDB-SSW-SYS033). This work has made use of the High Performance Computing Resource in the Core Facility for Advanced Research Computing at Shanghai Astronomical Observatory. \bibliographystyle{aasjournal}
\section{Introduction} At the core of image-based 3D reconstruction systems~\cite{Agarwal_11,Heinly_CVPR15}, one fundamental task is to establish reliable point correspondences across multiple images of the reconstructed scene, which are captured from different viewpoints and positions, and usually at different times. The still dominant solution to this problem is to match keypoints by comparing their local descriptors. There are three typical steps involved in this procedure: extracting keypoints from images~(feature extraction), constructing local descriptors for keypoints~(feature description), and establishing point correspondences across different images according to the distances of their descriptors~(feature matching). In the past decade, various methods have been proposed to obtain keypoints and local descriptors as alternatives to the classical SIFT~\cite{LOWE_IJCV04} and SURF~\cite{Bay_CVIU08}. These methods either cover the whole pipeline of feature extraction and description, such as ORB~\cite{Rublee_ICCV11}, BRISK~\cite{Leutenegger_ICCV11}, FRIF~\cite{Wang_BMVC13}, KAZE~\cite{KAZE_ECCV12}, LIFT~\cite{LIFT_ECCV16}, or focus only on the descriptor, e.g., LIOP~\cite{Wang_ICCV11}, LDB~\cite{Yang_PAMI14}, VGGDesc~\cite{Simonyan_PAMI14}, BinBoost~\cite{Trzcinski_PAMI15}, DeepDesc~\cite{Edgar_ICCV15}, L2Net~\cite{L2Net_CVPR17}, and so on~\cite{Fan_TIP14,Wang_PAMI16,HardNet_NIPS17}. However, SIFT is still the major choice for the task of image-based 3D reconstruction. Since all these follow-ups of SIFT have been claimed to outperform SIFT on image matching, sometimes even with better computational efficiency~(for instance, the binary features ORB, BRISK, FRIF, etc.), it would be straightforward to replace SIFT with these keypoints or local descriptors. What is the reason that the community has not done so, at least up to now? In this paper, we try to give an experimental study to answer this question.
Specifically, we evaluate different combinations of keypoints and local descriptors for establishing point correspondences and feed the matched points into an image-based 3D reconstruction system. By doing so, we can obtain an end-to-end performance comparison of different keypoints and descriptors. Due to the large number of methods existing in this area, we choose to evaluate the recent advances, along with the classical SIFT, which serves as the baseline. To be specific, our evaluation covers both hand-crafted and learning-based features of two different types: traditional float-type ones and the emerging binary ones. For float-type descriptors, it includes SIFT and LIOP as representative handcrafted ones and covers learning-based ones that use a traditional learning technique~(VGGDesc) as well as the recently popular CNNs~(DeepDesc, L2Net, LIFT). All these evaluated methods, except for SIFT and LIFT, which have their own keypoint detectors, are merely feature description methods and so have to be used with a keypoint detector. In this paper, we use the SIFT keypoint for its popularity and also because it is already used along with the SIFT descriptor in the baseline. That is to say, except for LIFT, all the evaluated float-type descriptors are based on SIFT keypoints, while LIFT is based on its own keypoints~(i.e., LIFT keypoints). For binary descriptors, we choose to evaluate the most recent ones, e.g., BRISK, FRIF, LDB, RFD~\cite{Fan_TIP14} and BinBoost. The former two are handcrafted features while the latter three are learned features. Among them, BRISK and FRIF contain both a keypoint detector and a binary descriptor. As a result, we use both of these two kinds of keypoints and combine them with all the evaluated binary descriptors respectively.
It is worth pointing out that although there are many works on local feature evaluation in the literature, most of them are limited to the image matching level~\cite{Mikolajczyk_PAMI05,Aanaes_IJCV12,Miksik_ICPR12,Heinly_ECCV12}.\\ For this comparative study, a basic but typical 3D reconstruction system is implemented\footnote{We will make our system and evaluation code publicly available.}. The system is based on the linear-time incremental structure from motion~\cite{ChangChangWu_SFM_3DV13} and PMVS~\cite{Furukawa_PAMI10}, taking the matched keypoints across different images as input. We use different combinations of keypoints and local descriptors to generate different inputs to the system so as to obtain different reconstruction results. Two different types of datasets are used in our evaluation. The first one is a recently proposed multiview stereo dataset~(DTU MVS)~\cite{Jensen_CVPR14}, which contains more than 100 different scenes with high-resolution images captured from 49 or 64 fixed viewpoints. Meanwhile, groundtruth 3D points are available. This dataset has a large diversity in scene types with a moderate number of images for each scene, while still providing the groundtruth 3D points to facilitate an objective evaluation of reconstruction accuracy. The second dataset contains three large structure from motion~(SFM) subsets~\cite{Wilson_ECCV14}, with thousands of unordered images and many distractor images per scene. These two datasets stand for two typical image collection situations in 3D reconstruction applications. One is the controlled case where images are captured at selected viewpoints, which is widely used in applications reconstructing a very specific scene or object. In this case, all images cover a part of the scene and have moderate overlaps.
The other case does not impose any constraint on the used images, and is widely adopted in applications reconstructing a very large-scale place such as a landmark or a city. In this case, one resorts to collecting images from the Internet, instead of spending huge labor to capture high-quality images from specially chosen imaging viewpoints as in the first case. It thus inevitably contains many unrelated and low-quality images as well as non-overlapping images, and so is more challenging. The remaining parts of this paper are organized as follows. Section~\ref{sec:related_work} reviews the existing local features and their performance evaluations. In Section~\ref{sec:pipeline}, we briefly describe our implemented 3D reconstruction system. Then, the evaluated local features are introduced in Section~\ref{sec:float_feature} and Section~\ref{sec:binary_feature}. The evaluation results and analysis on the two used datasets are presented in Section~\ref{sec:MVS_result} and Section~\ref{sec:SFM_result}, respectively. Finally, Section~\ref{sec:conclusion} concludes this paper. \section{Related Work} \label{sec:related_work} \subsection{Keypoints and Local Image Descriptors} Keypoints and local image descriptors are two critical parts of local features. Keypoint detection aims to find re-detectable~(sparse) points in different images of the same scene. Such a re-detectable property, also known as repeatability, is the principal consideration in designing a keypoint detector. In the literature, there are mainly two kinds of keypoints, corner points and blob points; briefly speaking, they detect different types of image structures. For corner keypoints, the detectors seek local image structures that have large intensity variance along different directions. Widely used methods include Harris~\cite{Harris_1988}, FAST~\cite{FAST_PAMI10} and AGAST~\cite{AGAST_ECCV10}.
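To make the corner criterion concrete, the classical Harris measure scores a pixel by $R=\det M - k\,(\operatorname{tr} M)^2$, where $M$ is the second-moment matrix of the image gradients averaged over a local window: $R>0$ where the gradient varies strongly in two directions (a corner), and $R\le 0$ on flat regions and straight edges. The following is a minimal Python sketch of this idea (box window instead of a Gaussian, central differences, no scale space; the value $k=0.05$ and the toy image are illustrative choices, not from any cited implementation).

```python
import numpy as np

def box3(a):
    """3x3 box average with edge padding (stand-in for a Gaussian window)."""
    p = np.pad(a, 1, mode="edge")
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def harris_response(img, k=0.05):
    """R = det(M) - k*(trace M)^2, with M the windowed second-moment
    matrix of image gradients; R > 0 flags corner-like structure."""
    iy, ix = np.gradient(img.astype(float))   # gradients along rows, cols
    sxx, syy, sxy = box3(ix * ix), box3(iy * iy), box3(ix * iy)
    return sxx * syy - sxy * sxy - k * (sxx + syy) ** 2

# toy image: a bright quadrant whose corner sits at pixel (8, 8)
img = np.zeros((20, 20))
img[8:, 8:] = 1.0
R = harris_response(img)
# the corner pixel responds positively; a pixel on the straight edge
# (e.g. (14, 8)) has a rank-one M, hence a non-positive response
```

The same two-eigenvalue intuition underlies FAST and AGAST, which replace the matrix computation with fast intensity comparisons on a circle of pixels.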
Due to their computational efficiency, FAST and AGAST have been used to detect scale-invariant keypoints in scale space in recent binary features, e.g., ORB~\cite{Rublee_ICCV11} and BRISK~\cite{Leutenegger_ICCV11}. To detect blob-like image structures, the response of the Laplacian of Gaussian~(LoG) filter~\cite{Lindeberg_IJCV98} and the determinant of the Hessian matrix~\cite{Bay_CVIU08} are two widely used indicators. They are usually used in a scale space to search for local extrema so as to detect keypoints along with their characteristic scales. To accelerate the process of keypoint detection, several methods have been proposed to approximate the LoG detector, among which the most famous is SIFT~\cite{LOWE_IJCV04}. CenSurE~\cite{Agrawal_ECCV08} approximated LoG by using the bi-level Laplacian of Gaussian. FRIF~\cite{Wang_BMVC13} proposed to use several box filters to approximate the LoG filter. Both CenSurE and FRIF use integral images for fast computation. In order to deal with severe viewpoint changes, local affine adaptation techniques have been applied to keypoints in order to detect so-called interest regions, such as Hessian-Affine $\&$ Harris-Affine~\cite{Mikolajczyk_IJCV04}. Another well-known interest region is MSER~\cite{MSER_BMVC02}, which detects stable gray-scale regions. Since an MSER can have any shape, it is usual to fit it with an ellipse based on second-order moments. While all these methods are built upon a formal definition of keypoints, some other works leverage labeled data to learn keypoint detectors, such as~\cite{TILDE_CVPR15,LIFT_ECCV16,Zhang_CVPR17}. To match keypoints or interest regions, the common practice is to construct a local image descriptor for each of them, and then build correspondences between them based on the descriptors' distances.
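The matching step just mentioned is typically a nearest-neighbour search in descriptor space, filtered by Lowe's ratio test to discard ambiguous correspondences. The following minimal Python sketch illustrates this common practice for float-type descriptors; the ratio threshold 0.8 and the toy descriptors are illustrative assumptions (binary descriptors would use Hamming distance instead).

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching of descriptor rows by Euclidean
    distance, filtered by Lowe's ratio test: a match is kept only if
    the best distance is clearly smaller than the second best."""
    # pairwise squared distances via |a - b|^2 = |a|^2 - 2 a.b + |b|^2
    d2 = (np.sum(desc_a ** 2, axis=1)[:, None]
          - 2.0 * desc_a @ desc_b.T
          + np.sum(desc_b ** 2, axis=1)[None, :])
    order = np.argsort(d2, axis=1)
    best, second = order[:, 0], order[:, 1]
    rows = np.arange(len(desc_a))
    keep = d2[rows, best] < (ratio ** 2) * d2[rows, second]
    return [(int(i), int(best[i])) for i in rows[keep]]

# toy 2-D "descriptors": query 0 is ambiguous (two near-identical
# candidates in desc_b) and is rejected; query 1 has a clear match
desc_a = np.array([[0.995, 0.0], [0.0, 1.0]])
desc_b = np.array([[1.0, 0.0], [0.99, 0.0], [0.0, 1.0]])
matches = match_descriptors(desc_a, desc_b)   # -> [(1, 2)]
```

In practice a cross-check (matching in both directions) and a geometric verification step such as RANSAC are applied on top of the ratio test before the correspondences enter the reconstruction pipeline.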
For this purpose, a local image descriptor is expected to be highly robust in order to tolerate the various photometric and geometric transformations among corresponding local regions. In this way, keypoints corresponding to the same physical points can be correctly matched. At the same time, it is also expected to be highly distinctive so that keypoints corresponding to different physical points can be easily distinguished. The community has made great efforts to achieve these two goals simultaneously. The milestone work is without doubt SIFT~\cite{LOWE_IJCV04}, after which many handcrafted local descriptors have been proposed, such as SURF~\cite{Bay_CVIU08}, LIOP~\cite{Wang_ICCV11}, KAZE~\cite{KAZE_ECCV12}, and so on~\cite{Marko_PR09,Fan_PAMI12,Wang_PAMI16}. All these descriptors were reported to outperform SIFT in some aspects, for example, in dealing with complex brightness changes or image blur, or in computational efficiency. With access to a huge number of matching and non-matching local patches~\cite{Matthew_PAMI11}, researchers have gradually moved their interest from handcrafted methods to learning based ones. Matthew et al.~\cite{Matthew_PAMI11} proposed to learn discriminative local descriptors by optimizing over combinations of low-level features and spatial pooling methods as well as their parameters. The dimension of the learned descriptors can be further reduced substantially by applying subspace embedding. Following this work, Simonyan et al.~\cite{Simonyan_PAMI14} reformulated the learning problem as a sparsity-constrained convex optimization problem. Recently, deep learning has been applied to learn local descriptors with high matching performance. Han et al.~\cite{MatchNet_CVPR15} proposed MatchNet to unify descriptor learning and metric learning in one framework by maximizing the descriptor distance between non-matching patches and minimizing that of matching patches.
MatchNet learns not only the patch descriptor but also the distance metric. A similar learning paradigm has been used by Zagoruyko and Komodakis~\cite{DeepCompare_CVPR15} and Kumar et al.~\cite{Kumar_CVPR16}. Although these methods achieve high matching performance, they have to be used together with the learned metric; using the learned descriptor alone cannot guarantee good matching performance. Such a constraint largely limits their applications, and a drop-in replacement for the previous handcrafted descriptors is highly desirable. For this purpose, learning patch descriptors that can be directly matched in the Euclidean space has received great interest in the past two years. Representative works include DeepDesc~\cite{Edgar_ICCV15}, TFeat~\cite{TFeat_BMVC16}, L2Net~\cite{L2Net_CVPR17} and HardNet~\cite{HardNet_NIPS17}. \subsection{Performance Evaluation of Local Features} Accompanying the flourishing of local features, many works have been conducted to evaluate the performance of various local features within the scope of different applications. Mikolajczyk et al.~\cite{Mikolajczyk_PAMI05,Mikolajczyk_IJCV05} evaluated the matching performance of different local descriptors and affine invariant interest regions in the task of matching images of planar scenes. Moreels and Perona~\cite{Moreels_IJCV07} extended Mikolajczyk's evaluations to images of 3D objects captured on a turntable. These evaluations demonstrated the higher distinctiveness of SIFT over its predecessors, thus promoting the development of SIFT-like local features, i.e., histogram-based handcrafted features such as SURF~\cite{Bay_CVIU08}, DAISY~\cite{DAISY_PAMI10}, CS-LBP~\cite{Marko_PR09}, KAZE~\cite{KAZE_ECCV12}. Aan{\ae}s et al.~\cite{Aanaes_IJCV12} revised Mikolajczyk's and Moreels's works by introducing a more comprehensive dataset with known spatial correspondence of points, while covering various situations for interest point matching.
Although most detectors in their evaluation had been evaluated before~\cite{Mikolajczyk_PAMI05,Moreels_IJCV07}, their evaluation was more thorough and convincing because the newly introduced dataset is more realistically challenging. Their evaluation re-emphasized the importance of detecting feature points in scale space and showed that the affine adaption proposed by Mikolajczyk and Schmid~\cite{Mikolajczyk_IJCV04} has little influence on the feature detector itself but is useful for the descriptor, and is thus helpful in the whole pipeline of feature matching. Recently, with the development of binary descriptors, some researchers have evaluated different local features under the same image matching protocol as~\cite{Mikolajczyk_PAMI05} but with an emphasis on the compactness and speed of the tested methods. In this vein, Miksik and Mikolajczyk~\cite{Miksik_ICPR12} showed that binary features such as ORB~\cite{Rublee_ICCV11} and BRIEF~\cite{Calonder_PAMI11} are efficient for both feature extraction and matching due to the fast computation of the Hamming distance. On the other hand, state of the art handcrafted descriptors such as LIOP~\cite{Wang_ICCV11} and MROGH~\cite{Fan_PAMI12} can achieve better matching performance, but at a much higher computational cost. Similarly, Heinly et al.~\cite{Heinly_ECCV12} gave a comparative evaluation of binary features by considering not only classical performance metrics such as precision and recall, but also new metrics such as the spatial distribution of the features and the frequency of candidate matches. All the above evaluations were conducted for the task of image matching. For other applications, Gauglitz et al.~\cite{Gauglitz_IJCV11} evaluated different interest points and local descriptors for visual tracking. Bauml and Stiefelhagen~\cite{Bauml_AVSS11} evaluated different local features for person re-identification in image sequences.
Madeo and Bober~\cite{Madeo_TMM17} conducted a comparative study on using binary descriptors for mobile applications. Liu et al.~\cite{Liu_ECCV16,Liu_PR17} conducted evaluations of local binary features for texture classification. Similar to this paper, Fan et al.~\cite{Fan_CVPRW16} and Schonberger et al.~\cite{Schonberger_CVPR17} studied the performance of different local features for image based 3D reconstruction systems. However, Fan et al.~\cite{Fan_CVPRW16} only evaluated three binary features~(ORB, BRISK and FRIF) that contain both a feature detector and a descriptor, while Schonberger et al.~\cite{Schonberger_CVPR17} focused mainly on learned float type descriptors. In contrast, this paper extensively evaluates different combinations of existing binary descriptors and feature detectors. Besides traditional handcrafted ones, these binary descriptors also include learning based ones, e.g., BinBoost~\cite{Trzcinski_PAMI15}, LDB~\cite{Yang_PAMI14} and RFD~\cite{Fan_TIP14}, which have been shown to have superior performance on standard image matching benchmarks. Moreover, a comparative study of state of the art float type descriptors is conducted in this work too. Therefore, the evaluation in this work is more comprehensive than the previous ones, covering the state of the art in both binary and float type local features, and ranging from handcrafted features to learning based ones. Many of these features have not been evaluated before. What is more, regarding the evaluation datasets, we use both the DTU MVS dataset~\cite{Jensen_CVPR14} used by Fan et al.~\cite{Fan_CVPRW16} and the large scale SFM dataset~\cite{Wilson_ECCV14} used by Schonberger et al.~\cite{Schonberger_CVPR17}. In this way, our evaluation covers two typical cases of 3D reconstruction, i.e., 1) controlled image capturing with a moderate number of images, and 2) free image capturing with a large number of images, many of them distracting.
For the former case, we rely on the supplied groundtruth to study the performance~(accuracy and completeness of the reconstruction) of different feature combinations. For the latter, the ability to reconstruct the scene from as many images as possible is what we pursue. \section{Pipeline of 3D Reconstruction} \label{sec:pipeline} To obtain the 3D points of an object or a scene using only a number of images, the popular solutions~\cite{Agarwal_11,Heinly_CVPR15} usually include three steps: feature matching across images, structure from motion~\cite{MRF_SFM_PAMI13,ChangChangWu_SFM_3DV13} and dense reconstruction~\cite{Furukawa_PAMI10}. Feature matching aims to find the so-called feature tracks. In essence, a feature track corresponds to a 3D point, containing point correspondences across different images. For unordered and very large scale image collections, there is usually an additional preprocessing step that quickly finds possibly overlapping image pairs, so that feature matching is conducted only on these pairs to save matching time~\cite{Lou_ECCV12,Schonberger_CVPR15_1}. Structure from motion takes a number of feature tracks as input, and outputs a number of 3D points as well as the camera parameters of the input images. With the recovered cameras, dense reconstruction is applied to obtain a dense 3D point cloud as the reconstruction result. In short, the outputs of a typical 3D reconstruction system include a number of 3D points of the scene and the estimated camera parameters of the input images. By comparing these outputs to the groundtruth, one can evaluate how good the system is, e.g., in terms of 3D reconstruction accuracy, completeness and successfully recovered cameras. In this paper, we focus on the step of feature matching, studying its performance when using different local features.
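As a minimal illustration of the first step, chaining pairwise keypoint matches into feature tracks can be sketched with a union-find over (image, keypoint) nodes. The data layout below is a simplified assumption for illustration only, not the interface of any of the cited systems.

```python
def build_tracks(pair_matches):
    """Chain pairwise matches into feature tracks.

    pair_matches: dict mapping an image pair (i, j) to a list of matched
    keypoint index pairs (ki, kj). Returns a list of tracks, each a set of
    (image, keypoint) nodes; every track is one candidate 3D point.
    """
    parent = {}

    def find(x):                      # union-find with path halving
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Matched keypoints across images are merged into the same track.
    for (i, j), matches in pair_matches.items():
        for ki, kj in matches:
            union((i, ki), (j, kj))

    tracks = {}
    for node in list(parent):
        tracks.setdefault(find(node), set()).add(node)
    return list(tracks.values())
```

For example, if keypoint 5 of image 0 matches keypoint 7 of image 1, and that keypoint matches keypoint 3 of image 2, the three observations end up in one track.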
As a result, we fix the last two steps with typical methods: linear time incremental structure from motion~\cite{ChangChangWu_SFM_3DV13} and PMVS~\cite{Furukawa_PAMI10} for dense reconstruction. Their source codes are provided by the authors and can be downloaded from their websites. Meanwhile, no preprocessing is used, i.e., feature matching is exhaustively conducted for all possible image pairs. In the following, we first give a brief introduction to the evaluated features and then move on to the evaluation. \section{Float Type Features} \label{sec:float_feature} Local features have been an active and persistent topic in the computer vision community. To keep this evaluation thorough and up to date, we choose recently proposed methods, including both handcrafted descriptors and the recently popular learning based ones. For reference, we also include the classical SIFT in our evaluation as a baseline. \subsection{SIFT} SIFT constructs a Difference of Gaussian~(DoG) scale space to detect extrema across both the spatial and scale dimensions as keypoints. The DoG scale space is constructed by subtracting neighboring images of a Gaussian scale space of the input image. The keypoint orientation is computed by accumulating a histogram of gradient orientations from a local circular region around the keypoint to achieve rotation invariance. The orientation corresponding to the largest bin in this histogram is taken as the keypoint orientation. Meanwhile, other orientations corresponding to peak bins that are within 80\% of the largest one are also taken as keypoint orientations. For feature description, SIFT divides the scale and rotation normalized local patch around a keypoint into $4 \times 4$ grids. In each grid, it computes a histogram of gradient orientations with 8 bins. All these histograms are concatenated and normalized to obtain a 128 dimensional float vector as the SIFT descriptor.
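The $4 \times 4 \times 8$ pooling layout just described can be sketched as follows; this simplification omits the interpolation and Gaussian weighting SIFT also uses, and the function name and inputs are our own illustration.

```python
import math

def sift_pool(grad_mag, grad_ori, patch_size=16, grids=4, bins=8):
    """Pool a patch's gradients into a grids x grids layout of
    orientation histograms and L2-normalize the concatenation.

    grad_mag / grad_ori: patch_size x patch_size lists holding the
    gradient magnitude and orientation (radians, in [0, 2*pi)) of each
    pixel of the scale/rotation-normalized patch.
    """
    cell = patch_size // grids
    hist = [0.0] * (grids * grids * bins)
    for y in range(patch_size):
        for x in range(patch_size):
            g = (y // cell) * grids + (x // cell)                  # grid cell
            b = int(grad_ori[y][x] / (2 * math.pi) * bins) % bins  # orientation bin
            hist[g * bins + b] += grad_mag[y][x]
    norm = math.sqrt(sum(v * v for v in hist)) or 1.0
    return [v / norm for v in hist]
```

With the default parameters the result is the familiar $4 \times 4 \times 8 = 128$ dimensional unit-length vector.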
To improve robustness, trilinear interpolation among spatial and orientation bins is utilized and a Gaussian weight is assigned to each pixel in the local patch. \subsection{LIOP} SIFT and its variants~\cite{Mikolajczyk_PAMI05,Bay_CVIU08,KAZE_ECCV12,Marko_PR09} rely on dominant orientations to achieve rotation invariance. Fan et al.~\cite{Fan_PAMI12} observed that dominant orientations estimated from the local image context are unreliable, and thus proposed to construct local image descriptors by intensity order pooling to achieve intrinsic rotation invariance. Under this framework, Wang et al.~\cite{Wang_ICCV11} proposed the LIOP descriptor by pooling a low level feature based on the local ordinal information around each pixel in the support region. The local intensity order can capture the relative relationships of intensities among all neighboring points around a pixel, not merely the relationship between two points which is often used by LBP variants~\cite{Liu_TIP16,Liu_InforSci16,Chen_BMVC13,Chen_TIP13}. As a result, LIOP was reported to outperform previous methods. For this reason, we include LIOP in our evaluation as a representative handcrafted local feature. \subsection{VGGDesc} While traditional methods for local image description are handcrafted, learning good local descriptors has been extensively explored in recent years. One representative work of this type was proposed by the Visual Geometry Group~(VGG) at the University of Oxford. Following Brown et al.'s work on discriminative learning of local image descriptors~\cite{Matthew_PAMI11}, Simonyan et al.~\cite{Simonyan_PAMI14} proposed to formulate the descriptor learning problem in a convex optimization framework based on the hinge loss with a sparsity constraint. They used RDA~\cite{RDA_JMLR10} to efficiently solve the sparsity-constrained optimization problem with a large scale training set.
They first learned a high dimensional descriptor by selecting discriminative pooling areas through the sparsity constraint. Then, they pursued a linear subspace of the learned high dimensional descriptor to obtain the final compact descriptor with strong discriminative ability. \subsection{DeepDesc} With the popularity of Convolutional Neural Networks~(CNNs) in various vision tasks, they have also been used for descriptor learning. Although initial works on using CNNs to learn patch descriptors were usually combined with additional metric layers to achieve good matching performance~\cite{MatchNet_CVPR15,DeepCompare_CVPR15,Kumar_CVPR16}, researchers have gradually moved to the more practical case, i.e., learning a patch descriptor that can be directly compared in the Euclidean space. This is because such a descriptor can be used as a drop-in replacement for the widely used handcrafted descriptors and thus has wider applications. One representative work on learning patch descriptors without additional metric layers is DeepDesc, proposed by Edgar et al.~\cite{Edgar_ICCV15}. They used a Siamese network structure and minimized a hinge-like loss when training the network. With a carefully designed network structure and a hard sample mining strategy for network training, they obtained a 128 dimensional float type descriptor that can be compared in the Euclidean space. \subsection{L2Net} A very recent work on learning discriminative patch descriptors in the Euclidean space with CNNs is L2Net~\cite{L2Net_CVPR17}, which is specially designed for the matching task and incorporates supervision of intermediate layers to improve its generalization ability. It adopts a fully convolutional architecture with 7 convolutional layers, each of which is followed by a batch normalization layer with fixed parameters. Like DeepDesc, it outputs a 128 dimensional vector as the descriptor to serve as a drop-in replacement for SIFT in various applications.
L2Net was the top-ranked method in the local feature competition held at ECCV'16 and obtained the best performance on the widely used patch matching dataset~(i.e., the Brown dataset~\cite{Matthew_PAMI11}). Due to its superior performance, we include it in our evaluation. \subsection{LIFT} We also include LIFT~\cite{LIFT_ECCV16} in this evaluation as the state of the art method for the whole pipeline of feature detection and description. Inspired by the success of deep learning and mirroring SIFT's pipeline, LIFT combines all necessary components of a local feature~(i.e., keypoint detector, orientation estimator, and local patch descriptor) in an end-to-end manner based on a deep convolutional architecture. Specifically, it uses TILDE~\cite{TILDE_CVPR15} as the keypoint detector because TILDE is convolutional, differentiable and performs well. After detecting keypoints, it estimates the orientations of the patches around the detected keypoints with a CNN trained to minimize the descriptor distance of matching patches~\cite{Yi_CVPR15}. Finally, DeepDesc is used to extract feature descriptors for the scale and rotation normalized patches. To crop, resize, and rotate the local patch around a keypoint, LIFT uses the spatial transformer network~\cite{SPN_NIPS15} as a connector since it is differentiable. As a result, the whole LIFT pipeline is differentiable and can be trained in an end-to-end manner. In practice, the authors trained LIFT component by component, starting from the descriptor, and then finetuned the whole pipeline. \subsection{Implementation Details} For SIFT, we use the implementation supplied in VLFeat~\cite{vlfeat}.
For the other float type descriptors, we use the implementations provided by their authors\footnote{\scriptsize{ LIFT: \href{https://github.com/cvlab-epfl/LIFT}{https://github.com/cvlab-epfl/LIFT} LIOP: \href{https://github.com/foelin/IntensityOrderFeature}{https://github.com/foelin/IntensityOrderFeature} L2Net: \href{https://github.com/yuruntian/L2-Net}{https://github.com/yuruntian/L2-Net} DeepDesc: \href{https://github.com/etrulls/deepdesc-release}{https://github.com/etrulls/deepdesc-release} VGGDesc: \href{http://www.robots.ox.ac.uk/~vgg/software/learn_desc/}{http://www.robots.ox.ac.uk/~vgg/software/learn\_desc/} }}. SIFT keypoints~(i.e., DoG) are used for all these descriptors except LIFT, which has its own keypoints. The low dimensional descriptor learned on 'Liberty' of the Patch Dataset~\cite{Matthew_PAMI11} is used for the VGG descriptor. Similarly, the evaluated L2Net is also trained on 'Liberty'. For DeepDesc, we use the authors' suggested model, which was trained on a subset of 'Liberty', 'Notre Dame' and 'Yosemite' of the Patch Dataset. LIFT was trained on a SFM dataset~(the Piccadilly Circus dataset~\cite{Wilson_ECCV14}), and we use the publicly available model supplied by the authors. Please see Table~\ref{tab:feature} for a summary of all these local features. Following Lowe's ratio test~\cite{LOWE_IJCV04}, the Nearest Neighbor Distance Ratio~(NNDR) is used for matching keypoints, where the ratio threshold is set to 0.8 for all the tested descriptors. To find the nearest and second nearest neighbors, we use the open source ANN library~\cite{ANN} for fast approximate nearest neighbor search.
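The NNDR criterion can be sketched as follows; the brute-force search below stands in for the approximate search of the ANN library, and the function names are our own illustration.

```python
def nndr_match(desc_a, desc_b, ratio=0.8):
    """Match descriptor lists by the nearest-neighbor distance ratio test:
    a query is matched to its nearest neighbor only if that neighbor is
    sufficiently closer than the second nearest one."""
    def dist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

    matches = []
    for i, d in enumerate(desc_a):
        ranked = sorted(range(len(desc_b)), key=lambda j: dist(d, desc_b[j]))
        if len(ranked) > 1:
            d1 = dist(d, desc_b[ranked[0]])
            d2 = dist(d, desc_b[ranked[1]])
            if d1 < ratio * d2:          # accept only unambiguous matches
                matches.append((i, ranked[0]))
    return matches
```

The test rejects a query whose two closest candidates are nearly equidistant, which is exactly the ambiguous situation that produces wrong correspondences.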
\begin{table*}[!htb] \renewcommand{\arraystretch}{1.1} \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline keypoint & descriptor & dimension & data type & handcrafted & learned & training set \\ \hline \multirow{5}{*}{FRIF or BRISK} & FRIF~\cite{Wang_BMVC13} & 512 & binary & $\surd$ & $\times$ & $\times$ \\ \cline{2-7} & BRISK~\cite{Leutenegger_ICCV11} & 512 & binary & $\surd$ & $\times$ & $\times$ \\ \cline{2-7} & LDB~\cite{Yang_PAMI14} & 256 & binary & $\times$ & $\surd$ & Liberty~\cite{Matthew_PAMI11} \\ \cline{2-7} & RFD~\cite{Fan_TIP14} & 288 & binary & $\times$ & $\surd$ & Liberty~\cite{Matthew_PAMI11} \\ \cline{2-7} & BinBoost~\cite{Trzcinski_PAMI15} & 256 & binary & $\times$ & $\surd$ & Liberty~\cite{Matthew_PAMI11} \\ \hline \multirow{5}{*}{DoG~(SIFT)} & SIFT~\cite{LOWE_IJCV04} & 128 & float & $\surd$ & $\times$ & $\times$ \\ \cline{2-7} & LIOP~\cite{Wang_ICCV11} & 144 & float & $\surd$ & $\times$ & $\times$ \\ \cline{2-7} & VGGDesc~\cite{Simonyan_PAMI14} & 128 & float & $\times$ & $\surd$ & Liberty~\cite{Matthew_PAMI11} \\ \cline{2-7} & DeepDesc~\cite{Edgar_ICCV15} & 77 & float & $\times$ & $\surd$ & \tabincell{c}{subset of \\\{Liberty,NotreDame,Yosemite\}~\cite{Matthew_PAMI11}} \\ \cline{2-7} & L2Net~\cite{L2Net_CVPR17} & 128 & float & $\times$ & $\surd$ & Liberty~\cite{Matthew_PAMI11} \\ \hline LIFT & LIFT~\cite{LIFT_ECCV16} & 128 & float & $\times$ & $\surd$ & Piccadilly~\cite{Wilson_ECCV14} \\ \hline \end{tabular} \caption{Summary of the evaluated local features. \label{tab:feature}} \end{table*} \section{Binary Features} \label{sec:binary_feature} To reduce the memory footprint of float type descriptors, binary descriptors have been widely studied in recent years. These binary descriptors have been used in some light weight tasks, such as template based object detection~\cite{Calonder_PAMI11} and SLAM~\cite{ORB_SLAM_TR15}, which usually involve matching only several hundreds of keypoints. 
However, they have not yet been used or evaluated for tasks involving extensive keypoint matching, such as the one studied in this paper. In this work, we choose typical binary features and evaluate their performance on 3D reconstruction. For comprehensiveness, we cover both handcrafted and learning based ones, as summarized in Table~\ref{tab:feature}. \subsection{BRISK} BRISK contains a scale and rotation invariant keypoint detector and a binary feature descriptor. For the keypoint detector, BRISK implements a scale space using two pyramids alternately, one for the octaves and the other for the intra-octaves, to trade off computation against scale estimation accuracy. Keypoints are detected in each level of the scale space with AGAST~\cite{AGAST_ECCV10}, an effective extension of the FAST corner detector~\cite{FAST_PAMI10}. Based on the position and scale of a detected keypoint, a sampling pattern with 60 points regularly sampled from 4 concentric circles is used to compute the keypoint's orientation as well as its binary descriptor. Specifically, the point pairs generated by these sampling points are divided into long-distance pairs and short-distance ones. The long-distance pairs are used to compute an average local gradient that defines the orientation of the keypoint, while the short-distance pairs are used for intensity tests to construct the binary descriptor. To reduce aliasing effects, the intensity of a sampling point is computed by filtering with a Gaussian kernel whose standard deviation is proportional to its distance from the keypoint, i.e., the central point of the sampling pattern. \subsection{FRIF} While BRISK resorts to the FAST detector for efficient keypoint detection, FRIF relies on the response of the Laplacian of Gaussian~(LoG). The basic idea is to approximate LoG with rectangular filters so that its response can be computed very quickly via integral images.
According to Mikolajczyk and Schmid's study~\cite{Mikolajczyk_ICCV01}, the Laplacian of Gaussian is stable for characteristic scale selection and has been used in many feature detectors~\cite{Mikolajczyk_IJCV04,LOWE_IJCV04}. FRIF approximates a LoG template by a linear combination of four rectangular filters. Therefore, computing the LoG responses over the pixels of an image only requires a linear combination of four rectangular filtering results, which can be done efficiently with integral images. To detect extrema of the approximated LoG responses across both the spatial and scale dimensions, FRIF implements a scale space identical to BRISK's and uses a similar strategy for non-maximum suppression and location refinement. As far as the binary descriptor is concerned, FRIF uses a sampling pattern similar to BRISK's, but proposes a mixed binary descriptor to achieve better performance. For each sampling point, it uses the neighboring points to conduct intensity tests, yielding a number of bits as part of the descriptor. It also uses some short-distance point pairs for intensity tests as the remaining part of the descriptor, capturing complementary information. The long-distance point pairs are used to compute the keypoint orientation as in BRISK. \subsection{LDB} LDB~\cite{Yang_PAMI14} is a binary descriptor computed from intensity differences and gradient differences. It first partitions the local region into several cells according to predefined spatial configurations. Then the average intensities and gradients are computed for each of these cells. These average values are compared between cell pairs to generate the binary values constituting the descriptor. To select only a few discriminative and meaningful test pairs from all possible cell pairs, a modified AdaBoost algorithm was proposed by Yang and Cheng~\cite{Yang_PAMI14}.
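The pairwise-comparison construction shared by BRISK, FRIF and LDB, and the Hamming matching of the resulting bit strings, can be sketched as follows; the sampling values and test pairs below are arbitrary stand-ins for the actual patterns.

```python
def binary_descriptor(values, test_pairs):
    """Build a binary descriptor from pairwise comparison tests: bit k is 1
    iff the first element of pair k is larger than the second. `values`
    maps a sampling-point (or cell) id to its smoothed intensity or
    averaged gradient; `test_pairs` lists the compared id pairs."""
    bits = 0
    for k, (p, q) in enumerate(test_pairs):
        if values[p] > values[q]:
            bits |= 1 << k
    return bits

def hamming(a, b):
    """Number of differing bits between two descriptors; this cheap
    Hamming distance is what makes binary descriptor matching fast."""
    return bin(a ^ b).count("1")
```

Two descriptors built from similar patches differ in few tests, so their Hamming distance is small; XOR plus a population count is all a match costs.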
\subsection{RFD} The gradient orientation maps used in SIFT and DAISY~\cite{DAISY_PAMI10} have shown their effectiveness in constructing discriminative local descriptors. Fan et al.~\cite{Fan_TIP14} extended them to binary feature description. They proposed to construct each bit of a binary descriptor by thresholding the oriented gradient responses accumulated over a certain region, which is either rectangular or Gaussian shaped. The best threshold value for each region is determined by a Bayesian criterion on the labeled training data. The regions constituting the so-called RFD descriptor are greedily selected from a large pool of candidates according to their discriminative ability and correlation. \subsection{BinBoost} Similar to RFD, which uses the thresholded gradient orientation map as the basic element, Trzcinski et al.~\cite{Trzcinski_PAMI15} applied boosting to learn a highly compact binary descriptor. The learned descriptor, named BinBoost, takes a linear combination of several thresholded gradient orientation maps and then thresholds the combination result to obtain one bit of the descriptor. In other words, if we consider each gradient orientation map as a weak classifier, each bit in BinBoost corresponds to a strong classifier in the sense of boosting theory. The gradient orientation maps and their linear weights are selected by a modified AdaBoost learning algorithm, also proposed in their paper. Among the above five binary descriptors, the first two have both a feature detector and a feature descriptor. The latter three are only binary descriptors, which have to be evaluated along with a specific feature detector. Therefore, in our evaluation, we combine them with the feature detectors provided by the first two methods respectively. Here, we do not evaluate ORB~\cite{Rublee_ICCV11} for two reasons. First, both the BRISK keypoint and the ORB keypoint are based on AGAST, but BRISK uses a finer scale space, so the BRISK keypoint is better.
Second, ORB has been shown to perform worse than BRISK and FRIF in our previous work~\cite{Fan_CVPRW16}. \subsection{Implementation Details} All the evaluated binary features have source code available on the Internet; therefore, we use the original implementations with default parameters released by their authors\footnote{\scriptsize{ RFD: \href{http://www.nlpr.ia.ac.cn/fanbin/rfd.htm}{http://www.nlpr.ia.ac.cn/fanbin/rfd.htm} LDB: \href{http://lbmedia.ece.ucsb.edu/research/binaryDescriptor/web_home/web_home/index.html}{http://lbmedia.ece.ucsb.edu/research/binaryDescriptor/web\_home/web\_home} FRIF: \href{https://github.com/foelin/FRIF}{https://github.com/foelin/FRIF} BRISK: \href{http://www.asl.ethz.ch/people/lestefan/personal/BRISK}{http://www.asl.ethz.ch/people/lestefan/personal/BRISK} BinBoost: \href{http://cvlab.epfl.ch/research/detect/binboost}{http://cvlab.epfl.ch/research/detect/binboost} }}. For RFD, the model trained on 'Liberty' of the Patch Dataset with rectangular receptive fields is used~(denoted as RFDR). For BinBoost, the 256-bit model is used, which is also trained on 'Liberty' and reported to have the best generalization ability. To match keypoints of these binary features, we use the multi-table and multi-probe LSH implemented in the FLANN library~\cite{FLANN} to approximately find the first two nearest neighbors in an efficient manner. Then the distance ratio of the first and second nearest neighbors is used to decide whether two keypoints are matched or not. As with the float type descriptors, the ratio threshold is set to 0.8. Note that although computing the Hamming distance of two binary descriptors is significantly faster than computing the Euclidean distance of two float type descriptors, it is still impractical to conduct brute-force nearest neighbor search in Hamming space because of the large number of image matching operations involved in the 3D reconstruction task.
For this reason, the fast approximate nearest neighbor search method, i.e., multi-table, multi-probe LSH, is used. Specifically, we set the number of hash tables to 4, the multi-probe level to 1, and the LSH code length to 24 in all our evaluations. \section{Evaluation on Multiview Stereo Dataset} \label{sec:MVS_result} \subsection{Dataset} We first evaluate the 3D reconstruction performance of different features on a recently published multiview stereo dataset, known as the DTU MVS dataset~\cite{Jensen_CVPR14}. It contains a total of 124 different scenes, covering a wide range of objects and surface materials. For each scene, it collects images of $1600 \times 1200$ resolution from 49 or 64 different viewpoints, under 8 different illumination conditions. Among these scenes, 80 contain the information~(i.e., the observability mask) required for evaluating reconstruction results as Jensen et al. did~\cite{Jensen_CVPR14}. In this paper, we use the scenes with 49 views, which account for 58 of these 80 scenes. We do not study the effects of different lighting conditions, so we simply use the subset with all lights on. Since our implemented 3D reconstruction system is fully automatic and uses self-calibration to determine the camera parameters, the coordinate system of the reconstructed 3D points can be that of any of the recovered cameras. In this case, the reconstructed coordinate system and the supplied reference coordinate system are related by a 3D similarity transformation~(scaling, rotation and translation). Therefore, we first have to register the reconstructed 3D points to the reference scans~(groundtruth) supplied in the dataset, which were obtained by a structured light scanner. To this end, we manually selected three corresponding 3D points between the reconstruction and the groundtruth. They are then used to estimate a similarity transformation to register the reconstructed 3D points.
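The registration step can be sketched with the standard least-squares similarity estimate from point correspondences (Umeyama's method); this is our own illustrative implementation, not the code used in the paper.

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate scale s, rotation R and translation t such that
    dst ~= s * R @ src + t, from corresponding 3D points (Umeyama)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)            # cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = U @ D @ Vt
    var_s = (src_c ** 2).sum() / len(src)       # source variance
    s = np.trace(np.diag(S) @ D) / var_s
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Three non-collinear correspondences, as selected manually above, suffice to determine the seven parameters of a 3D similarity transformation.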
\subsection{Evaluation Protocol} After registering the reconstructed 3D points to the reference coordinate system, we use the code supplied with the dataset for performance evaluation. The evaluation protocol is based on that of~\cite{Seitz_CVPR06}, with some modifications to make it unbiased and better at handling missing data and outliers. Basically, it adopts an observability mask so that the evaluation focuses only on the visible part of the scene. Please refer to~\cite{Jensen_CVPR14} for more details. As in~\cite{Seitz_CVPR06,Jensen_CVPR14}, accuracy and completeness are used as quality measures of a reconstruction. By definition, given a reconstruction and the structured light reference, the accuracy is computed as the distance from the reconstruction to the reference scan; conversely, the completeness is computed as the distance from the reference scan to the reconstruction. For each 3D point in one set~(either the reconstructed or the reference 3D points), its distance to the other set is the distance to the closest of that set's 3D points. The mean accuracy and completeness are recorded to evaluate the quality of a reconstruction. The evaluation code and the dataset can be downloaded at: \href{http://roboimagedata.compute.dtu.dk}{http://roboimagedata.compute.dtu.dk} All experiments are conducted on a laptop with an Intel 2.5GHz CPU and 8GB memory. \begin{figure}[tb] \centering \includegraphics[width=0.5\textwidth]{smallvar_examples.pdf} \caption{Some example scenes that have small performance difference for the evaluated methods. \label{fig:smallvar_examples}} \end{figure} \subsection{Results and Analysis} Among the 58 tested scenes, there are 3 for which at least one method failed to obtain a reconstruction result due to the poor quality of point matching. We further divide the remaining successful scenes into two groups.
The first group contains scenes on which all the evaluated methods perform similarly, i.e., both the variance of their reconstruction accuracy and the variance of their completeness are smaller than a threshold, which we set to 0.05. The second group contains scenes on which the evaluated methods have a large performance variance, i.e., at least one method performs significantly differently from the others. There are 6 scenes in this group. We now analyze the performance of the evaluated methods for these three groups of scenes. \textbf{\emph{Scenes with small performance variance}}. These are the easiest scenes for 3D reconstruction. Some examples are shown in Fig.~\ref{fig:smallvar_examples}. These scenes all contain rich textures and are easy for feature point matching. The average mean accuracy of the different methods across all scenes of this kind~(i.e., with small performance variance) is shown in Fig.~\ref{fig:smallvar_result}(a), while the average mean completeness is shown in Fig.~\ref{fig:smallvar_result}(b). Among the binary features, the combination of the BRISK keypoint with the BinBoost descriptor performs best, with performance comparable to or even better than that of some float features. Among all the tested combinations, BRISK with BinBoost and DoG with LIOP perform similarly, and both achieve the top performance. In general, DoG with float descriptors leads to better reconstruction accuracy than binary features, except for the best combination of BRISK + BinBoost. An interesting observation is that the fully learned solution, LIFT, does not perform as well as the other float features. In fact, it performs the worst among all the evaluated features, including the binary ones. Clearly, using LIFT leads to larger reconstruction errors both in terms of accuracy and completeness.
Such inferior performance of LIFT suggests that there are larger localization errors between corresponding LIFT keypoints, since LIFT in fact produces a comparable or even larger number of matching points than SIFT in our experiments. Except for LIFT, LIOP produces slightly better results than the other float type descriptors, and the remaining ones perform similarly. Among all the binary descriptors, LDB is not as good as the others, no matter which keypoint is used. Meanwhile, when using the FRIF keypoint, the results of different binary descriptors vary less than with the BRISK keypoint. This means that the FRIF keypoint is less sensitive to the choice of descriptor, whereas for the BRISK keypoint the descriptor has to be chosen carefully to achieve good performance. From Fig.~\ref{fig:smallvar_result}, we can conclude that it is not necessary to learn sophisticated descriptors for easy scenes. In this case, binary features are sufficient to obtain reconstruction accuracy comparable to that of float features. Taking the best descriptor for each keypoint, we show the mean accuracy and completeness of all 55 successful scenes~(scenes for which all the evaluated methods successfully obtained reconstruction results) in Fig.~\ref{fig:all_success_result}. Note that we do not include the results of LIFT, as it performs the worst according to the average results. \begin{figure}[tb] \centering \subfloat[]{ \includegraphics[width=0.235\textwidth]{mean_acc.pdf}} \subfloat[]{ \includegraphics[width=0.235\textwidth]{mean_comp.pdf}} \caption{The average reconstruction (a)~accuracy and (b)~completeness over all the scenes that have small performance variance for the evaluated methods. See text for details.
\label{fig:smallvar_result}} \end{figure} \begin{figure}[tb] \centering \subfloat[]{ \includegraphics[width=0.23\textwidth]{all_success_scene_acc.pdf}} \subfloat[]{ \includegraphics[width=0.23\textwidth]{all_success_scene_comp.pdf}} \caption{The average reconstruction (a)~accuracy and (b)~completeness for the 55 scenes on which all the evaluated methods successfully obtained reconstruction results. To reduce clutter, we show results for each keypoint with its best descriptor combination. \label{fig:all_success_result}} \end{figure} \textbf{\emph{Scenes with large performance variance}}. These are complex scenes for reconstruction. The results are shown in Fig.~\ref{fig:largevar_result}. In these figures, the 1st column displays the scene, the 2nd column shows the mean accuracy of the different methods, the 3rd column shows the mean completeness, and the 4th column gives the running times. From Fig.~\ref{fig:largevar_result}, we have the following observations: \begin{itemize} \item Consistent with the observations on the easy scenes, the FRIF keypoint is less sensitive to the chosen descriptor than the BRISK keypoint. In many scenes, it produces similar results for different binary descriptors. In this respect, FRIF behaves similarly to DoG. To further illustrate this point, for each keypoint we record the number of scenes with large performance variance across descriptors. These numbers for BRISK, FRIF and DoG are 7, 3 and 2, respectively. \item Unlike in the easy scenes, BRISK with BinBoost does not perform best for these complex scenes. Here it is hard to say which combination is better, because the ranking tends to be scene dependent. In addition, LIFT does not perform the worst for these complex scenes, but it is the most time consuming.
In general, float features are a better choice than binary features if one does not consider the running time. \item For the float type features, the learning based descriptors do not necessarily outperform the handcrafted ones. The baseline SIFT performs rather well on all these complex scenes. Similar results can be observed for the binary features, among which the handcrafted ones are better than many learned ones in most cases. \item In most cases, the running times of SFM and PMVS are similar for all evaluated methods; the main difference in total running time lies in the matching time. In general, the BRISK keypoint requires less running time than the other keypoints. For either the BRISK or the FRIF keypoint, the FRIF descriptor requires more matching time than the other binary descriptors, and thus more time for the overall reconstruction task. Among all the evaluated methods, float features are more time consuming, since matching binary features is more efficient. Due to its smaller descriptor length, VGGDesc requires the least running time among all the evaluated float features. L2Net usually requires less time than SIFT and DeepDesc, although all of them have the same descriptor length. This implicitly indicates that L2Net generates better matching results~(i.e., a similar number of matches but with higher precision), thus requiring less time for SFM.
\end{itemize} \begin{figure*} \centering \subfloat[]{ \begin{minipage}[c]{0.12\textwidth} \centering \includegraphics[width=1.0\textwidth]{scene35.pdf} \end{minipage} \begin{minipage}[c]{0.78\textwidth} \centering \includegraphics[width=0.28\textwidth]{scene_35_acc.pdf} \includegraphics[width=0.28\textwidth]{scene_35_comp.pdf} \includegraphics[width=0.28\textwidth]{scene_35_time.pdf} \end{minipage}} \subfloat[]{ \begin{minipage}[c]{0.12\textwidth} \centering \includegraphics[width=1.0\textwidth]{scene48.pdf} \end{minipage} \begin{minipage}[c]{0.78\textwidth} \centering \includegraphics[width=0.28\textwidth]{scene_48_acc.pdf} \includegraphics[width=0.28\textwidth]{scene_48_comp.pdf} \includegraphics[width=0.28\textwidth]{scene_48_time.pdf} \end{minipage}} \subfloat[]{ \begin{minipage}[c]{0.12\textwidth} \centering \includegraphics[width=1.0\textwidth]{scene51.pdf} \end{minipage} \begin{minipage}[c]{0.78\textwidth} \centering \includegraphics[width=0.28\textwidth]{scene_51_acc.pdf} \includegraphics[width=0.28\textwidth]{scene_51_comp.pdf} \includegraphics[width=0.28\textwidth]{scene_51_time.pdf} \end{minipage}} \subfloat[]{ \begin{minipage}[c]{0.12\textwidth} \centering \includegraphics[width=1.0\textwidth]{scene62.pdf} \end{minipage} \begin{minipage}[c]{0.78\textwidth} \centering \includegraphics[width=0.28\textwidth]{scene_62_acc.pdf} \includegraphics[width=0.28\textwidth]{scene_62_comp.pdf} \includegraphics[width=0.28\textwidth]{scene_62_time.pdf} \end{minipage}} \subfloat[]{ \begin{minipage}[c]{0.12\textwidth} \centering \includegraphics[width=1.0\textwidth]{scene65.pdf} \end{minipage} \begin{minipage}[c]{0.78\textwidth} \centering \includegraphics[width=0.28\textwidth]{scene_65_acc.pdf} \includegraphics[width=0.28\textwidth]{scene_65_comp.pdf} \includegraphics[width=0.28\textwidth]{scene_65_time.pdf} \end{minipage}} \subfloat[]{ \begin{minipage}[c]{0.12\textwidth} \centering \includegraphics[width=1.0\textwidth]{scene74.pdf} \end{minipage} 
\begin{minipage}[c]{0.78\textwidth} \centering \includegraphics[width=0.28\textwidth]{scene_74_acc.pdf} \includegraphics[width=0.28\textwidth]{scene_74_comp.pdf} \includegraphics[width=0.28\textwidth]{scene_74_time.pdf} \end{minipage}} \caption{Performance on scenes that have large accuracy and completeness variances among the evaluated methods. From left to right: the scene, the mean accuracy of the different methods, the mean completeness of the different methods, and the timing results of the different stages for the different methods. \label{fig:largevar_result}} \end{figure*} \textbf{\emph{Scenes on which at least one method fails}}. These are the most challenging scenes for 3D reconstruction, since one may fail entirely if the local feature is not chosen appropriately. The results are shown in Fig.~\ref{fig:failscene_result}. We find that all the failures come from combinations with BRISK as the keypoint detector. More specifically, using the LDB descriptor leads to failure for one scene, while using RFD is responsible for three failed cases. Even when using BRISK keypoints survives and yields a reconstruction result, it is usually less accurate and less complete than using other keypoints. Taken together with the performance of the BRISK keypoint on complex scenes, it is clear that the BRISK keypoint is less suitable for reconstructing complex and challenging scenes. However, we have to acknowledge that it is a good choice for easy scenes, because it requires less time to obtain good accuracy. Among the other keypoints, DoG is slightly better. Taking Fig.~\ref{fig:smallvar_result} to Fig.~\ref{fig:failscene_result} altogether, it is interesting to see that as the scene type becomes more and more challenging, float type features gradually show their superiority over binary features.
Even so, using the FRIF keypoint with a binary descriptor is still a good choice for 3D reconstruction from images captured under controlled conditions~(e.g., fixed viewpoints), as it requires less running time than float type features. Among the float type features, the reconstruction results of LIFT are less accurate, due to its larger keypoint localization error compared to that of DoG. \begin{figure*} \centering \subfloat[]{ \begin{minipage}[c]{0.15\textwidth} \centering \includegraphics[width=1.0\textwidth]{scene30.pdf} \end{minipage} \begin{minipage}[c]{0.84\textwidth} \centering \includegraphics[width=0.32\textwidth]{fail_scene_30_acc.pdf} \includegraphics[width=0.32\textwidth]{fail_scene_30_comp.pdf} \includegraphics[width=0.32\textwidth]{fail_scene_30_time.pdf} \end{minipage}} \subfloat[]{ \begin{minipage}[c]{0.15\textwidth} \centering \includegraphics[width=1.0\textwidth]{scene42.pdf} \end{minipage} \begin{minipage}[c]{0.84\textwidth} \centering \includegraphics[width=0.32\textwidth]{fail_scene_42_acc.pdf} \includegraphics[width=0.32\textwidth]{fail_scene_42_comp.pdf} \includegraphics[width=0.32\textwidth]{fail_scene_42_time.pdf} \end{minipage}} \subfloat[]{ \begin{minipage}[c]{0.15\textwidth} \centering \includegraphics[width=1.0\textwidth]{scene63.pdf} \end{minipage} \begin{minipage}[c]{0.84\textwidth} \centering \includegraphics[width=0.32\textwidth]{fail_scene_63_acc.pdf} \includegraphics[width=0.32\textwidth]{fail_scene_63_comp.pdf} \includegraphics[width=0.32\textwidth]{fail_scene_63_time.pdf} \end{minipage}} \caption{Performance on scenes for which at least one method fails to obtain a reconstruction result. If a method fails, no bar is shown in the corresponding figures.
\label{fig:failscene_result}} \end{figure*} \section{Evaluation on Large Scale Structure from Motion Dataset} \label{sec:SFM_result} Apart from the controlled image capturing case, we also evaluate all these local features on 3D reconstruction from a large collection of Internet images, which is the setting of most large scale applications of 3D reconstruction, e.g., reconstructing landmarks or cities. For this experiment, we choose the large scale structure from motion dataset~\cite{Wilson_ECCV14}. This dataset contains images of several landmarks across the world. For each landmark, it has several thousand images obtained from the Internet. Different from the previously tested MVS dataset, each image set of a landmark contains a large portion of unrelated images as distractors. In contrast, the MVS dataset only contains images of one scene from different viewpoints. Meanwhile, since there is no constraint on these collected images, they inevitably contain many low quality and non-overlapping images. For these reasons, this dataset is more challenging for feature matching, and hence for 3D reconstruction. \begin{figure*}[htbp] \centering \subfloat[]{ \includegraphics[width=0.32\textwidth]{cam_Gendar.pdf}} \subfloat[]{ \includegraphics[width=0.32\textwidth]{cam_Madrid.pdf}} \subfloat[]{ \includegraphics[width=0.32\textwidth]{cam_piazza.pdf}} \caption{The number of cameras recovered by SFM based on matching different local features for three different landmarks. \label{fig:SFM_cams}} \end{figure*} Since no ground truth 3D model is available for this dataset, we use the number of recovered cameras as the performance indicator for the different methods. This is because the subsequent PMVS procedure depends strongly on the number of recovered cameras.
In general, the more cameras we can recover, the more of the scene the reconstruction can cover, so PMVS can obtain more 3D points and better accuracy and completeness can be expected for the reconstructed scene. The results are shown in Fig.~\ref{fig:SFM_cams}. On this dataset, the float type features generally perform better than the binary ones by a significantly large margin. This observation differs from the one made on the previous MVS dataset, where binary features could achieve results comparable to those of float type features. Such superior performance of the float type features demonstrates their good generalization ability. Considering that many unrelated images exist in this dataset, binary features may be sensitive to the distractors, i.e., the local features extracted from unrelated images. For the binary features, the FRIF keypoint recovers more cameras than the BRISK keypoint. In some cases, when combined with an appropriate descriptor, the FRIF keypoint can even produce performance comparable to that of the float type features. The better result of the FRIF keypoint compared to the BRISK keypoint is also consistent with the observations made on the MVS dataset. For the handcrafted float type descriptors, the performance of the most traditional SIFT is very stable across different landmarks, while LIOP fails to reconstruct a large part of the scene for the third landmark~(Fig.~\ref{fig:SFM_cams}(c)). DeepDesc behaves similarly, showing inferior performance compared to the other learning based methods. Notably, the advanced CNN based learning method, L2Net, performs best, followed by the traditional learning method, i.e., convex optimization. Both of them outperform the SIFT baseline. It is worth noting that LIFT recovers many cameras for this dataset, implying potentially good performance.
This is not contradictory to its inferior reconstruction accuracy on the MVS dataset. The reason is that the localization precision of LIFT keypoints is not as high as that of the handcrafted keypoints, but the LIFT descriptor does have very good matching ability. Therefore, it can recover many cameras, but with relatively large errors in the recovered camera poses, which further reduce the reconstruction accuracy, as shown in the previous experiments. \section{Conclusion} \label{sec:conclusion} In this paper, we provide an extensive comparative study of popular local features for the task of 3D reconstruction. We focus on how the matching quality of different local features affects the final reconstruction performance, either in terms of accuracy and completeness or as indicated by the number of recovered cameras. Our evaluation covers a wide range of state of the art local features, ranging from traditional handcrafted ones to the recently popular learning based ones. Meanwhile, we include both float type and binary feature descriptors to obtain a thorough and comprehensive evaluation. Not only do the studied local features have large diversity; the evaluated datasets also cover the two main application scenarios of image based 3D reconstruction. One is the controlled case, where all images are taken from different viewpoints of the reconstructed scene so that all images have a considerable range of overlap. The other is the general case, where many unrelated images exist in the image set of the reconstructed scene. For the first case, we use the recently proposed DTU MVS dataset, which contains various scene types with specifically designed image capturing positions and supplies ground truth 3D points that facilitate an objective and quantitative evaluation of the reconstruction results.
For the latter case, we use the Internet scale image sets of landmarks, each of which contains a large number of related images together with distractors. Such careful consideration of the evaluated methods and datasets makes our work a potential guide for practitioners of 3D reconstruction applications. Our experimental results reveal that, for the controlled case where no distracting images exist, binary features are good enough to produce state of the art 3D reconstruction results in only a fraction of the time required by float type features. However, for large scale free image sets with many distractors, binary features cannot guarantee good performance. The float type descriptors are the most competitive ones in this case, even though they need more time to establish point correspondences. Among the evaluated float type descriptors, the recently learned descriptors, such as VGGDesc and L2Net, can lead to better results than the handcrafted ones~(SIFT, LIOP). However, DeepDesc is not as competitive as these two learned descriptors. Meanwhile, the most traditional SIFT also produces very good results among all the evaluated features, implying that a lot of effort is still required to improve the general matching performance of local features. The good results of the learned descriptors further demonstrate the potential of descriptor learning. However, learning feature detection and description jointly still requires much work, as shown by the results of LIFT, which are even inferior to the baseline in terms of reconstruction accuracy and completeness, indicating less accurate localization of the learned keypoints.
\section{Introduction} \label{perugia_mini4:sec:1} For the numerical approximation of the Helmholtz problem, it has been shown that, by using non-polynomial basis functions, it is possible to reduce the pollution effect in finite element approximations. One special class of such methods are Trefftz finite element methods, which use basis functions that are local solutions of the homogeneous problem under consideration. For the Helmholtz problem, a common choice of Trefftz basis functions are plane waves; when used within a discontinuous Galerkin variational framework, they lead to the plane wave discontinuous Galerkin (PWDG) method \cite{PerugiaMS4_BM08,PerugiaMS4_CD98,PerugiaMS4_CD03,PerugiaMS4_GHP09,PerugiaMS4_HMP11,PerugiaMS4_HMP16b}. Since they contain information on the oscillatory behaviour of the solutions already in the approximation spaces, PWDG methods deliver better accuracy than standard polynomial finite element methods for a comparable number of degrees of freedom. In addition, they involve only evaluations of basis functions on mesh inter-element boundaries; hence, they can easily be used in connection with general polytopal meshes. However, it is well known that these basis functions are ill conditioned for small mesh sizes, small wavenumbers and large numbers of plane wave directions \cite{PerugiaMS4_HMK02,PerugiaMS4_LHM13}. The aim of this paper is to numerically investigate the dependence of the elemental and global condition numbers of the PWDG system matrix on the size and shape of the local (convex) polygonal element, the wavenumber, and the number of plane wave directions in the local approximation spaces. \section{The PWDG method for the Helmholtz problem} Let $\Omega\subset \mathbb{R}^2$ be a bounded Lipschitz domain and $k>0$ denote the wavenumber.
We consider the homogeneous Helmholtz problem with impedance boundary condition: \begin{align}\label{perugia_problem} \begin{split} -\Delta u -k^2 u &= 0 \quad \textrm{in } \Omega,\\ \nabla u \cdot \vec{n} + iku &= g \quad \textrm{on } \partial\Omega, \end{split} \end{align} where $i$ denotes the imaginary unit, $\vec{n}$ is the unit outward normal and $g\in L^2(\partial\Omega)$ is given. The variational formulation of problem \eqref{perugia_problem} reads as follows: find $u\in H^1(\Omega)$ such that \begin{align}\label{perugia_variational} \int_\Omega (\nabla u\cdot \nabla \bar{v} - k^2 u \bar{v})dx + ik\int_{\partial\Omega} u\bar{v}\, \mathrm{d}s = \int_{\partial\Omega} g\bar{v}\, \mathrm{d}s \quad \mbox{for all } v\in H^1(\Omega). \end{align} Problem \eqref{perugia_variational} is well posed by the Fredholm alternative argument \cite{PerugiaMS4_Melenk}. \par We consider a shape-regular, uniform partition $\mathcal{T}_h$ of the domain $\Omega$ into convex polygons $K\in\mathcal{T}_h$ of diameter $h$. We define the mesh skeleton $\mathcal{F}_h = \cup_{K\in\mathcal{T}_h}\partial K$, and denote the interior mesh skeleton by $\mathcal{F}_h^I = \mathcal{F}_h\backslash\partial\Omega$. For an element $K\in\mathcal{T}_h$ we define the plane wave space $\mathrm{PW}_p(K)$ of degree $p$ as \[ \mathrm{PW}_p(K) = \{ v\in L^2(K) \;:\; v(\vec{x}) = \sum_{j=1}^p\alpha_j \exp(ik\vec{d}_j\cdot (\vec{x}-\vec{x}_K)), \alpha_j\in\mathbb{C} \}, \] where $\vec{x}_K$ is the center of mass of $K$, and $\vec{d}_j$, $|\vec{d}_j|=1$, $j=1,\ldots,p$, are $p$ pairwise distinct directions. Since, in general, small angles between those directions lead to bad conditioning of the basis, we consider equally spaced directions. The PWDG space is defined as \begin{align*} \mathrm{PW}_p(\mathcal{T}_h) = \{ v_{hp}\in L^2(\Omega) \;:\; v_{hp}|_K\in\mathrm{PW}_p(K)\quad\mbox{for all } K\in\mathcal{T}_h\}.
\end{align*} The functions in $\mathrm{PW}_p(\mathcal{T}_h)$ are local solutions of the homogeneous Helmholtz problem; therefore, they exhibit the Trefftz property \begin{align*} -\Delta v_{hp} - k^2 v_{hp} = 0\quad\mbox{for all } v_{hp}\in \mathrm{PW}_p(K). \end{align*} We assume uniform local resolution, i.e., we employ the same uniformly distributed directions $\vec{d}_j$, $j=1,\ldots,p$ on each element $K\in\mathcal{T}_h$. \par We use the standard notation for averages and normal jumps of traces across inter-element boundaries, namely $\{\!\!\{\cdot\}\!\!\}$ and $[\![\cdot]\!]$, respectively, and denote by $\nabla_h$ the elementwise application of $\nabla$. Hence, we can formulate the PWDG method as follows \cite{PerugiaMS4_GHP09,PerugiaMS4_HMP11,PerugiaMS4_HMP16b}: find $u_{hp}\in \mathrm{PW}_p(\mathcal{T}_h)$ such that \begin{align}\label{perugia_pwdg} \mathcal{A}_h(u_{hp},v_{hp}) = \ell_h(v_{hp}) \quad\mbox{for all } v_{hp} \in \mathrm{PW}_p(\mathcal{T}_h), \end{align} where \begin{align*} \mathcal{A}_h(u_{hp},v_{hp}) &:= i\left[ -\int_{\mathcal{F}_h^I} \{\!\!\{u\}\!\!\} [\![\nabla_h\bar{v}]\!]\, \mathrm{d}s + \int_{\mathcal{F}_h^I}\{\!\!\{\nabla_h u\}\!\!\}\cdot [\![\bar{v}]\!]\, \mathrm{d}s\right.\\ &\qquad-\frac{1}{2}\int_{\partial\Omega}u \nabla_h\bar{v}\cdot\vec{n} \, \mathrm{d}s +\left.\frac{1}{2}\int_{\partial\Omega}\nabla_h u\cdot\vec{n}\bar{v}\, \mathrm{d}s\right]\\ &\qquad+\frac{1}{2k}\int_{\mathcal{F}_h^I}[\![\nabla_hu]\!][\![\nabla_h\bar{v}]\!]\, \mathrm{d}s +\frac{k}{2}\int_{\mathcal{F}_h^I} [\![u]\!]\cdot[\![\bar{v}]\!]\, \mathrm{d}s\\ &\qquad+\frac{1}{2k}\int_{\partial\Omega}(\nabla_hu\cdot\vec{n})(\nabla_h\bar{v}\cdot\vec{n})\, \mathrm{d}s +\frac{k}{2}\int_{\partial\Omega}u\bar{v}\, \mathrm{d}s,\\ \ell_h(v) &:= \frac{1}{2k}\int_{\partial\Omega}g\nabla_h\bar{v}\cdot\vec{n} \, \mathrm{d}s - \frac{i}{2}\int_{\partial\Omega}g\bar{v}\, \mathrm{d}s. 
\end{align*} The PWDG method \eqref{perugia_pwdg} is unconditionally well-posed and stable \cite{PerugiaMS4_BM08,PerugiaMS4_CD98}. The $h$, $p$ and $hp$ convergence has been studied in \cite{PerugiaMS4_BM08,PerugiaMS4_GHP09,PerugiaMS4_HMP11,PerugiaMS4_HMP16b}. \par Let $A\in\mathbb{C}^{N_h\times N_h}$ denote the matrix associated with the sesquilinear form $\mathcal{A}_h(\cdot,\cdot)$, and $\vec{b}\in\mathbb{C}^{N_h}$ the vector associated with the functional $\ell_h(\cdot)$, for $N_h:=\mbox{dim}(\mathrm{PW}_p(\mathcal{T}_h))$. Then, the algebraic linear system associated with the PWDG method \eqref{perugia_pwdg} on the mesh $\mathcal{T}_h$ is $A \vec{u} = \vec{b}$. \section{Conditioning of the plane wave basis} In this section, we investigate numerically the conditioning of the local plane wave basis. In order to do so, we consider the spectral condition number of the local mass matrix $M_K\in\mathbb{C}^{p\times p}$ on a single element $K\in\mathcal{T}_h$. From \cite{PerugiaMS4_Gittelson} we get $M_{K,jj}=|K|$, and \begin{align*} M_{K,jl} &= \int_K e^{ik\vec{d}_j\cdot (\vec{x}-\vec{x}_K)}\overline{e^{ik\vec{d}_l\cdot (\vec{x}-\vec{x}_K)}} \, \mathrm{d}\vec{x} \\ &= -\sum_{F\in\partial K\cap \mathcal{F}_h} \frac{ik(\vec{d}_j-\vec{d}_l)\cdot\vec{n}}{k^2(\vec{d}_j-\vec{d}_l)\cdot(\vec{d}_j-\vec{d}_l)}\int_Fe^{ik(\vec{d}_j-\vec{d}_l)\cdot(\vec{x}-\vec{x}_K)}\, \mathrm{d}s, \end{align*} for $j\neq l$, $1\leq j,l\leq p$, which can be evaluated in closed form. Note that the entries of $M_K$ tend to $|K|$ as $k(\vec{d}_j-\vec{d}_l)\cdot(\vec{x}-\vec{x}_K)$ tends to zero; hence, small values of the element size $h$ and wavenumber $k$, or a small angle between two plane wave directions, lead to ill conditioning. 
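These observations are easy to reproduce numerically. As an illustrative sketch (our own reformulation, not the code behind the figures): for a square element the Gram integral factorizes per coordinate into $\int_{-a}^{a} e^{icx}\,dx = 2a\,\mathrm{sinc}(ca/\pi)$, so $M_K$ and its condition number can be assembled directly.

```python
import numpy as np

def plane_wave_mass_matrix(h, k, p):
    """Gram matrix M_K of p equally spaced plane waves on the square
    element K = [-h/2, h/2]^2. Uses the factorized 1D integrals
    int_{-a}^{a} exp(i c x) dx = 2 a sinc(c a / pi) per coordinate."""
    theta = 2.0 * np.pi * np.arange(p) / p
    d = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # unit directions d_j
    a = h / 2.0
    M = np.ones((p, p))
    for m in range(2):  # the square element factorizes per coordinate axis
        c = k * (d[:, m][:, None] - d[:, m][None, :])     # k (d_j - d_l)_m
        M *= 2.0 * a * np.sinc(c * a / np.pi)             # np.sinc(x) = sin(pi x)/(pi x)
    return M  # real symmetric here; M[j, j] = h**2 = |K|

# spectral condition number for one parameter choice
cond = np.linalg.cond(plane_wave_mass_matrix(1.0, 10.0, 9))
```

Sweeping $h$, $k$ and $p$ in such a loop reproduces the qualitative trends discussed in the following subsections.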
\subsection{Dependence on the shape} \begin{figure}[tb] \centering \includegraphics[width=0.49\textwidth]{Outer} \includegraphics[width=0.49\textwidth]{AspectRatio} \caption{Spectral condition numbers of $M_K$ for regular $n$-polygons (left) and anisotropic rectangles (right) with $h=1$ and $k=10$.} \label{perugia_fig:1} \end{figure} The initial numerical experiments investigate the conditioning of the basis for different element shapes, for a fixed wavenumber $k=10$. Firstly, we consider regular $n$-polygons with element size $h=1$; cf. Figure~\ref{perugia_fig:1} (left). We observe that the condition numbers grow with the number of plane wave directions $p$, but are smaller for larger $n$. In particular, the condition number is decreasing in the number of sides $n$ and is asymptotically stable; hence, small edges do not cause any problems. It has been noted in \cite{PerugiaMS4_LHM13} that the conditioning of the basis depends on the aspect ratio of the elements. Therefore, we consider a single anisotropic rectangle, with size $h=1$, and vary its aspect ratio. We see that the condition number increases exponentially as the aspect ratio increases; cf. Figure~\ref{perugia_fig:1} (right). \subsection{Dependence on $hk$ and $p$} \begin{figure}[b] \centering \includegraphics[width=0.327\textwidth]{ConditionHK} \includegraphics[width=0.327\textwidth]{ConditionHKP} \includegraphics[width=0.327\textwidth]{ConditionK} \caption{Dependence of the condition number of $M_K$ on $hk$ and $p$, and verification of the approximation \eqref{perugia_cond}.} \label{perugia_fig:2} \end{figure} In this section we empirically determine the dependence of the condition number on $hk$ and $p$. We restrict ourselves to the case of a single square element. The numerical experiments displayed in the first two graphs in Figure~\ref{perugia_fig:2} suggest that the condition number depends algebraically on $hk$ and exponentially on $p$.
To get a more precise answer, we fitted the data obtained from numerous numerical experiments to \begin{align}\label{perugia_cond} \mbox{cond}_2(M_K)\approx \frac{2.34^{p\ln p}}{(hk)^{p-1}}. \end{align} In Figure~\ref{perugia_fig:2} (right) we show the values of $\mbox{cond}_2(M_K)/(\frac{2.34^{p\ln p}}{(hk)^{p-1}})$ for values $h=2^{-1},...,2^{-5}$, $k=5,\ldots,30$ and $p=5,\ldots,23$. To obtain reliable data, we only plot data points for which $\mbox{cond}_2(M_K) < 10^{15}$, due to double precision limitations, and for which $hk<10$, due to the resolution condition. Hence, we could only cover a moderate range of values for $h$, $k$ and in particular $p$. All the presented values of $\mbox{cond}_2(M_K)/(\frac{2.34^{p\ln p}}{(hk)^{p-1}})$ are between $1$ and $10$ (recall that the corresponding values of $\mbox{cond}_2(M_K)$ are between $1$ and $10^{15}$), which confirms that the approximation \eqref{perugia_cond} is reasonable, at least for moderate $h$, $k$ and $p$. \section{Orthogonalization of the plane wave basis} \begin{figure}[b] \centering \includegraphics[width=0.32\textwidth]{tri} \includegraphics[width=0.32\textwidth]{quad} \includegraphics[width=0.32\textwidth]{hex} \caption{Three different meshes.} \label{perugia_meshes} \end{figure} In the previous section, we have observed that the condition number of the local basis is large for small $hk$ or large $p$. In this section we aim at improving the conditioning of the local basis in order to lower the condition number of the global system matrix $A$. Therefore, we will investigate the effect of orthogonalization of the (local) basis functions on the conditioning of the (global) system matrix $A$. A different approach has been presented in \cite{PerugiaMS4_HMK02}, where improvement of the conditioning of the global system is achieved by suitably designed non-uniform distributions of $p$. 
\par \begin{figure}[tb] \centering \includegraphics[width=0.49\textwidth]{GS_double_condition_pwdg} \includegraphics[width=0.49\textwidth]{GS_double_error} \caption{Spectral condition numbers for $k=10$ in double precision arithmetic.} \label{perugia_double} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=0.49\textwidth]{GS_single_condition_pwdg} \includegraphics[width=0.49\textwidth]{GS_single_error} \caption{Spectral condition numbers for $k=10$ in single precision arithmetic.} \label{perugia_single} \end{figure} We compare the condition number of the system matrix $A$ with the original basis functions with that of the system matrix $\widetilde{A}:=Q^TAQ$ with orthogonalized basis functions for the three different meshes displayed in Figure~\ref{perugia_meshes}. Here, $Q\in\mathbb{C}^{N_h\times N_h}$ denotes the transformation matrix obtained by modified Gram-Schmidt orthogonalization \cite{PerugiaMS4_Steward} of the (local) basis functions with respect to the Hermitian part of the local system matrix $H(A_K):=\frac{A_K+\bar{A}_K^T}{2}$ on each element $K$ separately; cf. \cite{PerugiaMS4_Bassi,PerugiaMS4_Mascotto,PerugiaMS4_Schweitzer} for applications of modified Gram-Schmidt to (polynomial) DG methods, virtual element methods, and partition of unity methods, respectively. In all experiments, we choose $k=10$ and only investigate the effect on the critical dependence of the condition number on $p$. As a model problem, we consider problem \eqref{perugia_problem} with $\Omega=(0,1)^2$ and exact solution given by the Bessel function of the third kind (Hankel function) $u(\vec{x}) = H^{(1)}_0(k\sqrt{(x_1+1/4)^2+x_2^2})$. \par In Figure~\ref{perugia_double} (left), we observe, for all meshes, the expected increase of the condition number with $p$ for the original system matrix $A$ (dashed lines), which results in a loss of accuracy in the $L^2$ error for $p>21$, cf. Figure~\ref{perugia_double} (right), when using a direct linear solver.
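The elementwise orthogonalization used to form $\widetilde{A}$ amounts to a modified Gram-Schmidt sweep in the inner product induced by a Hermitian positive definite matrix (standing in for $H(A_K)$). A generic sketch of this building block (an illustrative reimplementation, not the authors' code):

```python
import numpy as np

def mgs_orthogonalize(H):
    """Modified Gram-Schmidt in the inner product <x, y> = y^H H x, for a
    Hermitian positive definite H. Returns Q whose columns hold the
    coefficients of the new basis vectors, so that Q^H H Q = I."""
    n = H.shape[0]
    Q = np.eye(n, dtype=complex)
    for j in range(n):
        for i in range(j):  # subtract projections onto already-built vectors
            Q[:, j] -= (Q[:, i].conj() @ (H @ Q[:, j])) * Q[:, i]
        Q[:, j] /= np.sqrt((Q[:, j].conj() @ (H @ Q[:, j])).real)  # H-normalize
    return Q
```

Applying this per element and assembling the local blocks of $Q$ yields the global change of basis; the orthogonalization itself breaks down in finite precision once the local basis is too ill conditioned, as observed below.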
We observe major improvements of the condition numbers for the matrix $\widetilde{A}$ (solid lines) until $p=21$, when the (modified) Gram-Schmidt orthogonalization breaks down; this breakdown correlates directly with the point at which the direct solver fails to produce a more accurate solution. Note that, for the original matrix $A$, there is no such correlation. To further investigate these results, we also carried out the same experiments in single precision arithmetic. Figure~\ref{perugia_single} shows the results for single precision, where we observe that the loss of accuracy already occurs at $p=13$. Note, again, that this loss is correlated with the failure of the orthogonalization, indicated by the sudden increase of the condition numbers for $\widetilde{A}$ at $p=13$. \begin{table}[t] \small \setlength{\tabcolsep}{0.8mm} \begin{tabular}{|l|r|r|r|r|r|r|}\hline & \multicolumn{1}{c|}{$p=5$} & \multicolumn{1}{c|}{$p=7$} & \multicolumn{1}{c|}{$p=9$} & \multicolumn{1}{c|}{$p=11$} & \multicolumn{1}{c|}{$p=13$} & \multicolumn{1}{c|}{$p=15$} \\ \hline $\lambda_{min}(H(A))$ & $7.75\cdot10^{-1}$ & $2.56\cdot10^{-1}$ & $1.16\cdot10^{-2}$ & $4.50\cdot10^{-4}$ & $6.73\cdot10^{-6}$ & $1.18\cdot10^{-7}$\\ $\lambda_{min}(H(A^{-1}))$ & $3.83\cdot10^{-2}$ & $2.70\cdot10^{-2}$ & $2.09\cdot10^{-2}$ & $1.71\cdot10^{-2}$ & $1.44\cdot10^{-2}$ & $1.25\cdot10^{-2}$\\ GMRES ($A$) & 60 & 96 & 135 & 159 & 193 & 217 \\ \hline $\lambda_{min}(H(\widetilde{A}))$ & $1.10\cdot10^{-1}$ & $8.45\cdot10^{-2}$ & $8.29\cdot10^{-2}$ & $8.27\cdot10^{-2}$ & $7.44\cdot10^{-2}$ & $6.08\cdot10^{-2}$\\ $\lambda_{min}(H(\widetilde{A}^{-1}))$ & $5.02\cdot10^{-1}$ & $5.00\cdot10^{-1}$ & $5.00\cdot10^{-1}$ & $5.00\cdot10^{-1}$ & $5.00\cdot10^{-1}$ & $5.00\cdot10^{-1}$\\ GMRES ($\widetilde{A}$) & 47 & 52 & 58 & 62 & 68 & 73 \\ \hline \end{tabular} \caption{Eigenvalue approximations and GMRES iteration counts for the original basis and the orthogonalized basis using the second mesh of Figure~\ref{perugia_meshes} and $k=10$.}
\label{perugia_table} \end{table} Finally, we are interested in the effect of the orthogonalization on the convergence of iterative solvers such as GMRES. From the convergence theory of GMRES \cite{EES83}, provided that $H(A)$, the Hermitian part of the system matrix $A$, is positive definite, the residual $r_j$ at iteration $j$ can be bounded as \[ \frac{\|r_j\|}{\|r_0\|} \leq \left( 1 - \lambda_{min}(H(A))\lambda_{min}(H(A^{-1}))\right)^{j/2}. \] Therefore, in Table~\ref{perugia_table}, we report $(\lambda_{min}(H(A)),\lambda_{min}(H(A^{-1})))$, for the system matrix $A$ with the original basis, and $(\lambda_{min}(H(\widetilde{A})),\lambda_{min}(H(\widetilde{A}^{-1})))$, for the system matrix $\widetilde{A}$ obtained with the orthogonalized basis, along with the number of GMRES iterations needed to reduce the residual by a factor of $10^{-10}$. We observe that, for increasing $p$, the values $\lambda_{min}(H(A^{-1}))$ and $\lambda_{min}(H(\widetilde{A}^{-1}))$ are fairly constant, while the values $\lambda_{min}(H(A))$ and $\lambda_{min}(H(\widetilde{A}))$ decrease. However, the values of $\lambda_{min}(H(\widetilde{A}))$ decrease much more slowly than those of $\lambda_{min}(H(A))$, which results in a far slower increase of GMRES iterations for the orthogonalized basis than for the original one. As we merely change the basis and do not precondition the system matrix, we cannot expect a constant number of GMRES iterations. \section{Conclusions} We have provided empirical evidence that, in 2D, the condition number of the plane wave basis is stable for large edge counts in regular polygons, and grows as the anisotropy (aspect ratio) of the elements increases. We also observed its algebraic dependence on the product $hk$, and its exponential dependence on~$p$.
It has been demonstrated that the condition number of the global system matrix can be significantly lowered by a local modified Gram-Schmidt orthogonalization with respect to the Hermitian part of the local system matrix; this results in faster convergence of the GMRES solver. The improvement of the conditioning in 3D will be considered in future work. \section*{Acknowledgments} The authors have been funded by the Austrian Science Fund (FWF) through the project P 29197-N32. The third author has also been funded by the FWF through the project F 65.
\section{Introduction} The widely used Jensen's inequality for convex functions, attributed to the Danish mathematician Johan Jensen, dates back to 1906~\cite{jensen1906fonctions}. The literature contains numerous bounds on the Jensen gap, defined as $\J{f} = \E{f\left(X\right)}-f\left(\E{X}\right)$, where $X$ is a random variable with distribution $\mathcal{P}$, and the function $f$ may be convex or nonconvex~\cite{zlobec2004jensen}. Consequences and applications of the known bounds include: a number of famous classical inequalities, such as the generalized mean inequality and, as a special case, the inequality of arithmetic and geometric means, H\"older's inequality, etc.~\cite{steele2004cauchy}; commonly used results in information theory, e.g., the non-negativity of the Kullback-Leibler divergence~\cite{cover2012elements}; and variational bounds for the negative log likelihood used in statistics and machine learning methods such as the expectation maximization algorithm~\cite{dempster1977maximum} and variational inference~\cite{wainwright2008graphical}. The need to compute a hard-to-compute $\E{f(X)}$ arises in theoretical estimates in a variety of scenarios, from statistical mechanics to machine learning theory. A common approach to this problem is to make the approximation $\E{f(X)}\approx f\left(\E{X}\right)$ (for example $\left<\frac{1}{X}\right>\approx\frac{1}{\left<X\right>}$), and then show that the error, i.e., the Jensen gap, is small enough for the application. Since the error itself is as hard to compute as $\E{f(X)}$, inequalities on the Jensen gap help by giving easy-to-compute bounds. Moments are commonly used to characterize distributions of random variables because of their relative ease of computation for many distributions. By establishing the connection between the Jensen gap and moments, we create a powerful tool for error estimation based on moment estimates.
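For instance, the gap in the approximation $\left<\frac{1}{X}\right>\approx\frac{1}{\left<X\right>}$ can be computed exactly for a small discrete distribution; the values and probabilities below are arbitrary, chosen only for illustration:

```python
# Jensen gap J(f) = E[f(X)] - f(E[X]) for the convex f(x) = 1/x
# on a discrete distribution supported on {1, 2, 4}.
xs = [1.0, 2.0, 4.0]
ps = [0.5, 0.25, 0.25]

mean = sum(p * x for p, x in zip(ps, xs))   # E[X] = 2.0
e_inv = sum(p / x for p, x in zip(ps, xs))  # E[1/X] = 0.6875
gap = e_inv - 1.0 / mean                    # = 0.1875 > 0, as Jensen's inequality predicts
```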
As a concrete scenario, the Jensen gap has many useful interpretations in statistical mechanics, such as the difference between the average non-equilibrium work and the change of free energy, an important quantity to characterize the deviation of a thermodynamical process from a quasi-static process, as in the Jarzynski equality~\cite{PhysRevLett.78.2690}, and the fluctuation of thermodynamical quantities around their ensemble average, which is of common interest in physics. In machine learning theory, stochastic gradient descent is employed to minimize the so-called loss function when learning the parameters of a function from a parametrized family; in this case the training inputs are sampled from a distribution and the loss function is an expectation, to be minimized with respect to the parameters. In such scenarios, a common type of random variable has a distribution concentrated around its mean, as described below. \begin{enumerate} \item In estimating $\xi=f\left(\E{X}\right)$, an empirical average of samples is often used as an estimate of the expectation, i.e. $\E{X}\approx\bar{X}=\frac{1}{\left|\mathcal{M}\right|}\sum_{X_i\in\mathcal{M}} X_i$ and $\xi\approx\hat\xi=f\left(\bar{X}\right)$. The bias of $\hat\xi$, i.e. $\mathbb{E}_{\mathcal{M}}[\hat\xi]-\xi$, is given by the Jensen gap, where the random variable $\bar{X}$ has a distribution concentrated around its mean. The asymptotic growth behavior of the Jensen gap therefore indicates how fast we can push the bias to zero by increasing the sample size $\left|\mathcal{M}\right|$. \item Random variables with a distribution concentrated around the mean are very common in statistical mechanics. Since the number of particles in the system is usually of the order of the Avogadro constant $N_A\sim 10^{23}$, the distribution is so sharp that all the Jensen gaps become negligible. However, this is not the case in computer simulations or microscopic experiments, which usually have a much smaller system size.
The asymptotic growth behavior of the thermodynamic fluctuation (defined as the Jensen gap of the function $\sqrt\cdot$ of the random variable $E^2$) with the system size guides the simulation/experiment setup. \end{enumerate} Moments play an important role in studying random variables with a distribution concentrated around the mean, especially when the random variable is an empirical average of i.i.d.\ random variables~\cite{packwood2011moments}. Our results will use moments to express the asymptotic growth behavior of the Jensen gap. Next we give an elementary example to illustrate the inspiration behind our results, which establish the connection between the Jensen gap and the (absolute centered) moment $\sigma_{p}=\sqrt[p]{\E{\left|X-\mu\right|^{p}}}$, where $\mu=\E{X}$ is the expectation of the random variable $X$. Assume that for $\alpha>0$, $f\left(x\right)$ is $\alpha$-H\"older continuous over $\mathbb{R}$, i.e. there exists a positive number $M$ such that for any $x\in\mathbb{R}$, $\left|f\left(x\right)-f\left(\mu\right)\right|\leq M\left|x-\mu\right|^{\alpha}$. Then we have \begin{multline} \left|\E{f\left(X\right)}-f\left(\E{X}\right)\right|\leq\int\left|f\left(X\right)-f\left(\mu\right)\right|d\mathcal{P}\left(X\right)\\ \leq M\int\left|X-\mu\right|^{\alpha}d\mathcal{P}\left(X\right)= M\sigma_{\alpha}^{\alpha}.\label{eq:trivial-upper-bound} \end{multline} Similarly, if $f\left(x\right)-f\left(\mu\right)\geq M\left|x-\mu\right|^{\alpha}$ or $f\left(\mu\right)-f\left(x\right)\geq M\left|x-\mu\right|^{\alpha}$, we obtain an elementary lower bound on the Jensen gap: \begin{multline} \left|\E{f\left(X\right)}-f\left(\E{X}\right)\right|=\int\left|f\left(X\right)-f\left(\mu\right)\right|d\mathcal{P}\left(X\right)\\ \geq M\int\left|X-\mu\right|^{\alpha}d\mathcal{P}\left(X\right)= M\sigma_{\alpha}^{\alpha}.\label{eq:trivial-lower-bound} \end{multline} Our main results generalize these two elementary bounds, as described in the next section.
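The elementary bound \eqref{eq:trivial-upper-bound} can be checked numerically; a sketch for the $\frac{1}{2}$-H\"older function $f(x)=\sqrt{|x|}\cos(x)$ (our illustrative choice, with $M=1$ since $|f(x)-f(0)|\le|x|^{1/2}$) and a discrete distribution with mean $0$:

```python
import math

xs = [-2.0, 1.0]   # discrete support; mean is (-2)/3 + 2/3 = 0
ps = [1 / 3, 2 / 3]

mu = sum(p * x for p, x in zip(ps, xs))                      # = 0
f = lambda x: math.sqrt(abs(x)) * math.cos(x)                # 1/2-Hoelder at 0 with M = 1
gap = abs(sum(p * f(x) for p, x in zip(ps, xs)) - f(mu))     # |E f(X) - f(mu)|
bound = sum(p * abs(x - mu) ** 0.5 for p, x in zip(ps, xs))  # M * sigma_{1/2}^{1/2}
```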
\subsection{Contribution and Comparison} We prove an upper and a lower bound on the Jensen gap, summarized below, and demonstrate their tightness. In the following, ``upper bound of $A$'' means $\left|\J{f}\right|\leq A$, i.e. $-A\leq\J{f}\leq A$, while ``lower bound of $A$'' means either $\J{f}\geq A$ or $-\J{f}\geq A$. \begin{itemize} \item For functions that approach $f\left(\mu\right)$ as $x\to\mu$ no slower than $\left|x-\mu\right|^{\alpha}$, and grow as $x\to\pm\infty$ no faster than $\left|x\right|^{n}$ for $n\geq\alpha$, \[ \left|\E{f\left(X\right)}-f\left(\E{X}\right)\right| \leq M\left(\sigma_{\alpha}^{\alpha}+\sigma_{n}^{n}\right)\leq M\left(1+\sigma_{n}^{n-\alpha}\right)\sigma_{n}^{\alpha} \] where $M=\sup_{x\neq\mu}\frac{\left|f\left(x\right)-f\left(\mu\right)\right|}{\left|x-\mu\right|^{\alpha}+\left|x-\mu\right|^{n}}$. This implies that $\E{f\left(X\right)}-f\left(\E{X}\right)$ approaches $0$ no slower than $\sigma_{n}^{\alpha}$ as $\sigma_{n}\to0$. \item For functions that either decrease or increase (but do not decrease on one side and increase on the other) to $f\left(\mu\right)$ as $x\to\mu$ no faster than $\left|x-\mu\right|^{\alpha}$, and grow to infinity as $x\to\infty$ no slower than $\left|x\right|^{\beta}$ for $0\leq\beta\leq\alpha$, \[ \left|\E{f\left(X\right)}-f\left(\E{X}\right)\right|\geq M\frac{\sigma_{\alpha/2}^{\alpha}}{1+\sigma_{\alpha-\beta}^{\alpha-\beta}} \] where $M=\inf_{x\neq\mu}\left\{ \left[f\left(x\right)-f\left(\mu\right)\right]\cdot\left(\frac{1}{\left|x-\mu\right|^{\beta}}+\frac{1}{\left|x-\mu\right|^{\alpha}}\right)\right\} $. This implies that $ \E{f\left(X\right)}-f\left(\E{X}\right)$ decreases to $0$ no faster than $\sigma_{\alpha/2}^{\alpha}$ as $\sigma_{\alpha/2}\to0$, as long as $\sigma_{\alpha-\beta}$ does not grow to infinity at the same time.
\end{itemize} Although neither our upper bounds nor our lower bounds require the function to be convex or concave, the condition in our lower bound is naturally satisfied by convex or concave functions, as we will show in section \ref{subsec:lower-convex}. In order to illustrate the flavor of the main results, we give simple examples that are direct consequences. We also compare the consequences of our main results with known lower bounds~\cite{abramovich2004refining,simic2009jensen,walker2014lower,abramovich2016some,zlobec2004jensen,liao2017sharpening} and upper bounds~\cite{costarelli2015sharp,Simic2009}. A major advantage of our results over these known results is their relative generality: our conditions on the function, its domain, and the distribution are weak (for example, we do not require the function to be convex, and we do not require the distribution to be discrete). Among the above-mentioned bounds, \cite{simic2009jensen} and \cite{Simic2009} are only for discrete distributions, and are omitted from the comparisons below. Besides the bounds listed above, the Jensen gap can also be estimated by Jensen-Ostrowski type inequalities~\cite{cerone2015jensen, dragomir2016jensen, cerone2015inequalities, dragomir2016ostrowski, cerone2017jensen}. \begin{example} Consider the Jensen gap of $\sin\left(x\right)$ and random variables with mean at $0$. Observe that $\sin\left(x\right)$ has the power series $\sin\left(x\right)=x-\frac{x^{3}}{6}+\frac{x^{5}}{120}+\cdots$, and by choosing $\alpha=n=1$, we get $\left|\J{\sin}\right|\leq \sigma_1^1$. Also, since $\sin'\left(0\right)=1\neq0$, we can obtain a different result by studying $g\left(x\right)=\sin\left(x\right)-x$ (which has the same gap behavior, as discussed in section \ref{subsec:upper-shifting} and section \ref{subsec:lower-convex}) instead. This time, by choosing $\alpha=n=3$, we see that $\left|\J{\sin}\right|\leq \frac{\sigma_3^3}{6}$.
If we are interested in the asymptotic behavior of the Jensen gap when the distribution is concentrated around the mean, we can conclude immediately that $\left|\J{\sin}\right|$ decreases to $0$ no slower than $\sim\sigma_3^3$ and $\sim\sigma_1^1$. It is also possible to choose $\alpha=n=2$ and obtain $\left|\J{\sin}\right|\leq\frac{\sigma_{2}^{2}}{\pi}$. Although this result is not as good as the $\sigma_{3}^{3}$ version in terms of asymptotic behavior for non-heavy-tailed distributions such as the Gaussian and Laplace distributions, the second moment is usually more readily available than the third moment. Our lower bound result is not useful in this example. In fact, since $\sin(x)$ is odd, any even distribution $\mathcal{P}$ will result in a zero Jensen gap regardless of its moments. That is, it is impossible to obtain a non-trivial bound that is a function of only moments. The results in \cite{costarelli2015sharp,walker2014lower} require the function to be convex, and are therefore not useful for this example. Since the domain is not $\left[0,A\right)$ or $\left(0,A\right]$ as required by \cite{abramovich2016some}, or $\left[0,+\infty\right)$ as required by \cite{abramovich2004refining}, these results do not apply either. The result in \cite{liao2017sharpening} gives the same bound as our $\sigma_{2}^{2}$ version, which is better than the bound $\J{\sin}\geq-\frac{\sigma_{2}^{2}}{2}$ given by \cite{zlobec2004jensen}. The fact that \cite{liao2017sharpening} gives the same bound as ours is not a coincidence, but can be attributed to the connections between our results and \cite{liao2017sharpening} described in Section \ref{subsec:upper-shifting}. \end{example} \begin{example} Consider $\cos\left(x\right)$ and random variables with mean at $0$. Since $\cos\left(x\right)$ has the power series $\cos\left(x\right)=1-\frac{x^{2}}{2}+\cdots$, we can choose $\alpha=n=2$ and see that $\left|\J{\cos}\right|\leq\frac{\sigma_2^2}{2}$.
If we are interested in the asymptotic behavior, we can conclude that $\left|\J{\cos}\right|$ will decrease no slower than $\sim\sigma_2^2$, i.e. the variance of the distribution. Again, our lower bound result is not useful in this example. In fact, although non-trivial, it is possible to construct a probability distribution $\mathcal{P}$ that makes the Jensen gap equal $0$ and has arbitrary moments\footnote{To construct such a probability distribution, we can choose a discrete $\mathcal{P}$ whose support is a subset of $\left\{\pm 2\pi k\middle|k\in\mathbb{N}\right\}$. By an appropriate choice of probability values at the discrete points, it is possible to make $\mathcal{P}$ have any desired set of moments.}; that is, it is impossible to obtain a non-trivial bound that is a function of only moments. The results in \cite{costarelli2015sharp,walker2014lower} require the function to be convex, and are therefore not useful for this example. Since the domain is not $\left[0,A\right)$ or $\left(0,A\right]$ as required by \cite{abramovich2016some}, or $\left[0,+\infty\right)$ as required by \cite{abramovich2004refining}, these results do not apply. Both our result and \cite{liao2017sharpening,zlobec2004jensen} are able to get $\J{\cos}\geq-\frac{\sigma_{2}^{2}}{2}$. Since $\cos\left(x\right)-1\leq0$, it is not hard to see that $\J{\cos}\leq0$, which is also given by the result of \cite{liao2017sharpening}. \end{example} \begin{example} Consider $\log\left(x\right)$ and a random variable $X\in\left[a,+\infty\right)$ with $a>0$ that has $\E{X}=1$. Since $\log'\left(1\right)=1\neq0$, we study $g\left(x\right)=\log\left(x\right)-\left(x-1\right)$ instead, which preserves the gap behavior as in the $\sin$ example above. Note that $\log\left(x\right)=\left(x-1\right)-\frac{1}{2}\left(x-1\right)^{2}+\cdots$. By choosing $\alpha=n=2$, we have \[ \left|\J{\log}\right|\leq\frac{a-1-\log(a)}{(1-a)^2}\sigma_2^2, \] i.e. $\left|\J{\log}\right|$ will decrease no slower than $\sim\sigma_2^2$, i.e.
the variance of the distribution, as $\sigma_2\to0$. Since the $\log$ function is concave, our lower bound is useful (see section \ref{subsec:lower-convex}). Choosing $\alpha=2$ and $\beta=1$, we get \[ -\J{\log}\geq \frac{1}{2}\cdot\frac{\sigma_1^2}{1+\sigma_1}, \] whereby the Jensen gap approaches $0$ no faster than $\sigma_1^2$ as $\sigma_1\to0$. The estimate given by \cite{costarelli2015sharp} is \[ \J{\log}\leq \frac{1}{2a^2}\min_{c\geq a}\left\{\E{(X-c)^2}+(1-c)^2\right\}=\frac{\sigma_2^2}{2a^2}. \] Since for $a<1$ \[ \frac{a-1-\log(a)}{(1-a)^2} < \frac{1}{2a^2}, \] our upper bound improves on that of \cite{costarelli2015sharp}. Since the domain is not $\left[0,A\right)$ or $\left(0,A\right]$ as required by \cite{abramovich2016some}, or $\left(0,+\infty\right)$ as required by \cite{walker2014lower}, or $\left[0,+\infty\right)$ as required by \cite{abramovich2004refining}, these results do not apply. The result in \cite{zlobec2004jensen} in this case falls back to Jensen's inequality $\J{\log}\geq 0$. The result in \cite{liao2017sharpening} gives the same upper bound as ours and falls back to Jensen's inequality for the lower bound. \end{example} \begin{example} Consider $f\left(x\right)=\sqrt{x}$ and random variables on $\left[0,+\infty\right)$ with mean at $1$. Since $f'\left(1\right)=\frac{1}{2}\neq0$, we study $g\left(x\right)=\sqrt{x}-\frac{x-1}{2}$ instead. Note that $\sqrt{x}=1+\frac{x-1}{2}-\frac{1}{8}(x-1)^2+\cdots$. By choosing $n=\alpha=2$, we see that $\left|\J{\sqrt\cdot}\right|\leq\frac{\sigma_2^2}{2}$, i.e. $\left|\J{\sqrt\cdot}\right|$ will decrease no slower than $\sim\sigma_2^2$, i.e. the variance of the distribution. Also, since $\sqrt{x}$ is concave, our lower bound is useful. By setting $\alpha=2$ and $\beta=1$, we get \[ -\J{\sqrt\cdot}\geq \frac{1}{8}\cdot\frac{\sigma_1^2}{1+\sigma_1}, \] whereby the Jensen gap will approach $0$ no faster than $\sigma_1^2$.
Since the second order derivative is not bounded, \cite{costarelli2015sharp} does not apply to this example. Since $\frac{\sqrt{x}-\sqrt{0}}{x}$ is not defined at $0$ and does not have a power series at $0$, the results in \cite{abramovich2016some} do not apply. Since the domain is not $\left(0,+\infty\right)$ as required by \cite{walker2014lower}, that result does not apply. Since $-\sqrt\cdot$ is superquadratic, \cite{abramovich2004refining} applies and yields $-\J{\sqrt\cdot}\geq-\E{\sqrt{\left|X-1\right|}}=-\sigma_{1/2}^{1/2}$, which is not even an improvement of Jensen's inequality $-\J{\sqrt\cdot}\geq0$. Again, the result in \cite{zlobec2004jensen} falls back to Jensen's inequality $-\J{\sqrt\cdot}\geq0$. The result in \cite{liao2017sharpening} gives the same upper bound as ours and falls back to Jensen's inequality for the lower bound. \end{example} \begin{example} Consider $f\left(x\right)=x^{4}$ and random variables with mean at $1$. Since $f'\left(1\right)=4\neq0$, we study $g\left(x\right)=x^{4}-4(x-1)$ instead. By choosing $\alpha=2$, $n=4$, we see that $\left|\J{f}\right|\leq\frac{7+\sqrt{41}}{2}\left(1+\sigma_4^2\right)\sigma_4^2$, i.e. $\left|\J{f}\right|$ will decrease no slower than $\sim\sigma_4^{2}$. Also, since $f(x) = x^4$ is convex, our lower bound is useful. By choosing $\alpha=\beta=2$, we get $\J{f}\geq 2\sigma_1^2$, whereby the Jensen gap will decrease to $0$ no faster than $\sigma_1^2$. Since the second order derivative is not bounded, the results in \cite{costarelli2015sharp} do not apply to this example. Since the domain is not $\left[0,A\right)$ or $\left(0,A\right]$ as required by \cite{abramovich2016some}, or $\left(0,+\infty\right)$ as required by \cite{walker2014lower}, or $\left[0,+\infty\right)$ as required by \cite{abramovich2004refining}, these results do not apply. Again, \cite{zlobec2004jensen} falls back to Jensen's inequality $\J{f}\geq0$.
The result in \cite{liao2017sharpening} gives a trivial upper bound $\J{f}\leq+\infty$ and a lower bound $\J{f}\geq2\sigma_{2}^{2}$. This lower bound gives better numerical values and is usually easier to compute than ours. \end{example} \section{First Main Result: Upper bound} \label{subsec:upper-main} We first prove an upper bound on the Jensen gap and discuss the tightness of this bound in section \ref{subsec:upper-tightness}. Note that our upper bound is useful even when the function $f$ is not convex. Next, we show how to use shifts to expand the scope of our upper bound in section \ref{subsec:upper-shifting}. The upper bound in the following theorem holds for any probability distribution as long as the relevant moments are well defined. \begin{thm} \label{thm:main-result-upper}If $f:I\to\mathbb{R}$, where $I$ is a closed subset of $\mathbb{R}$ and $\mu\in I$, satisfies the following conditions: \begin{enumerate} \item $f$ is bounded on any compact subset of $I$, \item \textup{$\left|f\left(x\right)-f\left(\mu\right)\right|=O\left(\left|x-\mu\right|^{\alpha}\right)$ as $x\to\mu$ for $\alpha>0$,} \item $\left|f\left(x\right)\right|=O\left(\left|x\right|^{n}\right)$ as $x\to\infty$ for $n\geq\alpha$, \end{enumerate} then for a random variable $X$ with probability distribution $\mathcal{P}$ and expectation $\mu$, the following inequality holds: \begin{equation} \left|\E{f\left(X\right)}-f\left(\mu\right)\right|\leq M\left(\sigma_{\alpha}^{\alpha}+\sigma_{n}^{n}\right)\leq M\left(1+\sigma_{n}^{n-\alpha}\right)\sigma_{n}^{\alpha},\label{eq:sigma-alpha-n-bound} \end{equation} where $M=\sup_{x\in I\backslash\{\mu\}}\frac{\left|f\left(x\right)-f\left(\mu\right)\right|}{\left|x-\mu\right|^{\alpha}+\left|x-\mu\right|^{n}}$ does not depend on the probability distribution $\mathcal{P}$.
\end{thm} \begin{proof} We begin by showing that $g\left(x\right)=\frac{\left|f\left(x\right)-f\left(\mu\right)\right|}{\left|x-\mu\right|^{\alpha}+\left|x-\mu\right|^{n}}$ is bounded on $I\backslash\left\{\mu\right\} $. Since $\left|f\left(x\right)\right|=O\left(\left|x\right|^{n}\right)$ and $\left|x-\mu\right|^{\alpha}+\left|x-\mu\right|^{n}=\Theta\left(\left|x\right|^{n}\right)$ as $x\to\infty$, there exists $d_{1}>0$ such that $g\left(x\right)$ is bounded on $\left|x-\mu\right|\geq d_{1}$. Also, since $\left|f\left(x\right)-f\left(\mu\right)\right|=O\left(\left|x-\mu\right|^{\alpha}\right)$ and $\left|x-\mu\right|^{\alpha}+\left|x-\mu\right|^{n}=\Theta\left(\left|x-\mu\right|^{\alpha}\right)$ as $x\to\mu$, there exists $d_{2}<d_{1}$ such that $g\left(x\right)$ is bounded on $\left|x-\mu\right|\leq d_{2}$. Finally, on the compact set $d_{2}\leq\left|x-\mu\right|\leq d_{1}$ (intersected with $I$), the numerator is bounded by condition 1 and the denominator is bounded from below by $d_{2}^{\alpha}+d_{2}^{n}$, so $g\left(x\right)$ is bounded there as well. In summary, $g\left(x\right)$ is bounded on $I\backslash\left\{\mu\right\}$.
Let $M=\sup_{x\in I\backslash\{\mu\}}\frac{\left|f\left(x\right)-f\left(\mu\right)\right|}{\left|x-\mu\right|^{\alpha}+\left|x-\mu\right|^{n}}$. We then have \[ \left|f\left(x\right)-f\left(\mu\right)\right|=\left(\left|x-\mu\right|^{\alpha}+\left|x-\mu\right|^{n}\right)\cdot g\left(x\right)\leq M\left(\left|x-\mu\right|^{\alpha}+\left|x-\mu\right|^{n}\right). \] So the Jensen gap satisfies \begin{multline*} \left|\E{f\left(X\right)}-f\left(\E{X}\right)\right|\leq\int_{\mathbb{R}}\left|f\left(X\right)-f\left(\mu\right)\right|d\mathcal{P}\left(X\right)\\ \leq M\int_{\mathbb{R}}\left(\left|X-\mu\right|^{\alpha}+\left|X-\mu\right|^{n}\right)d\mathcal{P}\left(X\right)= M\left(\sigma_{\alpha}^{\alpha}+\sigma_{n}^{n}\right). \end{multline*} Also, since $\sigma_{\alpha}\leq\sigma_{n}$ for $\alpha\leq n$, we have \[ M\left(\sigma_{\alpha}^{\alpha}+\sigma_{n}^{n}\right)\leq M\left(1+\sigma_{n}^{n-\alpha}\right)\sigma_{n}^{\alpha}. \] \end{proof} If we are only interested in distributions concentrated around $\mu$, we can further simplify the inequality to the corollary below: \begin{cor} For functions that satisfy the conditions in \thmref{thm:main-result-upper}, there exists a positive number $M'$ independent of the distribution such that \begin{equation} \left|\E{f\left(X\right)}-f\left(\mu\right)\right|\leq M'\sigma_{n}^{\alpha}\label{eq:sigma-alpha-bound} \end{equation} for sufficiently small $\sigma_{n}$. \end{cor} \subsection{Tightness of upper bound\label{subsec:upper-tightness}} We show that, up to the constant $M'$, inequality \eqref{eq:sigma-alpha-bound} is sharp. \begin{prop}\label{prop:upper-sharp} Let $f\left(x\right)$ be a function that satisfies the conditions in \thmref{thm:main-result-upper} with $I=\mathbb{R}$ and satisfies $\left|f\left(x\right)-f\left(\mu\right)\right|\geq M\left|x-\mu\right|^{\alpha}$ for $x\in\mathbb{R}$ and some $M>0$.
Then for any $\sigma_{n}>0$ there exists a probability distribution $\mathcal{P}$ such that \[ \left|\E{f\left(X\right)}-f\left(\E{X}\right)\right|\geq M\sigma_{n}^{\alpha}. \] \end{prop} \begin{proof} Let $\mathcal{P}$ be discrete with \[ \mathcal{P}\left(\left\{ \mu+\sigma_n\right\} \right)=\mathcal{P}\left(\left\{ \mu-\sigma_n\right\} \right)=\frac{1}{2}, \] \[ \mathcal{P}\left(\mathbb{R}\backslash\left\{ \mu+\sigma_n,\mu-\sigma_n\right\} \right)=0. \] The Jensen gap can then be bounded as \[ \left|\E{f\left(X\right)}-f\left(\E{X}\right)\right|\geq M\int\left|X-\mu\right|^{\alpha}d\mathcal{P}\left(X\right)=M\sigma_{n}^{\alpha}. \] \end{proof} The following proposition shows that the $\sigma_{n}$ in inequality \eqref{eq:sigma-alpha-bound} cannot be replaced by $\sigma_{\beta}$ for any $\beta<n$: \begin{prop} There exists a function $f$ that satisfies the conditions in \thmref{thm:main-result-upper} such that for any $0<\beta<n$ and $\sigma_{n}>0$, there exists a probability distribution $\mathcal{P}$ that makes $\frac{\left|\J{f}\right|}{\sigma_{\beta}^{\alpha}}$ arbitrarily large. \end{prop} \begin{proof} Let $\mathcal{P}$ be discrete with \[ \mathcal{P}\left(\left\{ \mu\right\} \right)=1-p, \] \[ \mathcal{P}\left(\left\{ \mu+a\right\} \right)=\mathcal{P}\left(\left\{ \mu-a\right\} \right)=p/2, \] \[ \mathcal{P}\left(\mathbb{R}\backslash\left\{ \mu,\mu+a,\mu-a\right\} \right)=0. \] Then $\sigma_{\beta}$ can be written as \[ \sigma_{\beta}=\sqrt[\beta]{p}\cdot a. \] Let $f\left(x\right)=\left|x-\mu\right|^{\alpha}+\left|x-\mu\right|^{n}$. The absolute value of the Jensen gap can be written as \[ \left|\J{f}\right|=p\cdot\left(a^{\alpha}+a^{n}\right), \] so the ratio is \[ \frac{\left|\J{f}\right|}{\sigma_\beta^{\alpha}}=p^{1-\frac{\alpha}{\beta}}\cdot\left(1+a^{n-\alpha}\right). \] Note that $\sigma_{n}=\sqrt[n]{p}\cdot a$, so $a=\frac{\sigma_{n}}{\sqrt[n]{p}}$.
Then we can write the ratio as \[ \frac{\left|\J{f}\right|}{\sigma_\beta^{\alpha}}=p^{1-\frac{\alpha}{\beta}}\cdot\left(1+\frac{\sigma_{n}^{n-\alpha}}{p^{1-\frac{\alpha}{n}}}\right)=p^{1-\frac{\alpha}{\beta}}+p^{\alpha\left(\frac{1}{n}-\frac{1}{\beta}\right)}\cdot\sigma_{n}^{n-\alpha}. \] Since $\frac{1}{n}-\frac{1}{\beta}<0$ and $p$ can take any value in $\left(0,1\right)$, it is always possible to make the ratio arbitrarily large. \end{proof} \subsection{Expanding the scope of the upper bound by linear shifts \label{subsec:upper-shifting}} For random variables with a distribution peaked around the mean, i.e. random variables with small $\sigma_{n}$, the larger the $\alpha$ in inequality \eqref{eq:sigma-alpha-bound}, the tighter the upper bound. However, for many $f$, it is impossible to find an $\alpha>1$. For example, for functions that are differentiable at $\mu$ and have $f'\left(\mu\right)\neq0$, the largest $\alpha$ we can obtain is $\alpha=1$. Also, for convex functions that are strictly increasing at $x=\mu$, it is impossible to find an $\alpha>1$: \begin{prop} Let $f\left(x\right)$ be a convex function that is strictly increasing near $\mu$. Then for any $\alpha>1$, we have \[ \lim_{x\to\mu}\frac{\left|x-\mu\right|^{\alpha}}{f\left(x\right)-f\left(\mu\right)}=0. \] \end{prop} \begin{proof} Since $f$ is convex and strictly increasing near $\mu$, we have $f'_+(\mu)>0$ and $f'_-(\mu)>0$. So \[ \lim_{x\to\mu^+}\frac{\left|x-\mu\right|^{\alpha}}{f\left(x\right)-f\left(\mu\right)}=\lim_{x\to\mu^+}\left(x-\mu\right)^{\alpha-1}\cdot\frac{x-\mu}{f\left(x\right)-f\left(\mu\right)}=0\cdot\frac{1}{f'_+(\mu)}=0. \] The same argument applies for $x\to\mu^-$. \end{proof} Although the inability to obtain an $\alpha>1$ may seem a major limitation, in most cases it can be removed by subtracting a linear function from $f$, since this changes neither the convexity nor the Jensen gap.
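The invariance of the Jensen gap under subtraction of a linear function is immediate, since $\E{X-\mu}=0$; a quick numerical illustration (with an arbitrary choice of $f(x)=e^x$ and a small discrete distribution):

```python
import math

xs = [-1.0, 0.5, 1.5]
ps = [0.25, 0.5, 0.25]
mu = sum(p * x for p, x in zip(ps, xs))

def gap(f):
    """Jensen gap E[f(X)] - f(E[X]) for the discrete distribution above."""
    return sum(p * f(x) for p, x in zip(ps, xs)) - f(mu)

f = math.exp
g = lambda x: math.exp(x) - math.exp(mu) * (x - mu)  # f minus its linear part at mu
# gap(f) == gap(g): the subtracted linear term integrates to zero.
```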
For functions that are differentiable at $x=\mu$, Taylor's theorem with Peano's form of the remainder gives \[ f\left(x\right)=f\left(\mu\right)+f'\left(\mu\right)\left(x-\mu\right)+o\left(x-\mu\right). \] We can therefore study $g\left(x\right)=f\left(x\right)-f'\left(\mu\right)\left(x-\mu\right)$ instead of $f\left(x\right)$. Then $g\left(x\right)-g\left(\mu\right)=o\left(x-\mu\right)$, so $g$ admits an $\alpha$ value at least as large as that of $f\left(x\right)$. If, moreover, $f\left(x\right)$ has a well-defined second derivative, we have \[ f\left(x\right)=f\left(\mu\right)+f'\left(\mu\right)\left(x-\mu\right)+\frac{f''\left(\xi_{L}\right)}{2}\left(x-\mu\right)^{2}, \] that is, \begin{equation} g\left(x\right)-g\left(\mu\right)=\frac{f''\left(\xi_{L}\right)}{2}\left(x-\mu\right)^{2},\label{eq:gx} \end{equation} which implies $\alpha=2$. If $f''(\mu)=0$, we can apply similar arguments to higher order derivatives to find the best $\alpha$. Note that if we define $h\left(x;\mu\right)\equiv\frac{f''\left(\xi_{L}\right)}{2}\equiv\frac{f\left(x\right)-f\left(\mu\right)-f'\left(\mu\right)\left(x-\mu\right)}{\left(x-\mu\right)^{2}}$, then \eqref{eq:gx} can be written as $g\left(x\right)-g\left(\mu\right)=h\left(x;\mu\right)\left(x-\mu\right)^{2}$, which further gives \begin{equation} \inf_{x}h\left(x;\mu\right)\cdot\mathrm{Var}\left[X\right]\leq\J{f}\leq\sup_{x}h\left(x;\mu\right)\cdot\mathrm{Var}\left[X\right],\label{eq:liaoresult} \end{equation} as shown in~\cite{liao2017sharpening}. If $\left|f\left(x\right)\right|\neq O\left(x^{2}\right)$ at $x\to\infty$, then $\sup_{x}h\left(x;\mu\right)=+\infty$ or $\inf_{x}h\left(x;\mu\right)=-\infty$ or both, which means that at least half of \eqref{eq:liaoresult} becomes a trivial inequality $-\infty\leq\J{f}$ or $\J{f}\leq+\infty$. On the other hand, if $\left|f\left(x\right)\right|=O\left(x^{2}\right)$ at $x\to\infty$, we have $n=2$.
If this is the case, the preceding constant $M$ in theorem \ref{thm:main-result-upper} can then be written as $M=\frac{1}{2}\sup_{x}\left|h\left(x;\mu\right)\right|$, and inequality \eqref{eq:sigma-alpha-n-bound} therefore becomes $-\sup_{x}\left|h\left(x;\mu\right)\right|\cdot\sigma_{2}^{2}\leq\mathcal{J}\leq\sup_{x}\left|h\left(x;\mu\right)\right|\cdot\sigma_{2}^{2}$, which is equivalent to \eqref{eq:liaoresult} in half or in full\footnote{If $\left|\sup_{x}h\left(x;\mu\right)\right|>\left|\inf_{x}h\left(x;\mu\right)\right|$, then we must have $\sup_{x}\left|h\left(x;\mu\right)\right|=\sup_{x}h\left(x;\mu\right)$, which means the $\J{f}\leq$ part of theorem \ref{thm:main-result-upper} and of \eqref{eq:liaoresult} are equivalent. If $\left|\sup_{x}h\left(x;\mu\right)\right|<\left|\inf_{x}h\left(x;\mu\right)\right|$, then we must have $-\sup_{x}\left|h\left(x;\mu\right)\right|=\inf_{x}h\left(x;\mu\right)$, which means the $\leq\J{f}$ part of theorem \ref{thm:main-result-upper} and of \eqref{eq:liaoresult} are equivalent. If $\left|\sup_{x}h\left(x;\mu\right)\right|=\left|\inf_{x}h\left(x;\mu\right)\right|$ and $h\left(x;\mu\right)$ is not constant, then we must have $\sup_{x}\left|h\left(x;\mu\right)\right|=-\inf_{x}h\left(x;\mu\right)=\sup_{x}h\left(x;\mu\right)$, hence in this case, theorem \ref{thm:main-result-upper} is fully equivalent to \eqref{eq:liaoresult}.}. Due to these connections, Lemma 1 in \cite{liao2017sharpening} gives a convenient way to compute the $M$ in inequality \eqref{eq:sigma-alpha-n-bound} when $f'\left(x\right)$ is convex or concave. \section{Second Main Result: Lower bound} \label{subsec:lower-main} We first prove our lower bound under conditions similar to those of the upper bound. The tightness of this bound is discussed in section \ref{subsec:lower-tight}, followed, in section \ref{subsec:lower-convex}, by its implications for convex functions and by an extension of its scope via linear shifts. 
The lower bound given in the following theorem holds for any probability distribution as long as the relevant moments are well-defined. \begin{thm}\label{thm:main-result-lower} If function $f:I\to\mathbb{R}$, where $I$ is a closed subset of $\mathbb{R}$ and $\mu\in I$, satisfies the following conditions: \begin{enumerate} \item $f(x)-f(\mu)>0$ for $x\neq\mu$ \item $f\left(x\right)-f\left(\mu\right)=\Omega\left(\left|x-\mu\right|^{\alpha}\right)$ at $x\to\mu$ for $\alpha>0$ \item $f\left(x\right)-f\left(\mu\right)=\Omega\left(\left|x-\mu\right|^{\beta}\right)$ at $x\to\infty$ for $0\leq\beta\leq\alpha$ \end{enumerate} then for any random variable $X$ with probability distribution $\mathcal{P}$ that has expectation $\mu$, the following inequality holds: \begin{equation} \J{f}\geq M\frac{\sigma_{\alpha/2}^{\alpha}}{1+\sigma_{\alpha-\beta}^{\alpha-\beta}}\label{eq:cauchy-schwartz-lower-bound} \end{equation} where $M=\inf_{x\in I\backslash \{\mu\}}\left\{ \left[f\left(x\right)-f\left(\mu\right)\right]\cdot\left(\frac{1}{\left|x-\mu\right|^{\beta}}+\frac{1}{\left|x-\mu\right|^{\alpha}}\right)\right\}>0$ does not depend on the probability distribution $\mathcal{P}$. \end{thm} \begin{proof} Let \[ g\left(x\right)=\left(\frac{1}{\left|x-\mu\right|^{\beta}}+\frac{1}{\left|x-\mu\right|^{\alpha}}\right)^{-1}. \] From the definition of $M$, we know that $f\left(x\right)-f\left(\mu\right)\geq M\cdot g(x)$. We first prove that $M>0$. It is easy to see that $g\left(x\right)$ is positive at $x\neq\mu$, $g\left(x\right)=\Theta\left(\left|x-\mu\right|^{\alpha}\right)$ at $x\to\mu$, and $g\left(x\right)=\Theta\left(\left|x-\mu\right|^{\beta}\right)$ at $x\to\infty$. Therefore, there exist positive numbers $M_{1}$, $M_{2}$ and $d_{1}\leq d_{2}$ such that $f\left(x\right)-f\left(\mu\right)\geq M_{1}\cdot g\left(x\right)$ at $\left|x-\mu\right|\leq d_{1}$ and $f\left(x\right)-f\left(\mu\right)\geq M_{2}\cdot g\left(x\right)$ at $\left|x-\mu\right|\geq d_{2}$. 
Since the set $\left\{ x:d_{1}\leq\left|x-\mu\right|\leq d_{2}\right\}$ is compact and both $f\left(x\right)-f\left(\mu\right)$ and $g\left(x\right)$ are positive on it, there exists $M_{3}>0$ such that $f\left(x\right)-f\left(\mu\right)\geq M_{3}\cdot g\left(x\right)$ there. Taking $M'=\min\left\{ M_{1},M_{2},M_{3}\right\}>0$, we have $f\left(x\right)-f\left(\mu\right)\geq M'g\left(x\right)$ for all $x$. That is, $\frac{f\left(x\right)-f\left(\mu\right)}{g(x)}$ is bounded from below by some positive number. Therefore \[ M=\inf_{x\in I\backslash \{\mu\}}\frac{f\left(x\right)-f\left(\mu\right)}{g(x)}>0. \] Since $f\left(x\right)-f\left(\mu\right)\geq M\cdot g\left(x\right)$, we have \begin{multline} \J{f}\geq M\int\left(\frac{1}{\left|X-\mu\right|^{\beta}}+\frac{1}{\left|X-\mu\right|^{\alpha}}\right)^{-1}d\mathcal{P}\left(X\right)\\ =M\int\frac{\left|X-\mu\right|^{\alpha}}{\left|X-\mu\right|^{\alpha-\beta}+1}d\mathcal{P}\left(X\right)\label{eq:ineq-int} \end{multline} The Cauchy\textendash Schwarz inequality can be used to simplify the above inequality: let \[ g_{1}\left(X\right)=\sqrt{\frac{\left|X-\mu\right|^{\alpha}}{\left|X-\mu\right|^{\alpha-\beta}+1}} \] and \[ g_{2}\left(X\right)=\sqrt{\left|X-\mu\right|^{\alpha-\beta}+1}. \] The Cauchy\textendash Schwarz inequality \[ \E{g_{1}^{2}\left(X\right)}\E{g_{2}^{2}\left(X\right)}\geq\E{g_{1}\left(X\right)g_{2}\left(X\right)}^{2} \] can be rewritten as \[ \E{g_{1}^{2}\left(X\right)}\geq\frac{\E{g_{1}\left(X\right)g_{2}\left(X\right)}^{2}}{\E{g_{2}^{2}\left(X\right)}}. \] Note that \[ \E{g_{1}^{2}\left(X\right)}=\int\frac{\left|X-\mu\right|^{\alpha}}{\left|X-\mu\right|^{\alpha-\beta}+1}d\mathcal{P}\left(X\right) \] \[ \E{g_{2}^{2}\left(X\right)}=\int\left(\left|X-\mu\right|^{\alpha-\beta}+1\right)d\mathcal{P}\left(X\right)=1+\sigma_{\alpha-\beta}^{\alpha-\beta} \] \[ \E{g_{1}\left(X\right)g_{2}\left(X\right)}^{2}=\left(\int\left|X-\mu\right|^{\alpha/2}d\mathcal{P}\left(X\right)\right)^{2}=\sigma_{\alpha/2}^{\alpha} \] We therefore have \[ 
\int\frac{\left|X-\mu\right|^{\alpha}}{\left|X-\mu\right|^{\alpha-\beta}+1}d\mathcal{P}\left(X\right)\geq\frac{\sigma_{\alpha/2}^{\alpha}}{1+\sigma_{\alpha-\beta}^{\alpha-\beta}} \] Plugging into \eqref{eq:ineq-int}, we have \begin{equation*} \J{f}\geq M\frac{\sigma_{\alpha/2}^{\alpha}}{1+\sigma_{\alpha-\beta}^{\alpha-\beta}} \end{equation*} \end{proof} Note that if we replace all $f(x)-f(\mu)$ with $f(\mu)-f(x)$, and at the same time replace $\J{f}$ with $-\J{f}$, \thmref{thm:main-result-lower} still holds. Also note that by replacing the Cauchy\textendash Schwarz inequality with H\"older's inequality in the proof of the above theorem, we can get a more general but less pleasing result: \begin{thm} The inequality \eqref{eq:cauchy-schwartz-lower-bound} in \thmref{thm:main-result-lower} can be replaced by the following inequality: \begin{equation}\label{eq:lower-bound-holder} \J{f}\geq M\frac{\left[\sum_{l=0}^{\left(k+1\right)/q-1}\left(\begin{array}{c} \left(k+1\right)/q-1\\ l \end{array}\right)\sigma_{\alpha/p+l\left(\alpha-\beta\right)}^{\alpha/p+l\left(\alpha-\beta\right)}\right]^{p}}{\left[\sum_{l=0}^{k}\left(\begin{array}{c} k\\ l \end{array}\right)\sigma_{l\left(\alpha-\beta\right)}^{l\left(\alpha-\beta\right)}\right]^{p/q}} \end{equation} where $k\geq1$ is an integer, $q$ can be any positive factor of $\left(k+1\right)$ except $1$, and $p=\frac{q}{q-1}$. 
\end{thm} \begin{proof} Following the same steps as in the Cauchy\textendash Schwarz version, but this time introducing a new integral parameter $k\geq1$, and setting \[ g_{1}\left(X\right)=\sqrt[p]{\frac{\left|X-\mu\right|^{\alpha}}{\left|X-\mu\right|^{\alpha-\beta}+1}} \] and \[ g_{2}\left(X\right)=\sqrt[q]{\left(\left|X-\mu\right|^{\alpha-\beta}+1\right)^{k}} \] we have \[ \E{g_{1}^{p}\left(X\right)}=\int\frac{\left|X-\mu\right|^{\alpha}}{\left|X-\mu\right|^{\alpha-\beta}+1}d\mathcal{P}\left(X\right) \] \[ \E{g_{2}^{q}\left(X\right)}=\int\left(\left|X-\mu\right|^{\alpha-\beta}+1\right)^{k}d\mathcal{P}\left(X\right)=\sum_{l=0}^{k}\left(\begin{array}{c} k\\ l \end{array}\right)\sigma_{l\left(\alpha-\beta\right)}^{l\left(\alpha-\beta\right)} \] \begin{multline*} \E{g_{1}\left(X\right)g_{2}\left(X\right)}=\int\left|X-\mu\right|^{\alpha/p}\left(\left|X-\mu\right|^{\alpha-\beta}+1\right)^{k/q-1/p}d\mathcal{P}\left(X\right)\\ =\int\left|X-\mu\right|^{\alpha/p}\left(\left|X-\mu\right|^{\alpha-\beta}+1\right)^{\left(k+1\right)/q-1}d\mathcal{P}\left(X\right)\\ =\sum_{l=0}^{\left(k+1\right)/q-1}\left(\begin{array}{c} \left(k+1\right)/q-1\\ l \end{array}\right)\sigma_{\alpha/p+l\left(\alpha-\beta\right)}^{\alpha/p+l\left(\alpha-\beta\right)}. \end{multline*} From H\"older's inequality, we know that \[ \E{g_{1}^{p}\left(X\right)}\geq\frac{\E{g_{1}\left(X\right)g_{2}\left(X\right)}^{p}}{\E{g_{2}^{q}\left(X\right)}^{\frac{p}{q}}}=\frac{\left[\sum_{l=0}^{\left(k+1\right)/q-1}\left(\begin{array}{c} \left(k+1\right)/q-1\\ l \end{array}\right)\sigma_{\alpha/p+l\left(\alpha-\beta\right)}^{\alpha/p+l\left(\alpha-\beta\right)}\right]^{p}}{\left[\sum_{l=0}^{k}\left(\begin{array}{c} k\\ l \end{array}\right)\sigma_{l\left(\alpha-\beta\right)}^{l\left(\alpha-\beta\right)}\right]^{p/q}} \] which immediately yields inequality \eqref{eq:lower-bound-holder}. \end{proof} Although general, inequality \eqref{eq:lower-bound-holder} is too cumbersome to be useful. 
To simplify it, we can take $q=k+1$ and therefore $p=1+\frac{1}{k}$. We then have \begin{equation} \label{eq:lower-bound-holder-special-q} \J{f}\geq M\frac{\sigma_{\alpha/\left(1+\frac{1}{k}\right)}^{\alpha}}{\left[\sum_{l=0}^{k}\left(\begin{array}{c} k\\ l \end{array}\right)\sigma_{l\left(\alpha-\beta\right)}^{l\left(\alpha-\beta\right)}\right]^{1/k}} \end{equation} Note that applying inequality \eqref{eq:lower-bound-holder-special-q} to $f(x)=\left|x\right|^\alpha$, we obtain \begin{equation} \label{eq:lower-bound-special} \sigma_\alpha\geq\sigma_{\alpha/\left(1+\frac{1}{k}\right)} \end{equation} which is a special case of the inequality \[ \E{\left|X\right|^{r}}\leq\E{\left|X\right|^{s}}^{\frac{r}{s}} \] for $0<r<s$. \subsection{Tightness of the lower bound\label{subsec:lower-tight}} Since inequality \eqref{eq:lower-bound-special} is a special case of \eqref{eq:lower-bound-holder-special-q} and since \eqref{eq:lower-bound-special} is sharp, it follows that \eqref{eq:lower-bound-holder-special-q} is sharp. In inequality \eqref{eq:lower-bound-holder-special-q}, as the centered absolute moments decrease to $0$, the denominator decreases to $1$, so it is the numerator that characterizes how fast the Jensen gap decreases to $0$. Since $\sigma_r\leq\sigma_s$ for $r\leq s$, having a larger subscript in the numerator means a tighter result. In \eqref{eq:lower-bound-holder-special-q}, the subscript of the numerator can be increased to a value arbitrarily close to $\alpha$ by choosing a larger $k$, but as a side effect, this also brings higher-order moments into the denominator. A natural question is therefore whether we can increase the subscript of the numerator of \eqref{eq:lower-bound-holder-special-q} without bringing higher-order moments into the denominator. 
The following proposition shows that the answer is no: if we increase the subscript beyond the value in \eqref{eq:lower-bound-holder-special-q}, it is possible to construct a sequence of probability distributions that makes the moments in the denominator decrease to $0$ while at the same time making the ratio between the Jensen gap and the numerator go to zero (therefore making it impossible to find an $M$ for which the $\geq$ in \eqref{eq:lower-bound-holder-special-q} holds): \begin{prop} Let $f$ be a function satisfying the conditions in \thmref{thm:main-result-lower} with $f(x)=\Theta(\left|x\right|^\beta)$ at $x\to\infty$. Then for any $q>\alpha/\left(1+\frac{1}{k}\right)$, there exists a sequence of probability distributions $\mathcal{P}^{(1)}, \mathcal{P}^{(2)}, \ldots$ such that $\sigma_{r}^{(j)}$ is non-increasing with respect to $j$ for all $r\leq k(\alpha-\beta)$ and \[ \lim_{j\to+\infty}\frac{\mathcal{J}\left(f,X\sim\mathcal{P}^{(j)}\right)}{\left[\sigma_q^{(j)}\right]^\alpha}=0. \] \end{prop} \begin{proof} Let $m=k(\alpha-\beta)$. Without loss of generality, assume $\mu=0$, $f(x)$ is even, and $f(0)=0$. Let $\mathcal{P}^{(j)}$ be the discrete probability distribution with \[ \mathcal{P}^{(j)}\left(\left\{j\right\} \right)=\mathcal{P}^{(j)}\left(\left\{ -j\right\} \right)=\frac{1}{2j^m}, \] \[ \mathcal{P}^{(j)}\left(\left\{0\right\}\right)=1-\frac{1}{j^m}, \] \[ \mathcal{P}^{(j)}\left(\mathbb{R}\backslash\left\{ 0,\pm j\right\} \right)=0. \] It is easy to see that \[ \sigma_r^{(j)}=j^{1-\frac{m}{r}} \] does not increase as $j$ increases for $r\leq m$, and \[ \mathcal{J}\left(f,X\sim\mathcal{P}^{(j)}\right)=\frac{f(j)}{j^m}=\Theta\left(j^{\beta-m}\right) \] at $j\to+\infty$. We then have \[ \frac{\mathcal{J}\left(f,X\sim\mathcal{P}^{(j)}\right)}{\left[\sigma_q^{(j)}\right]^\alpha}=\Theta\left[j^{\beta-m-\alpha\left(1-\frac{m}{q}\right)}\right]. \] In the case that $q>\alpha/\left(1+\frac{1}{k}\right)$, we have $\beta-m-\alpha\left(1-\frac{m}{q}\right)<0$. 
Therefore \[ \Theta\left[j^{\beta-m-\alpha\left(1-\frac{m}{q}\right)}\right]\to0 \] as $j\to+\infty$. \end{proof} \subsection{The lower bound for convex functions\label{subsec:lower-convex}} The conditions of \thmref{thm:main-result-lower} are hard for a general function to satisfy. This is not surprising: Jensen's inequality itself only holds for convex functions, so no lower bound on the Jensen gap can be expected for general functions. In this section, we show that convexity, combined with a suitable linear shift, implies the conditions in \thmref{thm:main-result-lower}. The argument in this section also applies to concave functions. In order for the positivity condition in \thmref{thm:main-result-lower} to be satisfied, a convex function $f(x)$ needs to be non-increasing on $\left(-\infty,\mu\right]$ and non-decreasing on $\left[\mu,+\infty\right)$. The following proposition shows that it is always possible to shift a convex function by a linear function to make it so. \begin{prop} For any convex function $f\left(x\right)$ and any real number $a$ satisfying $f'_-(\mu)\leq a\leq f'_+(\mu)$, the linear shift $g\left(x\right)=f\left(x\right)-a\left(x-\mu\right)$ is non-increasing on $\left(-\infty,\mu\right]$ and non-decreasing on $\left[\mu,+\infty\right)$. In particular, if $f\left(x\right)$ is differentiable at $\mu$, then $a$ is unique and given by $a=f'\left(\mu\right)$. \end{prop} \begin{proof} For $\mu \leq x < x'$, we have \[ \frac{g(x')-g(x)}{x'-x} = \frac{f(x')-f(x)}{x'-x} - a \geq f'_+(\mu)-a\geq0. \] That is, $g(x')\geq g(x)$. A similar argument applies to the $x\leq\mu$ half of $g(x)$. \end{proof} Convexity also implies that $\alpha$ can only take values in $\left[1,+\infty\right)$, as shown in the following proposition: \begin{prop} There does not exist any convex function that has $f\left(x\right)-f\left(\mu\right)=\Omega\left(\left|x-\mu\right|^{\alpha}\right)$ at $x\to\mu$ with $\alpha<1$. 
\end{prop} \begin{proof} Since $f\left(x\right)-f\left(\mu\right)=\Omega\left(\left|x-\mu\right|^{\alpha}\right)$ as $x\to\mu$, there exist positive numbers $d$ and $M$ such that $f\left(x\right)-f\left(\mu\right)\geq M\left|x-\mu\right|^{\alpha}$ at $\left|x-\mu\right|\leq d$. Then for any $x$ with $\mu<x<\mu+d$, we have \[ \frac{f\left(x\right)-f\left(\mu\right)}{x-\mu}\geq M\left(x-\mu\right)^{\alpha-1}. \] Since $\frac{f\left(x\right)-f\left(\mu\right)}{x-\mu}$ is non-decreasing with respect to $x$ (and thus bounded above on $\left(\mu,\mu+d\right)$ by its value at $\mu+d$), while for $\alpha<1$ the right-hand side $\left(x-\mu\right)^{\alpha-1}$ becomes arbitrarily large as $x\to\mu^+$, the above inequality cannot hold as $x$ decreases to $\mu$. \end{proof} The following proposition shows that for convex functions it is possible to find a $\beta$ at least as large as $1$: \begin{prop} If $f\left(x\right)$ is convex, then at $x\to\infty$, $\left|f\left(x\right)\right|$ is either constant or $\Omega\left(\left|x\right|\right)$. \end{prop} \begin{proof} If $f\left(x\right)$ is constant, the proposition automatically holds. Otherwise, let $x_{0}<x_{1}$ be two real numbers such that $f\left(x_{0}\right)\neq f\left(x_{1}\right)$. Without loss of generality, assume $f\left(x_{0}\right)<f\left(x_{1}\right)$. Since $f\left(x\right)$ is convex, for any $x>x_{1}$ \[ \frac{f\left(x\right)-f\left(x_{0}\right)}{x-x_{0}}\geq\frac{f\left(x_{1}\right)-f\left(x_{0}\right)}{x_{1}-x_{0}}>0. \] That is, \[ f\left(x\right)-f\left(x_{0}\right)\geq\frac{f\left(x_{1}\right)-f\left(x_{0}\right)}{x_{1}-x_{0}}\cdot\left(x-x_{0}\right), \] i.e., $f\left(x\right)=\Omega\left(x\right)$ at $x\to+\infty$. Considering the remaining cases, i.e. when $x\to-\infty$ and when $f\left(x_{0}\right)>f\left(x_{1}\right)$, we get $\left|f\left(x\right)\right|=\Omega\left(\left|x\right|\right)$. 
\end{proof} Although, for the lower bound, our result has no equivalence relation with \cite{liao2017sharpening} analogous to the one discussed in section \ref{subsec:upper-shifting}, the preceding constant $M$ in our theorem \ref{thm:main-result-lower}, after the linear shift, can be written as $M=2\cdot\inf_{x\neq\mu}h\left(x;\mu\right)$ when $\alpha=\beta=2$. For this special case, when $f'\left(x\right)$ is convex or concave, Lemma 1 in \cite{liao2017sharpening} is still helpful. \section{Further Discussion, Conclusion and Open problems} The procedure in the proofs of theorems \ref{thm:main-result-upper} and \ref{thm:main-result-lower} can be thought of as a general scheme, also followed by \cite{liao2017sharpening}, for obtaining bounds on the Jensen gap. This procedure first writes $f\left(x\right)-f\left(\mu\right)$ as a product of two functions, say $s\left(x\right)t\left(x\right)$, where the $\sup$ and $\inf$ of $s$ are easy to compute and the integral $\int t\left(X\right)d\mathcal{P}\left(X\right)$ can be easily computed or further bounded. We then have \[ \inf s\left(x\right)\cdot\int t\left(X\right)d\mathcal{P}\left(X\right)\leq\J{f}\leq\sup s\left(x\right)\cdot\int t\left(X\right)d\mathcal{P}\left(X\right). \] The above formula gives a very general way to bound the Jensen gap. 
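The sandwich above is easy to verify numerically. In the sketch below, the distribution, the convex choice $f(x)=\cosh x$, and the factor $t\left(x\right)=\left(x-\mu\right)^{2}+\left|x-\mu\right|^{4}$ are all our own illustrative choices; $s\left(x\right)=\left[f\left(x\right)-f\left(\mu\right)\right]/t\left(x\right)$ is evaluated on the support of $X$.

```python
import numpy as np

# Sketch of the general scheme f(x) - f(mu) = s(x) t(x):
#   inf s * E[t(X)] <= J(f) <= sup s * E[t(X)].
# The distribution, f, and t below are illustrative choices.
xs = np.array([-1.5, 0.0, 2.0, 4.0])
ps = np.array([0.2, 0.3, 0.3, 0.2])
mu = float(np.dot(ps, xs))                         # mu = 1.1

f = np.cosh
t = lambda x: (x - mu) ** 2 + np.abs(x - mu) ** 4  # alpha = 2, n = 4

s = (f(xs) - f(mu)) / t(xs)                        # s on the support of X
gap = float(np.dot(ps, f(xs)) - f(mu))             # Jensen gap J(f)
Et = float(np.dot(ps, t(xs)))                      # E[t(X)]
assert s.min() * Et <= gap <= s.max() * Et
```

The same sandwich holds for any factorization with $t\left(x\right)\geq0$.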
For example, instead of using $t\left(x\right)=\left|x-\mu\right|^{\alpha}+\left|x-\mu\right|^{n}$ as in theorem \ref{thm:main-result-upper}, the reader can choose a more general form $t\left(x\right)=\sum_{\alpha\leq\eta\leq n}a_{\eta}\left|x-\mu\right|^{\eta}$, where the values of $\eta$ and $a_{\eta}$ are chosen based on the application to better approximate $f\left(x\right)$, and obtain an improved upper bound \[ \mathcal{J}\leq\sup\frac{f\left(x\right)-f\left(\mu\right)}{t\left(x\right)}\cdot\left(\sum_{\alpha\leq\eta\leq n}a_{\eta}\sigma_{\eta}^{\eta}\right). \] Similarly, instead of using $t\left(x\right)=\left(\frac{1}{\left|x-\mu\right|^{\alpha}}+\frac{1}{\left|x-\mu\right|^{\beta}}\right)^{-1}$ as in theorem \ref{thm:main-result-lower}, the reader can choose $t\left(x\right)=\left(\sum_{\beta\leq\eta\leq\alpha}\frac{a_{\eta}}{\left|x-\mu\right|^{\eta}}\right)^{-1}$, where the values of $\eta$ and $a_{\eta}$ depend on the application, and obtain an improved lower bound \[ \mathcal{J}\geq\inf\frac{f\left(x\right)-f\left(\mu\right)}{t\left(x\right)}\cdot\frac{\sigma_{\alpha/2}^{\alpha}}{\sum_{\beta\leq\eta\leq\alpha}a_{\eta}\sigma_{\alpha-\eta}^{\alpha-\eta}} \] or \[ \mathcal{J}\geq\inf\frac{f\left(x\right)-f\left(\mu\right)}{t\left(x\right)}\cdot\frac{\sigma_{\alpha/\left(1+\frac{1}{k}\right)}^{\alpha}}{\left(\sum_{\beta\leq\eta_{1},\cdots,\eta_{k}\leq\alpha}a_{\eta_{1}}\cdots a_{\eta_{k}}\sigma_{k\alpha-\eta_{1}-\cdots-\eta_{k}}^{k\alpha-\eta_{1}-\cdots-\eta_{k}}\right)^{1/k}}. \] We have obtained general upper and lower bounds on the Jensen gap that depend on the asymptotic growth of the function and on the corresponding moments of the random variable's distribution, and we have compared the new bounds with existing upper and lower bounds. Although fairly general, some conditions in our theorems are still too strong for certain situations. For example, in our upper bound, we require the function to grow no faster than a polynomial at $x\to\infty$, which excludes some useful functions, such as exponential functions. 
Also, in our upper bound we require the function to be bounded on any compact subset of $\mathbb{R}$, which excludes the study of the Jensen gap for functions like $\log(x)$ and $\frac{1}{x}$ with a random variable $X$ on $\left(0,+\infty\right)$. Future work is proposed to extend our results to include such cases. \section{Acknowledgement} This work is partly funded by the National Institutes of Health [grant number R01GM110077]. The authors would like to thank Justin S. Smith for a thorough proofreading to correct grammatical errors. The authors would also like to thank J.G. Liao for the discussion on the connection between our results and his work \cite{liao2017sharpening}. \section*{References} \bibliographystyle{plain}
\section{Introduction} A small value (several tens of keV) of the spreading width of the Isobaric Analog Resonances (IARs), $\Gamma^{\downarrow}_{A}$, is an impressive manifestation of approximate isospin-symmetry conservation in medium-heavy mass nuclei. The spreading width of an arbitrary giant resonance (including the IAR) is determined by the coupling of the corresponding particle-hole-type excitations to many-quasiparticle configurations (chaotic states). In the case of the IAR, this coupling is significantly suppressed and is realized only due to isospin mixing. In medium-heavy mass nuclei, the main mixing mechanism consists in the IAR coupling to its overtone (the Isovector Monopole Giant Resonance in the $\beta^{-}$-channel, IVGMR$^{(-)}$) via a variable part of the mean Coulomb field (see, e.g., Ref. \cite{Auerbach}). A realistic attempt to estimate quantitatively the IAR spreading width has been undertaken rather recently \cite{GRU_2010} within an approach that includes the ``Coulomb description'' of IAR properties and a treatment of the spreading effect on the properties of giant resonances having ``normal'' isospin within a semi-microscopic model \cite{Urin_NPA_2008}. A shortcoming of this model is the incorrect description of the mentioned giant resonances at their distant ``tails'' (the IAR is located at the low-energy ``tail'' of the IVGMR$^{(-)}$). In the present work, for the description of the spreading effect we apply the recently developed particle-hole dispersive optical model \cite{Urin_PRC_2013}, which is free from the above-mentioned shortcoming. In Sect. 2, we present the basic relationships used for the description of the IAR spreading width within the proposed approach. The choice of model parameters, the calculation results obtained for the IARs based on the $^{208, 209}$Pb parent-nuclei ground states, and a comparison with the corresponding experimental data are given in Sect. 3. 
Concluding remarks and perspectives for further studies of IAR damping by means of the proposed approach are contained in Sect. 4. \section{``Coulomb description'' of IAR damping} \subsection{Coulomb strength function and the IAR total width} The existence and properties of the IARs are closely related to the approximate isospin-symmetry conservation in nuclei. Let $\hat{H}$ be a model Hamiltonian that includes the mean Coulomb field $\widehat{U}_{C} = \frac{1}{2} \sum \limits_{a}(1 - \tau^{(3)}_a) U_C(r_{a})$. In medium-heavy mass nuclei, a variable (in space) part of this field is mainly responsible for isospin-symmetry violation (see, e.g., Ref. \cite{Auerbach}). In such a case, the equation of motion for the Fermi operator $\widehat{T}^{(-)} = \sum \limits_{a} \tau^{(-)}_{a}$, which generates the proton-neutron-hole $(p\bar{n})$ monopole excitations associated with the IAR, can be presented in the form \cite{GRU_2010}: \begin{equation}\label{eq_commut} [\widehat{H}, \widehat{T}^{(-)}] -\Delta_{C}\widehat{T}^{(-)} = \widehat{V}^{(-)}_{C}, \quad \widehat{V}^{(-)}_{C} = \sum \limits_{a} \left( U_{C}(r_a)-\Delta_C \right) \tau^{(-)}_a, \end{equation} where the parameter $\Delta_{C}$ is defined below. This equation allows one to get a correspondence between the Fermi and Coulomb energy-averaged strength functions, $S^{(-)}_{F}(\omega)$ and $S^{(-)}_{C}(\omega)$, respectively: \begin{equation}\label{eq_SF-SC} S^{(-)}_{F}(\omega) = \frac{S^{(-)}_{C}(\omega)}{|\omega - \Delta_C|^2}. \end{equation} Here, $\omega$ is the excitation energy measured from the energy of the parent-nucleus ground state; the Fermi and Coulomb energy-averaged strength functions are related to the monopole probing operators (external fields) $V^{(-)}_{F}(x) = \tau^{(-)}$ and $V^{(-)}_{C}(x) = (U_{C}(r) - \Delta_{C})\tau^{(-)}$, respectively. In the experimental excitation functions of $(pp')$- and $(pn_{tot})$-reactions, the IAR is found as a narrow, well-formed resonance. 
For this reason, the Fermi strength function in the vicinity of the IAR can be parameterized by a Lorentzian: \begin{equation}\label{eq_SF_lorentz} S^{(-)}_{F} = \frac{1}{2\pi}\frac{S_A \Gamma_A}{|\omega - \omega_A + i\Gamma_A /2|^2}, \end{equation} where $S_{A}$ is the IAR Fermi strength (close to the neutron excess), and $\omega_{A}$ and $\Gamma_{A}$ are, respectively, the IAR excitation energy and total width. As follows from Eqs. (\ref{eq_commut})--(\ref{eq_SF_lorentz}), the IAR total width is determined by the (generally speaking, transcendental) equation: \begin{equation}\label{eq_width_SC} \Gamma_{A} = \frac{2 \pi}{S_A} S^{(-)}_{C}(\omega_A). \end{equation} In this equation, the Coulomb strength function is defined with the use of the complex-valued quantity $\Delta_{C} = \omega_{A} - (i/2)\Gamma_{A}$. As a function of $\omega$, this strength function exhibits a maximum corresponding to the IVGMR$^{(-)}$. Considering this resonance as a Lorentzian, one gets from Eq. (\ref{eq_width_SC}) the known qualitative estimate of the IAR total width: $\Gamma_{A} = \beta^2_{M, A} \Gamma_{M}(\omega_A)$ \cite{Auerbach}. Here, $\beta_{M, A}$ is the amplitude of the IAR and IVGMR$^{(-)}$ isospin mixing caused by a variable part of the mean Coulomb field, and $\Gamma_{M}$ is the IVGMR$^{(-)}$ total width taken at the IAR energy. This estimate shows that a quantitative description of the IAR total width and its main components, the proton-escape width $\Gamma^{\uparrow}_{A}$ and the spreading width $\Gamma^{\downarrow}_{A}$ ($\Gamma^{\uparrow}_{A} + \Gamma^{\downarrow}_{A} = \Gamma_{A}$), requires a correct description of the distant low-energy ``tail'' of the IVGMR$^{(-)}$. Hereafter, we neglect ``rare'' IAR decays (such as radiative and direct-neutron decays). 
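For orientation only, the Lorentzian estimate can be reproduced with a toy numerical sketch; all numbers below are illustrative placeholders (not fitted to data), and the Lorentzian form of $S^{(-)}_{C}(\omega)$ is the simplifying assumption just stated.

```python
import numpy as np

# Toy check of the qualitative estimate Gamma_A = beta^2 * Gamma_M,
# obtained by modeling S_C^(-)(omega) as a Lorentzian centered on the
# IVGMR^(-).  All numbers are illustrative placeholders (MeV units).
omega_A, omega_M, Gamma_M = 18.8, 37.0, 12.0
S_A = 44.0          # IAR Fermi strength ~ neutron excess N - Z of 208Pb
S_C_tot = 150.0     # toy total Coulomb strength of the IVGMR^(-)

def S_C(omega):
    # Lorentzian model of the Coulomb strength function
    return S_C_tot * Gamma_M / (
        2.0 * np.pi * ((omega - omega_M) ** 2 + Gamma_M ** 2 / 4.0))

Gamma_A = 2.0 * np.pi * S_C(omega_A) / S_A   # total-width equation
beta2 = S_C_tot / (S_A * ((omega_A - omega_M) ** 2 + Gamma_M ** 2 / 4.0))
assert np.isclose(Gamma_A, beta2 * Gamma_M)  # beta^2 * Gamma_M estimate
assert Gamma_A > 0.0
```

The actual calculations of Sect. 3 use the PHDOM Coulomb strength function rather than this Lorentzian stand-in.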
As for any strength function taken at an energy that exceeds the nucleon separation energy, the Coulomb strength function can be divided into direct (one-nucleon escape) and spreading (statistical) parts: \begin{equation}\label{eq_SC_stat} S^{(-)}_{C}(\omega) = S^{(-), \uparrow}_{C}(\omega) + S^{(-), \downarrow}_{C}(\omega). \end{equation} These parts describe, respectively, direct proton decays and statistical (mainly neutron) decays of high-energy monopole $(p\bar{n})$-type states. As follows from Eqs. (\ref{eq_width_SC}), (\ref{eq_SC_stat}), the main components of the IAR total width, $\Gamma^{\uparrow}_{A}$ and $\Gamma^{\downarrow}_{A}$, are determined by the respective components of the Coulomb strength function taken at the IAR energy. \subsection{Coulomb strength function within the PHDOM} The ability to describe direct-decay properties and the distant ``tails'' of various giant resonances is related to specific features of the recently developed particle-hole dispersive optical model (PHDOM) \cite{Urin_PRC_2013}. Being an extension of the standard \cite{Urin_PRC_2013} and nonstandard \cite{Urin_NPA_2008} continuum-RPA (cRPA) versions to a phenomenological, energy-averaged description of the spreading effect in closed-shell nuclei, the model includes a few ingredients. They are: (i) the Landau-Migdal p-h interaction $F(x_{1}, x_{2}) \rightarrow 2F' \vec{\tau}_{1} \vec{\tau}_{2} \delta(\mathbf{r}_{1} - \mathbf{r}_{2})$ (below, only charge-exchange monopole (p-h)-type excitations are considered); (ii) a realistic phenomenological mean field, in which the symmetry potential and mean Coulomb field, essential for the description of the above-mentioned excitations, are evaluated self-consistently (see, e.g., Ref. \cite{KIU_2014}); (iii) the imaginary and real (dispersive) parts, $W(\omega)$ and $P(\omega)$, respectively, of the strength of the phenomenological energy-averaged specific p-h interaction (p-h self-energy term) responsible for the spreading effect. 
Below, the basic PHDOM equations are first given as applied to the description of high-energy charge-exchange monopole excitations having ``normal'' isospin. Let $V^{(-)}(x) = V(r)\tau^{(-)}$ be a monopole Fermi-type probing operator (external field), in which the radial part $V(r)$ may be a complex-valued quantity. The energy-averaged strength function $S^{(-)}_{V}(\omega)$ and polarizability $P^{(-)}_{V}(\omega)$ corresponding to this operator are determined by an effective field $\widetilde{V}(r, \omega)$. After separation of the spin-angular and isospin variables, one gets the expressions for the above-mentioned quantities and the equation for the effective field: \begin{equation}\label{eq_SV_ImP} S^{(-)}_{V}(\omega) = -\frac{1}{\pi} \Im P^{(-)}_{V}(\omega), \end{equation} \begin{equation}\label{eq_P} P^{(-)}_{V}(\omega) = \int V^{*}(r) A^{(-)}(r,r',\omega) \widetilde{V}(r', \omega) drdr', \end{equation} and \begin{equation}\label{eq_V_eff} \widetilde{V}(r, \omega) = V(r) + \frac{F'}{2\pi r^2} \int A^{(-)}(r,r',\omega) \widetilde{V}(r', \omega) dr'. \end{equation} Here, $A^{(-)}(r, r', \omega)$ is the radial monopole component of the ``free'' p-h propagator in the $\beta^{(-)}$-channel. 
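On a radial grid, the effective-field equation \eqref{eq_V_eff} is a one-dimensional linear integral equation and reduces to a matrix inversion. The sketch below is schematic only: the propagator is replaced by a purely illustrative separable stand-in, and the grid, strengths, and densities are placeholders.

```python
import numpy as np

# Discretized effective-field equation (schematically V_eff = V + K V_eff):
# solve (I - K) V_eff = V on a radial grid.  The p-h propagator is
# replaced by an illustrative separable model; all numbers are placeholders.
n, rmax = 200, 12.0                       # grid size, box radius (fm)
r = np.linspace(0.05, rmax, n)
dr = r[1] - r[0]
F_prime = 300.0                           # toy p-h interaction strength

rho = np.exp(-((r - 6.0) / 1.5) ** 2)     # toy radial transition density
A = -0.01 * np.outer(rho, rho)            # separable stand-in propagator

V = r ** 2                                # toy monopole probing operator
K = (F_prime / (2.0 * np.pi * r[:, None] ** 2)) * A * dr
V_eff = np.linalg.solve(np.eye(n) - K, V)

# Polarizability, cf. the integral \int V(r) A(r,r') V_eff(r') dr dr'
P = float(V @ A @ V_eff) * dr * dr
assert np.isfinite(P)
```

In the realistic case, the kernel is built from the radial monopole component $A^{(-)}(r,r',\omega)$ of the p-h propagator.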
The expression for this component, obtained within the PHDOM with the single-particle continuum taken into account approximately \cite{Urin_PRC_2013}, can be presented as the sum: \begin{eqnarray}\label{eq_A_sum_components} \begin{aligned} A^{(-)} = A^{(-)}_{1} + A^{(-)}_{2} + A^{(-)}_{3},\\ A^{(-)}_{1}(r,r',\omega) = \sum \limits_{\nu,(\pi)} t^2_{(\pi)(\nu)} n_{\nu} \chi_{\nu}(r)\chi_{\nu}(r')g_{(\pi)}(r,r',\epsilon_{\nu}+\omega),\\ A^{(-)}_{2}(r,r',\omega) = \sum \limits_{(\nu),\pi} t^2_{(\pi)(\nu)} n_{\pi} \chi_{\pi}(r)\chi_{\pi}(r')g_{(\nu)}(r,r',\epsilon_{\pi}-\omega),\\ A^{(-)}_{3}(r,r',\omega) = \sum \limits_{\nu,\pi} t^2_{(\pi)(\nu)} n_{\pi} n_{\nu} \chi_{\pi}(r)\chi_{\pi}(r')\chi_{\nu}(r)\chi_{\nu}(r') a_{\pi \nu}(\omega),\\ a_{\pi \nu} (\omega) = \frac{2\left( iW(\omega) - P(\omega) \right) f_{\pi}f_{\nu}}{(\epsilon_{\pi} - \epsilon_{\nu} -\omega)^2-\left( iW(\omega) - P(\omega) \right)^2 f^2_{\pi}f^2_{\nu}}. \end{aligned} \end{eqnarray} Here, the bound-state radial wave functions $\chi_{\mu}(r)$ satisfy the equation $(h_{(\mu)}(r) - \epsilon_{\mu})\chi_{\mu}(r) = 0$, where $\mu$ is the set of single-particle quantum numbers $n_{r}, j, l$ ($(\mu) = j, l$) for neutrons ($\mu = \nu$) and protons ($\mu = \pi$), $h_{(\mu)}(r)$ is the radial part of a single-particle Hamiltonian (this part includes the spin-orbit and centrifugal terms); $n_{\mu} = N_{\mu}/(2j_{\mu}+1)$ is the occupation number ($N_{\mu}$ is the number of nucleons filling the single-particle level $\mu$); $t^{2}_{(\pi)(\nu)} = (2j_{\nu}+1)\delta_{(\pi)(\nu)}$ is the squared kinematical factor; the optical-model-like Green functions $g_{(\pi)}(r, r', \epsilon_{\nu} +\omega)$ and $g_{(\nu)}(r,r', \epsilon_{\pi} -\omega)$ satisfy the equations: \begin{eqnarray}\label{eq_sp_gf_prot} \begin{aligned} \left\{ h_{(\pi)}(r) - \left[ \epsilon_{\nu} + \omega + \left( iW(\omega)-P(\omega) \right) f_{\nu} f(r) \right]\right\} g_{(\pi)}(r,r',\epsilon_{\nu}+\omega) = -\delta(r-r'),\\ \left\{ h_{(\pi)}(r) - 
\left[ \epsilon_{\nu} + \omega + \left( iW(\omega)-P(\omega) \right) f_{\nu} f(r) \right]\right\} \chi_{\epsilon,(\pi)}(r) = 0;\\ \end{aligned} \end{eqnarray} \begin{equation}\label{eq_sp_gf_neut} \left\{ h_{(\nu)}(r) - \left[ \epsilon_{\pi} - \omega + \left( iW(\omega)-P(\omega) \right) f_{\pi} f(r) \right]\right\} g_{(\nu)}(r,r',\epsilon_{\pi}-\omega) = -\delta(r-r'). \end{equation} We also show here the equation for the proton optical-model-like continuum-state wave functions $\chi_{\epsilon,(\pi)}(r)$ ($\epsilon = \epsilon_{\nu} + \omega > 0$), which have standing-wave asymptotic behavior. These wave functions (normalized, in the limit $W=P=0$, to a $\delta$-function of energy) enter the definition of the above-mentioned proton-escape Coulomb strength function \begin{eqnarray}\label{eq_SC_sp_wf} \begin{aligned} S^{(-), \uparrow}_{C}(\omega) =& \sum_{\nu} S^{(-), \uparrow}_{C,\nu}(\omega),\\ S^{(-), \uparrow}_{C,\nu}(\omega) =& N_{\nu} \delta_{(\pi)(\nu)} \left| \int \chi^{*}_{\epsilon, (\pi)}(r) \widetilde{V}^{(-)}_C(r,\omega) \chi_{\nu}(r) dr \right|^2 \end{aligned} \end{eqnarray} that determines the IAR total proton-escape width $\Gamma^{\uparrow}_{A} = \frac{2\pi}{S_{A}} S^{(-), \uparrow}_{C}(\omega_{A})$. Finally, we get the expression for the IAR spreading width \begin{equation}\label{eq_spread_width} \Gamma^{\downarrow}_{A} = \Gamma_{A} - \Gamma_{A}^{\uparrow} \end{equation} in terms of the proper Coulomb strength functions, as follows from Eqs. (\ref{eq_width_SC}), (\ref{eq_SC_stat}), (\ref{eq_SC_sp_wf}). When the spreading effect is ignored ($W(\omega) = P(\omega) =0$ in Eqs. (\ref{eq_A_sum_components})--(\ref{eq_sp_gf_neut})), so that $\Gamma^{\downarrow}_{A} = 0$, the IAR energy $\omega_{A, 0}$ and Coulomb polarizability $P^{(-)}_{C, 0}(\omega_{A, 0})$ can be evaluated within the cRPA. 
This allows one to estimate the (relatively small) IAR spreading shift via the relationship: \begin{equation}\label{eq_omega_shift} \omega_{A} - \omega_{A, 0} = \frac{1}{S_A} \Re \left\{ P^{(-)}_{C}(\omega_A) - P^{(-)}_{C,0}(\omega_{A, 0}) \right\}. \end{equation} This completes the presentation of the approach to a quantitative description of the IAR spreading width for medium-heavy mass closed-shell and closed-shell+valence-neutron parent nuclei. To conclude this Section, we note that, as applied to the description of high-energy $(n\bar{p})$-type monopole excitations, the PHDOM basic equations can be obtained from Eqs. (\ref{eq_SV_ImP})--(\ref{eq_sp_gf_neut}) by the substitution $\pi \leftrightarrow \nu$ or, equivalently, $\omega \rightarrow -\omega$. In the equations so obtained, $\omega$ means the excitation energy counted off the parent-nucleus ground-state energy. The above-mentioned equations can be used, in particular, for the description of the IVGMR$^{(\mp)}$ strength functions, $S^{(\mp)}_M(\omega)$. \section{Choice of model parameters. Calculation results} As an example of the implementation of the approach described above, we consider below the spreading width of the IARs based on the ground state of the $^{208, 209}$Pb parent nuclei. In such a consideration, we first turn to the ingredients of the PHDOM, which is the basic element of the proposed approach (Subsection II.B). A realistic, partially self-consistent phenomenological mean field is described in detail in Ref. \cite{KIU_2014}, where the list of mean-field parameters (including the Landau-Migdal parameter $f' = F'/(300 \text{ MeV} \cdot \text{fm}^{3})$) for the $^{208}$Pb parent nucleus is also given. The parameterization of the phenomenological quantity, the imaginary part $W(\omega)$ (and, therefore, the expression for the dispersive real part $P(\omega)$) of the strength of the energy-averaged p-h self-energy term, is given in Refs.
\cite{Urin_PRC_2013, TU_2009} for excitations in the neutral channel. When considering (p-h)-type excitations in the charge-exchange channels, we use the similarly parameterized quantity $W(E_{x})$, where $E_{x} = \omega - Q$ is the excitation energy, counted off the compound-nucleus ground-state energy ($Q$ is the difference of the ground-state energies of the corresponding compound and parent nuclei). Two sets of the ``spreading'' parameters (the strength $\alpha$ and the ``saturation'' parameter $B$), which enter the quantity $W(E_x)$, are chosen to describe within the PHDOM the monopole strength function $S^{(-)}_M (\omega)$ and, therefore, the IVGMR$^{(-)}$ energy and total width, known experimentally (with poor accuracy) for the $^{208}$Pb parent nucleus \cite{Errel_PRC_1986}. The first (``traditional'') set is close to that used previously for the description within the PHDOM of the low-energy giant resonances (isovector dipole \cite{tulupov2014description} and isoscalar monopole \cite{gorelik2016investigation}). The strength function $S^{(-)}_M (\omega)$ is calculated with the use of the probing operator $V^{(-)}_M(x)$ chosen to minimize the excitation of the IAR: $V^{(-)}_M(x) = (r^{2} - \langle r^{2} \rangle)\tau^{(-)}$. Here, the brackets $\langle ... \rangle$ denote averaging over the neutron-excess density. The strength functions $S^{(-)}_M (\omega)$ calculated within the PHDOM and the cRPA for $^{208}$Pb are compared in Fig. \ref{pict_SMonopole}. As follows from this comparison, the single-particle continuum gives an essential contribution to the formation of the IVGMR$^{(-)}$. The two sets of ``spreading'' parameters $\alpha$ and $B$, together with the calculated and experimental energy and total width of the IVGMR$^{(-)}$, are given in Table \ref{table_gamma}. The use of the new set of adjustable parameters is found to be preferable.
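Stepping back to the model ingredients: the optical-model-like Green functions of Eqs.~(\ref{eq_sp_gf_prot}) and (\ref{eq_sp_gf_neut}) are the numerical building blocks of the strength functions above. As a purely illustrative sketch (toy potential, grid, and energy; not the actual PHDOM mean field or absorptive strength), such a Green function can be obtained on a radial grid by inverting a finite-difference Hamiltonian:

```python
import numpy as np

def radial_green_function(V, E, r_max=20.0, n=400):
    """Solve (h - E) g(r, r') = -delta(r - r') on a uniform radial grid,
    with g = 0 at both endpoints (bound-state-like boundary conditions).

    h = -d^2/dr^2 + V(r) in units with hbar^2/2m = 1; V may be complex,
    mimicking the absorptive combination i*W - P of the optical model.
    """
    r = np.linspace(0.0, r_max, n + 2)[1:-1]    # interior grid points
    h = r[1] - r[0]
    # finite-difference matrix for (h_op - E); V(r) sits on the diagonal
    main = 2.0 / h**2 + V(r) - E
    off = -np.ones(n - 1) / h**2
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    # column j solves (h_op - E) g(:, j) = -delta(r - r_j) ~ -e_j / h
    G = -np.linalg.inv(H) / h
    return r, G

# toy example: a square well with a weak constant absorptive part
r, G = radial_green_function(lambda r: -50.0 * (r < 5.0) + 0.5j, E=-10.0)
print(G.shape)   # (400, 400); G is complex symmetric, G = G^T
```

In a realistic calculation $h_{(\mu)}(r)$ would contain the mean field with centrifugal and spin-orbit terms, and scattering boundary conditions would replace the hard walls; the well depth and energy above are placeholders only.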
After the above-described choice of model parameters, the IAR energy $\omega_{A}$ and total width $\Gamma_{A}$ can be evaluated within the approach as follows. First, the corresponding cRPA equations are solved to calculate the Fermi strength function $S^{(-)}_{F, 0}(\omega)$ and then to evaluate the quantities $S_{A}$, $\omega_{A, 0}$ and $\Gamma_{A, 0}$. Using these quantities, one can calculate within the cRPA the Coulomb polarizability $P^{(-)}_{C, 0}(\omega)$. Second, from the system of transcendental equations (\ref{eq_width_SC}) and (\ref{eq_omega_shift}) one can evaluate the IAR parameters $\omega_{A}$ and $\Gamma_{A}$ by means of an iterative procedure, which converges well. These quantities are finally used to evaluate within the approach the IAR proton-escape and spreading widths in accordance with Eqs. (\ref{eq_SC_sp_wf}) and (\ref{eq_spread_width}), respectively. The calculated values of $\Gamma^{\downarrow}_{A}$ given in Table \ref{table_gamma} for the IARs based on the ground state of the $^{208, 209}$Pb parent nuclei can be regarded as quantitative estimates of the corresponding experimental values \cite{Reiter_ZPA_1990}. To conclude this Section, we show in Fig. \ref{pict_SCoul} the Coulomb strength functions $S^{(-)}_{C}(\omega)$ and $S^{(-)}_{C, 0}(\omega)$ calculated within the PHDOM and the cRPA, respectively. In contrast to the data shown in Fig. \ref{pict_SMonopole}, these strength functions exhibit no resonance structure in the IAR region. This can be regarded as evidence of the consistency of the proposed approach. \section{Conclusive remarks. Perspectives} As applied to closed-shell and closed-shell + valence-neutron parent nuclei, we have used the newly developed particle-hole dispersive optical model for the description of high-energy charge-exchange monopole excitations.
In particular, for the description of the Isobaric Analog Resonances we have proposed an approach in which the ``Coulomb description'' of isospin-forbidden processes is incorporated into the above-mentioned model. As a first step in implementing the approach, we have formulated a method for evaluating an important quantity, the IAR spreading width. The method has been realized for the IARs based on the ground state of the $^{208, 209}$Pb parent nuclei, and a quantitative estimate of the corresponding experimental data has been obtained without the use of specific adjustable parameters. Possible next steps in studying monopole charge-exchange excitations within the proposed approach are the following. 1) A quantitative estimation of the IAR partial proton-escape widths, taking into account their sharp dependence on the escaped-proton energy. 2) The description of the IAR asymmetry (determined by the so-called IAR mixing phase) in the excitation functions of proton-induced reactions. 3) A quantitative estimation of the partial branching ratios for direct proton (neutron) decay of the Isovector Giant Monopole Resonance in the $\beta^{(-)}$- ($\beta^{(+)}$-) channel. An extension of the approach to medium-heavy mass spherical nuclei having developed nucleon pairing is also in sight. \begin{acknowledgments} This work is partially supported by the Russian Foundation for Basic Research under grant No. 15-02-080007 (G. V. K., M. L. G., M. H. U.), and by the Competitiveness Program of National Research Nuclear University ``MEPhI'' (G. V. K., M. H. U.). \end{acknowledgments}
\section{Introduction} \label{sec:intro} One of the outstanding problems of statistical physics is the nature of the ordered phase of spin glasses. While this problem is primarily of interest to researchers in statistical and condensed matter physics, spin-offs from its study have found their way into different fields of research, such as computer science and neural networks. Unfortunately, standard methods used in condensed matter physics, such as the renormalization group and mean-field theory, have resulted in a confusing situation for the nature of the spin-glass state. The picture that derives from mean-field theory---valid for infinite-dimensional systems---is that of replica symmetry breaking (RSB) \cite{parisi:79,parisi:83,rammal:86,mezard:87,parisi:08}. However, results using real-space renormalization group (RG) methods---which are better for low-dimensional systems---suggest a spin-glass state with replica symmetry \cite{moore:98, monthus:15,wang:17a,angelini:15,angelini:17}. The purpose of this work is to present additional numerical results beyond those presented in Ref.~\cite{wang:17a} that suggest that in space dimension $d \le 6$ the low-temperature phase of spin glasses is replica symmetric, and that it is only for dimensions $d > 6$ that RSB prevails. In the absence of RSB, the droplet picture (DP) \cite{mcmillan:84a,bray:86,fisher:88} is expected, i.e., when $d \le 6$. In the DP the low-temperature phase is replica symmetric and there is no de Almeida-Thouless line \cite{dealmeida:78} in the presence of an applied field. Its properties are determined by the excitation of droplets whose free-energy cost on a length scale $\ell$ goes as $\ell^{\theta}$ and which have fractal dimension $d_{\rm s} < d$. In the RSB picture there exist system-size excitations which have a free-energy cost of $O(1)$ and which are space filling, i.e., have $d_{\rm s}=d$. 
Thus by investigating the value of $d_{\rm s}$ of interfaces in the low-temperature phase, it is possible to determine whether the low-temperature state is best described by RSB or DP. Direct Monte Carlo simulations to determine the value of $d_{\rm s}$ in $d = 3$ have proved inconclusive (see, for example, Ref.~\cite{katzgraber:01} and references therein). This is because the numerically accessible system sizes in equilibrated simulations are just too small to distinguish RSB \cite{marinari:00, billoire:12a} from DP behavior \cite{wang:17c}. One advantage of using real-space RG methods such as the strong-disorder renormalization group (SDRG) method is that one can study much larger system sizes than can be thermalized in Monte Carlo simulations. Therefore, in this study we use SDRG, as well as a greedy algorithm to estimate $d_{\rm s}$ for spin glasses in different space dimensions $d$. The paper is structured as follows. In Sec.~\ref{sec:formalism} we introduce the model studied, and describe how by studying the link overlap one can determine the fractal dimension of interfaces. In Sec.~\ref{sec:SDRG} we give some details of the SDRG procedure as developed by Monthus \cite{monthus:15} and outline why it is expected to work better in two dimensions than in six space dimensions. Our results for $d_{\rm s}$ in dimensions $d=2$, $3$, $4$, $5$, and $6$ are reported in Sec.~\ref{sec:SDRGresults}. The greedy algorithm (GA) used here as well is described in Sec.~\ref{sec:greedy}. We conclude with a brief discussion in Sec.~\ref{sec:discussion}. 
\section{Model and observables} \label{sec:formalism} We study the Edwards-Anderson (EA) Ising spin-glass model \cite{edwards:75} on a $d$-dimensional hypercubic lattice of linear extent $L$ described by the Hamiltonian \begin{equation} H = - \sum_{\langle ij \rangle} J_{ij} S_i S_j, \label{eq:ham} \end{equation} where the summation is over nearest-neighbor bonds and the random couplings $J_{ij}$ are chosen from a standard Gaussian distribution of unit variance and zero mean. The Ising spins take the values $S_i \in \{\pm 1\}$ with $i = 1,2, \ldots, L^d$. The fractal dimension $d_{\rm s}$ can be obtained from the link overlap \begin{equation} q_{\ell} =\frac{1}{N_b} \sum_{\langle ij \rangle} S_i^{(\pi)}S_j^{(\pi)} S_i^{(\overline{\pi})}S_j^{(\overline{\pi})} \left(2 \delta_{J_{ij}^{\pi},J_{ij}^{\overline{\pi}}} - 1\right). \label{eqn:pqdef} \end{equation} Here $S_i^{(\pi)}$ and $S_i^{(\overline{\pi})}$ denote the ground states found with periodic $(\pi)$ and antiperiodic $(\overline{\pi})$ boundary conditions, respectively. One can change from periodic to antiperiodic boundary conditions by flipping the sign of the bonds crossing a hyperplane of the lattice. $N_b$ is the number of nearest-neighbor bonds in the lattice which for a $d$-dimensional hypercube is given by $N_b=d L^d$. The $L$ dependence of the quantity $\Gamma$ determines $d_{\rm s}$ via \begin{equation} \Gamma \equiv 1-q_{\ell}=\frac{2\Sigma^{\rm DW}}{d L^d} \sim L^{d_{\rm s}-d}, \label{eqn:gammadef} \end{equation} where $\Sigma^{DW}$ is the number of bonds crossed by the domain wall bounding the flipped spins \cite{hartmann:02}. The domain wall could be fractal, i.e., its ``length'' $\Sigma^{DW} \sim A L^{d_{\rm s}}$. If the interface were straight across the system, its length would be $\sim L^{d-1}$. In the RSB phase $d_{\rm s}=d$, so that $d-1 \le d_{\rm s} \le d$. 
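Given the two ground states, Eqs.~(\ref{eqn:pqdef}) and (\ref{eqn:gammadef}) reduce to sums over nearest-neighbor bonds. A minimal sketch (random spins stand in for actual ground states; the antiperiodic boundary condition is taken to flip the wrap-around bonds along one axis, so those bonds carry the factor $2\delta-1=-1$):

```python
import numpy as np

def gamma_link(S_p, S_ap):
    """Gamma = 1 - q_l of Eq. (gammadef) on a d-dimensional hypercubic
    lattice of shape (L,)*d with periodic boundaries.  S_p and S_ap are
    +/-1 configurations for periodic and antiperiodic boundary
    conditions; the antiperiodic bonds wrapping around axis 0 are the
    flipped ones."""
    d, L = S_p.ndim, S_p.shape[0]
    total = 0
    for axis in range(d):
        # S_i S_j under both boundary conditions, neighbor along +axis
        bond = S_p * np.roll(S_p, -1, axis) * S_ap * np.roll(S_ap, -1, axis)
        if axis == 0:                      # the flipped wrap-around bonds
            sel = [slice(None)] * d
            sel[0] = L - 1
            bond[tuple(sel)] *= -1
        total += bond.sum()
    q_l = total / (d * S_p.size)           # N_b = d L^d bonds in all
    return 1.0 - q_l

# sanity check: identical states differ only on the L^(d-1) flipped bonds,
# so Gamma = 2 L^(d-1) / (d L^d) = 2 / (d L)
L = 8
S = np.random.default_rng(0).choice([-1, 1], size=(L, L, L))
print(gamma_link(S, S))   # -> 2 / (3 * 8) = 0.0833...
```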
The SDRG (and also the GA) methods are just means by which one can determine the (approximate) ground states needed in Eqs.~\eqref{eqn:pqdef} and \eqref{eqn:gammadef}. \section{The SDRG algorithm} \label{sec:SDRG} In this Section we outline the SDRG method as described by Monthus in Ref.~\cite{monthus:15}. For each spin $S_i$, the local field is \begin{eqnarray} h^{\rm loc}_i && = \sum_{j} J_{ij} S_j. \label{hloci} \end{eqnarray} The SDRG focuses on the largest term in absolute value in the sum corresponding to some index $j_{\rm max}(i)$ \begin{eqnarray} \max_{j} (\vert J_{ij} \vert) \equiv \vert J_{i,j_{\rm max}(i)} \vert. \label{omegai} \end{eqnarray} The question for the accuracy of the SDRG is whether the local field $h^{\rm loc}_i$ \begin{eqnarray} h^{\rm loc}_i && = J_{i,j_{\rm max}(i)} S_{j_{\rm max}(i)}+ \sum_{j \ne j_{\rm max}(i)} J_{ij} S_j \label{hloci2} \end{eqnarray} is dominated by the first term. The ``worst case'' is when the spins $S_j$ of the second term in Eq.~(\ref{hloci2}) are such that $( J_{ij} S_j)$ all have the same sign; their contribution to the local field is then maximal. Monthus introduced the difference \begin{eqnarray} \Delta_i && \equiv \vert J_{i,j_{\rm max}(i)} \vert - \sum_{j \ne j_{\rm max}(i)} \vert J_{ij} \vert. \label{deltai} \end{eqnarray} For $\Delta_{i_0}>0$, the sign of the local field $h^{\rm loc}_{i_0}$ is determined by the sign of the first term $J_{i_0j_{\rm max}(i_0)} S_{j_{\rm max}(i_0)} $ for all values taken by the other spins $S_j$ with $j \ne j_{\rm max}(i_0)$; \begin{eqnarray} {\rm sgn} ( h^{\rm loc}_{i_0} ) && =S_{j_{\rm max}({i_0})} {\rm sgn} \left[ J_{{i_0},j_{\rm max}({i_0})}\right]. 
\label{hlocisgn} \end{eqnarray} Then the spin $S_{i_0}$ can be eliminated via \begin{eqnarray} S_{{i_0}} = S_{j_{\rm max}({i_0})} {\rm sgn} \left[J_{{i_0} j_{\rm max}({i_0})}\right] \label{elimsi0} \end{eqnarray} so that Eq.~\eqref{eq:ham} becomes \begin{eqnarray} H &=& -\vert J_{{i_0}j_{\rm max}({i_0})}\vert - \sum_{(i,j)\ne i_0}J_{ij}^{\rm R} S_i S_j, \label{hsgdeci} \end{eqnarray} where the renormalized couplings connected to the spin $S_{j_{\rm max}(i_0)}$ are \begin{eqnarray} J^{\rm R}_{j_{\rm max}(i_0),j} = J_{j_{\rm max}(i_0),j}+ J_{i_0,j} {\rm sgn} \left[J_{i_0 j_{\rm max}(i_0)}\right]. \label{jr} \end{eqnarray} Let $z$ be the number of neighbors of a site, where $z=2d$. Then in $d=1$, $z=2$, and the difference $\Delta_{i_0}$ defined in Eq. (\ref{deltai}) is always positive, i.e., the SDRG is exact. Alas, it fails to be exact in higher dimensions, as $\Delta_{i_0}$ is not always positive. Monthus argued that ``the worst is not always true.'' Indeed, in a frustrated spin glass, the worst case discussed above, where the spins $S_j$ are such that the $(J_{ij} S_j)$ all have the same sign, is atypical. It is much more natural to compare with a sum of random terms of absolute values $J_{ij}$ and of random signs, i.e., to replace the difference $\Delta_i$ of Eq.~(\ref{deltai}) by \begin{eqnarray} \Omega_i && \equiv \vert J_{i,j_{\rm max}(i)} \vert - \sqrt{ \sum_{j \ne j_{\rm max}(i)} \vert J_{ij} \vert^2 }. \label{omegaii} \end{eqnarray} Note that for the case of $z=2$ neighbors, $\Omega_i$ actually coincides with $\Delta_i$, so that the exactness discussed above still holds. But for $z>2$, it is expected that $\Omega_i$ is a better indicator of the relative dominance of the maximal coupling for the different spins. Monthus' version of the SDRG procedure was based on the variable $\Omega_i$. At each step, the spin-glass Hamiltonian is similar to that of Eq.~(\ref{eq:ham}).
The variable $\Omega_i$ of Eq.~(\ref{omegaii}) is computed from the couplings $J_{ij}$ connected to $S_i$. The iterative renormalization procedure is defined by the following decimation steps. \smallskip \noindent (1) Find the spin $i_0$ with the maximal $\Omega_i$, i.e., \begin{eqnarray} \Omega_{i_0} \equiv \max_{i} ( \Omega_{i} ). \label{omegaimax} \end{eqnarray} \smallskip \noindent (2) The elimination of the spin $S_{i_0}$ proceeds via Eq.~(\ref{elimsi0}) and all its couplings $J_{i_0,j} $ with $j \ne j_{\rm max}(i_0)$ are transferred to the spin $S_{j_{\rm max}(i_0)}$ via the renormalization rule of Eq.~(\ref{jr}). \smallskip \noindent (3) The procedure ends when only a single spin $S_{\rm last}$ is left. The two values $S_{\rm last}=\pm 1$ label the two ground states related by a global flip of all the spins. \smallskip \noindent From the choice $S_{\rm last}=+1$, one can reconstruct all the values of the decimated spins via the rule of Eq.~(\ref{elimsi0}). Monthus \cite{monthus:15} studied how the value of $\Omega_i$ evolves with each iteration for the EA model for $d=2$ and $d=3$. For the SDRG to be exact one needs $\Delta_i$ to be always positive and hopefully $\Omega_i$ acts as a useful proxy for $\Delta_i$. She found that for the early iterations the $\Omega_i$ were indeed positive but turned negative for the later stages of the iteration procedure, indicating that the SDRG was failing. She suggested that the fractal dimension $d_{\rm s}$ was dominated by the early stages of the iteration, which correspond to long length scales. We have extended her studies of $\Omega_i$ up to $d=6$ and have found that as the dimension $d$ increases, the crossover where the SDRG would appear to become steadily worse (i.e., where the $\Omega_i$ turn negative) occurs at successively earlier stages of the RG iterations. Figure \ref{OmegaEA} shows the form of the $\Omega_i$ in $d=2$ and $d=6$ space dimensions. 
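The decimation steps (1)--(3), together with the back-substitution rule of Eq.~(\ref{elimsi0}), can be sketched for a generic coupling matrix as follows. This is a toy dense-matrix version for illustration only, not Monthus' implementation; it ignores lattice structure and efficiency:

```python
import numpy as np

def sdrg_ground_state(J):
    """Strong-disorder RG sketch for H = -sum_{i<j} J_ij S_i S_j.

    J : symmetric (N, N) coupling matrix with zero diagonal.
    Returns a +/-1 configuration from decimation steps (1)-(3) with the
    back-substitution choice S_last = +1 (Eq. elimsi0)."""
    J = J.astype(float).copy()
    N = J.shape[0]
    active = set(range(N))
    history = []                      # (decimated spin, partner, sign)
    while len(active) > 1:
        # step (1): spin with maximal Omega_i = |J_max| - sqrt(sum others^2)
        best_omega = -np.inf
        for i in active:
            row = np.abs(J[i])
            jmax = int(np.argmax(row))
            omega = row[jmax] - np.sqrt((row**2).sum() - row[jmax]**2)
            if omega > best_omega:
                best_omega, i0, jm = omega, i, jmax
        sign = np.sign(J[i0, jm]) or 1.0
        history.append((i0, jm, sign))
        # step (2): transfer the remaining couplings of i0 (Eq. jr)
        for j in active:
            if j not in (i0, jm):
                J[jm, j] += sign * J[i0, j]
                J[j, jm] = J[jm, j]
        J[i0, :] = J[:, i0] = 0.0
        active.remove(i0)
    # step (3): one spin left; back-substitute the decimated spins
    S = np.zeros(N)
    S[active.pop()] = 1.0
    for i0, jm, sign in reversed(history):
        S[i0] = sign * S[jm]          # Eq. (elimsi0)
    return S

# 1D chain with positive couplings: the SDRG is exact, the state aligned
rng = np.random.default_rng(1)
N = 6
J = np.zeros((N, N))
for i in range(N - 1):
    J[i, i + 1] = J[i + 1, i] = rng.random() + 0.1
print(sdrg_ground_state(J))   # all spins +1 for this ferromagnetic chain
```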
Because the SDRG could be exact only if $\Omega_i > 0$ for all $i$, the data for $d=6$ are far from satisfying this criterion. A defect of the SDRG is that when it terminates it can give a spin state in which not all the spins are even parallel to their local fields. We have investigated this problem carefully in two dimensions and found that a small fraction of the spins fail to be parallel to their local fields; these seem to be spins that sit in very small local fields. From these states we have generated a one-spin-flip stable state by flipping these spins, and subsequently any misaligned neighbors, until no spins remain that are not parallel to their local fields. With these new states we find that the coefficient $A$ in $\Sigma^{\rm DW} \sim A L^{d_{\rm s}}$ is slightly modified: $\ln(\Gamma)$ is shifted by a small amount (of order $0.005$) over a wide range of $L$ values. Because this does not seem to significantly influence the value of $d_{\rm s}$, we choose not to investigate the problem in greater detail here. \begin{figure}[htb] \begin{center} \includegraphics[width=\columnwidth]{Omega.pdf} \caption{ Representative evolution of $\Omega_i$ of the decimated spin as a function of the RG step, which corresponds to the number of spins which have been decimated for the EA model for (a) $d=2$ and (b) $d=6$. Over most of the iteration range for $d=2$, $\Omega_i$ is positive. The SDRG estimate for the exponent $d_{\rm s}$ is also quite accurate in this case. As $d$ increases, the values of $\Omega_i$ turn negative after a decreasing number of iterations, suggesting that the SDRG becomes less accurate in higher dimensions, as can be seen for $d=6$ [panel (b)]. Note the different horizontal scales.} \label{OmegaEA} \end{center} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[width=\columnwidth]{EA.pdf} \caption{ $\ln \Gamma$ for various space dimensions $d$ as a function of $\ln L$ computed using the SDRG algorithm.
Note that $\Gamma \sim L^{d_{\rm s}-d}$. Our estimate of $d_{\rm s}$ is determined by the slope of the straight lines drawn through the points at large-$L$ values. Note how the data for $d = 6$ level off, i.e., $d_{\rm s} \to d$. (See Fig. \ref{d6fig} for an enlarged figure in six dimensions). Error bars are smaller than the symbols.} \label{SDRGfig} \end{center} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[width=\columnwidth]{EA2.pdf} \caption{ $\ln \Gamma$ for $d=6$ as a function of $\ln L$ computed using the SDRG and GA algorithms. Our estimate of $d_{\rm s}$ is determined by the slope of the straight lines drawn through the points at large-$L$ values. Using $\Gamma \sim L^{d_{\rm s}-d}$ the levelling off of the lines at the larger values of $L$ implies that $d_{\rm s} \to d$ in six dimensions. Error bars are smaller than the symbols.} \label{d6fig} \end{center} \end{figure} \begin{table} \caption{ Dimensionality $d$, system size $L$, and the number of disorder realizations $M$ studied using the GA and SDRG methods. Part of the SDRG data used here are taken from Ref.~\cite{wang:17a}. 
\label{table} } \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}} c c c c} \hline \hline Method &$d$ &$L$ &$M$ \\ \hline SDRG &$2$ &$\{10, 20, 30, 40, 50, 100, 200, 400, 800\}$ &$10000$ \\ SDRG &$2$ &$1200$ &$3000$ \\ SDRG &$2$ &$1600$ &$1000$ \\ SDRG &$3$ &$\{4, 6, 8, 10, 12, 16, 20, 24, 32\}$ &$3000$ \\ SDRG &$3$ &$\{64, 128\}$ &$1000$ \\ SDRG &$4$ &$\{4, 5, 6, 7, 8, 9, 10, 12, 16, 20, 24\}$ &$3000$ \\ SDRG &$4$ &$28$ &$717$ \\ SDRG &$4$ &$32$ &$121$ \\ SDRG &$5$ &$\{4, 5, 6, 7, 8, 9, 10, 12, 14\}$ &$3000$ \\ SDRG &$5$ &$16$ &$1000$ \\ SDRG &$6$ &$\{4, 5, 6, 7, 8\}$ &$3000$ \\ SDRG &$6$ &$9$ &$1843$ \\ SDRG &$6$ &$10, 11, 12$ &$1000$ \\ SDRG &$6$ &$\{13, 14\}$ &$200$ \\[2mm] GA &$2$ &$\{4, 8, 12, 16, 32, 64, 128, 256, 512\}$ &$3000$ \\ GA &$3$ &$\{4, 6, 8, 10, 12, 16, 20, 24, 32, 64\}$ &$3000$ \\ GA &$4$ &$\{4, 6, 8, 10, 12, 16, 20, 24, 32\}$ &$3000$ \\ GA &$5$ &$\{4, 5, 6, 7, 8, 9, 10\}$ &$6000$ \\ GA &$5$ &$\{12, 14, 16\}$ &$3000$ \\ GA &$6$ &$\{4, 5, 6, 7, 8, 9, 10\}$ &$3000$ \\ GA &$6$ &$\{12, 14\}$ &$1000$ \\ \hline \hline \end{tabular*} \end{table} \section{SDRG results} \label{sec:SDRGresults} In Fig.~\ref{SDRGfig} we plot $\ln \Gamma$ versus $\ln L$ using the SDRG method of Monthus \cite{monthus:15} to compute the link overlap. One change from our previous work in Ref.~\cite{wang:17a} is that we have added more data. Especially for $d= 6$ we have increased the largest system studied from $L=10$ to $L=14$. The new data show that for $d=6$ the curve is levelling off, implying that $d_{\rm s} \to d$. We have also increased the values of $L$ studied in $d=2$ and $3$, going far beyond the system sizes studied in Ref.~\cite{monthus:15}. Table~\ref{table} lists simulation parameters, such as the number of bond configurations $M$ for each value of the linear system size $L$ in space dimension $d$. The SDRG seems to give quite accurate results for the value of $d_{\rm s}$ at least in low space dimensions. 
Thus, in $d = 2$, Monthus found from the SDRG a value of $d_{\rm s} \approx 1.27$ from $L$ values up to $340$, a result similar to that of a recent study of systems up to $L = 10^4$ \cite{khoshbakht:17a}, based on fast polynomial-time algorithms for finding ground states (which, however, only work in two space dimensions), which gives $d_{\rm s} = 1.27319(9)$. In $d=3$, Monthus finds $d_{\rm s} =2.55$ for systems of size up to $L=45$. In Ref.~\cite{wang:17c} a value of $2.57$ is quoted from studies on systems up to $L=12$. The SDRG is just an algorithm which attempts to find the ground-state spin configuration. It is exact in one space dimension. While it seems to give excellent values for $d_{\rm s}$, it gives poor values for the actual ground-state energy itself and the energy cost of the interface. If the domain-wall energy scales $\sim L^{\theta}$, then Monthus reports $\theta \approx 0$, whereas the recent high-precision calculations show that $\theta = -0.2793(3)$ \cite{khoshbakht:17a}. Because Monthus' value for $d_{\rm s}$ in $d=2$ seemed to be compatible with the high-precision calculations \cite{khoshbakht:17a}, we speculated in Ref.~\cite{wang:17a} that the SDRG might be accurate because the interface is a self-similar fractal \cite{mandelbrot:67}. The SDRG seems to be accurate in the early stages of the RG process where the $\Omega_i$ are positive (see Fig.~\ref{OmegaEA}) where a coarse approximation of the domain lengths is performed (see Fig.~\ref{selfsimilar}). In the later stages of determining the domain length, the SDRG's accuracy will decrease. In particular, in the relation $\Sigma^{\rm DW} \sim A L^{d_{\rm s}}$ we suspect that the SDRG might determine $d_{\rm s}$ quite accurately, but that the coefficient $A$ might be obtained with less accuracy. To estimate $A$ to high accuracy would require an RG process accurate on all length scales, both short and long.
In this paper we have extended the system sizes studied far beyond those studied by Monthus in $d=2$, and find that $d_{\rm s}=1.2529(14)$, which indicates that the SDRG is not exact for $d_{\rm s}$ in $d=2$, but just a good approximation. Our estimate of $A$ is $1.4040(106)$, whereas the recent high-precision estimate is $1.222(3)$ \cite{khoshbakht:17a}. \begin{figure}[htb] \begin{center} \includegraphics[width=\columnwidth]{fractal.pdf} \caption{ The bifurcation of a tree is a self-similar fractal. The four panels show measurements of its length using square domains whose linear size is reduced at each step of the renormalization. For a self-similar fractal, like the ponderosa pine depicted here, the scaling dimension $d_{\rm s}$ is the same no matter what length scale is used to determine it. Panel (a) shows the coarsest measurements which are successively refined by reducing the size of the squares in panels (b) -- (d). Note that the domains are smaller than the image resolution in panel (d). The fractal dimension of the ponderosa pine is approximately $1.88$. One could in principle obtain the correct fractal dimension by studies at the coarsest length scales which is why we suspect that the SDRG, which works better on the coarsest length scales, is capable of getting accurate answers for $d_{\rm s}$. } \label{selfsimilar} \end{center} \end{figure} We have also extended Monthus' work in $d = 3$ from $L = 45$ to $L= 128$ and find $d_{\rm s} = 2.5256(30)$. If we had used only system sizes up to $L = 12$ in $d=3$, as in the Monte Carlo studies of Ref.~\cite{wang:17c}, then because of finite-size effects (visible in Fig.~\ref{SDRGfig}) we would have reported a value of $d_{\rm s} \approx 2.6093(50)$. A value of $2.57$ was reported in Ref.~\cite{wang:17c} based on the same range of $L$ values up to $L=12$.
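All of the $d_{\rm s}$ estimates quoted here follow from the asymptotic slope of $\ln \Gamma$ versus $\ln L$; a minimal version of such a fit, shown on synthetic data in place of measured $\Gamma$ values:

```python
import numpy as np

def fit_ds(L, Gamma, d):
    """Estimate d_s from Gamma ~ L^(d_s - d): least-squares slope of
    ln(Gamma) vs ln(L).  In practice the fit is restricted to the
    largest L values to suppress finite-size corrections."""
    slope, _ = np.polyfit(np.log(L), np.log(Gamma), 1)
    return d + slope

# synthetic check: Gamma = A * L^(d_s - d) with d_s = 1.27, d = 2
L = np.array([50, 100, 200, 400, 800], dtype=float)
Gamma = 1.22 * L**(1.27 - 2.0)
print(fit_ds(L, Gamma, d=2))   # recovers d_s = 1.27 (exact power law)
```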
The SDRG is not an analytical treatment but a numerical technique, and in high dimensions (e.g., $d = 5$ and $6$) this limits us to studying rather small linear system sizes. As a consequence, estimates of exponents can be affected by finite-size corrections, as noted above for $d=3$. Thus, it is hard to be certain that $d_{\rm s}=d$ in six dimensions. We therefore decided to also use a greedy algorithm (GA) to complement the SDRG results. It is already known from analytical studies that $6$ is the ``upper critical dimension'' for the GA, at least for the fractal dimension associated with minimum spanning trees \cite{jackson:10,jackson:10a}. Here, we want to know whether numerical studies of the value of $d_{\rm s}$ would also show that six is a similarly special dimension for the fractal dimension of domain walls within the GA. \begin{figure}[htb] \begin{center} \includegraphics[width=\columnwidth]{GA.pdf} \caption{ $\ln \Gamma$ for various space dimensions $d$ for the EA model as a function of $\ln L$ computed using the GA. Note that $\Gamma \sim L^{d_{\rm s}-d}$. Our estimate of $d_{\rm s}$ is determined by the slope of the straight lines drawn through the points at large-$L$ values. Error bars are smaller than the symbols.} \label{GAfig} \end{center} \end{figure} \section{The greedy algorithm} \label{sec:greedy} The GA (also studied by Monthus \cite{monthus:15}) works as follows. The bonds are satisfied in turn, in order of decreasing absolute magnitude; if satisfying a bond would close a loop (i.e., the relative orientation of the two spins it connects is already fixed), the bond is skipped. The procedure continues until the relative orientation of all the spins is determined. In Table \ref{table} we have given details of the system sizes and numbers of different bond realizations studied in dimensions $d=2, \ldots, 6$. In Fig.~\ref{GAfig} we plot $\ln \Gamma$ versus $\ln L$, determining the link overlap using the GA.
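The bond-by-bond construction just described is essentially Kruskal's algorithm with relative spin orientations tracked along the growing clusters; a sketch using union-find with sign bookkeeping (this is our reading of the procedure, not code from Ref.~\cite{monthus:15}):

```python
import numpy as np

def greedy_ground_state(bonds, N):
    """Greedy algorithm: satisfy bonds in order of decreasing |J_ij|,
    skipping any bond between spins whose relative orientation is
    already fixed (i.e., a bond that would close a loop).

    bonds : list of (i, j, J_ij) tuples; N : number of spins.
    Returns one of the two +/-1 configurations related by a global flip.
    """
    parent = list(range(N))
    rel = [1] * N                       # sign of S_i relative to its parent

    def find(i):                        # -> (root, sign of S_i rel. root)
        if parent[i] == i:
            return i, 1
        root, s = find(parent[i])
        parent[i], rel[i] = root, rel[i] * s    # path compression
        return root, rel[i]

    for i, j, Jij in sorted(bonds, key=lambda b: -abs(b[2])):
        ri, si = find(i)
        rj, sj = find(j)
        if ri == rj:
            continue                    # closed loop: skip this bond
        # satisfy the bond: enforce S_i S_j = sgn(J_ij)
        parent[ri] = rj
        rel[ri] = si * sj * (1 if Jij > 0 else -1)
    return np.array([find(i)[1] for i in range(N)])

# triangle with two strong ferromagnetic bonds and one weaker
# antiferromagnetic bond: the weak bond is frustrated and gets skipped
bonds = [(0, 1, 2.0), (1, 2, 1.5), (0, 2, -1.0)]
S = greedy_ground_state(bonds, N=3)
print(S)   # all spins aligned: the two strong bonds are satisfied
```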
Notice that the corrections to scaling in $d = 6$ seem smaller for the GA than for the SDRG method, because the data seem independent of $L$ even for the smallest system sizes. \begin{figure}[htb] \begin{center} \includegraphics[width=\columnwidth]{GA3.pdf} \caption{ Greedy algorithm (GA) results (blue pentagons) compared with strong-disorder renormalization group (SDRG) results (red squares) for $d =2$, $3$, $4$, $5$, and $6$. The upper bound $d_{\rm s}-d+1$ at unity is marked by a horizontal blue line, while the lower bound at zero is marked with a horizontal red line. The value $d_{\rm s}=0$ for $d=1$ is exact and given by both methods. Only statistical errors are included and error bars are smaller than the symbols. Numerical values are summarized in Table~\ref{table2}. } \label{GA3} \end{center} \end{figure} Like the SDRG procedure, the GA is just a way of finding the spin configuration for a putative ground state of the system. There is no bond renormalization as in the SDRG [see Eq.~(\ref{jr})]. It is just as poor for the ground-state energy and the exponent $\theta$ as the SDRG \cite{monthus:15}. In $d =2$ we obtain $d_{\rm s}^{\rm GA} \simeq 1.2196(11)$, which is comparable with that of Ref.~\cite{sweeney:13}, which quotes $d_{\rm s}^{\rm GA}=1.216(1)$. Note that the SDRG value for $d_{\rm s}$ is in much better agreement with the high-precision value of Ref.~\cite{khoshbakht:17a}. In $d =3$ the GA result is $d_{\rm s}^{\rm GA} \simeq 2.4962(19)$, which is closer to that of the SDRG. An earlier estimate in three dimensions is that of Ref.~\cite{cieplak:94}, which quotes $d_{\rm s}^{\rm GA} \simeq 2.5 \pm 0.05$. In Fig.~\ref{GA3} we have plotted $d_{\rm s}-d+1$ versus $d$ using the $d_{\rm s}$ from both the GA and SDRG algorithms. As the dimension $d$ approaches $6$ the two estimates appear to merge and give $d_{\rm s}=d$ in $d = 6$.
The analytical expectation of Refs.~\cite{jackson:10,jackson:10a} was that $6$ is the upper critical dimension for the fractal dimension of minimum spanning trees within the GA. Our numerical work suggests that within the GA, domain walls also have $6$ as their upper critical dimension. \section{Discussion} \label{sec:discussion} \begin{table} \caption{ Numerical estimates of the fractal dimension $d_{\rm s}$ from the SDRG and GA methods. $d_{\rm s}=0$ for $d=1$, as both methods are exact for the one-dimensional model. Error bars are statistical errors. \label{table2} } \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}} c c c c c c c} \hline \hline Method &$d=2$ &$d=3$ &$d=4$ &$d=5$ &$d=6$ \\ \hline SDRG &1.2529(14) &2.5256(30) &3.7358(36) &4.884(60) &5.9899(60) \\ GA &1.2196(11) &2.4962(19) &3.7190(47) &4.9068(32) &6.0023(22) \\ \hline \hline \end{tabular*} \end{table} Using a strong-disorder renormalization group method and a greedy algorithm, we have obtained numerical results (Fig.~\ref{GA3}), summarized in Table~\ref{table2}, that are consistent with $6$ being a special space dimension above which the conventional EA model with a Gaussian bond distribution shows RSB behavior. For $d \le 6$ we have found, within our numerical procedures, that the EA model behaves according to droplet-model expectations, because $d_{\rm s} < d$. That $6$ is a special dimension for the behavior of spin glasses is in accord with some older expectations based on analytical results \cite{bray:80,moore:11}, but these have been controversial \cite{parisi:12,moore:18x}. Because both the GA and the SDRG are approximations, we regard the results presented here as not decisive. We note, however, that real-space RG methods such as the SDRG are capable of endless refinements.
Monthus \cite{monthus:15} herself discussed a variant, the ``box'' method, which improved the value of the zero-temperature exponent $\theta$ in $d=2$ from the very poor value $\theta \approx 0$ obtained by the SDRG method described in this paper to at least a negative value of $\theta \approx -0.09$ [the high-precision estimate of Ref.~\cite{khoshbakht:17a} is $\theta=-0.2793(3)$]; note that the value of $d_{\rm s}$ was hardly altered. It might be possible to find a real-space RG procedure that gives accurate numbers on all quantities of interest for three-dimensional spin glasses. The SDRG and the GA have a common feature in that they both recognize that the largest bonds are likely to be satisfied in the ground state. We suspect that will be an ingredient of any future successful RG scheme for spin-glass systems. \begin{acknowledgments} M.A.M.~would like to thank Nick Read for email discussions. We thank Martin Weigel for supplying more details of his results. H.G.K.~would like to thank Della Vigil at the Santa Fe Institute for helping with determining the type of pine photographed in Fig.~\ref{selfsimilar} and appreciates award No.~06210311-251521-23011407. W.W.~acknowledges support from the Swedish Research Council Grant No.~642-2013-7837 and Goran Gustafsson Foundation for Research in Natural Sciences and Medicine. W.W.~and H.G.K.~acknowledge support from NSF DMR Grant No.~1151387. The work of H.G.K.~and W.W.~is supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via MIT Lincoln Laboratory Air Force Contract No.~FA8721-05-C-0002. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of ODNI, IARPA, or the U.S.~Government.
The U.S.~Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. We thank Texas A\&M University for access to their Ada and Curie clusters. \end{acknowledgments}
\section{Group reconstruction methods} \label{appendix:method} It is known that the identification of groups in redshift space is hampered by several difficulties. Of particular concern are redshift-space distortions such as the elongation of groups along the line of sight due to the peculiar velocities of the group galaxies (the ``fingers-of-God'' effect), and the less significant distortions caused by the coherent infall of galaxies towards the center of assembling structures (the ``Kaiser effect''). Both of these effects cause confusion when determining group membership, resulting in excessive merging of galaxies into large structures or, conversely, the fragmentation of large groups into smaller structures, depending on the adopted strategy for finding the groups. Without the measure of an absolute distance it is impossible to separate the peculiar velocities from the Hubble flow, and these complications cannot be fully avoided. To surmount these difficulties, different algorithms have been developed, most notably the Friends-of-friends (FoF\xspace) algorithm \citep{HuchraGeller1982}, which recursively creates links between galaxies within a specified volume around each galaxy. An alternative and more complicated approach to group identification is the Voronoi-Delaunay method developed by \cite{Marinoni2002}, which makes use of the Voronoi-Delaunay tessellation to locally measure clustering parameters on a cluster-to-cluster basis (rather than on a galaxy-to-galaxy basis as in the FoF\xspace algorithm).
This method was introduced to overcome some of the difficulties inherent to the standard FoF\xspace algorithm, but modified FoF\xspace implementations have proved competitive (e.g., \citealp{Robotham2011} found that while the VDM variant of \cite{Gerke2005} worked reasonably well for larger groups and clusters, their FoF\xspace implementation performed better in the low halo mass regime; see also \citealp{Knobel2009} for an extensive performance comparison of both algorithms). In this paper we adopted a FoF\xspace algorithm, as described below. \section{Friends-of-friends} \label{appendix:fof} To deal with the effects of redshift distortions, the distance between two galaxies $i$ and $j$ is measured with two coordinates, the parallel ($d_{\parallel,ij}$) and perpendicular ($d_{\perp,ij}$) projected comoving separations with respect to the mean line of sight. Let $\vec{r_i}$ and $\vec{r_j}$ denote the redshift-space positions of a pair of objects $i$ and $j$; we define the mean line-of-sight (LoS\xspace) as: \begin{equation} \label{eq:l} \vec{l} \equiv \frac{1}{2}\left( \vec{r_i} + \vec{r_j} \right) \end{equation} and the redshift-space separation as: \begin{equation} \label{eq:s} \vec{s} \equiv \vec{r_i} - \vec{r_j}. \end{equation} The projected parallel and perpendicular line-of-sight separations between $i$ and $j$, $d_{\parallel,ij}$ and $d_{\perp,ij}$ respectively, are then given by: \begin{equation} \label{eq:d_parallel} d_{\parallel,ij} = \frac{\vec{s} \cdot \vec{l}}{\parallel \vec{l} \parallel} \end{equation} and \begin{equation} \label{eq:d_perp} d_{\perp,ij} = \sqrt{\vec{s} \cdot \vec{s} - d_{\parallel,ij}^2}. \end{equation} The two galaxies are linked to each other if: \begin{equation} \label{eq:link_perp} d_{\perp,ij} < b_\perp \overline{r}_{ij} \end{equation} and \begin{equation} \label{eq:link_parallel} d_{\parallel,ij} < b_\parallel \overline{r}_{ij}.
\end{equation} Here $b_\perp$ and $b_\parallel$ are the projected and line-of-sight linking lengths in units of the mean intergalactic separation given by: \begin{equation} \label{eq:mean_sep} \overline{r}_{ij} = \frac{1}{2}\left(n_i^{-1/3} + n_j^{-1/3}\right), \end{equation} with $n_i$ and $n_j$ being the galaxy number densities at the redshifts of galaxies $i$ and $j$. The parameter $b_\parallel$ is related to $b_\perp$ through the radial expansion factor $R = b_\parallel / b_\perp$, which accounts for the peculiar motions of galaxies within groups. \section{Parameter optimization strategy} \label{appendix:optimization} The FoF\xspace algorithm described above has two free parameters, the linking length $b_\perp$ and the radial expansion factor $R$ (or equivalently the perpendicular and line-of-sight linking lengths). Their values will affect the quality of the resulting group catalog: values that are too small will tend to break up single groups into several pieces, while values that are too large will merge multiple groups into single ones. These free parameters can be determined by optimizing a group cost function (a measure of the grouping quality) when tested on the mock catalogs. No combination of linking lengths will create a group catalog that simultaneously recovers all aspects of the underlying halo distribution \citep[see e.g.][]{Berlind2006}. The optimization strategy depends on the scientific purpose of the group catalog. Our objective is to obtain a catalog with a high group detection rate and a low contamination by galaxies coming from different groups. We followed the definition of the group cost function of \cite{Robotham2011}, with slightly different notations and minor modifications. We first need to define the way reconstructed groups (the FoF\xspace groups) are associated with the underlying real groups in the mock catalogs (the mock groups).
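Before turning to the matching of groups, the pairwise linking test of Section~\ref{appendix:fof} can be summarized in code. The following is a minimal Python sketch, not our pipeline code: the function and argument names are our own, the default parameter values anticipate the linking lengths adopted later in this appendix, and taking the absolute value of $d_{\parallel,ij}$ (whose sign depends on the ordering of the pair) is our assumption.

```python
import math

def fof_link(r_i, r_j, n_i, n_j, b_perp=0.06, R=19.0):
    """Pairwise FoF linking test (schematic; names are illustrative only).

    r_i, r_j : redshift-space position 3-vectors of the two galaxies.
    n_i, n_j : local galaxy number densities at their redshifts.
    b_perp   : perpendicular linking length in units of the mean separation.
    R        : radial expansion factor, so b_parallel = R * b_perp.
    """
    l = [(a + b) / 2 for a, b in zip(r_i, r_j)]   # mean line of sight
    s = [a - b for a, b in zip(r_i, r_j)]         # redshift-space separation
    l_norm = math.sqrt(sum(x * x for x in l))
    # parallel separation; abs() is our assumption since s.l can be negative
    d_par = abs(sum(a * b for a, b in zip(s, l))) / l_norm
    d_perp = math.sqrt(max(sum(x * x for x in s) - d_par ** 2, 0.0))
    # mean intergalactic separation at the pair's redshifts
    r_bar = (n_i ** (-1 / 3) + n_j ** (-1 / 3)) / 2
    return d_perp < b_perp * r_bar and d_par < R * b_perp * r_bar
```

A full group finder would apply this test recursively, merging every pair of linked galaxies into the same group.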
Ideally, the associations between the real and reconstructed groups should be bijective, meaning that the joint galaxy population of the FoF\xspace and mock groups includes more than 50\% of their respective members. Such an association is unambiguous, as each group can bijectively match at most one group, and the reconstructed group catalog and the corresponding real group catalog are mutually an accurate representation of each other. To cast such a two-way grouping quality into a statistical measure, let $N_{\mathrm{g}_\mathrm{bij}}$, $N_{\mathrm{g}_\mathrm{FoF}}$ and $N_{\mathrm{g}_\mathrm{mock}}$ denote the number of bijective, FoF\xspace and mock groups, respectively. Following~\cite{Robotham2011}, we define: \begin{equation} E_{\mathrm{FoF}} = \frac{N_{\mathrm{g}_\mathrm{bij}}}{N_{\mathrm{g}_\mathrm{FoF}}}, \end{equation} \begin{equation} E_{\mathrm{mock}} = \frac{N_{\mathrm{g}_\mathrm{bij}}}{N_{\mathrm{g}_\mathrm{mock}}}, \end{equation} and \begin{equation} \label{Eq:etot} E_{\mathrm{tot}} = E_{\mathrm{FoF}} E_{\mathrm{mock}}. \end{equation} By definition, these measures take values in the range 0--1. $E_{\mathrm{tot}}$ can be seen as the global halo finding efficiency, telling us how well the groups are recovered. It will be 1 if all groups are bijectively matched, and 0 if no group has a bijective match. Another measure of the grouping quality is how well galaxies within the groups are recovered.
Let $P\ensuremath{_{\mathrm{FoF},ij}}$ and $P\ensuremath{_{\mathrm{mock},ij}}$ denote the purity products defined as follows: \begin{equation} P\ensuremath{_{\mathrm{FoF},ij}} = \frac{N_{\mathrm{m}_{\mathrm{FoF},i \cap \mathrm{mock},j}} }{N_{\mathrm{m}_{\mathrm{FoF}},i}} \frac{N_{\mathrm{m}_{\mathrm{FoF},i \cap \mathrm{mock},j}} }{N_{\mathrm{m}_{\mathrm{mock}},j}}, \end{equation} where $N_{\mathrm{m}_{\mathrm{FoF},i \cap \mathrm{mock},j}}$ is the number of galaxies that the $i^{th}$ FoF\xspace group shares with the $j^{th}$ mock group, and \begin{equation} P\ensuremath{_{\mathrm{mock},ij}} = \frac{N_{\mathrm{m}_{\mathrm{mock},i \cap \mathrm{FoF},j}} }{N_{\mathrm{m}_{\mathrm{mock}},i}} \frac{N_{\mathrm{m}_{\mathrm{mock},i \cap \mathrm{FoF},j}} }{N_{\mathrm{m}_{\mathrm{FoF}},j}}, \end{equation} where $N_{\mathrm{m}_{\mathrm{mock},i \cap \mathrm{FoF},j}}$ is the number of galaxies that the $i^{th}$ mock group shares with the $j^{th}$ FoF\xspace group. We will call the best match the association between the $i^{th}$ mock (FoF\xspace) group and the $j^{th}$ FoF\xspace (mock) group for which $P\ensuremath{_{\mathrm{mock},ij}}$ ($P\ensuremath{_{\mathrm{FoF},ij}}$) is highest.
We then define the following measures: \begin{equation} Q_{\mathrm{FoF}} = \frac{ \sum_{i=1}^{N_{\mathrm{g}_\mathrm{FoF}}} N_{\mathrm{m}_{\mathrm{FoF},i \cap \mathrm{mock},j}} Q\ensuremath{_{\mathrm{FoF},i}}}{\sum_{i=1}^{N_{\mathrm{g}_\mathrm{FoF}}} N_{\mathrm{m}_{\mathrm{FoF}},i} }, \end{equation} where \begin{equation} Q\ensuremath{_{\mathrm{FoF},i}} = \frac{ N_{\mathrm{m}_{\mathrm{FoF},i \cap \mathrm{mock},j}}}{ N_{\mathrm{m}_{\mathrm{FoF}},i} }, \end{equation} and \begin{equation} Q_{\mathrm{mock}} = \frac{ \sum_{i=1}^{N_{\mathrm{g}_\mathrm{mock}}} N_{\mathrm{m}_{\mathrm{mock},i \cap \mathrm{FoF},j}} Q\ensuremath{_{\mathrm{mock},i}} }{\sum_{i=1}^{N_{\mathrm{g}_\mathrm{mock}}} N_{\mathrm{m}_{\mathrm{mock}},i}}, \end{equation} where \begin{equation} Q\ensuremath{_{\mathrm{mock},i}} = \frac{ N_{\mathrm{m}_{\mathrm{mock},i \cap \mathrm{FoF},j}}}{ N_{\mathrm{m}_{\mathrm{mock}},i} }. \end{equation} $N_{\mathrm{m}_{\mathrm{FoF}},i}$ and $N_{\mathrm{m}_{\mathrm{mock}},i}$ denote the number of members in the $i^{th}$ FoF\xspace and mock group, respectively, and $N_{\mathrm{m}_{\mathrm{FoF},i \cap \mathrm{mock},j}}$ ($N_{\mathrm{m}_{\mathrm{mock},i \cap \mathrm{FoF},j}}$) the number of $i^{th}$ FoF\xspace (mock) group members recovered by its corresponding matched mock (FoF\xspace) group. Finally, the global grouping purity is defined as: \begin{equation} \label{Eq:qtot} Q_{\mathrm{tot}} = Q_{\mathrm{FoF}} Q_{\mathrm{mock}}. \end{equation} $Q_{\mathrm{tot}}$ will be 1 if all galaxies within all groups are perfectly recovered; its lower limit, however, will be greater than 0, as it is always possible to break a catalog with $N$ galaxies into a catalog of $N$ groups. The final quantity defining our group cost function is: \begin{equation} \label{Eq:stot} S_{\mathrm{tot}} = E_{\mathrm{tot}} Q_{\mathrm{tot}}, \end{equation} which takes values between 0 and 1 and should be maximised.
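To make the cost function concrete, the computation of $S_{\mathrm{tot}}$ can be sketched in a few lines of Python. This is a schematic re-implementation under our own naming (it is not the actual pipeline code): group catalogs are represented simply as lists of member-id sets, a bijective match requires each group to contain more than 50\% of the other's members, and the best match maximizes the purity product.

```python
def s_tot(fof_groups, mock_groups):
    """Global group cost function S_tot = E_tot * Q_tot (schematic).

    fof_groups, mock_groups : lists of sets of galaxy ids.
    """
    # E_tot: fraction of bijective matches, seen from both sides.
    n_bij = 0
    for fof in fof_groups:
        for mock in mock_groups:
            shared = len(fof & mock)
            if 2 * shared > len(fof) and 2 * shared > len(mock):
                n_bij += 1      # each group has at most one bijective match
                break
    e_tot = (n_bij / len(fof_groups)) * (n_bij / len(mock_groups))

    # Q_tot: membership-weighted purity of the best matches, both ways.
    def q(groups, others):
        num = den = 0
        for g in groups:
            # best match = group maximizing the purity product
            best = max(others, key=lambda o: len(g & o) ** 2 / (len(g) * len(o)))
            num += len(g & best) ** 2 / len(g)   # N_shared * (N_shared / N_g)
            den += len(g)
        return num / den

    q_tot = q(fof_groups, mock_groups) * q(mock_groups, fof_groups)
    return e_tot * q_tot
```

For identical catalogs this returns 1, and any fragmentation, overmerging, or missed group drives it below 1.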
In order to find the right combination of linking lengths, we use the publicly available GAMA mock catalogs\footnote{ \url{http://www.gama-survey.org} } described below, on which we run our FoF\xspace group-finder algorithm for a grid of $b_\perp$ and $R$ values, and compute $S_{\mathrm{tot}}$. \section{Mock catalogs} \label{appendix:mocks} The GAMA mock galaxy catalogs (see \cite{Robotham2011} for a detailed description) were constructed from the Millennium dark matter N-body simulation \citep{Springel2005} run with the following cosmological parameters: $\Omega_m = 0.25$, $\Omega_{\Lambda} = 0.75$, $\Omega_b = 0.045$, $h = 0.73$, $n = 1$ and $\sigma_8 = 0.9$. The DM halos were then populated with galaxies using the \cite{Bower2006} variant of the GALFORM semi-analytic model of galaxy formation. There are nine mock catalogs, each of which covers a complete analog of the full GAMA I survey, i.e. three regions of 12 $\times$ 4 deg$^2$ out to redshift 0.5, while preserving the true angular separation between them. The public mocks are limited to the apparent magnitude $r < 19.4$ but, as shown by \cite{Robotham2011}, there is no difference in the resulting optimised parameters down to $r=19.8$. Since the nine mock catalogs are constructed from a single simulation, they are not statistically independent. In spite of this limitation, the construction method guarantees that there is no spatial overlap between the nine light-cone mocks and that no single galaxy appears at the exact same stage of evolution in more than one mock. \section{Results of the optimization} \label{appendix:results} As we already mentioned, the optimization of the group-finding parameters should be carried out in such a way that the resulting group catalog best fulfils the desired scientific goals. For our optimization we use only groups with five or more members, which are allowed to match groups with two or more members.
This choice is motivated by the desire to estimate the global group properties of the resulting group catalog. The results of the optimization computation are shown in Fig.~\ref{Fig:stot}. The global maximum of $S_{\mathrm{tot}}$ is obtained for the combination of parameters ($b_\perp$,$R$) = (0.06,27.5). We note, however, that for $16 \lesssim R \le 30$, $S_{\mathrm{tot}}$ does not evolve significantly. This remains true for $b_\perp = 0.07$ as well. In addition, the values of $S_{\mathrm{tot}}$ for these two linking lengths, in the $R$ range 16--30, are very similar within the error bars. This means that increasing the value of $R$ in a given range leads to very similar global statistical properties of the reconstructed catalogs. We will thus include an additional criterion by considering the contributions from the mock and FoF\xspace components to the overall cost function. These contributions should be as symmetric as possible, indicating that the FoF\xspace algorithm recovers, on average, groups similar, in terms of number and quality of reconstruction, to those that actually exist in the mock catalog. Table~\ref{tab:opt} shows the FoF\xspace parameters corresponding to the maximum global cost function $S_{\mathrm{tot}}$, and the most symmetric contribution to $S_{\mathrm{tot}}$ from the mock and FoF\xspace components ($S_{\mathrm{mock}} = E_{\mathrm{mock}} Q_{\mathrm{mock}}$ and $S_{\mathrm{\fof}} = E_{\mathrm{FoF}} Q_{\mathrm{FoF}}$, respectively). For the global maximum of the cost function, corresponding to ($b_\perp$, $R$) = (0.06, 27.5), the contribution from mock groups to $S_{\mathrm{tot}}$ is lower than that from FoF\xspace groups, indicating that the group-finding algorithm globally recovers fewer groups than actually exist in the mock catalog.
A similar asymmetry is found for the combination ($b_\perp$, $R$) = (0.07, 19.0), corresponding to the maximum of $S_{\mathrm{tot}}$ for $b_\perp$ = 0.07, and ($b_\perp$, $R$) = (0.07, 15.0), which is the most symmetric solution for $b_\perp$ = 0.07. The most balanced contribution is obtained for ($b_\perp$, $R$) = (0.06, 19.0), which is our preferred choice for the FoF\xspace parameters. \begin{table} \centering \caption{The FoF\xspace parameters.\label{tab:opt}} \begin{tabular}{cccccc} \hline \hline &$b_\perp$ & $R$ & $S_{\mathrm{tot}}$ & ${S_{\mathrm{mock}}}^e$ & ${S_{\mathrm{\fof}}}^f$\\ \hline \hline A$^a$ & 0.06 & 27.5 & 0.405 & 0.602 & 0.673 \\ B$^b$ & 0.07 & 19.0 & 0.393 & 0.556 & 0.707 \\ C$^c$ & 0.06 & 19.0 & 0.381 & 0.618 & 0.617 \\ D$^d$ & 0.07 & 15.0 & 0.381 & 0.568 & 0.671 \\ \hline \end{tabular} \begin{description} \itemsep0em \item $^{a}${\small global maximum of $S_{\mathrm{tot}}$}\\ \item $^{b}${\small maximum $S_{\mathrm{tot}}$ for $b_\perp$ = 0.07}\\ \item $^{c}${\small the most symmetric contribution to $S_{\mathrm{tot}}$ from the mock and FoF\xspace components for $b_\perp$ = 0.06} \\ \item $^{d}${\small the most symmetric contribution to $S_{\mathrm{tot}}$ from the mock and FoF\xspace components for $b_\perp$ = 0.07}\\ \item $^{e}${\small contribution to $S_{\mathrm{tot}}$ from the mock component}\\ \item $^{f}${\small contribution to $S_{\mathrm{tot}}$ from the FoF\xspace component} \end{description} \end{table} \section{Quality of group reconstruction} \label{appendix:quality} The statistical measures used to define our cost function, introduced in the previous section, already allow us to assess the performance of our group-finding algorithm. We will however introduce additional statistical quantities, different from those used in the optimization, in order to test the quality of the reconstructed catalog independently.
Following~\cite{Knobel2009}\footnote{The statistical quantities defined in this section follow, with some modifications, those introduced in~\cite{Knobel2009}. Some of these measures can be found in their original form in~\cite{Gerke2005}.}, we define two classes of statistical quantities: one on a group-to-group basis, including the completeness, purity, overmerging and fragmentation, and a second on a galaxy-to-group basis, with the galaxy success rate and the interloper fraction. We will define each of these quantities for the best and bijective matches as introduced in Section~\ref{appendix:optimization}. Let $N^{\mathrm{best}}_{\mathrm{g}_\mathrm{mock}}(\geq N)$ and $N^{\mathrm{bij}}_{\mathrm{g}_\mathrm{mock}}(\geq N)$ be the number of mock groups with $N$ or more members for the best and bijective matches, respectively\footnote{In all the following definitions in this section, unless stated otherwise, the richness $N$ refers to the richness of the groups that are being matched (mock groups in this case). In the matching procedure, the minimum richness of groups in the reference sample (FoF\xspace groups in this case) is two.}. We define the completeness $c^{\mathrm{best}}(N)$ and $c^{\mathrm{bij}}(N)$ as the fraction of real groups with $N$ or more members that are successfully recovered in the reconstructed group catalog corresponding to the best and bijective match, respectively: \begin{equation} \label{Eq:cbest} c^{\mathrm{best}}(N) = \frac{N^{\mathrm{best}}_{\mathrm{g}_\mathrm{mock}}(\geq N)}{N_{\mathrm{g}_\mathrm{mock}}(\geq N)}, \end{equation} \begin{equation} \label{Eq:cbij} c^{\mathrm{bij}}(N) = \frac{N^{\mathrm{bij}}_{\mathrm{g}_\mathrm{mock}}(\geq N)}{N_{\mathrm{g}_\mathrm{mock}}(\geq N)}.
\end{equation} Similarly, if $N^{\mathrm{best}}_{\mathrm{g}_\mathrm{FoF}}(\geq N)$ and $N^{\mathrm{bij}}_{\mathrm{g}_\mathrm{FoF}}(\geq N)$ denote the number of FoF\xspace groups with $N$ or more members in the best and bijective matches respectively, the purity $p^{\mathrm{best}}(N)$ and $p^{\mathrm{bij}}(N)$ are defined as the fractions of reconstructed groups with $N$ or more members belonging to real groups: \begin{equation} \label{Eq:pbest} p^{\mathrm{best}}(N) = \frac{N^{\mathrm{best}}_{\mathrm{g}_\mathrm{FoF}}(\geq N)}{N_{\mathrm{g}_\mathrm{FoF}}(\geq N)}, \end{equation} \begin{equation} \label{Eq:pbij} p^{\mathrm{bij}}(N) = \frac{N^{\mathrm{bij}}_{\mathrm{g}_\mathrm{FoF}}(\geq N)}{N_{\mathrm{g}_\mathrm{FoF}}(\geq N)}. \end{equation} By definition, the quantities $c^{\mathrm{best}}$, $c^{\mathrm{bij}}$, $p^{\mathrm{best}}$ and $p^{\mathrm{bij}}$ take values between 0 and 1. The completeness is 1 if all real groups are reconstructed, while it is 0 if no real group is detected. Similarly, the purity is 1 if all reconstructed groups are matched with real groups, and 0 if all reconstructed groups are spurious (none is associated with any real group). For each mock group, we define fragmentation as the inverse of the number of FoF\xspace groups it is matched with. Similarly, for each FoF\xspace group, overmerging is given as the inverse of the number of mock groups it is matched with. These quantities are well defined only for groups that have been actually matched (bijective or best match). Fragmentation (overmerging) is equal to one if each matched mock (FoF\xspace) group is associated with only one FoF\xspace (mock) group, and the smaller the values are, the more FoF\xspace (mock) groups are associated with a given mock (FoF\xspace) group. The galaxy success rate $S_{\mathrm{gal}}(N)$ is defined as the fraction of galaxies in real groups with $N$ or more members that are found to belong to any reconstructed group with 2 or more members.
For best and bijective matches, these definitions become: \begin{equation} \label{Eq:Sgalbest} S^{\mathrm{best}}_{\mathrm{gal}} = \frac{ N^{\mathrm{best}}_{\mathrm{m}_{\mathrm{mock} \cap \mathrm{FoF}}}(\geq N)}{ N_{\mathrm{m}_{\mathrm{mock}}}(\geq N) } \end{equation} and \begin{equation} \label{Eq:Sgalbij} S^{\mathrm{bij}}_{\mathrm{gal}} = \frac{ N^{\mathrm{bij}}_{\mathrm{m}_{\mathrm{mock} \cap \mathrm{FoF}}}(\geq N)}{ N_{\mathrm{m}_{\mathrm{mock}}}(\geq N) }, \end{equation} respectively. We define the interloper fraction $f_{\mathrm{I}}(N)$ as the fraction of galaxies in reconstructed groups having $N$ or more members that do not belong to any matched real group of richness $\geq 2$. We again distinguish between best and bijective matches: \begin{equation} \label{Eq:fibest} f^{\mathrm{best}}_{\mathrm{I}} = \frac{ N^{\mathrm{best}}_{\mathrm{m}_{\mathrm{FoF}}}(\geq N) - N^{\mathrm{best}}_{\mathrm{m}_{\mathrm{mock} \cap \mathrm{FoF}}}(\geq N) }{ N_{\mathrm{m}_{\mathrm{FoF}}}(\geq N) }, \end{equation} \begin{equation} \label{Eq:fibij} f^{\mathrm{bij}}_{\mathrm{I}} = \frac{ N^{\mathrm{bij}}_{\mathrm{m}_{\mathrm{FoF}}}(\geq N) - N^{\mathrm{bij}}_{\mathrm{m}_{\mathrm{mock} \cap \mathrm{FoF}}}(\geq N) }{ N_{\mathrm{m}_{\mathrm{FoF}}}(\geq N) }. \end{equation} In the above definitions, an interloper is any galaxy in a reconstructed group that does not belong to its matched real group. We also consider a stricter definition in which interlopers are field galaxies that end up in reconstructed groups. The corresponding interloper fraction $f_{\mathrm{I,field}}(N)$ is the fraction of galaxies belonging to reconstructed groups with $N$ or more galaxies that are field galaxies. $S_{\mathrm{gal}}$, $f_{\mathrm{I}}$ and $f_{\mathrm{I,field}}$ take values in the range 0--1. By definition, $f_{\mathrm{I,field}}$ is lower than or equal to $f_{\mathrm{I}}$ (equal if all interlopers are field galaxies).
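The richness-dependent statistics above reduce to simple ratios once the matching has been done. The following Python helpers are an illustrative sketch under our own naming (the inputs are assumed to come from a prior matching step; they are not part of the actual pipeline): the same `completeness` function, fed with FoF\xspace-group richnesses and match flags instead, yields the purity $p(N)$.

```python
def completeness(mock_richness, mock_matched, n_min):
    """c(N): fraction of mock groups with >= n_min members that have a
    (best or bijective) match in the FoF catalog.  Applied to FoF groups
    and their match flags, the same ratio gives the purity p(N)."""
    selected = [(r, m) for r, m in zip(mock_richness, mock_matched) if r >= n_min]
    return sum(m for _, m in selected) / len(selected)

def interloper_fraction(n_members, n_recovered):
    """f_I(N): fraction of FoF-group members not belonging to the matched
    mock groups, with n_members and n_recovered summed over all FoF
    groups of richness >= N."""
    return (n_members - n_recovered) / n_members
```

For instance, with mock richnesses `[2, 5, 10]` of which only the two richest are matched, the completeness is 1 for $N \geq 5$ but 2/3 for $N \geq 2$.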
\begin{figure} \begin{center} \includegraphics[width=\columnwidth]{c_p_vs_N_l_0p06_R_19.pdf} \caption{\label{Fig:c_p_vs_N} Completeness (upper panel) and purity (lower panel) as a function of richness $N$ for the best (blue) and bijective (red) match. The points represent the mean values among the 9 mock catalogs and the error bars show their scatter. } \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{frag_over_l_0p06_R_19.pdf} \caption{\label{Fig:frag_over} Fragmentation (upper panel) and overmerging (lower panel) as a function of richness $N$ for the best (blue) and bijective (red) match.} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{gal_succ_interloper_l_0p06_R_19.pdf} \caption{\label{Fig:gal_success} Galaxy success rate (upper panel) and interloper fraction (lower panel) as a function of richness $N$ for the best (blue) and bijective (red) match. In the lower panel, the solid line corresponds to the interloper fraction as defined by Equations~\ref{Eq:fibest} and \ref{Eq:fibij}, and the dashed line refers to the definition of interlopers as field galaxies ($f_{\mathrm{I,field}}$). For the sake of clarity in the lower panel, error bars are shown only for $f_{\mathrm{I}}$.} \end{center} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{all_vs_z_N_l_0p06_R_19.pdf} \caption{\label{Fig:all_vs_z_N} Completeness (blue), purity (red), galaxy success rate (green) and interloper fraction (black) as a function of redshift for different bins of $N$. The solid and dotted lines correspond to the best and bijective matches, respectively.
For the sake of clarity, error bars and bijective matches are shown only for completeness and purity.} \end{figure} All of the above defined quantities can be combined into three statistical measures of the quality of our group reconstruction as follows: \begin{equation} \label{eq:g1} g_1 = \sqrt{(1-c^{\mathrm{best}})^2+(1-p^{\mathrm{best}})^2}, \end{equation} \begin{equation} \label{eq:g2} g_2 = \mathrm{overmerging}^{\mathrm{best}} \times \mathrm{fragmentation}^{\mathrm{best}} \end{equation} and \begin{equation} \label{eq:g3} g_3 = \sqrt{(1-S^{\mathrm{best}}_{\mathrm{gal}})^2+(f^{\mathrm{best}}_{\mathrm{I}})^2}. \end{equation} For a perfectly reconstructed group catalog $c^{\mathrm{best}}$, $p^{\mathrm{best}}$, overmerging, fragmentation and $S_{\mathrm{gal}}$ are equal to one, and $f_{\mathrm{I}} = 0$. As the reconstruction of such a perfect catalog is not possible, the best we can do is to try to find a balance between different quantities that often tend to be mutually exclusive. This is the case for completeness and purity, whose balance is measured by $g_1$. Increasing the completeness of a group catalog often results in decreasing its purity (fewer undetected groups, but more spurious ones). Similarly, $g_3$ is a measure of the balance between spurious groups and interlopers. $g_1$ and $g_3$ should thus be minimised. A good group catalog should also avoid overmerging and fragmentation, so $g_2$ should be maximised. We recall that the purpose of the above defined measures is not the optimization of the group-finder parameters, but rather an attempt to break the degeneracy between several combinations of potentially optimal parameters. A summary of the statistics computed for $N\geq 5$ for such combinations is shown in Table~\ref{tab:stats}. We notice that there is no pair of linking lengths for which all three goodness parameters are optimised at the same time.
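The three combined measures are straightforward to evaluate once the individual statistics are in hand; a minimal Python sketch (function and argument names are ours) is:

```python
import math

def goodness(c_best, p_best, overmerging, fragmentation, s_gal, f_i):
    """Combined quality measures of Eqs. (g1)-(g3):
    g1 and g3 should be minimised, g2 maximised."""
    g1 = math.hypot(1 - c_best, 1 - p_best)           # completeness vs purity
    g2 = overmerging * fragmentation                  # both should be close to 1
    g3 = math.hypot(1 - s_gal, f_i)                   # success rate vs interlopers
    return g1, g2, g3
```

A perfectly reconstructed catalog gives $(g_1, g_2, g_3) = (0, 1, 0)$, and any departure from perfection moves each measure away from its optimum.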
Based on the argument of symmetry between real and reconstructed group catalogs, we selected in Section~\ref{appendix:optimization} the combination of linking lengths ($b_\perp$, $R$) = (0.06, 19.0) as being optimal. This set of parameters leads to the best balance between completeness and purity (lowest $g_1$), and the lowest interloper fraction among all four considered combinations. The FoF\xspace parameters ($b_\perp$, $R$) = (0.07, 19.0) result in a catalog with the least overmerging and fragmentation (highest $g_2$), and the highest galaxy success rate (highest $S_{\mathrm{gal}}$), again among the four selected combinations. However, they also produce the highest asymmetry between the mock and FoF\xspace counterparts. Thus it seems reasonable to keep ($b_\perp$, $R$) = (0.06, 19.0) as our best choice of optimal FoF\xspace parameters for the construction of our group catalog. \begin{table} \centering \caption{Summary of statistics for $N \geq 5$ (same combinations of FoF\xspace parameters as in Table~\ref{tab:opt}). \label{tab:stats} } \begin{tabular}{cccccccc} \hline \hline &$b_\perp$ & $R$ & $g_1$ & $g_2$ & $g_3$ & $S_{\mathrm{gal}}$ & $f_{\mathrm{I}}$ \\ \hline \hline A & 0.06 & 27.5 & 0.016 & 0.724 & 0.308 & 0.772 & 0.208 \\ B & 0.07 & 19.0 & 0.024 & 0.739 & 0.306 & 0.808 & 0.238 \\ C & 0.06 & 19.0 & 0.013 & 0.723 & 0.329 & 0.729 & 0.188 \\ D & 0.07 & 15.0 & 0.021 & 0.733 & 0.318 & 0.776 & 0.226 \\ \hline \end{tabular} \end{table} The following figures illustrate the quality level of this particular reconstructed catalog. Figure~\ref{Fig:c_p_vs_N} shows the completeness and purity for the best and bijective match as a function of richness $N$. The completeness for the best match $c^{\mathrm{best}}$ is about 0.99, showing no dependence on the richness $N$, while the completeness for the bijective match $c^{\mathrm{bij}} \simeq 0.84$ only shows a weak dependence on $N$.
The purity parameters, $p^{\mathrm{best}} \simeq 0.94$ and $p^{\mathrm{bij}} \simeq 0.76$, are also almost independent of richness for $N \geq 3$. For $N=2$ both purities drop significantly (to $\simeq 0.7$ for $p^{\mathrm{best}}$ and $\simeq 0.56$ for $p^{\mathrm{bij}}$). Fragmentation and overmerging are shown in Fig.~\ref{Fig:frag_over}. Bijectively matched groups are neither overmerged nor fragmented, regardless of their richness. For groups matched according to our best criterion, fragmentation is more severe than overmerging. The overall average fragmentation is $\simeq 0.74$, while this value drops to $\simeq 0.65$ for groups with $N \geq 5$. Overmerging decreases with $N$, however only mildly, with an average value of $\simeq 0.91$. The galaxy success rate and interloper fraction, two statistical quantities on a galaxy-to-group basis, are shown in Fig.~\ref{Fig:gal_success}. For both best and bijective matches these measures show similar dependences on $N$. The galaxy success rates $S^{\mathrm{best}}_{\mathrm{gal}} \simeq 0.73$ and $S^{\mathrm{bij}}_{\mathrm{gal}} \simeq 0.65$ both decline until $N \simeq 5$, after which the dependence on $N$ weakens. The interloper fraction is overall very low ($f^{\mathrm{best}}_{\mathrm{I}} \simeq 0.17$ and $f^{\mathrm{bij}}_{\mathrm{I}} \simeq 0.09$) with almost no dependence on $N$, except for $N = 2$, where it reaches its minimum value. Finally, Fig.~\ref{Fig:all_vs_z_N} shows the statistics as a function of redshift for different bins of richness $N$. All quantities are consistent with no or very weak evolution with redshift. Only in the highest redshift bin do we note an increase in purity for groups with $N \leq 4$, and a decrease in completeness and interloper fraction for bijective matches among groups of richness $\geq 10$.
\section{Introduction} \label{sec:introduction} Most galaxies are either disky and actively forming stars, or spheroidal, with little or no ongoing star formation \citep[e.g.][]{Strateva2001,Baldry2004}. This bimodality has been shown to exist up to redshift $z \sim 1$ \citep[e.g.][]{Bell2004,Tanaka2005,Willmer2006}, and possibly to $z \sim 2$ and beyond \citep[e.g.][]{Kriek2008,Brammer2009}. In between these so-called blue and red populations lies the little populated ``green valley'' \citep{Wyder2007}, in which galaxies are thought to transition from star-forming to ``red and dead'' \citep{Krause2013}. The star-formation rate (SFR), which correlates with the density of gas in galaxy disks \citep{Schmidt1959,Kennicutt1998}, drops when the gas goes missing, a phenomenon known as quenching. Stellar mass and environment on various scales both seem to play a role in quenching galaxies \citep[e.g.][]{Peng2010}, but the physical processes most responsible for it remain elusive. The specific star-formation rate (sSFR\xspace, the SFR per unit stellar mass) of blue galaxies is shown to decrease with increasing mass \citep[e.g.][]{Elbaz2007,Noeske2007,Salim2007}, the most massive galaxies being almost completely quenched. Several ``mass quenching'' mechanisms limited to massive halos have been proposed. Supplemented by virial shock heating of infalling cold gas \citep[][]{Birnboim2003, Keres2005, Dekel2006}, and gravitational heating due to clumpy accretion \citep[][]{Birnboim2007, Dekel2008, Dekel2009}, feedback from active galactic nuclei (AGN) has been found to be the most efficient mechanism to self-regulate the mass content of simulated massive galaxies \citep[e.g.][]{Croton2006, Hopkins2006, Sijacki2007} and the key ingredient to produce realistic mock massive galaxies in state-of-the-art hydrodynamical cosmological simulations \citep[][]{Dubois2014, Dubois2016, Vogelsberger2014, Schaye2015}.
The stabilisation of gas disks \citep[``morphological quenching";][]{Martig2009} can explain quenching in less massive halos. The properties of galaxies with respect to their local environment -- on the scale of dark matter (DM) halos, i.e. groups and clusters -- have long been investigated, e.g. their color \citep[e.g.][]{Butcher1984,Balogh2004a}, morphology \citep[e.g.][]{Dressler1980,Dressler1997} or SFR \citep[e.g.][]{Gomez2003,Balogh2004b,Wijesinghe2012,Brough2013,Robotham2013,Davies2015}: galaxies are known to be more often red and elliptical in clusters \citep[e.g.][]{Oemler1974,DavisGeller1976,Balogh1997,Balogh1999}. Various local environmental quenching processes have been proposed. These include cluster-specific processes, such as galaxy harassment \citep{Moore1996}, ram pressure stripping of gas \citep{GunnGott1972} or interactions with the tidal field of clusters \citep{Byrd1990}, and group-specific processes, such as galaxy mergers \citep[e.g.][]{Toomre1972} and strangulation \citep{Larson1980,Balogh2000}. \citet{Prescott2011} found that the effect of environment on satellite galaxies was primarily a function of their host's stellar mass rather than their own stellar mass, and that strangulation was likely to be the main gas removal process that quenches them. \citet{Grootes2017} argued on the contrary that it was the ongoing substantial accretion of gas in groups that led to the buildup of spheroidal components in satellite disk galaxies, and eventually to their ``death by gluttony" rather than starvation. It is also debated whether it is the central-satellite dichotomy \citep[e.g.][]{Peng2012,Kovac2014} or simply being a member of a group that is crucial for the quenching of star formation, i.e. are we mostly seeing satellite quenching \citep[e.g.][]{Bosch2008,Hartley2015,Carollo2016} or group quenching \citep{Knobel2015}? 
In addition to local (halo scale) effects, the formation epoch and subsequent accretion history of a halo depend on its locus in the large-scale environment, a phenomenon referred to as ``assembly bias" \citep[e.g.][]{ShethTormen2004,Gao2005,Wechsler2006,Musso2017}. An observation, first made and coined ``galactic conformity" by \citet{Weinmann2006}, who analysed galaxy groups in the Sloan Digital Sky Survey \citep[SDSS;][]{sdss2000}, has been suggested as evidence of this past large-scale environmental effect: the fraction of quenched satellites around a quenched central is found to be significantly higher than around a star-forming central\footnote{\citet{Holmberg1958} first observed color conformity in galaxy pairs and concluded that it could not be accounted for by the known correlation with morphological type. For 20 years the Holmberg effect received attention from neither observers nor theoreticians, but it was later confirmed by several groups as evidence of coupled evolution (http://ned.ipac.caltech.edu/level5/Sept02/Keel/Keel5\_4.html).}, {\it at fixed halo mass}. An interpretation might be that galaxy properties depend not only on the mass of their DM halos but also on their assembly history \citep[e.g.][]{Hearin2015}. Galactic conformity has since been confirmed by other analyses of the SDSS and observed at redshift $z\geq 2$ \citep{Hartley2015,Kawin2016}. It was also claimed to persist out to scales far larger than the virial radius of halos \citep{Kauffmann2013,Hearin2015}, but this puzzling result, named ``2-halo conformity", which several studies have accounted for by advocating the mutual evolution of halos in the same large-scale tidal field \citep{Hearin2015,Hearin2016,ZuMandelbaum2017,RafieferantsoaDave2017}, was recently questioned and found to be attributable to methodological biases \citep{Sin2017}. 
Quenching in groups has been studied \citep[e.g.][]{Bosch2008,Prescott2011,Peng2012,Wetzel2013,Knobel2015,Kawin2016} by means of red fractions (fractions of quiescent galaxies) and quenching efficiencies (excess quenching with respect to some control sample) as a function of various environmental parameters, such as halo mass, halo-centric distance, local galaxy density, or mass of the central galaxy. While there is general agreement on the dependence of satellite quenching on these parameters, a physical interpretation is not straightforward, mainly because the parameters are correlated and difficult to disentangle, even through multidimensional analysis \citep{Knobel2015}. The aim of this paper is to revisit the quenching impact of group environment, and to probe ``1-halo'' galactic conformity in particular, using the spectroscopic survey Galaxy and Mass Assembly \citep[GAMA; ][]{Driver2009,Driver2011,Liske2015}. Thanks to its depth and spectroscopic completeness, GAMA allows us to expand the SDSS investigations to $z\sim0.2$ in a significant volume of the Universe. Using a group catalog that we constructed with an anisotropic Friends-of-Friends algorithm accounting for the effects of redshift-space distortions, we study the red fraction, quenching efficiency and star-formation activity of galaxies in groups as a function of central galaxy color, group stellar mass, large-scale density, and finally halo mass. This group catalog, corrected for the finger-of-god effects, is used in a companion paper \citep{Kraljic2018} to improve the reconstruction of the cosmic web and to explore the impact of its anisotropic features (nodes, filaments, walls, voids) on galaxy properties. The outline of the paper is as follows: in Section \ref{sec:data}, we describe the GAMA data, the derivation of the physical properties of the galaxies and our criterion for classifying them into star-forming and quiescent. We present the group catalog in Section \ref{sec:groups}. 
The stellar mass and environmental dependences of quenching and conformity are analysed in Sections \ref{sec:results} and \ref{sec:environment}, respectively. Star formation in groups is explored in Section \ref{sec:sf}. We discuss the uncertainties of our analysis in Section \ref{sec:caveats} and conclude in Section \ref{sec:conclusions}. The paper also contains appendices dedicated to the detailed description of our group finder (adopted algorithm, mock catalogs used for its calibration, optimization strategy and tests of group reconstruction quality). Throughout our analysis, we adopt a flat $\Lambda$CDM cosmology with H$_0 =$ 67.3 km s$^{-1}$ Mpc$^{-1}$, $\Omega_{m} = 0.3$ and $\Omega_{\Lambda} = 0.7$ \defcitealias{Planck_cosmoparam2016}{Planck Collaboration 2016} \citepalias{Planck_cosmoparam2016}. All magnitudes are quoted in the AB system and the physical parameters are derived assuming a Chabrier IMF \citep{Chabrier2003}. \section{Data} \label{sec:data} \begin{figure*} \includegraphics[width=6.05cm]{NRK2.png} \hspace{-0.5cm} \includegraphics[width=11.9cm]{CMD_U2.png} \caption{\label{cmd} Rest frame $(NUV-r)$ versus $(r-k)$ colors (left panel), and dust corrected $(u - r)$ color versus $r$ magnitude (middle panel) and versus stellar mass (right panel) at $0.02 < z < 0.24$. Stellar masses are in units of $\log (M_{\star} / M_\odot)$. The contours are unweighted number density contours. The color code reflects the average specific star-formation rate (sSFR\xspace) per pixel. The straight color cut at $(u - r)_{corr} = 1.8$ (dotted line) is statistically consistent with a cut in sSFR\xspace at $\sim 10^{-10.5}$ yr$^{-1}$ except for low mass red galaxies. The solid and dashed lines show the median relations for the quiescent and star-forming populations defined in terms of these $(u-r)_{corr}$\ and sSFR\xspace cuts. 
In the rest of this paper, the blue and red populations are defined in terms of dust corrected $(u-r)$ with a dividing line at $(u-r)_{corr}=1.8$.} \end{figure*} The GAMA\footnote{http://www.gama-survey.org/} survey \citep{Driver2009,Driver2011,Liske2015} is a joint European-Australian spectroscopic survey combining multi-wavelength photometric data from several ground and space-based programs. The photometric coverage includes data from the Galaxy Evolution Explorer (GALEX) in the far and near-ultraviolet (FUV and NUV), the Sloan Digital Sky Survey (SDSS) at optical wavelengths ($u$, $g$, $r$, $i$ and $z$ passbands), the VISTA Kilo-degree Infrared Galaxy (VIKING) Survey in the ZYJHK bands, and the Wide-Field Infrared Survey Explorer (WISE) in four mid-infrared bands from 7 to 22 $\mu$m. Far-infrared (FIR) data from Herschel ATLAS (H-ATLAS) and radio data from the Giant Metrewave Radio Telescope have also been acquired. GAMA was intended to link wide and shallow surveys such as the SDSS Main Galaxy Sample \citep{Strauss2002} to narrow and deep surveys such as DEEP2 \citep{DEEP2}. The photometric data used in this work are taken from the LAMBDAR\footnote{Lambda Adaptive Multi-Band Deblending Algorithm in R} panchromatic photometric catalog LambdarCatv01 \citep{Wright2016}, consisting of three equatorial fields (G09, G12, G15) covering a total of 180 deg$^2$ (3 times $12\times5$ deg$^2$). The spectroscopy was carried out using the 2dF/AAOmega multi-object spectrograph on the Anglo-Australian Telescope (AAT), building on previous spectroscopic surveys such as SDSS, the 2dF Galaxy Redshift Survey (2dFGRS) and the Millennium Galaxy Catalogue (MGC). It is nearly complete (98\%) to $r=19.8$ \citep{Liske2015}, each region of the sky being observed multiple times (the target density being much higher than the available fiber density), with at least one member of any given close-packed group receiving a fiber whenever that region was visited \citep{Robotham2010}. 
This makes GAMA a more suitable dataset to study galaxies in groups and close pairs than other spectroscopic surveys (e.g. SDSS) that miss a fraction of close targets, especially in high density regions (see \citet{Liske2015} for the GAMA completeness on small scales). \subsection{Physical parameters} The physical quantities used in this work were derived from the spectral energy distribution (SED) fitting code LEPHARE\footnote{http://cesam.lam.fr/lephare/lephare.html} using the FUV to NIR photometry (11 bands). We used a set of model spectra from \citet{BruzualCharlot2003}, assuming a range of exponentially declining star-formation histories and a Chabrier IMF \citep{Chabrier2003}, as well as three dust obscuration laws: the commonly used starburst law of \citet{Calzetti2000}, an exponential law with exponent 0.9 and the Small Magellanic Cloud law of \citet{Prevot1984}. The physical quantities of interest in this paper are the stellar mass (defined as the median of the probability distributions), the dust-corrected absolute magnitudes, the specific star-formation rate (sSFR\xspace) and the maximum volume $V_{max}$ in which a galaxy would remain observable above the survey flux limit given its luminosity and spectral type. \subsection{Star-forming (blue) vs quiescent (red) classification} While the bimodality in galaxy properties has been observed as far back as \citet{Hubble1926}, with late type, star-forming, spiral or irregular, blue galaxies on the one hand and early type, elliptical, ``red and dead" galaxies on the other, defining a transition between the two populations is not straightforward as the distributions overlap \citep{Taylor2015}. Color-magnitude or color-mass diagrams are most often used to draw the line, or a smooth transition zone, e.g. \citet{Taylor2015} introduced a statistical approach allowing for the natural overlap of the two populations in the ($g - i$) versus stellar mass diagram using GAMA at $z < 0.12$. 
However, shorter wavelengths prove the most discriminating \citep[e.g. UV-optical colors;][]{Wyder2007}, even though dust causes confusion since dusty star-forming galaxies look red and may be mistaken for quiescent galaxies. This mixing can be efficiently sorted out by using color-color diagrams, such as ($NUV-r$) vs $(r-k)$, which was shown to be a powerful diagnostic to separate dusty star-forming galaxies from intrinsically red, quiescent ones \citep{Arnouts2013}. Figure \ref{cmd} shows the distribution of galaxies in the ($NUV-r$) versus $(r-k)$ diagram for the $\sim 85\%$ that are UV detected (left panel), and the dust corrected ($u - r$) versus $r$ and versus stellar mass diagrams (middle and right panels respectively), where $NUV$, $u$, $r$ and $k$ refer to the rest-frame magnitudes. The color code reflects the average sSFR\xspace per pixel. The dust corrected color and sSFR\xspace are correlated through the modeling of the attenuation, but the two populations are separated equally well around the same sSFR\xspace values in the uncorrected ($NUV-r$) vs $(r-k)$ diagram. The consistency between the dust uncorrected bi-color diagram and the dust corrected color gives us confidence in the dust recipes used in our SED fitting. We find that a straight color cut at $(u - r)_{corr} = 1.8$ is consistent with a cut in sSFR\xspace at $\sim 10^{-10.5}$ yr$^{-1}$. The solid and dashed lines show the median relations for the quiescent and star-forming populations defined in terms of these $(u-r)_{corr}$\ and sSFR\xspace boundaries, respectively. They form the well-known red and blue sequences. In the range of stellar masses that we use in this work ($\log(M_{\star} /M_\odot) \gtrsim 10.25$, see next section), the above separation criteria yield indistinguishable results. 
Thus we simply define the blue and red populations in terms of dust corrected $(u-r)$ color with a dividing line at $(u-r)_{corr}=1.8$, and we will use the terms red (blue) and quiescent (star-forming) galaxies interchangeably. The term $(u-r)$, or U-R in figure labels, will always refer to the dust-corrected color. \subsection{Stellar mass completeness} \label{subsec:mass_completeness} In order to compute redshift- and mass-unbiased red fractions, we must restrict our sample to group members more massive than the completeness limit at the maximum redshift considered. Figure \ref{minmassz} shows the mass completeness limits as a function of redshift for the blue and red galaxies as blue and red dashed lines respectively. These limits are defined as the mass above which 90\% of the galaxies reside at a given redshift $z \pm 0.004$. The redshift/mass compromise used in the rest of this paper is $z<0.21$ and $\log (M_\star/M_\odot)>10.25$ (the red mass limit in this redshift range). \begin{figure} \begin{center} \includegraphics[width=9.cm]{MINMASSZ.png} \caption{The stellar mass in units of $\log (M_{\star} / M_\odot)$ versus redshift distribution, with number density contours for the red population. The blue and red dashed lines represent the mass completeness limits for the blue and red galaxies respectively as a function of $z$. The vertical lines and the red horizontal line are the redshift and mass limits used in our analysis: $0.02<z<0.21$ and $\log (M_\star/M_\odot)>10.25$.} \label{minmassz} \end{center} \end{figure} \section{Group catalog} \label{sec:groups} \subsection{Group catalog construction} \label{subsec:fofconstruction} Although a GAMA group catalog already exists \citep{Robotham2011}, we developed our own tool for the purpose of an ongoing cosmic web study in this and other datasets \citep[e.g.][]{Malavasi2017,Kraljic2018}. 
A detailed description of the Friends-of-Friends (FoF\xspace) algorithm we adopted to detect the groups is presented in the appendices. A schematic illustration of the method is depicted in Fig.~\ref{Fig:scheme}. In order to deal with the effects of redshift-space distortions, the distance between two galaxies $i$ and $j$ is measured in two coordinates: the parallel $d_{\parallel,ij}$ (Eq.~\ref{eq:d_parallel}) and perpendicular $d_{\perp,ij}$ (Eq.~\ref{eq:d_perp}) projected comoving separations to the mean line-of-sight $\vec{l}$ (Eq.~\ref{eq:l}). We next introduce two linking lengths $b_\perp$ and $b_\parallel$, the projected and line-of-sight linking lengths in units of the mean intergalactic separation $\overline{r}_{ij}$ (Eq.~\ref{eq:mean_sep}), respectively, related through the radial expansion factor $R = b_\parallel / b_\perp$ accounting for the peculiar motions of galaxies within groups. Two galaxies are assumed to be linked to each other if their projected perpendicular and parallel separations are smaller than the corresponding linking lengths (Eq.~\ref{eq:link_perp} and Eq.~\ref{eq:link_parallel}). \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{vector.pdf} \caption{\label{Fig:scheme} Schematic illustration of the definitions used in the FoF\xspace algorithm. In order to deal with the effects of redshift distortions, the separation $\vec{s}$ between two galaxies $i$ and $j$ at positions $\vec{r_i}$ and $\vec{r_j}$ is measured in two coordinates: the parallel $d_{\parallel,ij}$ (Eq.~\ref{eq:d_parallel}) and perpendicular $d_{\perp,ij}$ (Eq.~\ref{eq:d_perp}) projected comoving separations to the mean line-of-sight $\vec{l}$ (LoS; Eq.~\ref{eq:l}). } \end{center} \end{figure} The linking length $b_\perp$ and the radial expansion factor $R$ (or equivalently the perpendicular and line-of-sight linking lengths) are the two free parameters to be optimized. 
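For illustration, the pairwise linking criterion and the resulting grouping can be sketched as follows (a minimal sketch, not the actual implementation; the separations $d_{\perp,ij}$, $d_{\parallel,ij}$ and the mean separation $\overline{r}_{ij}$ are assumed precomputed, and all function names are ours):

```python
def linked(d_perp, d_par, mean_sep, b_perp=0.06, R=19.0):
    """FoF pair-linking criterion: both projected separations must be
    smaller than the corresponding linking lengths, expressed in units
    of the mean intergalactic separation. Defaults are the (b_perp, R)
    values adopted in this work."""
    b_par = R * b_perp  # line-of-sight linking length
    return d_perp < b_perp * mean_sep and d_par < b_par * mean_sep

def fof_groups(n, linked_pairs):
    """Union-find grouping of n galaxies from an iterable of linked
    (i, j) index pairs; returns the list of groups (member indices)."""
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    for i, j in linked_pairs:
        parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

Transitivity is built in: two galaxies belong to the same group as soon as a chain of linked pairs connects them, which is what the union-find pass implements.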
Their values will affect the quality of the resulting group catalog: values that are too small will tend to break up single groups into several groups, while values that are too large will merge multiple groups into single ones. These free parameters can be determined from the optimization of some group cost function, depending on the scientific purpose of the group catalog, when tested on mock catalogs. Our objective is to obtain a catalog with a high group detection rate and a low contamination by galaxies coming from different groups. We followed the definition of the group cost function of \citet{Robotham2011}, $S_{\mathrm{tot}}$, with slightly different notations and minor modifications. This cost function is meant to fulfil the requirement that the reconstructed and underlying real group catalogs are mutually accurate representations of each other. By definition, $S_{\mathrm{tot}}$ takes values between 0 and 1 and must be maximised. The method is described in Appendix \ref{appendix:optimization}. Results of the optimization are shown in Fig.~\ref{Fig:stot}. As can be seen, there is a degeneracy between the parameters $b_\perp$ and $R$: for values of $b_\perp$ equal to 0.06 and 0.07, $S_{\mathrm{tot}}$ does not evolve significantly between $R \gtrsim 16$ and $R \simeq 30$, and between $R \gtrsim 14$ and $R \simeq 25$, respectively. This means that the global statistical properties of group catalogs constructed using a combination of $b_\perp$ and $R$ in these ranges will be similar. Given this degeneracy, we include an additional criterion of symmetry between the recovered and real groups: we require that the individual contributions of the mock and FoF\xspace components to the overall cost function be similar (the more similar these contributions, the more similar the reconstructed groups of the mock and FoF\xspace catalogs). With this additional constraint, our final choice for the FoF\xspace parameters is $b_\perp$ = 0.06 and $R$ = 19.0. 
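Schematically, the parameter selection amounts to a grid search with a symmetry tie-break. The sketch below is only an illustration of this logic: the `evaluate` interface (returning $S_{\mathrm{tot}}$ together with its mock and FoF components) is hypothetical, and the 1\% plateau threshold is an illustrative choice, not the exact procedure of the appendices.

```python
def choose_parameters(evaluate, b_grid, R_grid):
    """Grid search over (b_perp, R).

    evaluate(b_perp, R) is assumed to return (s_tot, s_mock, s_fof):
    the total cost function and its mock/FoF components, measured on
    mock catalogs. Among near-degenerate maxima of s_tot, prefer the
    most symmetric solution, i.e. the smallest |s_mock - s_fof|.
    """
    results = [(b, R, *evaluate(b, R)) for b in b_grid for R in R_grid]
    s_max = max(r[2] for r in results)
    # keep solutions within 1% of the global maximum (degeneracy plateau)
    plateau = [r for r in results if r[2] >= 0.99 * s_max]
    b, R, *_ = min(plateau, key=lambda r: abs(r[3] - r[4]))
    return b, R
```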
This combination of parameters is optimal when considering statistical measures of the group reconstruction quality independent of those used in the optimization, as shown in Appendix \ref{appendix:quality}. Our linking lengths are in good agreement with the combination found to be optimal for studies of environmental effects by \citet{DuarteMamon2014}, who tested the parameters according to the scientific goal of the group catalog. We did not apply any completeness correction to the linking parameters since the GAMA survey is spectroscopically extremely complete \citep[$\sim$ 98\% within the $r$-band limit,][]{Driver2011} and the mean modifications would be less than 1\% \citep{Robotham2011}. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{optimisation_v4.pdf} \caption{\label{Fig:stot} Group cost function $S_{\mathrm{tot}}$ as a function of the radial expansion factor $R$ for different values of the linking length $b_\perp$. The global maximum (A) is obtained for ($b_\perp$,$R$) = (0.06,27.5). However, given the degeneracy between the two parameters, we include an additional criterion of symmetry between the recovered and real groups and apply it to the following local maxima: ($b_\perp$, $R$) = (0.07, 19.0) corresponding to the maximum $S_{\mathrm{tot}}$ for $b_\perp$=0.07 (B), ($b_\perp$, $R$) = (0.06, 19.0) corresponding to the most symmetric contribution from the mock and FoF\xspace groups to $S_{\mathrm{tot}}$ for $b_\perp$=0.06 (C) and ($b_\perp$, $R$) = (0.07, 15.0) corresponding to the most symmetric solution for $b_\perp$=0.07 (D). Our final choice is (C) (see the appendices for more details). } \end{center} \end{figure} \begin{figure*} \includegraphics[width=8.7cm]{NFOF_HIST_V2.png} \includegraphics[width=8.7cm]{NFOF_MASS_V2.png} \caption{{\bf Left:} The richness distribution of the groups derived in this work (Section \ref{sec:groups}) and by \citet{Robotham2011}. 
{\bf Right:} The mean group richness as a function of the central's stellar mass for groups with at least 2 members above the stellar mass limit. 76\% of group centrals are red in both group catalogs and collectively own $\sim 82\%$ ($85\%$ for \citet{Robotham2011}) of the satellites above the mass limit. } \label{nfof_hist} \end{figure*} \begin{table} \centering \caption{Group catalog. \label{tab:galaxy_sample}}{No. of objects with $\log (M_{\star} / M_\odot) > 10.25$ at $0.02<z<0.21$} \begin{tabular}{cccccc} \hline \hline \multicolumn{2}{c}{isolated centrals} & \multicolumn{2}{c}{group centrals} & \multicolumn{2}{c}{ satellites } \\ red & blue & red & blue & red & blue \\ \hline \hline 9659 & 8392 & 3092 & 975 & 4850 & 2378 \\ 32.9\% & 28.6\% & 10.5\% & 3.3\% & 16.5\% & 8.1\% \\ \hline \end{tabular} \end{table} \subsection{Central vs satellite classification} Central galaxies are often assumed to be the most massive galaxies in a halo, lying at the minimum of its gravitational potential well, while satellites are all remaining group members orbiting the centrals within the group potential. This central-satellite dichotomy is easily applied in simulations but represents a non-trivial challenge in real data. Groups may be fragmented or over-merged, they may contain interlopers or miss actual members, and spurious groups may be generated. To minimize the impact of group membership misidentification, in addition to a given physical property of galaxies (stellar mass and/or luminosity), some information about their spatial \citep[e.g.,][]{Robotham2011,Knobel2012,Knobel2015} and velocity \citep{Carollo2013} distribution within the group may be included. Following \citet{Robotham2011} \citep[see also][for a different implementation]{Eke2004}, we tested an iterative approach. The first step of this method is to compute the centre of mass of the group (CoM\xspace), then to proceed iteratively by computing the projected distance (as in Eq. 
\ref{eq:d_perp}) from the CoM\xspace for each group member and by rejecting the most distant galaxy. This process stops when only two galaxies remain: the most massive of the two is identified as the central galaxy of the group and all other group members are classified as satellites. As this iterative center coincides with the most massive galaxy in 98\% of the groups, we chose the most massive galaxies as centrals. Thus all groups keep their central when applying a mass cut-off, or else disappear completely. For each group in the redshift range $0.02<z<0.21$ with $N_{\mathrm{FoF}}$ members, we defined $N_{\mathrm{FoF}}^{m_{cut}}$ as the number of group members with mass $\log (M_\star/ M_\odot)>10.25$ (see Section \ref{subsec:mass_completeness}). In the following sections, ``group galaxies'' refer to all galaxies in groups with $N_{\mathrm{FoF}}^{m_{cut}}\geq 2$. ``Isolated", ``lone" or ``field" galaxies refer to galaxies for which $N_{\mathrm{FoF}}=1$ instead of $N_{\mathrm{FoF}}^{m_{cut}}=1$ to exclude known group centrals with satellites detected below the mass limit, although this makes a negligible difference in our results. Of course, group centrals with faint satellites undetected in the survey will still be present in the lone category, especially at the lower mass limit and upper redshift limit of the sample. The total numbers of blue and red isolated centrals, group centrals and satellites are reported in Table~\ref{tab:galaxy_sample}. Figure \ref{nfof_hist} shows the group richness distribution (left panel) and the mean group richness as a function of the central's stellar mass (right panel) for groups with blue and red centrals. For comparison, we also show these distributions for the group catalog of \citet{Robotham2011}. Our algorithm leaves more galaxies alone (61.5\% vs 51.8\%), yields more small groups and fewer satellites (24.6\% vs 34.5\%) than \citet{Robotham2011}, who generate larger groups. 
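The iterative centre identification described above can be sketched as follows (a simplified illustration using mass-weighted 2-D projected positions; the actual computation uses the projected separations of Eq.~\ref{eq:d_perp}, and the function name is ours):

```python
def iterative_central(pos, mass):
    """Return the index of the central galaxy of a group.

    pos  : projected 2-D positions of the group members
    mass : their stellar masses

    Iteratively compute the centre of mass (CoM), reject the member
    most distant from it, and stop when two galaxies remain; the more
    massive of the two is the central.
    """
    idx = list(range(len(mass)))
    while len(idx) > 2:
        mtot = sum(mass[i] for i in idx)
        cx = sum(mass[i] * pos[i][0] for i in idx) / mtot
        cy = sum(mass[i] * pos[i][1] for i in idx) / mtot
        # reject the member most distant from the CoM
        far = max(idx, key=lambda i: (pos[i][0] - cx) ** 2
                                     + (pos[i][1] - cy) ** 2)
        idx.remove(far)
    return max(idx, key=lambda i: mass[i])
```

Because the rejection is mass-weighted, the surviving pair almost always contains the most massive member, which is why taking the most massive galaxy directly gives the same central in the vast majority of groups.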
In our own catalog, 20\% of their satellites are field galaxies and 6\% are group centrals. However, we find that these differences in the rates of fragmentation/merging are not significant enough to alter our statistical results (see also Section \ref{sec:caveats}). In both cases, 76\% of group centrals are red and these red centrals tend to have more satellites than blue ones of the same stellar mass, in agreement with \citet{WangWhite2012}. Red centrals collectively own over 80\% of the satellites above the chosen mass limit. \section{Quenching and conformity} \label{sec:results} As mentioned in the introduction, \citet{Weinmann2006} first coined the concept of ``galactic conformity'', referring to the observation, derived from a SDSS DR2 galaxy group catalog, that the fraction of quiescent satellites around a quiescent central was significantly higher than around a star-forming central. They observed this phenomenon at all satellite luminosities and at all halo masses, computed via abundance matching from the total luminosity of the groups. We explore this effect in the GAMA group catalog described in the previous section, as a function of stellar mass, of various local and large-scale environmental parameters, and finally of halo mass. \begin{figure} \includegraphics[width=9.cm]{FIG_5.png} \caption{The red fraction of galaxies as a function of stellar mass in units of $\log (M_{\star} / M_\odot)$ in different environments as specified in the legend. The 1-$\sigma$ error bars are computed using the beta distribution quantile technique \citep{Cameron2011}. The excess quenching in satellites as well as in group centrals with respect to lone centrals suggests a form of ``group quenching" is at play. The excess quenching in satellites of red centrals with respect to satellites of blue centrals is the galactic conformity signal. 
} \label{redfraction_mass} \end{figure} \begin{figure} \includegraphics[width=9.cm]{FIG_14_MASS.png} \caption{The quenching efficiency (QE) of galaxies as a function of stellar mass in units of $\log (M_{\star} / M_\odot)$ in different environments as in Fig.~\ref{redfraction_mass}. The 1-$\sigma$ error bars are based on 1000 bootstrap samples. This QE is defined as the excess red fraction of each population with respect to the red fraction of lone centrals, normalized by the star-forming fraction of that reference sample, at fixed stellar mass (Eq.~\ref{eq:eps_q_i}). It is, by definition, zero for lone centrals. Up to $\log(M_\star/M_\odot) \approx 11$, all the QE curves are nearly independent of mass. } \label{qe_mass} \end{figure} \subsection{Mass quenching} \label{subsec:mquenching} We estimate the fraction of quiescent galaxies (red fraction) in a given sample as: \begin{equation} f_{q} = \frac{ \sum_i q_i w_i} {\sum_i w_i }, \end{equation} where $q_i$ is unity if the $i^{\rm th}$ galaxy in the sample is quenched and zero otherwise. The parameter $w_i$ is the 1/$V_{\rm max}$ weight of the $i^{\rm th}$ galaxy (note that applying no weighting yields indistinguishable results above the chosen completeness limit). Figure \ref{redfraction_mass} shows the red fraction of galaxies as a function of stellar mass in different environments as follows: (i) isolated galaxies (gray solid line for the data and dotted line for its polynomial fit), (ii) centrals of groups (dashed green line), (iii) satellites of blue central galaxies (blue line), (iv) satellites of red central galaxies (red line) and (v) satellites of any central (dotted line with gray shaded errors). The fractions are estimated globally in each mass bin, not averaged over the red fractions of individual groups. In all environments, the red fraction increases strongly with increasing stellar mass. In the mass range that we are able to probe, a tenfold increase in stellar mass roughly doubles the red fraction. 
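The $1/V_{\rm max}$-weighted estimator of $f_q$ can be sketched as (a minimal sketch; the function and variable names are ours):

```python
def red_fraction(quenched, vmax):
    """1/Vmax-weighted red fraction f_q.

    quenched : sequence of 0/1 flags (1 = quiescent), the q_i
    vmax     : maximum visibility volumes; the weights are w_i = 1/Vmax_i
    """
    weights = [1.0 / v for v in vmax]
    return sum(q * w for q, w in zip(quenched, weights)) / sum(weights)
```

With uniform weights this reduces to the plain fraction of quenched galaxies, which is why the weighted and unweighted estimates agree above the completeness limit.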
We will refer to this trend as ``mass quenching" \citep[e.g.][]{Peng2012}, as opposed to the ``environmental quenching" observed in the vertical direction at a given stellar mass. \subsection{Conformity} \label{subsec:conformity} Figure \ref{redfraction_mass} shows that the red fraction of satellite galaxies, which is dominated by satellites of red centrals, is significantly higher at all masses than that of lone centrals. The difference is largest at low masses and becomes smaller towards the highest masses, in general agreement with \citet{Bosch2008}, although the trend they observe is more dramatic (but see Section \ref{subsec:density}). The red fraction of satellites around blue centrals is about 10\% higher than the red fraction of isolated galaxies, except perhaps in the two highest mass bins ($\log(M_\star/M_\odot) \gtrsim 10.8$) where uncertainties are large. At $\log(M_\star/M_\odot) \gtrsim 11$, blue centrals become rare ($<20$\%) and we find no massive satellites around them. There are mixed results in the literature about whether or not satellites around star-forming central galaxies are quenched in excess of field galaxies. By examining the star formation properties of bright satellites around isolated Milky Way-like hosts in the local Universe, \citet{Phillips2014} found that quenching occurs only for satellites of quenched hosts while star formation is unaffected in the satellites of star-forming hosts. \citet{Hartley2015} also reported that the satellite population of star-forming centrals was similar to the field population of equal stellar mass at intermediate to high redshift. Conversely, \citet{Kawin2016} found a higher quenching efficiency of satellites around star-forming centrals compared to the background galaxies at $0.3< z <1.6$, in agreement with the study of \citet{Phillips2015} in the local Universe considering Milky Way-like systems hosting two satellites. 
Our analysis is more consistent with the latter studies, at least for low mass satellites. Satellites of red centrals exhibit the highest level of quenching. Their red fraction is systematically about 10\% higher than that of satellites around star-forming centrals at all masses $\log(M_\star/M_\odot) \lesssim 11$. This difference between the blue and the red lines is a significant galactic conformity signal, an environmental effect that adds to mass quenching: satellites around blue centrals have to be $\sim$ 2 times more massive than those around red centrals to exhibit the same level of quenching. In the high mass regime, $\log(M_\star/M_\odot) \gtrsim 11$, where conformity disappears for lack of massive blue centrals hosting massive satellites, the red fraction becomes closer to that of field galaxies. \subsection{Group quenching} \label{subsec:gquenching} Lastly, group centrals appear to follow the minimum quenching behavior of the satellite population as a function of mass: at $\log(M_\star/M_\odot) \lesssim 11$, they quench like satellites of blue centrals, while at higher mass they converge with satellites of red centrals, in such a way that their red fraction runs $\sim 10 \%$ above that of isolated galaxies at all masses, reaching 100\% at $\log(M_\star/M_\odot) \gtrsim 11.4$. This excess quenching of group centrals over isolated centrals, despite the fact that the latter are likely to be contaminated by group members, supports the idea that quenching in groups is not reduced to ``satellite quenching'', as advocated by \citet{Knobel2015}. We refer to this mass-independent excess as ``group quenching", which, at this point, does not necessarily mean that satellites and group centrals respond equivalently to the group environment \citep{Knobel2015}. We return to this point later. 
\begin{figure} \hskip -0.4cm \includegraphics[width=9.5cm]{ENVIRONMENTS.png} \caption{Correlations between the three environmental parameters: dust corrected $(u-r)$ color of the central galaxies, group stellar mass and large scale density contrast at the central's location (see text for details). The solid blue and red lines show the mean values for blue and red group central galaxies respectively. The dashed blue and red lines are the mean values for satellites of blue and red centrals respectively (i.e. the same distributions weighted with the number of satellites in each bin). The green line shows the mean color/density relation of group centrals when blue and red ones are mixed.} \label{environments} \end{figure} \begin{figure*} \includegraphics[width=\textwidth]{FIG_13.png} \caption{The red fraction of satellites in four bins of stellar mass as a function of their central's $(u-r)$ color (left), their group's stellar mass in units of $\log (M_{\star} / M_\odot)$ (middle), and the large scale density contrast in units of $\log (1+\delta)$ (right). Mass quenching shows in the vertical direction at fixed environmental parameter. The upward trends with increasing central color, group mass and density, at fixed stellar mass, are essentially driven by satellites of red centrals (dot-dashed lines). The red fractions for satellites of blue centrals (dashed lines) are lower and consistent with being independent of all three environmental parameters. } \label{redfraction_ur} \end{figure*} \begin{figure*} \includegraphics[width=\textwidth]{FIG_14.png} \caption{Quenching efficiencies with respect to lone centrals at fixed mass (Eq.~\ref{eq:eps_q_i}) as a function of central color (left), group stellar mass (middle), and density contrast (right). The blue and red curves are the QE of satellites of blue and red centrals respectively in mass matched samples. 
The adopted shape for the mass distribution is shown in the inset of the left panel (shaded histogram), with the matched distributions for the blue and red central samples (unnormalized). The orange histogram is the parent stellar mass distribution of the satellites of red centrals. This QE formalism reinforces the conclusions of Fig.~\ref{redfraction_ur} with better statistics. The QE of lone centrals (gray curve in the right panel), which is calibrated over their own red fraction as a function of mass but independently of density, is also a strongly increasing function of density. } \label{qe_ur} \end{figure*} Figure \ref{qe_mass} shows another view of these results in terms of the quenching efficiency (QE or $\epsilon_{q}$), following the formalism of \citet{Knobel2015} \citep[see also ][]{Bosch2008}. This formalism, which separates out the dependence on stellar mass, helps to highlight the dependence on other parameters. The QE represents the excess red fraction of a given population with respect to the red fraction of a reference sample, $f_{q,\rm ref}$, normalized by the star-forming fraction of that reference sample: \begin{equation} \label{eq:eps_q} \epsilon_{q} (M_\star) = \frac{f_{q}(M_\star) - f_{q,\rm ref}(M_\star)} {1 - f_{q,\rm ref}(M_\star) }. \end{equation} We choose the lone central population as our reference sample\footnote{This QE is interpreted as the probability that a central becomes quenched upon falling into another DM halo and becoming a satellite \citep{Knobel2013,Knobel2015}}, and fit its red fraction curve, $f_{q,\rm ref}(M_\star)$ (solid gray line in the left panel of Fig.~\ref{redfraction_mass}), with a polynomial function of order 2, $f_{q,\rm ref}^{\rm fit}(M_\star)$ (the overlapping dotted line): \begin{equation} f_{q,\rm ref}^{\rm fit}(M_\star) = 6.580-1.535\log\left(\frac{M_\star}{M_\odot}\right) + 0.091\log^2\left(\frac{M_\star}{M_\odot}\right).
\end{equation} This allows us to define the quenching efficiency of any individual galaxy $i$ of mass $M_\star$ as: \begin{equation} \label{eq:eps_q_i} \epsilon_{q,i} (M_\star) = \frac{q_i - f_{q,\rm ref}^{\rm fit}(M_{\star})} {1 - f_{q,\rm ref}^{\rm fit}(M_{\star}) }. \end{equation} The meaning of these individual $\epsilon_{q,i}$ is limited ($\epsilon_{q,i}(M_\star)=1$ for red galaxies ($q_i=1$) and negative for blue galaxies ($q_i=0$)), but they allow the QE of any set of galaxies to be computed as a function of any parameter of interest by simply averaging them over the sample \citep{Knobel2015}. Figure \ref{qe_mass} shows that this QE, as a function of stellar mass, is, by definition, null for lone centrals. Up to the transition mass $\log(M_\star/M_\odot) \approx 11$, all the QE curves are nearly independent of mass, with satellites of blue centrals and centrals of groups at the level of what we called group quenching (QE $\sim 0.15$). Galactic conformity manifests in the difference between the blue and the red curves, the QE of satellites around red centrals being about twice the group quenching value. In the higher mass regime where satellites of blue centrals no longer exist, the QEs of satellites of red centrals and of centrals of groups increase rapidly towards complete quenching (which can be inferred from the parallel behavior of their red fraction curves with respect to isolated galaxies in the left panel).
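For concreteness, this averaging can be sketched in a few lines of Python (an illustrative sketch on a toy sample; the fit coefficients are those quoted above, and all array names are hypothetical):

```python
import numpy as np

def f_q_ref_fit(logm):
    """Order-2 fit to the lone-central red fraction (coefficients from the text)."""
    return 6.580 - 1.535 * logm + 0.091 * logm**2

def quenching_efficiency(q, logm):
    """Mean QE of a sample: the average of the individual epsilon_{q,i},
    where q holds quenched flags (1 = red, 0 = blue) and logm the
    corresponding log10 stellar masses."""
    q = np.asarray(q, dtype=float)
    fref = f_q_ref_fit(np.asarray(logm, dtype=float))
    eps = (q - fref) / (1.0 - fref)
    return eps.mean()

# Toy sample: 10 galaxies at log(M*/Msun) = 10.5, 4 of them red.
qe = quenching_efficiency([1, 1, 1, 1, 0, 0, 0, 0, 0, 0], [10.5] * 10)
```

By construction, a purely red sample has QE $=1$ and a sample matching the reference red fraction has QE $=0$.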
\section{Environmental effects} \label{sec:environment} \subsection{Environmental parameters} \label{subsec:parameters} We now address the dependence of the satellite red fractions and quenching efficiencies on three environmental parameters: \begin{itemize} \item{the dust corrected $(u-r)$ color of central galaxies,} \item{the group stellar mass $M_{gr}$, defined as the total stellar mass from group members with $\log(M_\star/M_\odot)>10.25$,} \item{the density contrast ($1+\delta$) at the centrals' location, defined as the density of central galaxies -- satellites are excluded -- smoothed by a 3D Gaussian kernel of $\sigma=5$ Mpc and normalized by the redshift-dependent mean density of the survey.} \end{itemize} These parameters are labelled ``U-R OF CENTRAL", ``GROUP MASS" and ``DENSITY CONTRAST" for visual clarity in all figures. The group mass and the density contrast are always expressed logarithmically as $\log(M_{gr}/M_\odot)$ and $\log(1+\delta)$, respectively. The density estimator used in satellite quenching studies is generally based on the ``fifth nearest neighbor" \citep[e.g.][]{Knobel2015}. A major disadvantage of this approach is that it does not probe the same environment for small and large groups \citep{Peng2012,Carollo2013}: for small groups ($N_{\mathrm{FoF}} \la 5$), the density is estimated on a scale much larger than the size of their DM halo, whereas for rich groups, it is measured well within their virial radius. Our choice of density estimator intentionally probes scales beyond the virial radius of all groups. Figure \ref{environments} shows how these environmental parameters may be correlated. The solid blue and red lines show the median values for blue and red centrals of groups respectively. The dashed blue and red lines are the median values for satellites of blue and red central galaxies respectively (i.e. the same distributions weighted with the number of satellites in each bin).
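The density-contrast estimator defined above can be sketched as follows (an illustrative brute-force implementation with hypothetical positions in comoving Mpc; for simplicity, the survey's redshift-dependent mean density is approximated here by the mean smoothed density of the sample):

```python
import numpy as np

def density_contrast(targets, centrals, sigma=5.0):
    """Density of central galaxies around each target position, smoothed by a
    3D Gaussian kernel (sigma = 5 Mpc) and normalized by the sample mean
    (a stand-in for the survey's redshift-dependent mean density)."""
    d2 = ((targets[:, None, :] - centrals[None, :, :])**2).sum(axis=-1)
    rho = np.exp(-0.5 * d2 / sigma**2).sum(axis=1)  # un-normalized kernel sum
    return rho / rho.mean()                         # 1 + delta

# Toy example: uniform centrals in a 100 Mpc box plus one overdense clump.
rng = np.random.default_rng(0)
centrals = np.vstack([rng.uniform(0, 100, size=(500, 3)),
                      50.0 + rng.normal(0, 2.0, size=(50, 3))])
one_plus_delta = density_contrast(centrals, centrals)
```

A probe placed at the clump returns a density contrast well above that of a probe in the uniform background, as expected for a smoothing scale larger than group virial radii.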
A weak correlation is seen between group mass and color, expected since the group mass is dominated by the central's mass, which correlates with color (Fig.~\ref{cmd}). None is found between density and color. A color-density relation appears only when star-forming and quenched centrals are mixed (green line), reflecting their evolving ratio with density (see Section \ref{subsec:density}). Density does correlate with group mass for groups with red centrals. The correlation is all the more apparent when the medians are weighted by the number of satellites in each group (solid versus dashed lines): rich, massive groups with red centrals are clearly more common in high density environments. The satellite weighting effect needs to be kept in mind when considering environmental correlations in general. Artificially boosted trends may be induced by a few very rich groups, as emphasized by \citet{Sin2017} in the case of 2-halo conformity signals. \subsection{Satellite quenching} \label{subsec:satquenching} Figure \ref{redfraction_ur} shows the red fraction of satellite galaxies as a function of the above three environmental parameters in four bins of stellar mass, which displays mass quenching in the vertical direction. The left panel shows the red fraction of satellites as a function of their central's color, with the vertical dotted line indicating our boundary between blue and red centrals. This expands the blue/red central dichotomy in Fig.~\ref{redfraction_mass} to a continuum in central color, similar to the trend in sSFR\xspace reported by \citet{Knobel2015}. The fraction of red satellites increases in all mass bins, most noticeably in the lowest, as their centrals redden, in such a way that the red fraction of low mass satellites around very red centrals exceeds that of satellites several times more massive around very blue centrals. 
More significant trends are found with the groups' stellar mass (middle panel), in agreement with earlier studies showing that satellite quenching is more efficient in more massive systems \citep[using the mass or the magnitude of the central galaxy or some proxy for the halo mass;][]{Weinmann2006,Ann2008,Prescott2011,Knobel2015}. In addition, two distinct sequences for satellites of blue and red centrals appear (dashed and dot-dashed lines respectively): satellites of similar stellar mass, in groups of similar group mass above $\sim 10^{11}M_\odot$, are more likely to be red if their central is red than if it is blue, as also found by \citet{Knobel2015} using the central's mass and halo mass. The upward trends with group mass are essentially driven by satellites of red centrals, which vastly dominate the satellite population. The curves for satellites of blue centrals, which span a narrower range of group masses and have poorer statistics, are consistent with being flat. Lastly, significant correlations are observed between the red fractions in all mass bins and the density contrast (right panel). Again two sequences are seen that exhibit conformity at fixed density contrast, at least for $\log(1+\delta)>0$. For satellites of red centrals, quenching increases significantly with $\log(1+\delta)$, while the statistics are inconclusive for satellites of blue centrals. These trends may simply duplicate the middle panel: satellite quenching is enhanced in massive groups whose central is red and these are found preferentially in high density regions (cf. Fig.~\ref{environments}). We will attempt to disentangle these effects in the next section. \begin{figure*} \includegraphics[width=\textwidth]{FIG_5_DENS.png} \caption{Same as the left panel of Fig.~\ref{redfraction_mass} in 3 bins of density contrasts: $\log(1+\delta)<0.1$ (left), $0.2<\log(1+\delta)<0.35$ (middle) and $\rm log(1+\delta)>0.45$ (right). 
The dotted orange line in all 3 panels fits the lone central curve in the lowest bin to guide the eye. The vertical rise as density increases is clear for all galaxies at all masses, indicating that quenching is affected by the environment far beyond the virial radius of DM halos. At low and medium density, conformity is marginal and group centrals do not distinguish themselves from satellites, supporting the idea of group quenching by \citet{Knobel2015}. Conformity is most significant in the highest density bin. } \label{redfraction_mass_dens} \end{figure*} \begin{figure*} \includegraphics[width=\textwidth]{FIG_14_DENSCORR.png} \caption{Quenching efficiencies with respect to lone centrals at fixed mass {\it and} density (Eq.~\ref{eq:eps2_q_i}) as a function of the three environmental parameters. In solid lines, the satellites of red centrals are picked to have the same stellar mass distribution as the satellites of blue centrals; in dashed lines, they have the same distribution of group stellar mass (yellow shaded, blue and red histograms in insets). The orange histograms are the parent distributions of the satellites of red centrals. The solid gray curves show the new QE of lone centrals, designed to be null with respect to both stellar mass and density. Most of the conformity signal originates from comparing groups of different stellar masses. } \label{qe_ur_dens} \end{figure*} \begin{figure} \includegraphics[width=9cm]{FIG_5_GROUPMASS.png} \caption{The red fraction of galaxies as a function of group stellar mass, which, for field galaxies, equals their stellar mass by definition. The red fraction gap between group centrals and lone centrals at fixed stellar mass is filled at fixed group mass, as if group centrals ``carried" the extra weight of their satellites. Conformity is present at fixed group mass as expected from Fig.~\ref{qe_ur_dens}.
} \label{redfraction_groupmass} \end{figure} Since satellites behave similarly in the four mass bins we probed, we may increase our statistics, especially for satellites of blue centrals, by using mass-matched samples instead of bins, i.e. samples of satellites of blue centrals and of red centrals having similar stellar mass distributions. We also make use of the QE formalism described in the previous section, to detach mass quenching from the environmental quenching we are trying to highlight. Figure \ref{qe_ur} recasts Fig.~\ref{redfraction_ur} using this new methodology. The adopted common mass distribution is the intersection of the two samples, i.e. the mass distribution of satellites of blue centrals. The unnormalized blue and red distributions are shown in the leftmost inset, with the intersection shaded in yellow. The orange histogram is the total mass distribution of the satellites of red centrals. As it is significantly larger than the blue histogram, we make several draws in it in order to pick every galaxy at least once, and compute the final QE curve as the average of the QE curves computed for each individual draw. The three panels of Fig.~\ref{qe_ur} reinforce the findings of Fig.~\ref{redfraction_ur}: the QE of satellites is a smoothly rising function of central color from blue to red; the QE of satellites around red centrals increases with their group stellar mass, while it is independent of it for satellites of blue centrals, but also consistently positive (i.e. they quench more efficiently than field galaxies); the QE of both populations strongly increases with large scale density. This figure also emphasizes that much of the excess quenching in satellites of red centrals originates from the most massive groups that have no counterpart with blue centrals. 
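The draw-and-average procedure can be sketched as follows (an illustrative simplification: rather than guaranteeing that every galaxy is picked at least once, each draw here samples with replacement so that the satellites of red centrals follow the stellar mass distribution of the satellites of blue centrals; all names are hypothetical):

```python
import numpy as np

def matched_qe(eps_red, logm_red, logm_blue, nbins=20, ndraw=10, seed=0):
    """Average QE of satellites of red centrals, mass-matched to the stellar
    mass distribution of satellites of blue centrals.
    eps_red: individual quenching efficiencies; logm_*: log10 stellar masses."""
    rng = np.random.default_rng(seed)
    edges = np.histogram_bin_edges(logm_blue, bins=nbins)
    target, _ = np.histogram(logm_blue, bins=edges)
    bin_of = np.clip(np.digitize(logm_red, edges) - 1, 0, nbins - 1)
    draws = []
    for _ in range(ndraw):
        idx = []
        for b in range(nbins):
            pool = np.flatnonzero(bin_of == b)
            if target[b] > 0 and pool.size > 0:  # skip bins with no match
                idx.append(rng.choice(pool, size=target[b], replace=True))
        draws.append(eps_red[np.concatenate(idx)].mean())
    return float(np.mean(draws))
```

When the QE correlates with mass, the matched estimate lands near the value expected for the blue-central mass distribution rather than the raw mean of the red-central sample.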
The QE of lone centrals - which is calibrated over their own red fraction as a function of mass but independently of density - is also a strongly increasing function of density contrast, negative below the peak density, positive above. This point is addressed in the next section. We note too that there exist circumstances in which conformity disappears, i.e. the blue and red curves converge: first for centrals in the ``green valley", the QE being a monotonically increasing function of central color from the bluest to the reddest; secondly in low mass groups ($M_{gr} \lesssim 10^{11}M_\odot$) and thirdly at the lowest density contrasts ($\log(1+\delta) \lesssim 0$) where the QE of both populations reaches zero or negative values. The last two circumstances are clearly correlated (Fig.~\ref{environments}). We will now attempt to disentangle them. \subsection{Density quenching} \label{subsec:density} Figure \ref{redfraction_mass_dens} reproduces the left panel of Fig.~\ref{redfraction_mass} in three bins of density contrast. The middle bin spans the peak of the density distribution of the full sample. The dotted orange baseline in the three panels is a fit to the lone central curve in the lowest bin to guide the eye: the uplift as density increases is clear for all galaxies, indicating that quenching is affected by the environment far beyond the virial radius of DM halos\footnote{Qualitatively similar results are also found using densities computed on a scale of 8 Mpc instead of 5.}. For central galaxies, this effect is interpreted as reflecting the earlier collapse of proto-halos in large-scale overdense regions. At a given halo mass, the halos populating denser environments are older on average, with different accretion histories (delayed or quenched mass inflow), a phenomenon referred to as assembly bias \citep{Sheth2004,Croton2007}.
The age of a halo is usually defined as the epoch at which the halo has assembled one half of its current mass; however, \citet{Tinker2017} showed that the amplitude of assembly bias was significantly reduced if age was defined using the halo mass at its peak rather than its current value (removing the effect of ``splashback" halos), which also agrees better with their analysis of SDSS data. They find a $\sim 5\%$ increase from low to high large-scale (10 $h^{-1}$Mpc) density in the red fraction of central galaxies with $\log(M_\star/M_\odot) \gtrsim 10.3$. The increase between the orange and gray dotted line in the right panel of Fig.~\ref{redfraction_mass_dens} is $\sim 10\%$, a reasonable agreement given the many differences in the two analyses. For comparison, a factor of 2 in stellar mass at fixed density roughly induces the same increase in red fraction. Also notable in this figure is the increasing vertical gradient between the satellites of red centrals and the other curves, in particular that of satellites of blue centrals, at $\log(M_\star/ M_\odot) \la 11$ as density increases: conformity is hardly detectable in the lowest density bin, emergent in the middle bin, and very significant in the highest bin, where it is also mass dependent. The difference between satellites of red centrals and lone centrals in this bin is significantly larger at lower masses than in the average density case (Fig.~\ref{redfraction_mass}), in better agreement with \citet{Bosch2008} (their Fig.~8, top right panel, for all satellites combined). In all three panels, the red fraction curve of group centrals runs roughly $10\%$ higher than that of lone centrals at all masses, as in the global case of Fig.~\ref{redfraction_mass}. Thus group quenching, which we defined in Section \ref{subsec:gquenching} as the difference between the dashed green curve and the gray curve, adds a somewhat constant boost to density quenching.
At low and medium density, where conformity is marginal, group centrals do not distinguish themselves from the satellite population as a whole, in agreement with \citet{Knobel2015} and in support of their group quenching definition, whereby, in the restricted part of the mass and (5$^{th}$ nearest neighbor) density parameter space that they share in the SDSS, group centrals and satellites ``feel environment in the same way". However at high density, the red fraction of satellites, which is dominated by satellites of red centrals, far exceeds that of group centrals at $\log(M_\star/M_\odot)\lesssim11$. To separate out the effect of large-scale density, we fit the red fraction of lone galaxies as a function of both mass and density in the following empirical way: \begin{equation} \label{eq:fq_md} \tilde f_{q,\rm ref}(M_\star,\delta) = f_{q,\rm ref}^{\rm fit}(M_\star) \times (1+g(\delta)). \end{equation} In practice, $f_{q,\rm ref}^{\rm fit}(M_\star)$ is a new polynomial fit (of order 2) to the red fraction of lone galaxies in the middle bin (the gray line in the middle panel of Fig.~\ref{redfraction_mass_dens}): \begin{equation} f_{q,\rm ref}^{\rm fit}(M_\star) = 5.43 -1.31\log\left(\frac{M_\star}{M_\odot}\right)+ 0.08\log^2\left(\frac{M_\star}{M_\odot}\right), \end{equation} and $g(\delta)$ is a polynomial fit (of order 3) to the QE curve of lone galaxies as a function of $\Delta=\log(1+\delta)$ (the equivalent of the gray line in Fig.~\ref{qe_ur} using the new $f_{q,\rm ref}^{\rm fit}(M_\star)$), multiplied by a fudge factor of 0.8: \begin{eqnarray} g(\delta) = 0.8\times( -0.07+0.23\Delta+ 0.25\Delta^2 -0.02\Delta^3). \end{eqnarray} This empirical recipe provides a good fit to the red fraction of lone galaxies as a function of both mass and density.
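The empirical recipe above translates directly into code (a minimal sketch using the fit coefficients quoted in the text):

```python
def f_q_fit(logm):
    """Order-2 fit to the lone-galaxy red fraction in the middle density bin,
    as a function of logm = log10(M*/Msun)."""
    return 5.43 - 1.31 * logm + 0.08 * logm**2

def g(Delta):
    """Order-3 fit to the lone-galaxy QE vs. Delta = log10(1 + delta),
    including the fudge factor of 0.8."""
    return 0.8 * (-0.07 + 0.23 * Delta + 0.25 * Delta**2 - 0.02 * Delta**3)

def f_q_ref_2d(logm, Delta):
    """Reference red fraction as a function of both mass and density."""
    return f_q_fit(logm) * (1.0 + g(Delta))
```

The factorized form means the density correction rescales the mass-dependent red fraction by the same factor at all masses.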
It allows us to redefine the quenching efficiency $\epsilon_{q,i} (M_\star,\delta)$ of an individual galaxy $i$ of mass $M_\star$ living in a region of density contrast $\delta$, as: \begin{equation} \label{eq:eps2_q_i} \epsilon_{q,i} (M_\star,\delta) = \frac{q_i - \tilde f_{q,\rm ref}(M_{\star},\delta)} {1 - \tilde f_{q,\rm ref}(M_{\star},\delta) }, \end{equation} and the QE of a galaxy sample as the average of these individual $\epsilon_{q,i}$. The solid lines in Fig.~\ref{qe_ur_dens} represent this new QE for satellites of blue and red centrals as a function of the three environmental parameters in stellar mass-matched samples as in Fig.~\ref{qe_ur}. The gray curve in the right panel represents the QE of lone centrals, designed to be null with respect to both stellar mass and density. The dependence on density completely disappears for satellites of blue centrals, whereas it remains significant for satellites of red centrals. As was already visible in Fig.~\ref{qe_ur}, much of the excess quenching in the population of satellites of red centrals originates from the most massive groups that have no blue central counterpart. The dashed lines in Fig.~\ref{qe_ur_dens} show that if we match the group stellar mass distributions of satellites around blue and red centrals, which excludes these massive groups and which also results in nearly perfectly matching the stellar mass and density distributions of both populations (distributions shown in the insets), the dependences on density and color are strongly reduced for the remaining satellites of red centrals.
Some amount of conformity persists as a function of group mass, which affects the QE of satellites of red centrals only, but this figure allows us to conclude that most of the conformity signal arises from the most massive groups with the reddest centrals, which not only have no counterpart with blue centrals in terms of group mass, but in which satellite quenching is also more dependent on density than in other groups, including low mass groups with red centrals. In Fig.~\ref{redfraction_groupmass}, we show the red fraction of all galaxies as a function of group stellar mass. For field galaxies, the group stellar mass equals their stellar mass by definition. We find that the quenching gap between group centrals and lone centrals at fixed stellar mass is filled at fixed group mass, as if group centrals ``carried" the weight of their satellites (an extra $\sim 0.25$ dex in stellar mass on average). Satellites, although they are also boosted by the group environment, do not ``feel'' the added weight of their more massive central: at a given group mass, they remain significantly less quenched than their central, as expected from their having lower masses, with a conformity effect expected from the previous figure. Since the QE of satellites of blue centrals is independent of group mass, the positive slope of their red fraction in this figure may simply be attributed to their increasing mean stellar mass and surrounding density as group mass increases. We return to this point in Section \ref{subsec:halo}. \begin{figure} \includegraphics[width=8.5cm]{FIG_14_HALO.png} \caption{ The QE of satellites around blue and red centrals (blue and red curves respectively), as a function of halo mass in units of $\log(M_{h}/M_\odot)$ estimated from the blue/red halo-to-stellar-mass ratios of \citet{ZuMandelbaum2015} (see text for details). The inset shows the normalized halo mass distributions for satellites of blue and red centrals.
This model highlights two regimes: at $\log(M_{h}/M_\odot) \gtrsim 13$, centrals are fully quenched and conformity has no meaning, while at $\log(M_{h}/M_\odot) \lesssim 13$, central quenching is still ongoing and conformity is insignificant. } \label{shmr} \end{figure} \begin{figure*} \includegraphics[width=\textwidth]{FIG_5_DENS_HALOSPLIT.png} \caption{The red fraction of satellites as a function of stellar mass in 3 bins of density as in Fig.~\ref{redfraction_mass_dens}, and in two regimes of halo mass: $\log(M_{h}/M_\odot)<12.9$ and $>12.9$. In the low halo mass regime, conformity is insignificant and satellites and group centrals (collectively shaded in gray) exhibit the same red fraction excess over field galaxies at all mass $\log(M_{\star}/M_\odot)\lesssim11$ and density, in agreement with \citet{Knobel2015}. In the high halo mass regime (highlighted curves), the centrals are massive, fully quenched galaxies, and their satellites undergo significantly more quenching than their counterparts in smaller halos, all the more so as density increases, giving rise to the increasing conformity signal observed in Fig.~\ref{redfraction_mass_dens}. } \label{redfraction_halosplit} \end{figure*} \begin{figure} \includegraphics[width=9cm]{FIG_5_HALOMASS.png} \caption{The red fraction of centrals and satellites as a function of halo mass. The orange triangles are the predictions based on the stellar mass and density dependence of the satellite red fraction in the two regimes of halo mass (see text for details). } \label{redfraction_halomass} \end{figure} \begin{figure} \includegraphics[width=9cm]{HALOMASS_DENS.png} \caption{Correlations between the halo mass and the mean surrounding density and the mean stellar mass of the satellites. The numbers are the fraction of groups and of satellites (in parenthesis), in each bin of halo mass. Most of the conformity signal in Fig.~\ref{redfraction_mass} originates from a small minority of groups with high halo mass at high density. 
} \label{halo_dens} \end{figure} \subsection{Halo quenching} \label{subsec:halo} Assuming that group mass is a good proxy for halo mass, our results confirm that galactic conformity is indeed observed at fixed halo mass as previously claimed \citep{Weinmann2006,Hartley2015,Knobel2015,Kawin2016}. A widely discussed explanation for this has been the ``assembly bias" \citep{Gao2005,Croton2007}, which refers to the dependence of halo clustering on properties other than mass. One such property is age: older halos (halos that assembled earlier) presumably cluster more strongly than younger halos (that formed later) of the same mass, the effect being stronger for less massive halos. Another property, which correlates with halo age, is concentration: more concentrated halos are typically more clustered than less concentrated ones. If there exists a sufficiently tight relation between one of these halo properties and galaxy color or star-formation history, galaxy conformity is expected to arise. A relation between galaxy color and halo age was indeed found in the cosmological hydrodynamical simulation Illustris \citep{Bray2016}, where the reddest galaxies preferentially reside in the oldest halos. Similarly, \citet{Paranjape2015} and \citet{Pahwa2016} found conformity in models correlating galaxy color with halo concentration. However inferring halo masses in real data is an uncertain task and those derived from total group luminosities or masses, as used in establishing galactic conformity, are not flawless. In fact it was observed that red galaxies reside in more massive halos than average galaxies of similar luminosity or mass \citep{Zehavi2011,WangWhite2012,Krause2013}. 
More recently, \citet{ZuMandelbaum2016} claimed to have discovered, from weak lensing measurements of local bright galaxies in the SDSS, that there exists a strong bimodality in the average host halo mass of blue versus red galaxies: at fixed stellar mass, red centrals preferentially reside in halos that are a factor of 2 to 10 more massive than halos hosting blue galaxies. They found that a model in which ``halo quenching" -- referring to all the physical processes tied to halo mass (virial shocks, accretion shocks, AGN feedback, gas stripping) -- is the main driver of galaxy quenching, best fits the available data and explains galactic conformity without assembly bias \citep{WangWhite2012,Phillips2014, Phillips2015,ZuMandelbaum2017}. To test this scenario, we derive halo masses for all groups using the double, blue/red, average halo-to-stellar-mass-ratio (HSMR) of \citet{ZuMandelbaum2015} (their Fig.~16), applied to the central galaxies. Figure \ref{shmr} shows the quenching efficiency of satellites as a function of this halo mass estimate: conformity at fixed halo mass has disappeared. This model splits the halo mass into two distinct regimes that were blurred with the group stellar mass: a high halo mass regime, $\log(M_{h}/M_\odot) \gtrsim 13$ (29\% of the groups, 46\% of the satellites, 5\% of the field galaxies), in which centrals are fully quenched and conformity has no meaning, and a low halo mass regime, $\log(M_{h}/M_\odot) \lesssim 13$ (71\% of the groups, 54\% of the satellites, 95\% of the field galaxies), in which centrals still experience quenching, i.e. can be either red or blue at a given halo mass, but with no significant impact on the quenching of their satellites. The satellite QE in this regime is also consistent with being independent of halo mass. Figure \ref{redfraction_halosplit} revisits Fig.~\ref{redfraction_mass_dens}, in the two regimes of halo mass. 
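The halo mass assignment can be sketched as a color-dependent table lookup on the central galaxy (the anchor values below are hypothetical placeholders for illustration only, not the actual \citet{ZuMandelbaum2015} relations):

```python
import numpy as np

# Hypothetical, illustrative HSMR anchor points: log10(M*/Msun) -> log10(Mh/Msun).
LOGM_STAR = np.array([10.0, 10.5, 11.0, 11.5])
LOGM_HALO = {
    "blue": np.array([11.6, 11.9, 12.4, 13.0]),
    "red":  np.array([11.9, 12.4, 13.2, 14.0]),
}

def halo_mass(logm_central, color):
    """Assign a group halo mass by interpolating the HSMR matching the
    central's color, as described in the text (sketch only)."""
    return float(np.interp(logm_central, LOGM_STAR, LOGM_HALO[color]))
```

With any bimodal HSMR of this kind, a red central is assigned a more massive halo than a blue central of the same stellar mass, which is the feature that removes conformity at fixed halo mass.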
We note that halos of both types exist in all three density bins although they are not equally represented ($\sim 24,\ 25,\ 22\%$ of groups in the low mass regime are in the low, medium and high density bins respectively vs. $\sim$ 13, 28, 31\% for groups in the high halo mass regime). In the low halo mass regime, conformity is indeed insignificant. Satellites and group centrals also exhibit the same quenching excess ($\sim$ 10\%) over field galaxies at all masses and densities, in agreement with the group quenching definition of \citet{Knobel2015}. In the high halo mass regime, in which the centrals are all massive ($\log(M_{\star}/M_\odot)>11$), fully quenched galaxies, satellites undergo significantly more quenching than their counterparts in smaller halos. This effect increases with density, giving rise to the increasing conformity signal observed in Fig.~\ref{redfraction_mass_dens} from low to high density when the two regimes are mixed. The dependence on stellar mass also appears to be less significant than for galaxies in low halo mass groups and field galaxies (except in the leftmost bin of both stellar mass and density, but this part of the parameter space is poorly sampled). We stress that the two regimes, brought out by the two \citet{ZuMandelbaum2015} HSMRs, are equivalent to setting apart groups with massive red centrals (in the top right tip of Fig.~\ref{cmd}) from all the others {\it regardless of halo mass}. Figure \ref{redfraction_halomass} shows the red fractions of both satellites and centrals as a function of halo mass. The red fraction of group and lone centrals (matched in stellar mass distribution) is a much steeper function of this halo mass than of the stellar group mass (Fig.~\ref{redfraction_groupmass}), reaching unity at $\sim 10^{13}M_\odot$, while all satellites follow the same gentler path regardless of central type.
Since the red fraction of satellites and group centrals depends on their own stellar mass equivalently in low mass halos, and halo mass is strongly correlated to the stellar mass of the central only, it is not surprising that central quenching depends strongly on halo mass while satellite quenching does not, at least in the low halo mass regime. The slope might simply be explained by the increasing mean stellar mass and surrounding density of the satellites as halo mass increases. At $\log(M_{h}/M_\odot) \sim 12$ and 12.7, the mean stellar masses of the satellites are $\langle\log(M_{\star}/M_\odot)\rangle = 10.41$ and $10.54$ respectively, and their mean surrounding densities are 0.24 and 0.30. Using Eq. \ref{eq:fq_md}, which models the red fraction of the field population as a function of mass and density, to which we add 0.1 to account for the group quenching excess, we compute the mean red fractions expected from these two combinations of stellar mass and density. The resulting values are shown as orange, upward pointing triangles. The reasonable agreement indicates that satellite quenching depends on halo mass inasmuch as more massive halos contain more massive satellites on average (the difference in mean density has a negligible effect). At $\log(M_{h}/M_\odot) \sim 13$ and 13.8, in the high halo mass regime, the mean stellar masses of the satellites are $\langle\log(M_{\star}/M_\odot)\rangle = 10.61$ and $10.65$ while their mean surrounding densities are 0.37 and 0.6, respectively. The saturation in mean stellar mass is expected from the increasingly small contribution of high mass satellites to the overall mass distribution. Here we simply compute the mean red fractions of satellites in small bins around these two combinations, with the condition that their central is red and massive. The resulting values are shown as orange downward triangles. The agreement shows that the dependence on halo mass in this regime is equivalent to a dependence on density.
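This estimate can be reproduced as a self-contained worked example (restating the fit of Eq.~\ref{eq:fq_md} with the coefficients and the $\sim 0.1$ group-quenching boost quoted in the text; the resulting values are only indicative):

```python
def predicted_red_fraction(logm, Delta):
    """Expected satellite red fraction in the low halo-mass regime: the field
    red fraction fit evaluated at the satellites' mean stellar mass (logm)
    and mean density (Delta), plus the ~0.1 group-quenching boost."""
    f_field = 5.43 - 1.31 * logm + 0.08 * logm**2
    g = 0.8 * (-0.07 + 0.23 * Delta + 0.25 * Delta**2 - 0.02 * Delta**3)
    return f_field * (1.0 + g) + 0.1

# Mean satellite mass/density at log(Mh/Msun) ~ 12 and ~ 12.7 (from the text):
pred_low = predicted_red_fraction(10.41, 0.24)
pred_high = predicted_red_fraction(10.54, 0.30)
```

Both predictions land between 0.5 and 0.7, with the higher halo mass bin giving the larger value, consistent with the gentle rise of the satellite red fraction with halo mass in the low-mass regime.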
Figure \ref{halo_dens} shows how the two properties are strongly correlated, as in the case of the group stellar mass shown in Fig.~\ref{environments} for groups with red centrals. We conclude that halo mass, as defined in this section, is the ``hidden'' parameter \citep{Knobel2015} behind galactic conformity, which arises from comparing satellites in two distinct ranges of halo mass. The \citet{ZuMandelbaum2015} HSMRs separate groups with massive ($\log(M_{\star}/M_\odot)>11$), red centrals from all the others, and offer a halo mass interpretation of our finding that satellite quenching appears to proceed differently in these two types of groups. The change of regimes marks a change in the stellar mass and density dependence of satellite quenching, from mostly stellar mass dependent to mostly density dependent. We find that the concept of group quenching advocated by \citet{Knobel2015}, whereby satellites and group centrals quench similarly at a given stellar mass and density (``feel environment in the same way''), holds in the low halo mass regime ($\sim 70\%$ of the groups): the red fraction of all galaxies in these groups is determined by the same combination of their own stellar mass and large scale density, with a constant $\sim 10\%$ boost with respect to their field counterpart. In more massive halos whose centrals are massive, fully quenched galaxies ($\sim 30\%$ of the groups), satellite quenching also depends on stellar mass and surrounding density, but less on the former and more on the latter, which strongly correlates with halo mass. This may point to different, satellite-specific quenching processes in these massive halos, or simply to the increasing importance of group pre-processing, i.e. satellites first quenching in lower mass groups prior to infall into more massive ones, as halo mass increases \citep{Wetzel2013}. 
\begin{figure} \includegraphics[width=9.cm]{FIG_GROOTES_HALO.png} \caption{The $\Delta(u-r)$ distributions (Eq.~\ref{delta_ur}) of blue field galaxies and blue satellites in low mass and high mass halos ($\log(M_{h}/M_\odot) < $ or $> 12.9$) respectively, with corresponding median values (vertical lines). The star-forming activity of blue satellites in low mass halos ($\sim 70\%$) is similar to that of blue isolated galaxies, while it is mildly suppressed for blue satellites in massive halos ($\sim 30\%$). } \label{grootes_halo} \end{figure} \section{Star-formation in groups} \label{sec:sf} We now examine the level of star-forming activity of blue satellites compared to their field counterparts. \citet{Grootes2017} found that the median sSFR\xspace-stellar mass relation of a morphologically selected sample of disk-dominated satellites in the GAMA survey was mildly suppressed compared to isolated spiral galaxies. They also find that this mild suppression originates from a minority population ($\lesssim 30\%$) with strongly suppressed sSFR\xspace with respect to isolated spirals, whereas the majority of spiral satellites show no sign of being affected by their group environment. Nor do spiral group centrals. These results led them to conclude that the gas cycle of spiral galaxies is largely independent of environment, the intrahalo medium of the host group being the most likely reservoir of cold gas fueling star-formation in satellite galaxies. Our sample of star-forming galaxies is unlikely to be purely disk-dominated, even if most of the star-formation in the local Universe is shown to occur in disk regions \citep[e.g.][]{James2008}. To exclude the less disky galaxies, we apply an upper limit of 2 to the $r$-band GALFIT Sersic index of our blue population, which excludes 33\% of the star-forming sample. 
Transposing the methodology of \citet{Grootes2017} to $(u-r)_{corr}$\ colors, we consider the offset of a galaxy's color from the median value measured for isolated blue (disk) galaxies of the same mass: \begin{equation} \Delta(u-r)=(u-r)_{corr} - \overline{(u-r)}^{field}_{corr}(M_\star), \label{delta_ur} \end{equation} where $\overline{(u-r)}^{field}_{corr}(M_\star)$ is a linear fit to the median color-mass relation (Fig.~\ref{cmd}) of blue field galaxies in our mass and redshift ranges. We do not find that this relation varies with large-scale density, nor does the mean color of blue group centrals (Fig.~\ref{environments}). Figure \ref{grootes_halo} shows the $\Delta(u-r)$ distributions of blue field galaxies, blue satellites in the two regimes of halo mass ($\log(M_{h}/M_\odot) < $ or $> 12.9$), and blue group centrals (in small halos by design), with their corresponding median values. Two-sample Kolmogorov-Smirnov tests show that: i) blue group centrals are very similar to blue field galaxies in color-mass relation (two-tailed p-value $p=0.2$); ii) blue satellites in small halos (69.4\%) are also compatible with being drawn from the same distribution ($p=0.04$), and iii) blue satellites in massive halos (30.6\%) differ significantly from blue field galaxies ($p \sim 10^{-9}$), with a red offset $\Delta(u-r) = 0.087 \pm 0.015$ mag. The cut in Sersic index sharpens the distinction between the populations, but these conclusions hold even without applying one. We propose that the blue satellites of massive red centrals (in the massive halo regime) correspond to the population of disk satellite galaxies in which \citet{Grootes2017} found evidence of suppressed star-formation with respect to field galaxies. Gas fueling must be hindered in these massive halos, in which the fraction of red satellites is also significantly enhanced. 
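The two-sample Kolmogorov-Smirnov comparisons above reduce to the maximum distance between the empirical CDFs of two $\Delta(u-r)$ samples. A minimal self-contained sketch of the statistic (in practice one would use {\tt scipy.stats.ks\_2samp}, which also returns the two-tailed p-value):

```python
from bisect import bisect_right

def ks_statistic(sample1, sample2):
    """Two-sample KS statistic: the maximum distance between the
    empirical CDFs of two Delta(u-r) samples."""
    s1, s2 = sorted(sample1), sorted(sample2)
    n1, n2 = len(s1), len(s2)
    return max(abs(bisect_right(s1, x) / n1 - bisect_right(s2, x) / n2)
               for x in s1 + s2)
```

Identical samples give 0 and fully disjoint samples give 1; the larger the statistic, the less likely the two populations share one parent distribution.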
However most halos appear not to have a significant impact on the star-formation activity of their blue satellites, in agreement with \citet{Grootes2017} who argue for a change in morphological mix rather than a change in the gas fueling process in groups to explain the well-known color-(local) density relation \citep[e.g.][and references therein]{Zehavi2011}, whereby galaxies are redder in denser environments. We postpone a morphological investigation of group galaxies to a future study. \section{Caveats} \label{sec:caveats} Our analysis is based on a FoF\xspace algorithm that was optimized on mock catalogs \citep{Robotham2011} to yield a high group detection rate and a low rate of contamination by interlopers (the best balance between completeness and purity). The linking lengths we selected are in good agreement with the combination found to be optimal for studies of environmental effects by \citet{DuarteMamon2014}. However no group finder is perfect, and it is impossible to recover all the groups exactly as they are known to be in the mocks: some groups will be fragmented and/or merged, contain galaxies unrelated to them, and miss members that will be misplaced in neighboring groups or left alone in the field. Misidentifications of satellites and centrals will follow, and group masses will be wrongly attributed, adding to the uncertainty inherent to any method of assigning a central and a halo mass to each group. As mentioned in Section \ref{subsec:fofconstruction} and detailed in Appendix \ref{appendix:results}, there is a degeneracy between the two free parameters $b_\perp$ and $R$ used to optimize the cost function, which is why we considered additional statistical quantities, as described in Appendix \ref{appendix:quality}. Our choice of parameters, ($b_\perp$,$R$)=(0.06,19), corresponds to the highest purity. 
However, while purity may be most important when comparing satellites to centrals, prioritizing completeness may be preferable when studying conformity. To evaluate the impact of our choice, we tested a combination of parameters (0.07,19.0) that maximises the completeness instead of purity and found no substantial changes in any of our results (conformity, group quenching, star-formation in groups). Additionally, instead of using an optimization based on the global grouping efficiency and global grouping purity, we considered the combination of parameters that maximises the global purity alone (0.07,23.5), and the one that maximises the completeness alone (0.06,25.5). Again, we found all the results to be qualitatively similar, albeit noisier. These different combinations may be considered similar by design as they belong to the zone of degeneracy of the same algorithm and the changes in purity and completeness are relatively small. However our red fractions of centrals and satellites (Fig.~\ref{redfraction_mass}) are also in qualitative agreement with previous studies using the official SDSS group catalog, and can also be reproduced using the GAMA group catalog of \citet{Robotham2011}, despite significant differences in their rates of fragmentation and/or merging (Section \ref{subsec:fofconstruction}). \citet{Campbell2015} showed that accounting for the systematic errors associated with a particular group finder is best achieved by running the group finder over mock catalogs that include realistic galaxy colours and proposed a new statistic (called HTP for halo transition probability) to weigh the combined impact of the above errors. While such forward modeling is beyond the scope of the present paper, their general conclusions shed light on the uncertainties that may plague our results (and others that also do not account for these errors, e.g. \citealp{Weinmann2006,Yang2008,Yang2009,Peng2012,Wetzel2012,Knobel2015}). 
\citet{Campbell2015} created a mock by populating the dark matter halos of a large $N$-body simulation with galaxies of different luminosities and colors ($g-r$), using both subhalo abundance matching and age matching \citep{Hearin2013}, which reproduces both one-halo and two-halo conformity \citep{Hearin2014}. Three different types of group finder were tested on this mock to assess their ability to recover color-dependent halo occupation statistics, including satellite fractions, red fractions and conformity. The resulting group catalogs were found to be remarkably similar (which is reassuring and in agreement with our tests), and to recover most color-dependent statistics reasonably well. In particular, the difference between the red fractions of centrals and satellites at fixed $r$-band luminosity (not shown as a function of stellar mass) is qualitatively reproduced. The three group finders tested by \citet{Campbell2015} also recover galactic conformity at fixed halo mass, defined using abundance matching on total group luminosity, but also tend to create weak conformity when this property is removed from the mock. They also found that the strength of the recovered or induced conformity signal at fixed halo mass may be reduced or enhanced, depending on whether luminosity or stellar mass is the primary galaxy property driving halo occupation in the mock data, and in the recovered groups. In other words, the definitions of the ``true'' and inferred halo masses (which can both be based on luminosity, or on stellar mass, or on one property for the mock and the other for the FoF\xspace) have a substantial impact on the systematic uncertainties that are assessed through forward modeling. 
Nevertheless, we understand from \citet{Campbell2015} that whatever the choice of primary galaxy property, or properties, none of the three FoF\xspace algorithms can completely abolish conformity at fixed halo mass when it exists in the mock, nor induce a significant conformity signal when it does not. Running our FoF\xspace algorithm on real data, we find that conformity does indeed depend on the definition of halo mass, that it is weak if we use the total group stellar mass, and non-existent in a scenario that assigns larger halo masses to red centrals than to blue ones. We conclude from the work of \citet{Campbell2015} that our particular choice of FoF\xspace algorithm and parameters is unlikely to be solely responsible for these results. \section{Summary and conclusions} \label{sec:conclusions} We investigated the properties of central and satellite galaxies in groups at redshift $z \lesssim 0.2$ and with stellar mass $\log (M_{\star} / M_\odot) > 10.25$ using the spectroscopic survey Galaxy and Mass Assembly (GAMA). The group catalog was constructed using an anisotropic Friends-of-Friends algorithm taking into account the effects of redshift-space distortion. Red (quiescent) and blue (star-forming) galaxies were classified according to their dust-corrected $(u-r)$ color, shown to be a good bimodal measure of star-forming activity. We explored the fraction of quiescent galaxies (red fraction) in different environments. Our density contrast is defined as the density of central galaxies (satellites were excluded), smoothed by a 3D Gaussian kernel of $\sigma=5$ Mpc and normalized by the redshift-dependent mean density of the survey (using an 8 Mpc scale yields similar results). This estimator probes beyond the virial radius of all groups and is therefore quite different from the ``fifth nearest neighbor'' estimator generally used in satellite quenching studies, which is sensitive to the varying size of DM halos. 
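The density contrast estimator just summarised can be sketched as follows. This is a minimal illustration, assuming comoving positions in Mpc and using the standard $(2\pi\sigma^2)^{-3/2}$ normalization of an isotropic 3D Gaussian kernel:

```python
import math

def smoothed_density(point, centrals, sigma=5.0):
    """Density of central galaxies at a 3D point, smoothed by an
    isotropic Gaussian kernel of width sigma (Mpc)."""
    norm = (2.0 * math.pi * sigma ** 2) ** 1.5
    px, py, pz = point
    total = 0.0
    for (x, y, z) in centrals:
        r2 = (x - px) ** 2 + (y - py) ** 2 + (z - pz) ** 2
        total += math.exp(-r2 / (2.0 * sigma ** 2))
    return total / norm

def density_contrast(point, centrals, mean_density):
    """Smoothed density normalized by the (redshift-dependent)
    mean density of the survey."""
    return smoothed_density(point, centrals) / mean_density
```

A single central contributes the kernel peak value at its own position and a negligible amount beyond a few smoothing lengths, so the estimator probes scales well beyond the virial radius of any group.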
Our results can be summarised as follows: \begin{enumerate} \itemsep0.3em \item \textit{Mass and density quenching:} The red fraction of all galaxies, whether isolated or in groups, is a strongly increasing function of stellar mass, a phenomenon referred to as mass quenching. At fixed stellar mass, the red fraction of all galaxies, including isolated galaxies, increases with the large-scale density contrast. We account for both these effects by defining a quenching efficiency (QE) that separates out both mass and density quenching, designed to be zero for field (isolated) galaxies. \item \textit{Galactic conformity:} The average red fraction of satellites around quenched centrals is significantly higher than that of satellites of the same stellar mass around star-forming centrals. Their QE increases with their central's color, group stellar mass and large-scale density, while that of satellites of blue centrals appears to be independent of all three parameters. This creates a conformity signal that increases with density and group stellar mass. Most of the signal originates from the most massive groups in the densest environments around quenched centrals, in a group stellar mass regime devoid of blue centrals. Some amount of conformity remains at fixed group stellar mass and density. \item \textit{Halo quenching:} Assuming group mass traces halo mass, galactic conformity is indeed observed at fixed halo mass as originally claimed. However, if red centrals inhabit more massive halos than blue ones of the same stellar mass, as several studies suggested, and we assume a color-dependent halo-to-stellar-mass ratio, we find that conformity disappears entirely at fixed halo mass. 
Two quenching regimes emerge: at $\log(M_{h}/M_\odot)\lesssim13$, centrals still undergo quenching but conformity is insignificant at any given stellar mass and density; at $\log(M_{h}/M_\odot)\gtrsim13$, a cutoff above which all centrals are massive ($\log(M_{\star}/M_\odot)>11$) and red, conformity can no longer be measured, and the satellites undergo significantly more quenching than their counterparts in smaller halos, all the more so as density increases. This accounts for the conformity signal that increases with density when both regimes are mixed. \item \textit{Group quenching:} In the low halo mass regime, satellites and group centrals exhibit the same quenching excess, $\sim 10\%$ in red fraction, over field galaxies at fixed stellar mass and density, in agreement with the notion of group quenching advocated by \citet{Knobel2015}, who argued against the importance of satellite-specific processes. However in the high halo mass regime where satellites still undergo quenching while their centrals are fully quenched, the central/satellite dichotomy cannot be ruled out. In this regime, satellite quenching strongly depends on large-scale density, which correlates with halo mass. \item \textit{Star-formation activity in groups:} Star-forming group centrals and the majority of star-forming satellites, which reside in low mass halos, show no deviation from the color$-$stellar mass relation of blue field galaxies. However star-forming satellites in high mass halos ($\sim 30\%$) significantly deviate from the color distribution of blue field galaxies, with a mean $(u-r)$ reddening of $\sim + 0.09$ magnitude. \end{enumerate} \section*{Acknowledgements} We thank our anonymous referee for providing comments that significantly improved this work. This research was carried out within the framework of the Spin(e) collaboration (ANR-13-BS05-0005, http://cosmicorigin.org). 
GAMA is a joint European-Australasian project based around a spectroscopic campaign using the Anglo-Australian Telescope. The GAMA input catalogue is based on data taken from the Sloan Digital Sky Survey and the UKIRT Infrared Deep Sky Survey. Complementary imaging of the GAMA regions is being obtained by a number of independent survey programmes including GALEX MIS, VST KiDS, VISTA VIKING, WISE, Herschel-ATLAS, GMRT and ASKAP providing UV to radio coverage. GAMA is funded by the STFC (UK), the ARC (Australia), the AAO, and the participating institutions. The VISTA VIKING data used in this paper are based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 179.A-2004. The GAMA website is \href{http://www.gama-survey.org/}{http://www.gama-survey.org/}.
\section{Introduction} Blazars---active galaxies with relativistically beamed non-thermal broad-band emission---belong to the brightest gamma-ray sources in the sky. Their emission is notoriously variable, and their gamma-ray variability has been measured on time scales ranging from years to minutes (see \cite{mad16} for a recent review). The Fermi Large Area Telescope is the most sensitive instrument for measuring high-energy (HE) gamma rays in the energy range 0.1--10 GeV \cite{atw09}. Since August 2008, it has performed almost uninterrupted monitoring of the entire sky, orbiting the Earth with a period of $95\;{\rm min}$. Determining the shortest variability time scale for the gamma-ray emission of blazars has tremendous theoretical implications for the physics of energy dissipation and particle acceleration in relativistic jets. Before the launch of Fermi, the shortest variability time scales of $t_{\rm var} \sim 2\;{\rm min}$ were measured in the very-high-energy (VHE) gamma rays above $100\;{\rm GeV}$ by ground-based Cherenkov telescopes, in particular by H.E.S.S. in PKS~2155-304 \cite{aha07} and by MAGIC in Mrk~501 \cite{alb07}. Such time scales are much shorter than the light-crossing time $t_{\rm bh} \sim 3 M_{\rm bh,9}\;{\rm h}$ of supermassive black holes located at the bases of relativistic jets of blazars. This implies the existence of very compact local dissipation sites, e.g., related to relativistic magnetic reconnection \cite{gia09, nal11}, or extremely efficient jet focusing due to recollimation shocks \cite{bod17}. In addition, the combination of high apparent luminosity with small size of the emitting region, i.e., exceptional radiative compactness, poses a problem of potentially very efficient intrinsic absorption of gamma-ray photons. 
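The quoted light-crossing scale $t_{\rm bh} \sim 3 M_{\rm bh,9}\;{\rm h}$ can be checked with a back-of-the-envelope calculation from the Schwarzschild radius $r_{\rm s} = 2GM/c^2$; a minimal sketch in cgs units:

```python
G = 6.674e-8      # gravitational constant, cm^3 g^-1 s^-2
C = 2.998e10      # speed of light, cm s^-1
M_SUN = 1.989e33  # solar mass, g

def light_crossing_hours(m_bh_1e9_msun):
    """Light-crossing time of the Schwarzschild radius, in hours,
    for a black hole mass given in units of 10^9 solar masses."""
    m = m_bh_1e9_msun * 1e9 * M_SUN
    r_s = 2.0 * G * m / C ** 2  # Schwarzschild radius (cm)
    return r_s / C / 3600.0
```

For $M_{\rm bh,9}=1$ this gives $\simeq 2.7\;{\rm h}$, consistent with the $\sim 3 M_{\rm bh,9}\;{\rm h}$ scaling, and well above the observed minute-scale variability.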
Additional challenges arise in the case of the most luminous blazars known as flat spectrum radio quasars (FSRQs) due to the presence of dense radiation fields that provide a target for \emph{external absorption} of gamma rays~\cite{tav11,nal12}. In order to avoid such absorption, one can consider highly relativistic bulk motions with Lorentz factors $\Gamma \sim 100$~\cite{beg08} or possibly conversion of gamma-ray photons into axions \cite{tav12}. There were previous claims of the detection of gamma-ray variability in Fermi/LAT sources at suborbital time scales, e.g., in PKS~1510-089 \cite{fos13}. When analysing the brightest gamma-ray flares of blazars during the first four years of the Fermi mission \cite{nal13}, the very brightest case of 3C~454.3 at MJD~55520 (November 2010) \cite{abd11} was investigated for possible evidence of suborbital variability. That~analysis, performed with the older calibration standard {\tt P7V6}, was inconclusive, and hence it was not published at that time (upper limits on variability time scales of a few hours were reported for this event by~\cite{fos11}). However, after the Fermi Collaboration presented the case for suborbital variability in blazar 3C~279 at MJD $\simeq$ 57189 (Jun 2015) \cite{ack16}, it became clear that the case of 3C~454.3 needed to be reconsidered. Indeed, the theoretical implications of detectable suborbital gamma-ray variability in 3C~279 are extreme (see also \cite{pet17,vit17,aha17}). For example, in the standard ERC (External Radiation Comptonization) scenario of gamma-ray emission in FSRQ blazars, the minimum jet Lorentz factor should be $\Gamma_{\rm min} \simeq 50$ to satisfy the opacity, cooling, Eddington and SSC constraints, and $\Gamma_{\rm eqp} \simeq 120$ to achieve equipartition between matter and magnetic fields \cite{ack16}. 
Moreover, a large fraction of the total jet power should be concentrated into a tiny emitting region of \mbox{$R_\gamma \simeq 10^{-4}(\Gamma/50)\;{\rm pc}$} at the distance scale comparable to the broad-line region $r_\gamma \gtrsim r_{\rm BLR} \simeq 0.1\;{\rm pc}$. Such requirements cannot be reconciled with the conventional models of blazar emission \cite{der09,sik09,ghi10,boe13,nal14}. The results of suborbital Fermi/LAT analysis for 3C~454.3 are presented in Section \ref{sec_res}. A comparison of these results with the case of 3C~279 is discussed in Section \ref{sec_dis}. Details of the analysis method are described in Section \ref{sec_met}. Section \ref{sec_con} presents the conclusions. \section{Results} \label{sec_res} Figure \ref{fig_lc_long} shows the long-term gamma-ray variations of blazar 3C~454.3, calculated with the use of adaptive time bins \cite{sob14}. Over the period of 1000 days, the gamma-ray flux of 3C~454.3 varies by almost a factor of 1000. Correspondingly, the lengths of the time bins range from 25 days down to $\sim$20~min, always satisfying the detection condition of ${\rm TS} > 25$. During this time, 3C~454.3 produced three major outbursts. The first two, peaking at MJD~55167 and MJD~55294, were originally investigated in \cite{ack10}. The~last one, peaking at MJD~55520 with $F_{>100\;{\rm MeV}} \sim 8.5\times 10^{-5}\;{\rm ph\,s^{-1}\,cm^{-2}}$ (for 3 h bins)~\cite{abd11}, represents the brightest gamma-ray state of any blazar to date, also exceeding the brightest gamma-ray~pulsars. \begin{figure}[H] \centering \includegraphics[width=0.99\textwidth]{figs/lc_3c454_lat_long2.eps} \caption{Long-term gamma-ray light curve ($E > 100\;{\rm MeV}$) of blazar 3C 454.3 calculated from Fermi Large Area Telescope (LAT) data using adaptive time bins with Test Statistic (TS) $> 25$ \cite{sob14}. 
The dashed vertical lines indicate the time range covered by Figure \ref{fig_lc_med}.} \label{fig_lc_long} \end{figure} \vspace{-6pt} Figure \ref{fig_lc_med} presents the gamma-ray light curves of 3C~454.3 produced on suborbital time scales in the time range MJD~55516-22. These light curves are calculated for three values of characteristic minimum time bin length $t_{\rm min} = 10, 5, 3\;{\rm min}$. As the lengths of visibility windows appear modulated on a superorbital time scale, the number of time bins per orbit is variable. The maximum likelihood analysis returns the predicted number $N_{\rm pred}$ of gamma-ray photons (events/counts) associated with 3C~454.3. It is a good measure of the relative measurement error $\delta F/F \simeq N_{\rm pred}^{-1/2}$, or the signal-to-noise ratio ${\rm SNR} \simeq N_{\rm pred}^{1/2}$. Within the time range of MJD~55517-21, we find persistent values of $N_{\rm pred}$ per time bin, with the median values $N_{\rm pred,med} = 147, 70, 37$ for $t_{\rm min} = 10, 5, 3\;{\rm min}$, respectively. For every orbit where we have at least three independent consecutive measurements (there is no such case for $t_{\rm min} = 10\;{\rm min}$), we perform a reduced $\chi^2$ test against the null hypothesis that the photon flux is constant within the orbital visibility window. In Figure \ref{fig_lc_med}, we report the probability values $p$ for the null hypothesis. We find the median values of $p_{\rm med} = 0.40, 0.58$ for $t_{\rm min} = 5, 3\;{\rm min}$, respectively. The~smallest values that we found are $\sim$$10^{-2}$; hence, we cannot reject the null hypothesis, nor claim statistically significant suborbital flux variability. \begin{figure}[H] \centering \includegraphics[width=\textwidth]{figs/lc_3c454.eps} \caption{{\bf Top three panels:} gamma-ray light curves ($E > 100\;{\rm MeV}$) of blazar 3C~454.3 extracted from the Fermi/LAT data during the brightest gamma-ray flare in Nov 2010 for three values of $t_{\rm min}$. 
{\bf Fourth~panel:} the values of photon counts $N_{\rm pred}$ per time bin. {\bf Bottom panel:} the values of probability $p$ that photon flux variations measured for individual orbits (with at least three independent time bins) are consistent with the null hypothesis of constant flux. The dashed vertical lines indicate the time range covered by Figure \ref{fig_lc_short}.} \label{fig_lc_med} \end{figure} \vspace{-6pt} Figure \ref{fig_lc_short} shows a short section (MJD~55518.17-72) of the suborbital light curves, comparing directly the flux measurements for $t_{\rm min} = 10, 5, 3\;{\rm min}$. Within this time range, the strongest departure from constant flux is found for the orbit centred at MJD~55518.32. For $t_{\rm min} = 5\;{\rm min}$, we have four measurements with $\chi^2/{\rm dof} = 9.79/3$, which yields $p = 0.02$. We also measure the weighted mean photon flux $F_{\rm mean} = 5.41$ (in units of $10^{-5}\;{\rm ph\,s^{-1}\,cm^{-2}}$), the root-mean-square (rms) of flux ${\rm rms}(F) = 1.07$, and the rms of flux statistical error ${\rm rms}(\delta F) = 0.68$. For comparison, in the case of $t_{\rm min} = 3\;{\rm min}$, we have eight~measurements with $\chi^2/{\rm dof} = 13.04/7$, $p = 0.07$, $F_{\rm mean} = 5.45$, ${\rm rms}(F) = 1.26$, and ${\rm rms}(\delta F) = 0.97$. \begin{figure}[H] \centering \includegraphics[width=\textwidth]{figs/lc_3c454_zoom.eps} \caption{A section of light curves shown in Figure \ref{fig_lc_med}. The observation time for the cases of \mbox{$t_{\rm min} = 5,10\;{\rm min}$} (green, red) are shifted by $+0.018,+0.036\;{\rm d}$ with respect to the case of \mbox{$t_{\rm min} = 3\;{\rm min}$} (blue). The `x' marks indicate the value of $-\log_{10}p$, where $p$ is the probability of steady intraorbit~flux.} \label{fig_lc_short} \end{figure} \vspace{-6pt} Figure \ref{fig_stats} shows the results of structure function analysis. 
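The per-orbit constant-flux test quoted above amounts to a weighted mean and a $\chi^2$ sum over one visibility window. A minimal sketch (the $p$-values in the text then follow from the $\chi^2$ survival function with $n-1$ degrees of freedom, e.g. {\tt scipy.stats.chi2.sf}):

```python
def constant_flux_test(fluxes, errors):
    """Chi-square statistic of flux measurements against the null
    hypothesis of constant flux within one visibility window.
    Returns (weighted mean flux, chi2, degrees of freedom)."""
    weights = [1.0 / e ** 2 for e in errors]
    f_mean = sum(w * f for w, f in zip(weights, fluxes)) / sum(weights)
    chi2 = sum(((f - f_mean) / e) ** 2 for f, e in zip(fluxes, errors))
    return f_mean, chi2, len(fluxes) - 1
```

Applied to the four 5-min measurements of the orbit centred at MJD~55518.32, this is the computation that yields $\chi^2/{\rm dof} = 9.79/3$ and $p = 0.02$.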
As described in Section \ref{sec_met}, structure function is calculated for (1) absolute photon flux difference, (2) statistical significance of photon flux difference, and (3) photon flux ratio. For a particular light curve, the time delay values $\Delta t$ probe the range from $t_{\rm min}$ to about three days. We use the root-mean-square (rms) statistic (square root of classical structure function) to probe systematic variations, and the maximum (max) statistic to probe occasional variations (shots). The rms of absolute flux difference decreases systematically with increasing $t_{\rm min}$ for $\Delta t < 1\;{\rm d}$, as expected for statistical noise. The rms values converge at ${\rm rms}(\Delta f) \simeq 3\times 10^{-5}\;{\rm ph\,s^{-1}\,cm^{-2}}$ for $\Delta t \sim 2\;{\rm d}$. The maximum values decrease systematically and do not converge, again as expected. On the other hand, the rms of statistical significance of flux difference converges for short time delays $\Delta t < 2\;{\rm h}$ at $\sigma_{\Delta f} \simeq 1$. For the same range of time delays, the maximum values of $\sigma_{\Delta f}$ are close to 3. For longer time delays, the rms of $\sigma_{\Delta f}$ exceeds unity, systematically increasing with increasing $t_{\rm min}$. For example, for $\Delta t = 1\;{\rm d}$, we find that ${\rm rms}(\sigma_{\Delta f}) \simeq 2$ for $t_{\rm min} = 3\;{\rm min}$, and ${\rm rms}(\sigma_{\Delta f}) \simeq 6$ for $t_{\rm min} = 95\;{\rm min}$ (1-orbit light curve). The maximum values of $\sigma_{\Delta f}$ also exceed 3 for $\Delta t > 2\;{\rm h}$. The~significance of occasional flux variations (max) exceeds 5 for $\Delta t > 5\;{\rm h}$ (for $t_{\rm min} = 10\;{\rm min}$ and $1\;{\rm orb}$). However, the significance of systematic flux variations (rms) exceeds 3 only for $\Delta t > 9\;{\rm h}$. In order to estimate the flux-doubling time scale $\tau_{\rm R2} = \Delta t(R_f = 2)$, the max values of the flux ratio are evaluated. 
Estimates of the flux-doubling time scale are found to increase systematically with $t_{\rm min}$, from $\tau_{R2} \simeq 3\;{\rm h}$ for $t_{\rm min} = 3\;{\rm min}$, to $\tau_{R2} \simeq 8\;{\rm h}$ for $t_{\rm min} = 1\;{\rm orb}$. \begin{figure}[H] \centering \includegraphics[width=\textwidth]{figs/flux_change_3c454.eps} \caption{Variability statistics as functions of observed time scale for the gamma-ray flux of 3C~454.3 during the major flare at MJD~55516-22. {\bf Top panel} shows the ${\rm rms}(\Delta f)$ (solid lines) and $\max|\Delta f|$ (dotted lines) for the distributions of photon flux differences $\Delta f = f_2-f_1$. {\bf Middle panel} shows the ${\rm rms}(\sigma_f)$ (solid lines) and $\max|\sigma_f|$ (dotted lines) for the distributions of photon flux difference significances $\sigma_f = \Delta f/\delta f$. {\bf Bottom panel} shows the $\max(R_f)$ (dotted lines) for the distributions of photon flux ratios $R_f = f_2/f_1$. The line colours correspond to different characteristic binning time~scales. } \label{fig_stats} \end{figure} \section{Discussion} \label{sec_dis} This analysis of blazar 3C~454.3 at MJD~55520 can be compared with the recent study of blazar 3C~279 at MJD~57189 \cite{ack16}. In the case of 3C~279, the exposure to gamma-rays per orbit was enhanced by a factor of $\sim 3$ thanks to the first successful pointing observation by Fermi/LAT. Although occultations by the Earth were still significant, the visibility windows were longer and the exposure to the target more uniform. Nevertheless, the number of gamma-ray counts $N_{\rm pred}$ collected over very short time bins, $t_{\rm bin} \sim 3\;{\rm min}$, is on average higher in the case of 3C~454.3, since its average photon flux is higher by a factor of $\simeq 2$. Moreover, such high gamma-ray flux was sustained over a longer period of almost four days, allowing for many more suborbital detections. 
At redshift $z = 0.859$, the luminosity distance to 3C~454.3 is $d_L \simeq 5.55\;{\rm Gpc}$, hence \mbox{$4\pi d_{\rm L}^2 \simeq 3.7\times 10^{57}\;{\rm cm^2}$}. An upper limit on suborbital photon flux variation amplitude of \mbox{$F_{\rm suborb} < 6\times 10^{-6}\;{\rm ph\,s^{-1}\,cm^{-2}}$} (the rms of photon flux difference for $t_{\rm bin} = 10\;{\rm min}$ at $\Delta t \simeq 12\;{\rm min}$) with the mean photon energy of $E_{\rm mean} \simeq 0.37\;{\rm GeV}$ (corresponding to the photon index of $\Gamma = 2.181$; see below) can be translated into the apparent gamma-ray luminosity of $L_{\gamma,\rm suborb} < 1.3\times 10^{49}\;{\rm erg\,s^{-1}}$. This limit is comparable with the apparent luminosity of suborbital variations detected in blazar 3C~279 ($F_{\rm suborb} \simeq 10^{-5}\;{\rm ph\,s^{-1}\,cm^{-2}}$, $d_{\rm L} \simeq 3.11\;{\rm Gpc}$) \cite{ack16}. Because of the high redshift of 3C~454.3, a future detection of gamma-ray variability on time scales of several minutes would create even more serious problems for the theory of blazars. \section{Materials and Methods} \label{sec_met} The analysis of Fermi/LAT data presented in this work is performed with the final software package {\tt Science Tools} version {\tt v10r0p5}\footnote{\url{https://fermi.gsfc.nasa.gov/ssc/data/analysis/software/}} and with the final instrument calibration standard {\tt P8R2\_SOURCE\_V6}. Photons are selected in the energy range 0.1--10 GeV from a Region of Interest (RoI) of $10^\circ$, applying a zenith angle cut of $<100^\circ$. Background sources were selected from the 3FGL catalog~\cite{ace15} within the radius of $25^\circ$. We applied a detection criterion ${\rm TS} > 10$ and $N_{\rm pred} > 3$. Although Fermi/LAT is characterised by a very wide field of view (2.4 sr), individual sources are visible only during short time intervals for each orbit. 
Using the spacecraft telemetry data, visibility windows are selected when the angular separation between the source and the main axis of LAT is $\alpha_{\rm src} < 60^\circ$, also requiring good-time-intervals (GTIs) and avoiding the South Atlantic Anomaly (SAA). Such visibility windows are of variable length, but typically they are shorter than $\sim$30 min. Given a minimum time scale $t_{\rm min}$, each visibility window of duration $T_{\rm vis}$ is divided into as many time bins of equal length $t_{\rm bin} \ge t_{\rm min}$ as possible, i.e., $t_{\rm bin} = T_{\rm vis}/{\rm floor}(T_{\rm vis}/t_{\rm min})$. While this choice results in slightly different values of $t_{\rm bin}$ for each orbit, it ensures that the entire exposure of 3C~454.3 (except windows shorter than $t_{\rm min}$) is used in the analysis. A standard maximum likelihood analysis is used to measure the gamma-ray flux of 3C~454.3. In~order to minimise the number of degrees of freedom, all other parameters, including the normalisations and photon indices of background sources, were fixed to their average values determined from a global fit performed over the time range MJD~55516-22. The photon index of 3C~454.3 was fixed at the value $\Gamma = 2.181$ determined in the same way. From the maximum likelihood analysis, light curves are obtained in the form $({t_i,f_i,\delta f_i})$, where $t_i$ is the centre of the time bin, $f_i$ is the measured photon flux, and $\delta f_i$ is the statistical $1\sigma$ error of the photon flux measurement. A structure function \cite{emm10} is calculated from a given light curve by considering all pairs of measurements $(t,f,\delta f)$ made at times $t_1 < t_2$, binned according to the logarithm of the time delay $\Delta t = t_2-t_1$. Given a measured parameter $F$, for every delay bin, the distribution of differences $\Delta F$ is determined, and then the statistics ${\rm rms}(\Delta F)$ and $\max(\Delta F)$ are calculated.
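The adaptive binning rule above can be sketched in a few lines; this is a minimal illustration only, and the function name and the choice of minutes as the time unit are ours, not part of the analysis pipeline.

```python
import math

def suborbital_bins(T_vis, t_min):
    """Split a visibility window of duration T_vis into equal time bins of
    length t_bin = T_vis / floor(T_vis / t_min) >= t_min (all quantities in
    minutes); windows shorter than t_min are discarded. Returns bin edges."""
    n = math.floor(T_vis / t_min)
    if n == 0:
        return []  # window shorter than t_min: not used
    t_bin = T_vis / n
    return [i * t_bin for i in range(n + 1)]

# A 25-min window with t_min = 10 min yields two bins of 12.5 min each.
edges = suborbital_bins(25.0, 10.0)
```

This choice never produces a bin shorter than $t_{\rm min}$, while leaving no part of a usable visibility window unbinned.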
In particular, three types of variations are considered: (1) absolute photon flux difference $\Delta f = |f_2 - f_1|$; (2) significance of photon flux difference $\sigma_f = |f_2-f_1|/\sqrt{(\delta f_1)^2+(\delta f_2)^2}$; and (3) photon flux ratio $R_f = f_2/f_1$. \section{Conclusions} \label{sec_con} An analysis of the gamma-ray variability of blazar 3C~454.3 in the Fermi/LAT data during its brightest gamma-ray flare at MJD~55516-22 is performed on suborbital time scales ($t_{\rm bin} < 95\;{\rm min}$). The statistical significance of the photon flux measurements is certainly not worse than in the case of blazar 3C~279 around MJD~57189 \cite{ack16}, where the exposure was increased thanks to a successful pointing observation. By probing different characteristic suborbital time scales $t_{\rm min} = 3,5,10\;{\rm min}$, no evidence is found for statistically significant suborbital variability. The reduced $\chi^2$ test against the null hypothesis of constant flux per orbital visibility window returns $p > 10^{-2}$. The structure function analysis suggests: (1) an upper limit on suborbital variations of $\Delta F < 6\times 10^{-6}\;{\rm ph\,s^{-1}\,cm^{-2}}$ corresponding to $\Delta L < 1.3\times 10^{49}\;{\rm erg\,s^{-1}}$; (2) occasional (max) flux variations become significant ($5\sigma$) for $\tau > 5\;{\rm h}$; (3)~systematic (rms) flux variations become significant ($3\sigma$) for $\tau > 9\;{\rm h}$; and (4) the flux-doubling time scale is of the order of $\tau_{R2} \sim 6\;{\rm h}$. These results are consistent with the standard models of blazar emission. \vspace{6pt} \acknowledgments{Discussions with Greg Madejski and Alex Markowitz are acknowledged. This work was supported by the Polish National Science Centre grant 2015/18/E/ST9/00580.
This work is based on public data acquired by the Fermi Large Area Telescope created by NASA and DoE (USA) in collaboration with institutions from France, Italy, Japan and Sweden.} \conflictsofinterest{The author declares no conflict of interest.} \reftitle{References}
\section{Introduction} \label{sec:intro} The most widely used measure of correlation is the product-moment correlation coefficient. Its definition is quite simple. Consider a paired sample, that is $\{(x_1,y_1),\ldots,(x_n,y_n)\}$\; where the two numerical variables are the column vectors $X_n = (x_1,\ldots,x_n)^T$ and $Y_n$. Then the {\it product moment} of $X_n$ and $Y_n$ is just the inner product \begin{equation}\label{eq:PM} \PM(X_n,Y_n) \;=\; \frac{1}{n} \big\langle X_n,Y_n \big\rangle \;=\; \frac{1}{n} X_n^T Y_n \;=\; \ave_{i=1}^n x_i y_i \;\; . \end{equation} When the $(x_i,y_i)$ are i.i.d. observations of a stochastic vector $(X,Y)$ the population version is the expectation $E[XY]$. The product moment \eqref{eq:PM} lies at the basis of many concepts. The {\it empirical covariance} of $X_n$ and $Y_n$ is the `centered' product moment \begin{equation}\label{eq:Cov} \Cov(X_n,Y_n) \;=\; \frac{n}{n-1} \PM(X_n - \ave(X_n),Y_n - \ave(Y_n)) \end{equation} with population version $E[(X-E[X])(Y-E[Y])]$\;. Therefore \eqref{eq:PM} can be seen as a `covariance about zero'. And finally, the product-moment correlation is given by \begin{equation}\label{eq:Pearson} \Cor(X_n,Y_n) \;=\; \frac{n}{n-1} \PM(z(X_n),z(Y_n)) \end{equation} where the z-scores are defined as $z(X_n) = (X_n - \ave(X_n))/\Std(X_n)$ with the standard deviation $\Std(X_n) = \sqrt{\Var(X_n)} = \sqrt{\Cov(X_n,X_n)}$\;. The product-moment quantities \eqref{eq:PM}--\eqref{eq:Pearson} satisfy $\PM(X_n,Y_n) = \PM(Y_n,X_n)$ and\newline $\PM(X_n,X_n) \gs 0$\;. They have several nice properties. The {\bf independence property} states that when $X$ and $Y$ are independent we have $\Cov(X,Y) = 0$ (assuming the variances exist). Secondly, when our data set $\bX_{n,d}$ has $n$ rows (cases) and $d$ columns (variables, dimensions) we can assemble all the product moments between the variables in a $d \times d$ matrix \begin{equation}\label{eq:PMmatrix} \PM(\bX_{n,d}) = \frac{1}{n} \bX_{n,d}^T \bX_{n,d} \;\; . 
\end{equation} The {\bf PSD property} says that the matrix \eqref{eq:PMmatrix} is positive semidefinite, which is crucial. For instance, we can carry out a spectral decomposition of the covariance (or correlation) matrix, which forms the basis of principal component analysis. When $d < n$ the covariance matrix will typically be positive definite hence invertible, which is essential for many multivariate methods such as the Mahalanobis distance and discriminant analysis. The third property is {\bf speed}: the product moment, covariance and correlation matrices can be computed very fast, even in high dimensions $d$. Despite these attractive properties, it has been known for a long time that the product-moment covariance and correlation are overly sensitive to outliers in the data. For instance, adding a single far outlier can change the correlation from $0.9$ to zero or to $-0.9$. Many robust alternatives to the Pearson correlation have been proposed in order to reduce the effect of outliers. The first one was probably Spearman's (1904) correlation coefficient, in which the $x_i$ and $y_i$ are replaced by their ranks. Rank-based correlations do not measure a linear relation but rather a monotone one, which may or may not be preferable in a given application. A second approach is based on the identity \begin{equation}\label{eq:GK} \Cor(X,Y) = \frac{\Var(\tilde{X}+\tilde{Y})- \Var(\tilde{X}-\tilde{Y})} {\Var(\tilde{X}+\tilde{Y})+ \Var(\tilde{X}-\tilde{Y})} \end{equation} where $\tilde{X}=X/\sqrt{\Var(X)}$ and $\tilde{Y}=Y/\sqrt{\Var(Y)}$. \cite{Gnanadesikan:RobEst} proposed to replace the nonrobust variance by a robust scale estimator. This approach is quite popular, see e.g. \citep{Shev:book}. However, it does not satisfy the independence property, and the resulting correlation matrix is not PSD, so it needs to be orthogonalized, yielding the OGK method of \cite{Maronna:OGK}.
Thirdly, one can start by computing a robust covariance matrix $\bC$ such as the Minimum Covariance Determinant (MCD) method of \cite{Rousseeuw:LMS}. Then we can define a robust correlation measure between variables $X_j$ and $X_k$ by \begin{equation}\label{eq:Cov2Cor} R(X_j,X_k) := C_{jk}/\sqrt{C_{jj}C_{kk}}\;\;. \end{equation} In this way we do produce a PSD matrix, but we lose the independence property. In fact, here the robust correlation between two variables depends on the other variables, so adding or removing a variable changes it. Also, the computational requirements do not scale well with the dimension $d$, making this approach infeasible for high dimensions. Another possibility is to start from the Spatial Sign Covariance Matrix (SSCM) of \cite{Visuri:Rank}. This method first computes the {\it spatial median} $\hbmu$ of the data points $\bx_i$ by minimizing $\sum_i ||\bx_i - \bmu||$. It then computes the product moment of the so-called {\it spatial signs} $(\bx_i - \hbmu)/||\bx_i-\hbmu||$. Then \eqref{eq:Cov2Cor} can be applied. The result is PSD but does not satisfy the independence property either. For high-dimensional data, the product-moment technology is computationally attractive. This suggests using the idea underlying Spearman's rank correlation, which is to transform the variables first. We do not wish to restrict ourselves to ranks however, and we want to explore how far the principle of robustness by data transformation can be pushed. In general, we consider a transformation $g$ applied to the individual variables, and we define the resulting $g$-product moment as \begin{equation}\label{eq:PMg} \PM_g (X_n,Y_n) \;\;:=\;\; \PM(g(X_n),g(Y_n)) \end{equation} and similarly for $\Cov_g$ and $\Cor_g$. Choosing $g(x_i)=x_i$ yields the usual product moment, and setting $g(x_i)$ equal to its rank yields the Spearman correlation. The $g$-product moment approach satisfies all three desired properties. 
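As a concrete illustration of the definitions \eqref{eq:PM}--\eqref{eq:Pearson} and \eqref{eq:PMg}, here is a minimal Python sketch; the function names are ours and plain lists stand in for the column vectors $X_n$ and $Y_n$.

```python
import math

def pm(x, y):
    """Product moment: (1/n) * <x, y>."""
    return sum(a * c for a, c in zip(x, y)) / len(x)

def cov(x, y):
    """Empirical covariance: centred product moment times n/(n-1)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return n / (n - 1) * pm([a - mx for a in x], [c - my for c in y])

def cor(x, y):
    """Product-moment (Pearson) correlation via z-scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx, sy = math.sqrt(cov(x, x)), math.sqrt(cov(y, y))
    zx = [(a - mx) / sx for a in x]
    zy = [(c - my) / sy for c in y]
    return n / (n - 1) * pm(zx, zy)

def cor_g(x, y, g):
    """g-product-moment correlation: transform each variable by g first."""
    return cor([g(a) for a in x], [g(c) for c in y])

# Perfectly linear data give correlation 1; the identity g recovers Pearson.
x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 6.0, 8.0]
```

Any transformation $g$ (a rank transform, a $\psi$-function, or the wrapping function proposed later) simply plugs into \texttt{cor\_g}, which is why the PSD and speed properties carry over automatically.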
First of all, if we use a bounded function $g$ the population version $E[g(X)g(Y)]$ always exists and $\Cov_g$ satisfies the independence property without any moment conditions. Secondly, the resulting matrices $\PM_g(\bX_{n,d}) = \PM(g(X_{.1}),\ldots,g(X_{.d}))$ always satisfy the PSD property. And finally, this method is very fast provided the transformation $g$ can be computed quickly (which could even be done in parallel over variables). Note that the bivariate winsorization in \cite{Khan:RLARS} is a transformation $\tilde{g}(X_n,Y_n)$ that depends on both arguments simultaneously, unlike \eqref{eq:PMg}. It yields a good robust bivariate correlation but without the multivariate PSD property. Our present goal is to find transformations $g$ for \eqref{eq:PMg} that yield covariance matrices that are sufficiently robust and at the same time sufficiently efficient in the statistical sense. \begin{table}[ht] \small \centering \caption{Computation times (in seconds) of various correlation matrices as a function of the dimension $d$, for $n=1000$ observations.} \label{tab:times} \begin{tabular}{|c|cccccc|} \hline dimension & \;\;\;MCD\;\; & \;\;OGK\;\; & \;SSCM\; & Spearman & Wrapping & Classic \\ \hline 10 & 0.319 & 0.022 & 0.004 & 0.002 & 0.003 & 0.001 \\ 50 & 6.222 & 0.426 & 0.009 & 0.009 & 0.012 & 0.002 \\ 100 & 24.76 & 2.089 & 0.031 & 0.019 & 0.027 & 0.008 \\ 500 & 1599 & 44.78 & 0.678 & 0.226 & 0.281 & 0.171 \\ 1000 & - & 166.7 & 3.107 & 0.774 & 0.836 & 0.685 \\ 5000 & - & 4389 & 129.1 & 17.11 & 17.39 & 16.81 \\ 10000 & - & - & 568.9 & 68.24 & 68.78 & 67.27 \\ 20000 & - & - & 2448 & 278.4 & 274.9 & 273.6 \\ \hline \end{tabular} \vskip0.3cm \end{table} Table \ref{tab:times} lists some computation times (in seconds) of the robust correlation methods mentioned above for $n=1000$ generated data points in various dimensions $d$, as well as the classical correlation matrix. (The times were measured on a laptop with Intel Core i7-5600U CPU at 2.60 GHz.) 
The fifth column is the $g$-product moment method that will be proposed in this paper. Note that the MCD cannot be computed when $d \geq n$, and that the computation times of MCD and OGK become infeasible at high dimensions. The next three methods are faster, and their robustness will be compared later on. The remainder of the paper is organized as follows. In Section \ref{sec:meth} we explore the properties of the $g$-product moment approach by means of influence functions, breakdown values and other robustness tools, and in Section \ref{sec:wrap} we design a new transformation $g$ based on what we have learned. Section \ref{sec:sim} compares these transformations in a simulation study and makes recommendations. Section \ref{sec:highdim} explains how to use the method in higher dimensions, illustrated on some real high-dimensional data sets in Section \ref{sec:app}. \section{General properties of $g$-product moments} \label{sec:meth} The oldest type of robust $g$-product moments occurs in rank correlations. Define a rescaled version of the sample ranks as $R_n(x_i) = (\mbox{Rank}(x_i)-0.5)/n$ where $\mbox{Rank}(x_i)$ denotes the rank of $x_i$ in $\{x_1,\ldots,x_n\}$. The population version of $R_n(x_i)$ is the cumulative distribution function (cdf) of $X$. Then the following functions $g$ define rank correlations: \begin{itemize} \item $g(x_i) = R_n(x_i)$ yields the Spearman rank correlation \citep{Spearman:cor}. \item $g(x_i) = \sign(R_n(x_i) - 0.5)$ gives the quadrant correlation. \item $g(x_i) = \Phi^{-1}(R_n(x_i))$ (where $\Phi$ is the standard Gaussian cdf) yields the normal scores correlation. \item $g(x_i) := \Phi^{-1}\left([R_n(x_i)]_{\alpha}^ {1-\alpha}\right)$ with the notation $[y]_{a}^{b} := \mbox{min}(b,\mbox{max}(a,y))$ is the truncated normal scores function, first proposed on pages 210--211 of \cite{Hampel:IFapproach} in the context of univariate rank tests.
\end{itemize} Kendall's tau is of a somewhat different type as it replaces each variable $X_n$ by a variable with $n(n-1)/2$ values, but we compare with it in Section \ref{sec:sim}. A second type of robust $g$-product moments goes back to Section 8.3 in the book of \cite{Huber:RobStat} and is based on M-estimation. Huber transformed $x_i$ to \begin{equation}\label{eq:psitrans} g(x_i) = \psi((x_i - \hmu)/\hs)\;, \end{equation} where $\hmu$ is an M-estimator of location defined by $\sum_i \psi((x_i - \hmu)/\hs) = 0$ and $\hs$ is a robust scale estimator such as the MAD given by $\MAD(X_n) = 1.4826\,\median_i|x_i - \median_j(x_j)|$\;. Note that $(x_i - \hmu)/\hs$ is like a z-score but based on robust analogs of the mean and standard deviation. For $\psi(z)=\sign(z)$ this yields $\hmu = \median_j(x_j)$ so we recover the quadrant correlation. Another transformation is Huber's $\psi_b$ function given by $\psi_b(z) = [z]_{-b}^{b}$ for a given corner point $b>0$. One can also use the sigmoid transformation $\psi(z) = \tanh\left(z\right)$. Note that the transformation \eqref{eq:psitrans} does not require any tie-breaking rules, unlike the rank correlations. \cite{Huber:RobStat} derived the asymptotic efficiency of the $\psi$-product moment. We go further by also computing the influence function, the breakdown value and other robustness measures. Our goal is to find a function $\psi$ that is well-suited for correlation. \subsection{Influence function and efficiency} \label{sec:IF} Note that the $g$-product moment $\PM_g(X_j,X_k)$ between two variables $X_j$ and $X_k$ in a multivariate data set does not depend on the other variables, so we can study its properties in the bivariate setting. For analyzing the statistical properties of the $\psi$-product moment we assume a simple model for the `clean' data, before outliers are added. 
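Before turning to the model, the transformation \eqref{eq:psitrans} itself can be sketched in a few lines of Python. This is our own illustration, not the paper's reference implementation: it uses Huber's $\psi_b$ with the illustrative default $b=1.5$, the MAD as scale, and a bisection solve for the location M-estimate.

```python
import statistics as st

def huber_psi(z, b=1.5):
    """Huber's psi_b: the identity clipped to [-b, b]."""
    return max(-b, min(b, z))

def mad(x):
    """Robust scale: 1.4826 * median_i |x_i - median_j(x_j)|."""
    m = st.median(x)
    return 1.4826 * st.median([abs(v - m) for v in x])

def m_location(x, s, psi=huber_psi, tol=1e-10):
    """Solve sum_i psi((x_i - mu)/s) = 0 for mu by bisection;
    the sum is nonincreasing in mu for monotone psi."""
    lo, hi = min(x), max(x)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sum(psi((v - mid) / s) for v in x) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def psi_transform(x, psi=huber_psi):
    """g(x_i) = psi((x_i - mu_hat)/s_hat): a robust analog of z-scoring."""
    s = mad(x)
    mu = m_location(x, s, psi)
    return [psi((v - mu) / s) for v in x]
```

For a bounded $\psi$ the transformed values are confined to $[-b,b]$, so a single far outlier can move the resulting product moment only by a bounded amount.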
The model says that $(X,Y)$ follows a bivariate Gaussian distribution $F_\rho$ given by \begin{equation}\label{eq:Frho} F_{\rho} = N \left( \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 & \rho \\ \rho & 1 \end{bmatrix} \right) \end{equation} for $-1 < \rho < 1$, so $F_0$ is just the bivariate standard Gaussian distribution. We restrict ourselves to odd functions $\psi$ so that $E[\psi(X)]=0=E[\psi(Y)]$, and study the statistical properties of $T_n = \frac{1}{n} \sum_{i=1}^n \psi(x_i)\psi(y_i)$ with population version $T_{\psi} = E[\psi(X)\psi(Y)]$. Note that $T_{\psi}$ maps the bivariate distribution of $(X,Y)$ to a real number, and is therefore called a {\it functional}. It can be seen as the limiting case of the estimator $T_n$ for $n \rightarrow \infty$. On the other hand, a finite sample $Z_n = \{(x_1,y_1),\ldots,(x_n,y_n)\}$ yields an empirical distribution $F_n(x,y) = \frac{1}{n} \sum_{i=1}^n I(x_i \le x,\,y_i \le y)$ and we can define an estimator $T_n(Z_n)$ as $T_\psi(F_n)$, so there is a strong connection between estimators and functionals. Whereas the usual consistency of an estimator $T_n$ requires that $T_n$ converges to $\rho$ in probability, there exists an analogous notion for functionals: $T_\psi$ is called {\it Fisher-consistent} for $\rho$ iff $T_\psi(F_\rho) = \rho$. We will start with the influence function (IF) of $T_{\psi}$. Following \cite{Hampel:IFapproach}, the raw influence function of the functional $T_{\psi}$ at $F_\rho$ is defined in any point $(x,y)$ as \begin{equation} \label{eq:rawIF} \mbox{IF}_{raw}((x,y),T_{\psi},F_\rho) = \frac{\partial}{\partial \eps} T_{\psi}((1-\eps)F_\rho + \eps \Delta_{(x,y)})|_{\eps = 0} \end{equation} where $\Delta_{(x,y)}$ is the probability distribution that puts all its mass in $(x,y)$. Note that \eqref{eq:rawIF} is well-defined because $(1-\eps)F_\rho + \eps\Delta_{(x,y)}$ is a probability distribution so $T_\psi$ can be applied to it. 
The IF quantifies the effect of a small amount of contamination in $(x,y)$ on $T_{\psi}$ and thus describes the effect of an outlier on the finite-sample estimator $T_n$. It is easily verified that $\mbox{IF}_{raw}((x,y),T_{\psi},F_0) = \psi(x)\psi(y)$. However, we cannot compare the raw influence function \eqref{eq:rawIF} across different functions $\psi$ since $T_{\psi}$ is not Fisher-consistent, that is, $T_{\psi}(F_\rho) \neq \rho$ in general. For non-Fisher-consistent statistics $T$ we follow the approach of \cite{Rousseeuw:IFgeneral} and \cite{Hampel:IFapproach} by defining \begin{equation}\label{eq:xi} \xi(\rho) := T(F_\rho) \;\;\; \mbox{ and } \;\;\; U(F) := \xi^{-1}(T(F)) \end{equation} so $U$ is Fisher-consistent, and putting \begin{equation}\label{eq:generalIF} \mbox{IF}((x,y),T,F) := \mbox{IF}_{raw}((x,y),U,F) = \frac{\mbox{IF}_{raw}((x,y),T,F)} {\xi'(\rho)}\;\;. \end{equation} \begin{proposition}\label{prop:IF} When $\psi$ is odd [i.e. $\psi(-z)=-\psi(z)$] and bounded we have $\xi'(0) = E[\psi']^2$ hence the influence function of $T_{\psi}$ at $F_0$ becomes \begin{equation}\label{eq:IFT} \mbox{IF}((x,y),T_{\psi},F_0) = \frac{\psi(x)\psi(y)}{E[\psi']^2}. \end{equation} \end{proposition} \noindent The proof can be found in Section \ref{A:proofIFT} of the Supplementary Material. The influence function at $F_\rho$ for $\rho \neq 0$ derived in Section \ref{A:IFgen} has the same overall shape. Since the IF measures the effect of outliers we prefer bounded $\psi$, unlike the classical choice $\psi(z) = z$. Note that \eqref{eq:IFT} is the raw influence function of $T^{*} = E[\psi^{*}(X)\psi^{*}(Y)]$ at $F_0$, where $\psi^{*}(u) = \psi(u)/E[\psi']$. As $\psi$ is bounded $T^{*}$ is integrable, so by the law of large numbers $T_n^{*}$ is strongly consistent for its functional value: $T_n^* = \frac{1}{n}\sum_{i=1}^{n} {\psi^*(x_i)\psi^*(y_i)} \xrightarrow{a.s.} T^{*}(F_{\rho})$ for $n \to \infty$. 
By the central limit theorem, $T^{*}$ is then asymptotically normal under $F_0$: \begin{equation*} \sqrt{n}(T_n^{*}-0)\rightarrow N(0,V)\;, \end{equation*} where \begin{equation}\label{eq:V} V = \frac{E[\psi^2]^2}{E[\psi']^4} = \left(\frac{E[\psi^2]}{E[\psi']^2}\right)^2. \end{equation} From this we obtain the asymptotic efficiency $\mbox{eff} = (E[\psi']^2/E[\psi^2])^2$\;. Note that the influence function of $T_{\psi}$ at $F_0$ factorizes as the product of the influence functions of the location M-estimator $L_{\psi}$ with the same $\psi$-function: \begin{equation}\label{eq:splitinf} \mbox{IF}((x,y),T_{\psi},F_0) = \mbox{IF}(x,L_{\psi},\Phi)\, \mbox{IF}(y,L_{\psi},\Phi)\;, \end{equation} because $\mbox{IF}(x,L_{\psi},\Phi) = \psi(x)/E[\psi']$\,. This explains why the efficiency of $T_{\psi}$ satisfies $\mbox{eff}(T_{\psi}) = (\mbox{eff}(L_{\psi}))^2$\;. We are also interested in attaining a low gross-error sensitivity $\gamma^*(T_{\psi})$, which is defined as the supremum of $|\mbox{IF}((x,y),T_{\psi},F_0)|$ and therefore equals $(\gamma^*(L_{\psi}))^2$\;. It follows from \cite{Rousseeuw:CVC} that the quadrant correlation $\psi(z) = \sign(z)$ has the lowest gross-error sensitivity among all statistics of the type $T_{\psi} = E[\psi(X)\psi(Y)]$. In fact, $\mbox{IF}((x,y),T_{\psi},F_0) = (\pi/2) \sign(x)\sign(y)$ yielding $\gamma_{T}^{*} = \pi/2$. However, the quadrant correlation is very inefficient as $\mbox{eff} = 4/\pi^2 \approx 40.5\%$. The influence functions of rank correlations were obtained by \cite{Croux:IFspearman} and \cite{Boudt:GRcor}. Note that for some rank correlations the function $\xi$ of \eqref{eq:xi} is known explicitly, in fact $\xi(\rho) = \sin(\rho \pi/2)$ for the quadrant correlation, $\xi(\rho) = (6/\pi)\arcsin(\rho/2)$ for Spearman and $\xi(\rho)= \rho$ for normal scores.
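The efficiency formula $\mbox{eff} = (E[\psi']^2/E[\psi^2])^2$ is easy to evaluate numerically for Huber's $\psi_b$, since both Gaussian moments have closed forms. The following sketch (our own, standard library only) reproduces the Huber efficiencies quoted later in Table \ref{tab:corrs}.

```python
import math

def Phi(z):
    """Standard Gaussian cdf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi(z):
    """Standard Gaussian density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def eff_huber(b):
    """eff = (E[psi']^2 / E[psi^2])^2 for Huber's psi_b at F_0, using
    E[psi_b'] = P(|Z| < b) and
    E[psi_b^2] = E[Z^2 1{|Z|<b}] + b^2 P(|Z| > b)."""
    e_dpsi = 2.0 * Phi(b) - 1.0
    e_psi2 = (2.0 * Phi(b) - 1.0) - 2.0 * b * phi(b) \
             + 2.0 * b * b * (1.0 - Phi(b))
    return (e_dpsi ** 2 / e_psi2) ** 2

# b = Phi^{-1}(0.9) gives about 88.9%, b = Phi^{-1}(0.95) about 95.0%.
```

Here $E[Z^2\,1\{|Z|<b\}] = (2\Phi(b)-1) - 2b\varphi(b)$ follows from integration by parts.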
It turns out that these IF at $F_0$ match the expression in Proposition \ref{prop:IF} if $\psi$ corresponds to the population version of the transformation $g$ in the rank correlation, as explained in Section \ref{A:rankIF} of the Supplementary Material. \begin{figure}[!ht] \centering \includegraphics[width=0.7\textwidth] {influencefunctions_noWrap_cropped.pdf} \vskip-0.2cm \caption{Location influence functions at $\rho=0$ for different transformations $g$} \label{fig:IF} \end{figure} The influence functions of rank correlations at $F_0$ also factorize as in \eqref{eq:splitinf}. Figure \ref{fig:IF} plots these location influence functions for several choices of the transformation $g$. We see that the Pearson and normal scores correlations have the same influence function (the identity), which is unbounded. On the other hand, the IF of Huber's $\psi_b$ stays constant outside the corner points $-b$ and $b$. The truncated normal scores (`Norm05') has the same IF as Huber's $\psi_b$ provided $\alpha = \Phi(-b)$\;. The Spearman rank correlation and the sigmoid transformation have smooth influence functions. \subsection{Maxbias and breakdown value} Whereas the IF measures the effect of one or a few outliers, we are now interested in the effect of a larger fraction $\eps$ of contamination. For the uncontaminated distribution of the bivariate $(X,Y)$ we take the Gaussian distribution $F=F_\rho$ given by \eqref{eq:Frho}. Then we consider all contaminated distributions of the form \begin{equation}\label{eq:epscont} F_{H,\eps} = (1-\eps)F+\eps H\;, \end{equation} where $\eps \gs 0$ and $H$ can be any distribution. This {\it $\eps$-contamination model} is similar to the contaminated distributions in \eqref{eq:rawIF} and \eqref{eq:CVC} but here $H$ is more general. 
A fraction $\eps$ of contamination can induce a maximum possible upward and downward bias on $T_\psi = \Cor(\psi(X),\psi(Y))$ denoted by \begin{equation}\label{eq:maxbias} B^+(\eps,T_\psi,F) = \sup_{G \in \mathcal{F}_\eps} (T_\psi(G)-T_\psi(F)) \;\; \mbox{ and } \;\; B^-(\eps,T_\psi,F) = \inf_{G \in \mathcal{F}_\eps} (T_\psi(G)-T_\psi(F))\;, \end{equation} where $\mathcal{F}_\eps = \{G;\; G = (1-\eps)F +\eps H\;\; \mbox{for any distribution }H\}$\;. The proof of the following proposition is given in Section \ref{A:proofbias} in the Supplementary Material. \begin{proposition}\label{prop:corbias} Let $\eps\in [0,1]$ be fixed and $\psi$ be odd and bounded. Then the maximum upward bias of $T_\psi$ at $F$ is given by \begin{equation} B^+(\eps,T_\psi,F) = \frac{(1-\eps) \Var_F(\psi(X))\,T_\psi(F) + \eps M^2} {(1-\eps)\Var_F(\psi(X)) + \eps M^2} -T_\psi(F) \end{equation} with $M := \sup_x |\psi(x)|$, and the maximum downward bias is \begin{equation} B^-(\eps,T_\psi,F) = \frac{(1-\eps) \Var_F(\psi(X))\,T_\psi(F) - \eps M^2} {(1-\eps)\Var_F(\psi(X)) + \eps M^2} -T_\psi(F)\;\;. \end{equation} \end{proposition} \vskip0.1cm The {\it breakdown value} $\eps^*$ of a robust estimator is loosely defined as the smallest $\eps$ that can make the result useless. For instance, a location estimator $\hmu$ becomes useless when its maximal bias tends to infinity. But correlation estimates stay in the bounded range $[-1,1]$ hence the bias can never exceed 2 in absolute value, so the situation is not as clear-cut and several alternative definitions could be envisaged. Here we will follow the approach of \cite{Caperaa:tauxderes} who define the breakdown value of a correlation estimator as the smallest amount of contamination needed to give perfectly correlated variables a negative correlation. More precisely: \begin{definition}\label{def:bdp} Let $F$ be a bivariate distribution with $X=Y$, and $R$ be a correlation measure. 
Then the breakdown value of $R$ is defined as \begin{equation*} \eps^{*}(R) = \inf \{\eps > 0\; ; \; \inf_{G \in \mathcal{F}_\eps} R(G) \ls 0\}\;\;. \end{equation*} \end{definition} The breakdown value of $T_\psi$ then follows immediately from Proposition \ref{prop:corbias}: \begin{corollary}\label{prop:breakdown} When $\psi$ is odd and bounded the breakdown value $\eps^{*}$ of $T_\psi$ equals \begin{equation*} \eps^{*}(T_\psi)= \frac{\Var_F(\psi(X))} {\Var_F(\psi(X))+M^2} \;\;. \end{equation*} \end{corollary} The breakdown values of rank correlations were obtained in \citep{Caperaa:tauxderes,Boudt:GRcor}. They used a different contamination model, but their results still hold under $\eps$-contamination as shown in Section \ref{A:rankBD} in the Supplementary Material. \section{The proposed transformation} \label{sec:wrap} The change-of-variance curve \citep{Hampel:tanh,Rousseeuw:CVC} is given by \begin{equation}\label{eq:CVC} \mbox{CVC}(z,T_{\psi},F) = \frac{\partial} {\partial \eps} \left[ \log V\big(T_{\psi}, (1-\eps)F + \eps(\Delta_z + \Delta_{-z})/2\big) \right]|_{\eps = 0} \end{equation} and measures how stable the variance of the method is when the underlying distribution is contaminated, which may make it longer tailed. We do not want the variance to grow too much, as is measured by the change-of-variance sensitivity $\kappa^{*}(T_{\psi})$, which is the supremum of the CVC. (On the other hand, negative values of the CVC indicate lower variance and are not a concern.) Since the asymptotic variance of $T_{\psi}$ satisfies $V(T_{\psi}) = (V(L_{\psi}))^2$ we obtain $\mbox{CVC}(z,T_{\psi},F_0) = 2\,\mbox{CVC}(z,L_{\psi},\Phi)$ and $\kappa^*(T_{\psi}) = 2\,\kappa^*(L_{\psi})$\;. Therefore we inherit all the results about the CVC from the location setting. For instance, the quadrant correlation [with $\psi(z) = \sign(z)$] has the lowest possible $\kappa^*(T_{\psi})$\;. 
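Returning briefly to Corollary \ref{prop:breakdown}: its formula is straightforward to check numerically. The sketch below (ours, standard library only) recovers the $50\%$ breakdown of the quadrant correlation and the value listed for Huber's $\psi_b$ with $b=\Phi^{-1}(0.9)$ in Table \ref{tab:corrs}.

```python
import math

def Phi(z):
    """Standard Gaussian cdf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi(z):
    """Standard Gaussian density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def breakdown(var_psi, M):
    """eps* = Var(psi(X)) / (Var(psi(X)) + M^2) from the corollary."""
    return var_psi / (var_psi + M * M)

def breakdown_huber(b):
    """Huber's psi_b at the standard Gaussian: psi is odd, so
    Var(psi(X)) = E[psi_b(Z)^2], and M = sup|psi_b| = b."""
    var = (2.0 * Phi(b) - 1.0) - 2.0 * b * phi(b) \
          + 2.0 * b * b * (1.0 - Phi(b))
    return breakdown(var, b)

# Quadrant correlation: Var(psi(X)) = M = 1, hence eps* = 1/2.
```

As expected, a smaller corner point $b$ makes $\Var_F(\psi(X))$ larger relative to $M^2=b^2$, hence a higher breakdown value.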
Now suppose one wants to eliminate the effect of far outliers, say those that lie more than $c$ robust standard deviations away. This can be done by imposing \begin{equation}\label{eq:redesc} \psi(z)=0 \;\;\; \mbox{whenever} \;\;\; |z| > c \;\;. \end{equation} Such functions $\psi$ can no longer be monotone, and are called {\it redescending} instead. They were first used for M-estimation of location, and performed extremely well in the seminal simulation study of \cite{Andrews:1972}. They have been used in M-estimation ever since. In the context of location estimation, \cite{Hampel:tanh} show that the $\psi$-function satisfying \eqref{eq:redesc} with the highest efficiency subject to a given $\kappa^*(T_{\psi})$ is of the following form: \begin{equation}\label{eq:psiwrap} \psi_{b,c}(z) = \begin{cases} z & \mbox{ if } 0 \ls |z| \ls b\\ q_1 \tanh\big(q_2(c-|z|)\big) \sign(z) & \mbox{ if } b \ls |z| \ls c \\ 0 & \mbox{ if } c \ls |z|\;\;. \end{cases} \end{equation} For any combination $0<b<c$ the values of $q_1$ and $q_2$ can be derived as in Section \ref{A:wrapping} of the Supplementary Material. Our default choice is $b=1.5$ and $c=4$ as in Figure \ref{fig:psiwrap}. As we will see in Table \ref{tab:corrs} this choice strikes a good compromise between robustness and efficiency. Note that the $b$ in $\psi_{b,c}$ plays the same role as the ``corner value'' in the Huber $\psi_b$ function for location estimation. In that setting, $b = 1.5$ has been a popular choice from the beginning. The value $c=4$ reflects that we do not trust measurements that lie more than 4 standard deviations away. The form of $\psi_{b,c}(z)$ for $b \ls |z| \ls c$ is the result of solving a differential equation. 
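In code, the wrapping function \eqref{eq:psiwrap} can be sketched as follows. The numerical values of $q_1$ and $q_2$ for $b=1.5$, $c=4$ are quoted to limited precision from the conditions derived in the Supplementary Material, so they should be treated as approximate.

```python
import math

# b and c as in the text; q1, q2 are the (approximate) constants
# belonging to b = 1.5, c = 4.
B, C = 1.5, 4.0
Q1, Q2 = 1.540793, 0.8622731

def psi_wrap(z, b=B, c=C, q1=Q1, q2=Q2):
    """psi_{b,c}: the identity on [-b, b], a rescaled tanh for
    b <= |z| <= c, and exactly zero beyond the rejection point c."""
    az = abs(z)
    if az <= b:
        return z
    if az <= c:
        return math.copysign(q1 * math.tanh(q2 * (c - az)), z)
    return 0.0
```

The function is odd, vanishes identically for $|z| \ge c$, and with these constants the two branches agree at $|z|=b$ up to a few times $10^{-4}$ (the mismatch being only rounding in $q_1$, $q_2$).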
\begin{figure}[!ht] \centering \includegraphics[width=0.6\textwidth] {psi_wrap_cropped.pdf} \vskip-0.2cm \caption{The proposed transformation \eqref{eq:psiwrap} with default constants $b=1.5$ and $c=4$.} \label{fig:psiwrap} \end{figure} A nice property of $\psi_{b,c}$ is that under normality a large majority of the data values (in fact $86.6\%$ of them for $b=1.5$) are left unchanged by the transformation, and only a minority is modified. Leaving the majority of the data unchanged has the advantage that we keep much information about the distribution of a variable and the type of association between variables (e.g. linear), unlike rank transforms. \begin{figure}[!ht] \centering \includegraphics[width=0.7\textwidth] {illustrWrapping_cropped.pdf} \vskip-0.2cm \caption{Illustration of wrapping a standardized sample $\{z_1,\ldots,z_n\}$\;. Values in the interval $[-b,b]$ are left unchanged, whereas values outside $[-c,c]$ are zeroed. The intermediate values are `folded' inward so they still play a role.} \label{fig:wrapping} \end{figure} Interestingly, $\psi_{b,c}$ pushes values between $b$ and $c$ closer to the center so intermediate outliers still play some smaller role in the correlation, whereas far outliers do not count. For this reason we refer to $\psi_{b,c}$ as the {\it wrapping function}, as it wraps the data around the interval $[-b,b]$\,. Indeed, the points on the interval are mapped to themselves, whereas the other points are wrapped around the corners, as in Figure \ref{fig:wrapping}. Another way to describe this is to say that wrapping multiplies the variable $z$ by a weight $w(z)$, where $w(z) \coloneqq 1$ when $|z| \le b$ and $w(z) \coloneqq \psi_{b,c}(z)/z$ for $|z| > b$. The influence function \eqref{eq:splitinf} contains $\mbox{IF}(z,L_{\psi},\Phi) = \psi_{b,c}(z)/E[\psi'_{b,c}]$, which has the shape of $\psi_{b,c}$ in Figure \ref{fig:psiwrap}. 
The bivariate influence function $\mbox{IF}((x,y),T_{\psi},F_\rho)$ is continuous and bounded, and shown in Figure \ref{fig:IF05} in Section \ref{A:wrapping} of the Supplementary Material. Table \ref{tab:corrs} lists some correlation measures based on transformations $g$ that either use ranks or $\psi$-functions. For each the breakdown value $\eps^*$ and the efficiency and gross-error sensitivity $\gamma^*$ at $\rho = 0$ are listed. The rejection point $\delta^*$ says how far an outlier must lie before the IF is zero. The last column shows the product-moment correlation between a Gaussian variable $X$ and its transformed $g(X)$\,. The correlation is quite high for most transformations studied here, providing insight as to why this approach works. \begin{table}[ht] \centering \caption{Correlation measures based on transformations $g$ with their breakdown value $\eps^*$, efficiency, gross-error sensitivity $\gamma^*$, rejection point $\delta^*$ and correlation between $X$ and $g(X)$.} \label{tab:corrs} \vskip0.3cm \begin{tabular}{|c|c|c|c|c|c|} \hline $\Cor_g$ & $\eps^{*}$ & eff & $\gamma^*$ & $\delta^*$ & $\mbox{Cor}$\\ \hline \hline Pearson & 0\% & 100\% & $\infty$ & $\infty$ & 1\\ \hline Quadrant & 50\% & 40.5\% & 1.57 & $\infty$ & 0.798\\ Spearman (SP) & 20.6\% & 91.2\% & 3.14 & $\infty$ & 0.977\\ Normal scores (NS) & 12.4\% & 100\% & $\infty$ & $\infty$ & 1\\ Truncated NS, $\alpha = 0.05$ & 16.3\% & 95.0\% & 3.34 & $\infty$ & 0.987\\ Truncated NS, $\alpha = 0.1$ & 20.7\% & 88.9\% & 2.57 & $\infty$ & 0.971\\ \hline Sigmoid & 28.3\% & 86.6\% & 2.73 & $\infty$ & 0.965\\ Huber, $b = \Phi^{-1}(0.95) \approx 1.64$ & 23.5\% & 95.0\% & 3.34 & $\infty$ & 0.987\\ Huber, $b = \Phi^{-1}(0.9) \approx 1.28$ & 29.2\% & 88.9\% & 2.57 & $\infty$ & 0.971\\ Wrapping, $b=1.5$, $c=4$ & 25.1\% & 89.0\% & 3.16 & 4.0 & 0.971\\ Wrapping, $b=1.3$, $c=4$ & 28.1\% & 84.4\% & 2.79 & 4.0 & 0.958\\ \hline \end{tabular} \vskip0.3cm \end{table} In Table \ref{tab:corrs} we see that the quadrant 
correlation has the highest breakdown value but the lowest efficiency. The Spearman correlation reaches a much better compromise between breakdown and efficiency. The normal scores correlation has the asymptotic efficiency and IF of Pearson, but with a breakdown value of 12.4\%, a nice improvement. Truncating 5\% improves its robustness a bit at the small cost of 5\% of efficiency, whereas truncating 10\% brings its performance close to that of Spearman. Both the Huber and the wrapping correlation have a parameter $b$, the corner point, which trades off robustness and efficiency. A lower $b$ yields a higher breakdown value and a better gross-error sensitivity, but a lower efficiency. Note that the Huber correlation looks good in Table \ref{tab:corrs}, but in the simulation study of Section \ref{sec:sim} it performs less well than wrapping in the presence of outliers, and the same holds in the real data application in Section \ref{sec:video}. The reason is that wrapping gives a lower weight $w(z) := \psi_{b,c}(z)/z$ to outliers and even $w(z) = 0$ for $|z| > c$, whereas the Huber weight $w_b(z) := \psi_b(z)/z$ is higher for outliers and always nonzero, so even far outliers still have an effect. Note that whenever two random variables $X$ and $Y$ are independent the correlation between the wrapped variables $g_X(X)$ and $g_Y(Y)$ is zero, even if the original $X$ and $Y$ do not satisfy any moment conditions. This follows from the boundedness of $\psi_{b,c}$ in \eqref{eq:psiwrap}. It is well known that the converse is not true for the classical Pearson correlation, but that it does hold when $(X,Y)$ follow a bivariate Gaussian distribution. This is also true for the wrapped correlation. \begin{proposition}\label{prop:independence} If the variables $(X,Y)$ follow a bivariate Gaussian distribution and the correlation between the wrapped variables $g_X(X)$ and $g_Y(Y)$ is zero, then $X$ and $Y$ are independent.
\end{proposition} Another well-known property says that the Pearson correlation of a dataset $Z = \{(x_1,y_1),\ldots,(x_n,y_n)\}$ equals 1 if and only if there are constants $\alpha$ and $\beta$ with $\beta>0$ such that \begin{equation} \label{eq:linear} y_i = \alpha + \beta x_i \end{equation} for all $i$ (a perfect linear relation). The wrapped correlation satisfies a similar result. \begin{proposition}\label{prop:linearity} (i) If \eqref{eq:linear} holds for all $i$ and we transform the data to $g_X(x_i) = \psi_{b,c}((x_i - \hmu_X)/\hs_X)$ and $g_Y(y_i) = \psi_{b,c}((y_i - \hmu_Y)/\hs_Y)$ then $\Cor(g_X(x_i),g_Y(y_i)) = 1$. (ii) If $\Cor(g_X(x_i),g_Y(y_i)) = 1$ then \eqref{eq:linear} holds for all $i$ for which $|x_i - \hmu_X|/\hs_X \leqslant b$ and $|y_i - \hmu_Y|/\hs_Y \leqslant b$. \end{proposition} In part (ii) the linearity only has to hold for the points whose coordinates lie in the central region of their distribution, whereas far outliers may deviate from it. In that case the points in the central region are fitted exactly by a straight line. The proofs of Propositions \ref{prop:independence} and \ref{prop:linearity} can be found in Section \ref{A:independence} of the Supplementary Material. {\bf Remark.} Whereas Proposition \ref{prop:independence} requires bivariate Gaussianity, the other results in this paper do not. In fact, Propositions \ref{prop:IF}, \ref{prop:corbias}, and \ref{prop:linearity} as well as Corollary \ref{prop:breakdown} still hold when the data are generated by a symmetric unimodal distribution. The corresponding proofs in the Supplementary Material are for this more general setting. \section{Simulation Study} \label{sec:sim} We now compare the correlation by transformation methods in Table \ref{tab:corrs} for finite samples. For all of these methods the correlation between two variables does not depend on any other variable in the data, so we only need to generate bivariate data here.
For the non-rank-based methods we first normalize each variable by a robust scale estimate, and then estimate its location by the M-estimator with the given function $\psi$. Next we transform $x_i$ to $\tx_i = \psi((x_i - \hmu_X)/\hs_X)$ and $y_i$ to $\ty_i = \psi((y_i - \hmu_Y)/\hs_Y)$ and compute the plain Pearson correlation of the transformed sample $\{(\tx_1,\ty_1),\ldots,(\tx_n,\ty_n)\}$. {\bf Clean data.} Let us start with uncontaminated data distributed as $F = F_\rho$\, given by \eqref{eq:Frho}, where the true correlation $\rho$ ranges over $\{0, 0.05, 0.10, \ldots, 0.95\}$. For each $\rho$ we generate $m = 5000$ bivariate data sets $\bZ^j$ with sample size $n = 100$. (We also generated data with $n=20$, yielding the same qualitative conclusions.) We then estimate the bias and the mean squared error (MSE) of each correlation measure $R$ by \begin{equation} \bias_{\rho}(R) = \ave_{j=1}^{m} \left(R(\bZ^j)-\rho \right) \;\;\; \mbox{ and } \;\;\; \MSE_{\rho}(R) = \ave_{j=1}^{m} \left( R(\bZ^j)-\rho \right)^2\;\;. \end{equation} \begin{figure}[!ht] \centering \includegraphics[width=0.49\textwidth] {Bias_at_n100.pdf} \includegraphics[width=0.49\textwidth] {MSE_at_n100.pdf} \vskip-0.2cm \caption{Bias and MSE of correlation measures based on transformation, for uncontaminated Gaussian data with sample size 100.} \label{fig:CBTclean} \end{figure} The bias is shown in the left part of Figure \ref{fig:CBTclean}. The vertical axis has flipped signs because the bias was always negative, so $\rho$ is typically underestimated. Unsurprisingly, the Pearson correlation has the smallest bias (which is known not to be exactly zero). The normal scores correlation and the Huber $\psi$ with $b=1.5$ are fairly close, followed by truncated normal scores, Spearman and the sigmoid. Wrapping with $b=1.5$ and $b=1.3$ (both with $c=4$) comes next, still with a fairly small bias. The bias of the quadrant correlation is much higher.
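The correlation-by-transformation recipe (robust scale, one-step location M-estimate, transform, plain Pearson) is straightforward to prototype. The sketch below is in Python rather than R, and the constant \texttt{q2} inside \texttt{psi\_wrap}, together with the continuity-based choice of \texttt{q1}, is an illustrative assumption rather than the exact constants of \eqref{eq:psiwrap}:

```python
import math
import statistics as st

def psi_wrap(z, b=1.5, c=4.0):
    # Wrapping psi: identity in the center, smooth redescent, exactly zero beyond c.
    # q2 is an assumed tuning constant; q1 is then fixed so that psi is
    # continuous at |z| = b. (The published constants may differ.)
    q2 = 0.86
    q1 = b / math.tanh(q2 * (c - b))
    a = abs(z)
    if a <= b:
        return float(z)
    if a <= c:
        return math.copysign(q1 * math.tanh(q2 * (c - a)), z)
    return 0.0

def mad(x):
    # Gaussian-consistent median absolute deviation
    m = st.median(x)
    return 1.4826 * st.median([abs(v - m) for v in x])

def wrap_scores(x, b=1.5, c=4.0):
    s = mad(x)
    mu0 = st.median(x)
    # one-step location M-estimator starting from the median
    mu = mu0 + s * st.mean([psi_wrap((v - mu0) / s, b, c) for v in x])
    return [psi_wrap((v - mu) / s, b, c) for v in x]

def pearson(x, y):
    mx, my = st.mean(x), st.mean(y)
    sxy = sum((a - mx) * (v - my) for a, v in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((v - my) ** 2 for v in y)
    return sxy / math.sqrt(sxx * syy)

def cor_wrap(x, y):
    # Pearson correlation of the wrapped samples
    return pearson(wrap_scores(x), wrap_scores(y))
```

On a perfectly linear sample with one gross outlier at $(30,-30)$, \texttt{pearson} collapses while \texttt{cor\_wrap} stays close to 1, mirroring the contamination experiments reported below.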
Note that we could have reduced the bias of all of these methods by applying the consistency function $\xi^{-1}$ of \eqref{eq:xi}, which can be computed numerically. But such consistency corrections would destroy the crucial PSD property for the higher-dimensional data that motivate the present work, so we will not use them here. The right panel of Figure \ref{fig:CBTclean} shows the MSE of the same methods, with a pattern similar to that of the bias. Even for $n=20$ the bias dominated the variance (not shown). {\bf Contaminated data.} In order to compare the robustness of these correlation measures we now add outliers to the data. Since the true correlation $\rho$ ranges over positive values here, we will try to bring the correlation measures down. From the proof of Proposition \ref{prop:corbias} in Section \ref{A:proofbias} we know that the outliers have the biggest downward effect when placed at points $(k,-k)$ and $(-k,k)$ for some $k$. Therefore we will generate outliers from the distribution \begin{equation*} H\; = \;\frac{1}{2} N\left( \begin{bmatrix} k \\ -k\\ \end{bmatrix} ,0.01^2I\right) + \frac{1}{2} N\left( \begin{bmatrix} -k \\ k\\ \end{bmatrix} ,0.01^2I\right) \end{equation*} for different values of $k$. The simulations were carried out for 10\%, 20\% and 30\% of outliers, but we only show the results for 10\% as the relative performance of the methods did not change much for the higher contamination levels. \begin{figure}[!ht] \centering \includegraphics[width=0.49\textwidth] {MSE_at_n100_caw_10_k3.pdf} \includegraphics[width=0.49\textwidth] {MSE_at_n100_caw_10_k5.pdf} \vskip-0.2cm \caption{MSE of the correlation measures in Figure \ref{fig:CBTclean} with 10\% of outliers placed at $k = 3$ (left) and $k=5$ (right).} \label{fig:CBTcont} \end{figure} The results are shown in Figure \ref{fig:CBTcont} for $k=3$ and $k=5$. 
For $k=3$ we see that the Pearson correlation has by far the highest MSE, followed by normal scores (whose breakdown value of 12.4\% is not much higher than the 10\% of contamination). The 5\% truncated normal scores and the Huber with $b=1.5$ do better, followed by the Spearman, the sigmoid, the 10\% truncated normal scores and the Huber with $b=1.3$. The quadrant correlation does best among all the methods based on a monotone transformation. However, wrapping still outperforms it, because it gives the outliers a smaller weight. Even though wrapping has a slightly lower efficiency for clean data than Huber's $\psi_b$ with the same $b$, in return it delivers more resistance to outliers further away from the center. For $k=5$ the pattern is the same, except that the Pearson correlation is affected even more and wrapping has given a near-zero weight to the outliers. For $k=2$ (not shown) the contamination is not really outlying and all methods performed about the same, whereas for $k > 5$ the curves of the non-Pearson correlations remain as they are for $k=5$ since all of our transformations $g$ are constant in that region. {\bf Comparison with other robust correlation methods.} As described in the introduction, several good robust alternatives to the Pearson correlation exist that do not fall in our framework. We would like to find out how well wrapping stacks up against the most well-known of them, such as Kendall's tau. We also compare with the Gnanadesikan-Kettenring (GK) approach \eqref{eq:GK} in which we replace the variance by the square of a robust scale, in particular the MAD and the scale estimator $Q_n$ of \cite{Rousseeuw:scale}. For the approach starting with the estimation of a robust covariance matrix we consider the Minimum Covariance Determinant (MCD) method \citep{Rousseeuw:MultBD} using the algorithm in \citep{Hubert:DetMCD}, and the Spatial Sign Covariance Matrix (SSCM) of \cite{Visuri:Rank}. 
In both cases we compute a correlation measure between variables $X_1$ and $X_2$ from the estimated scatter matrix $C$ by \eqref{eq:Cov2Cor}. For our bivariate generated data the matrix $C$ is only $2 \times 2$, but if the original data have more dimensions the estimated correlation between $X_1$ and $X_2$ now also depends on the other variables. To illustrate this we computed the MCD and the SSCM also in $d=10$ dimensions where the true covariance matrix is given by $\Sigma_{jk} = \rho$ for $j \neq k$ and $\Sigma_{jj} = 1$. The simulation then reports the result of \eqref{eq:Cov2Cor} on the first two variables only. \begin{figure}[!ht] \centering \includegraphics[width=0.49\textwidth] {Bias_at_n100_cc.pdf} \includegraphics[width=0.49\textwidth] {MSE_at_n100_cc.pdf} \vskip-0.2cm \caption{Bias and MSE of other robust correlation measures, for uncontaminated Gaussian data with sample size 100.} \label{fig:CCclean} \end{figure} The left panel of Figure \ref{fig:CCclean} shows the bias of all these methods, in the same setting as Figure \ref{fig:CBTclean}. The two GK methods and the MCD computed in 2 and 10 dimensions have the smallest bias, followed by wrapping. The Kendall bias is substantially larger, and in fact looks similar to the bias of the quadrant correlation in Figure \ref{fig:CBTclean}, which is not so surprising since they share the same function $\xi(\rho) = 2 \arcsin(\rho)/\pi$ in \eqref{eq:xi}. The bias of the SSCM is even larger, both when computed in $d=2$ dimensions and in $d=10$. The MSE in the right panel of Figure \ref{fig:CCclean} shows a similar pattern.
\begin{figure}[!ht] \centering \includegraphics[width=0.49\textwidth] {MSE_at_n100_caw_10_k3_cc.pdf} \includegraphics[width=0.49\textwidth] {MSE_at_n100_caw_10_k5_cc.pdf} \vskip-0.2cm \caption{MSE of the correlation measures in Figure \ref{fig:CCclean} with 10\% of outliers placed at $k = 3$ (left) and $k=5$ (right).} \label{fig:CCcont} \end{figure} Figure \ref{fig:CCcont} shows the effect of 10\% of outliers, using the same generated data as in Figure \ref{fig:CBTcont}. The left panel is for $k=3$. The scale of the vertical axis indicates that the outliers have increased the MSE of all methods. The MCD in $d=2$ dimensions is the least affected, whereas the GK methods, the SSCM with $d=2$ and Kendall's tau are more sensitive. Note that the data in $d=10$ dimensions were only contaminated in the first 2 dimensions, and the MCD still does quite well in that setting. On the other hand, the MSE of the SSCM in $d=10$ is now much higher. To conclude, wrapping holds its own even among well-known robust correlation measures outside our transformation approach. Wrapping was not the overall best method in our simulation (that would be the MCD), but the latter requires much more computation time, which goes up a lot in high dimensions. Moreover, the highly robust quadrant transformation yields a low efficiency as it ignores much information in the data. Therefore, wrapping seems a good choice for our purpose, which is to construct a fast robust method for fitting high-dimensional data. Some other methods like the MCD perform better in low dimensions (say, up to 20), but in high dimensions the MCD and related methods become infeasible, whereas the SSCM no longer performs well. \section{Use in higher dimensions} \label{sec:highdim} \subsection{Methodology} \label{sec:method} So far the illustrations of wrapping were in the context of bivariate correlation. In this section we explain its use in the higher-dimensional context for which it was developed.
Our approach is basically to wrap the data first, carry out an existing estimation technique on the wrapped data, and then use that fit for the original data. We proceed along the following steps. {\bf Step 1: estimation.} For each of the (possibly many) continuous variables $X_j$ with $j=1,\ldots,d$ we compute a robust initial scale estimate $\hs_j$ such as the MAD. Then we compute a one-step location M-estimator $\hmu_j$ with the wrapping function $\psi_{b,c}$, with defaults $b=1.5$ and $c=4$. We could take more steps or iterate to convergence, but this would lead to a higher contamination bias \citep{Rousseeuw:kStepM}. {\bf Step 2: transformation.} Next we wrap the continuous variables. That is, we transform any $x_{ij}$ to \begin{equation}\label{eq:wrapx} x_{ij}^* \;=\; g(x_{ij}) \;=\; \hmu_j + \hs_j \, \psi_{b,c} \Big(\frac{x_{ij}-\hmu_j} {\hs_j}\Big)\;\;. \end{equation} Note that $\ave_i(x^*_{ij})$ is a robust estimate of $\mu_j$ and $\std_i(x^*_{ij})$ is a robust estimate of $\sigma_j$\,. The wrapped variables $X^*_j$ do not contain outliers, and when the original $X_j$ is Gaussian over 86\% of its values remain unchanged, that is $x^*_{ij} = x_{ij}$\;. If $x_{ij}$ is missing we have to assign a value to $g(x_{ij})$ in order to preserve the PSD property of product moment matrices, and $g(x_{ij})=\hmu_j$ is the natural choice. We do not transform discrete variables -- depending on the context one may or may not leave them out of the subsequent analysis. {\bf Step 3: fitting.} We then fit the wrapped data $x^*_{ij}$ by an existing multivariate method, yielding for instance a covariance matrix or sparse loading vectors. {\bf Step 4: using the fit.} To evaluate the fit we will look at the deviations (e.g. Mahalanobis distances) of the wrapped cases $\bx^*_i$ as well as the original cases $\bx_i$\,. Note that the time complexity of Steps 1 and 2 for all $d$ variables is only $O(nd)$.
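Steps 1 and 2 amount to a single $O(nd)$ pass over the columns. A minimal sketch follows, in Python with NumPy rather than R; the constants inside \texttt{psi\_wrap} are illustrative assumptions, chosen only to make $\psi$ continuous at $|z|=b$:

```python
import numpy as np

def psi_wrap(z, b=1.5, c=4.0):
    # vectorized wrapping psi; q2 is an assumed tuning constant,
    # q1 makes psi continuous at |z| = b (published constants may differ)
    q2 = 0.86
    q1 = b / np.tanh(q2 * (c - b))
    a = np.abs(z)
    out = np.where(a <= b, z, np.sign(z) * q1 * np.tanh(q2 * np.maximum(c - a, 0.0)))
    return np.where(a > c, 0.0, out)

def wrap_data(X, b=1.5, c=4.0):
    """Steps 1-2: per column, robust scale + one-step M-location, then wrap."""
    X = np.asarray(X, dtype=float)
    Xw = X.copy()
    for j in range(X.shape[1]):
        col = X[:, j]
        ok = ~np.isnan(col)
        mu0 = np.median(col[ok])
        s = 1.4826 * np.median(np.abs(col[ok] - mu0))          # MAD scale
        mu = mu0 + s * np.mean(psi_wrap((col[ok] - mu0) / s, b, c))
        Xw[ok, j] = mu + s * psi_wrap((col[ok] - mu) / s, b, c)  # eq. (wrapx)
        Xw[~ok, j] = mu          # missing cells get the location estimate
    return Xw
```

Cells in the central region of a column are left untouched, an outlying cell is pulled in, and a missing cell is imputed by $\hmu_j$.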
Any fitting method in Step 3 must read the data, so its complexity is at least $O(nd)$. Therefore the total complexity is not increased by wrapping, as illustrated in Table \ref{tab:times}. \subsection{Estimating covariance and precision matrices} \label{sec:specific} {\bf Covariance matrices.} The covariance matrix of the wrapped variables has the entries \begin{equation}\label{eq:Cjk} C(j,k) = \Cov(X^*_j,X^*_k) = \hs_j \, \hs_k \, \Cor\big( \psi_{b,c} \Big(\frac{x_{ij}-\hmu_j} {\hs_j}\Big), \psi_{b,c} \Big(\frac{x_{ik}-\hmu_k} {\hs_k}\Big) \big) \end{equation} for $j,k = 1,\ldots,d$. The resulting matrix is clearly PSD. We also have the independence property: if the variables $X_j$ and $X_k$ are independent then so are $X^*_j = g(X_j)$ and $X^*_k = g(X_k)$, and as these are bounded their population covariance exists and is zero. \cite{Oellerer:robprec} defined robust covariances with a formula like \eqref{eq:Cjk} in which the correlation on the right was a rank correlation. They showed that the explosion breakdown value of the resulting scatter matrix (i.e. the percentage of outliers required to make its largest eigenvalue arbitrarily high) is at least that of the univariate scale estimator $S$ yielding $\hs_j$ and $\hs_k\,$, and their proof goes through without changes in our setting. Therefore, the robust covariance matrix \eqref{eq:Cjk} also has an explosion breakdown value of 50\%. The scatter matrix given by \eqref{eq:Cjk} is easy to compute, and can for instance be used for anomaly detection. In Section \ref{A:robdist} of the Supplementary Material it is illustrated how robust Mahalanobis distances obtained from the estimated scatter matrix can detect outlying cases. The scatter matrix can also be used in other multivariate methods such as canonical correlation analysis, and serve as a fast initial estimate in the computation of other robust methods such as that of \cite{Hubert:DetMCD}.
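Formula \eqref{eq:Cjk} can be sketched directly: wrap each column into $\psi$-scores, take their plain correlation matrix, and rescale by the robust scales. Since a product-moment correlation matrix is PSD and pre/post-multiplying by a diagonal matrix preserves that, the result is PSD by construction. A Python sketch with illustrative constants in \texttt{psi\_wrap} (for brevity it uses the median instead of the one-step M-location):

```python
import numpy as np

def psi_wrap(z, b=1.5, c=4.0):
    q2 = 0.86                              # assumed tuning constant
    q1 = b / np.tanh(q2 * (c - b))         # continuity at |z| = b
    a = np.abs(z)
    out = np.where(a <= b, z, np.sign(z) * q1 * np.tanh(q2 * np.maximum(c - a, 0.0)))
    return np.where(a > c, 0.0, out)

def wrapped_cov(X, b=1.5, c=4.0):
    """C(j,k) = s_j s_k Cor(psi-scores of column j, psi-scores of column k)."""
    X = np.asarray(X, dtype=float)
    mu = np.median(X, axis=0)
    s = 1.4826 * np.median(np.abs(X - mu), axis=0)   # MAD scale per column
    Z = psi_wrap((X - mu) / s, b, c)                 # wrapped z-scores
    R = np.corrcoef(Z, rowvar=False)                 # PSD correlation matrix
    return np.outer(s, s) * R                        # C = D R D with D = diag(s)
```

Even with a gross cellwise outlier in the data, the smallest eigenvalue of the returned matrix stays nonnegative (up to rounding), which is exactly the property that rank- or $\psi$-based pairwise correlations guarantee here.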
{\bf Precision matrices and graphical models.} The precision matrix is the inverse of the covariance matrix, and makes it possible to construct a Gaussian graphical model of the variables. \cite{Oellerer:robprec} and \cite{Tarr:robprec} estimated the covariance matrix from rank correlations, but one could also use wrapping for this step. When the dimension $d$ is too high the estimated covariance matrix cannot be inverted, so these authors construct a sparse precision matrix by applying GLASSO. \cite{Oellerer:robprec} show that the breakdown value of the resulting precision matrix, for both implosion and explosion, is as high as that of the univariate scale estimator. This remains true for wrapping, so the resulting robust precision matrix has breakdown value 50\%. \subsection{Distance Correlation} \label{sec:dependence} There exist measures of dependence which do not give rise to PSD matrices but are used as test statistics for dependence, such as mutual information and the distance correlation of \cite{Szekely:distcor}. These yield a single nonnegative scalar that does not reflect the direction of the relation, if there is one. The theory of distance correlation only requires the existence of first moments. The distance correlation $\mbox{dCor}$ between random vectors $\bX$ and $\bY$ is defined through the Pearson correlation between the doubly centered interpoint distances of $\bX$ and those of $\bY$. It always lies between 0 and 1. The population version $\mbox{dCor}(\bX,\bY)$ can be written in terms of the characteristic functions of the joint distribution of $(\bX,\bY)$ and the marginal distributions of $\bX$ and $\bY$. This allowed \cite{Szekely:distcor} to prove that $\mbox{dCor}(\bX,\bY)=0$ implies that $\bX$ and $\bY$ are independent, a property that does not hold for the plain Pearson correlation. The population $\mbox{dCor}(\bX,\bY)$ is estimated by its finite-sample version $\mbox{dCor}(\bX_n,\bY_n)$, which is used as a test statistic for dependence.
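The finite-sample dCor is a Pearson-type moment of the doubly centered distance matrices, which makes a direct $O(n^2)$ implementation short. Below is a Python sketch of plain dCor, plus a robustified variant that first passes each coordinate through a bounded sigmoid transform; the helper names are ours:

```python
import numpy as np

def _dcenter(D):
    # double centering: subtract row means and column means, add the grand mean
    return D - D.mean(axis=0) - D.mean(axis=1, keepdims=True) + D.mean()

def dcor(X, Y):
    """Finite-sample distance correlation between samples X and Y (rows = cases)."""
    X = np.asarray(X, dtype=float).reshape(len(X), -1)
    Y = np.asarray(Y, dtype=float).reshape(len(Y), -1)
    A = _dcenter(np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2))
    B = _dcenter(np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2))
    dcov2 = max((A * B).mean(), 0.0)          # clamp tiny negative rounding
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return np.sqrt(dcov2 / denom) if denom > 0 else 0.0

def robust_dcor(X, Y):
    """dCor after a bounded monotone transform of each coordinate (sigmoid psi)."""
    def g(V):
        V = np.asarray(V, dtype=float).reshape(len(V), -1)
        med = np.median(V, axis=0)
        s = 1.4826 * np.median(np.abs(V - med), axis=0)
        return np.tanh((V - med) / s)
    return dcor(g(X), g(Y))
```

Because the sigmoid is strictly monotone, a perfect affine relation still yields a value of exactly 1, while the boundedness keeps single far outliers from dominating the distance matrices.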
For a sample of size $n$ this would appear to require $O(n^2)$ computation time, but there exists an $O(n\log(n))$ algorithm \citep{Huo:fast} for the bivariate setting. By itself distance correlation is not robust to outliers in the data. In fact, we illustrate in Section \ref{A:distcov} of the Supplementary Material that the distance correlation of independent variables can be made to approach 1 by a single outlier among $100,000$ data points, and the distance correlation of perfectly dependent variables can be made to approach zero. On the other hand, we could first transform the data by the function $g$ of \eqref{eq:wrapx} with the sigmoid $\psi(z) = \tanh(z)$, and then compute the distance covariance. This combined method does not require the first moments of the original variables to exist, and the population version is again zero if and only if the original variables are independent (since $g$ is invertible). Figure \ref{fig:distcor_Cauchy} illustrates the robustness of this combined statistic. \begin{figure}[!t] \centering \vskip0.5cm \includegraphics[width=0.99\textwidth] {Figure8.pdf} \vskip-0.2cm \caption{Left panel: power of dCor (dashed black curve) and its robust version (blue curve) for bivariate $\bX$ and $\bY$ with distribution $t(1)$ and independence except for $\bX_1=\bY_1$ versus the sample size $n$. Right panel: power of dCor and its robust version for $d$-dimensional $\bX$ and $\bY$ with distribution $t(1)$ and $n=100$, as a function of the dimension $d$.} \label{fig:distcor_Cauchy} \end{figure} The data for Figure \ref{fig:distcor_Cauchy} were generated following Example 1(b) in \citep{Szekely:distcor}, where $\bX$ and $\bY$ are multivariate and all their components follow $t(1)$, the Student $t$-distribution with one degree of freedom. The null hypothesis states that $\bX$ and $\bY$ are independent. 
We investigate the power of the test for dependence under the alternative that all components of $\bX$ and $\bY$ are independent except for $\bX_1 = \bY_1$. For this we use the permutation test implemented as {\it dcor.test} in the R package {\it energy}. As in \cite{Szekely:distcor} we set the significance level to 0.1. The empirical power of the test is then the fraction of the $1000$ replications in which the test rejects the null hypothesis. In the left panel of Figure \ref{fig:distcor_Cauchy} we see the empirical power as a function of the sample size when $\bX$ and $\bY$ are both bivariate. The power of the original dCor (dashed black curve) starts around 0.6 for $n=20$ and approaches 1 when $n = 200$. This indicates that for small sample sizes the components $\bX_2$ and $\bY_2$, even though they are independent of everything else, add noise to the doubly centered distances. In contrast, the power of the robust method (solid blue curve) is close to 1 overall. No outliers were added to the data, but the underlying distribution $t(1)$ is long-tailed. The right panel of Figure \ref{fig:distcor_Cauchy} shows the effect of increasing the dimension $d$ of $\bX$ and $\bY$, for fixed $n=100$. At dimension $d=1$ we only have the components $\bX_1=\bY_1$ and both methods have power 1. At dimension $d=2$, dCor has power 0.9 and the robust version has power 1. When increasing the dimension further, the power of dCor goes down to about 0.3 around dimension $d=8$, whereas the power of the robust method only starts going down around dimension $d=17$ and is still reasonable at dimension $d=30$. This illustrates that the transformation has tempered the effect of the $d-1$ independent variables on the doubly centered distances, delaying the curse of dimensionality in this setting.
\subsection{Fast detection of anomalous cells} \label{sec:FastDDC} Wrapping is a coordinatewise approach which makes it especially robust against cellwise outliers, that is, anomalous cells $x_{ij}$ in the data matrix. In this paradigm a few cells in a row (case) can be anomalous whereas many other cells in the same row still contain useful information, and in such situations we would rather not remove or downweight the entire row. The cellwise framework was first proposed and studied by \cite{Alqallaf:scalable,Alqallaf:Propag}. Most robust techniques developed in the literature aim to protect against rowwise outliers. Such methods tend not to work well in the presence of cellwise outliers, because even a relatively small percentage of outlying cells may affect a large percentage of the rows. For this reason several authors have started to develop cellwise robust methods \citep{Agostinelli:cellwise}. In the bivariate simulation of Section \ref{sec:sim} we generated rowwise outliers, but the results for cellwise outliers are similar (see Section \ref{A:cell} in the Supplementary Material). Actually {\it detecting} outlying cells in data with many dimensions is not trivial, because the correlation between the variables plays a role. The DetectDeviatingCells (DDC) method of \cite{Rousseeuw:DDC} predicts the value of each cell from the columns strongly correlated with that cell's column. The original implementation of DDC required computing all $O(d^2)$ robust correlations between the $d$ variables, yielding total time complexity $O(nd^2)$ which grows fast in high dimensions. Fortunately, the computation time can be reduced a lot by the wrapping method. This is because the product moment technology allows for nice shortcuts. Let us standardize two column vectors (that is, variables) $X_n = (x_1,\ldots,x_n)^T$ and $Y_n$ to zero mean and unit standard deviation. 
Then it is easy to verify that their correlation satisfies \begin{equation}\label{eq:cordist} \Cor(X_n,Y_n) \;=\; \frac{1}{n-1} \big\langle X_n,Y_n \big\rangle \;=\; 1 - \frac{\|X_n - Y_n\|^2}{2(n-1)} \end{equation} where $\|\cdot\|$ denotes the usual Euclidean norm. This monotone decreasing relation between correlation and distance allows us to switch from looking for high correlations in $d$ dimensions to looking for small distances in $n$ dimensions. When $n \ll d$ this is very helpful, and it is used e.g. in Google Correlate \citep{Vanderkam:Google}. The identity \eqref{eq:cordist} can be exploited for robust correlation by wrapping the variables first. In the (ultra)high dimensional case we can thus transpose our dataset so it becomes $d \times n$. If needed we can reduce its dimension even more to some $q \ll n$ by computing the main principal components and projecting on them, which preserves the Euclidean distances to a large extent. Finding the $k$ variables that are most correlated with a variable $X_j$ therefore comes down to finding its $k$ nearest neighbors in $q$-dimensional space. Fortunately there exist fast approximate nearest neighbor algorithms \citep{Arya:ANN} that can obtain the $k$ nearest neighbors of all $d$ points in $q$ dimensions in $O(qd\log(d))$ time, a big improvement over $O(nd^2)$. Note that we want to find both large positive and large negative correlations, so we look for the $k$ nearest neighbors in the set of all variables and their sign-flipped versions. Using these shortcuts we constructed the method FastDDC, which takes far less time than the original DDC and can therefore be applied to data in much higher dimensions. The detection of anomalous cells will be illustrated in the real data examples in Section \ref{sec:app}. In both applications, finding the anomalies is the main result of the analysis.
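The identity \eqref{eq:cordist} is elementary: for standardized vectors $\|X_n\|^2 = \|Y_n\|^2 = n-1$, so $\|X_n - Y_n\|^2 = 2(n-1) - 2\langle X_n, Y_n\rangle$. A quick numerical check in Python:

```python
import numpy as np

def standardize(v):
    # zero mean, unit standard deviation with the (n-1) denominator
    return (v - v.mean()) / v.std(ddof=1)

rng = np.random.default_rng(1)
n = 200
x = standardize(rng.standard_normal(n))
y = standardize(0.6 * x + rng.standard_normal(n))

cor_inner = (x @ y) / (n - 1)                        # (1/(n-1)) <X_n, Y_n>
cor_dist = 1 - np.sum((x - y) ** 2) / (2 * (n - 1))  # distance form of eq. (cordist)
```

Both quantities agree with \texttt{np.corrcoef(x, y)[0, 1]} to machine precision, so nearest-neighbor searches on standardized (or wrapped-then-standardized) columns indeed find the most correlated variables.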
\section{Real data examples} \label{sec:app} \subsection{Prostate data} \label{sec:prostate} In a seminal paper, \cite{Singh:prostate} investigated the prediction of two different types of prostate cancer from genomic information. The data is available as the R file Singh.rda in\\ {\it http://www.stats.uwo.ca/faculty/aim/2015/9850/microarrays/FitMArray/data/} and contains 12600 genes. The training set consists of 102 patients and the test set has 34. There is also a response variable with the clinical classification, -1 for tumor and 1 for nontumor. With the fast version of DDC introduced in Subsection \ref{sec:FastDDC} we can now analyze the entire genetic data set with $n=136$ and $d=12600$, which would take very long with the original DDC algorithm. Now it takes under 1 minute on a laptop. In this analysis only the genetic data is used and not the response variable, and the DDC method is not told which rows correspond to the training set. Out of the 136 rows 33 are flagged as outlying, corresponding to the test set minus one patient. The entire cellmap of size $136 \times 12600$ is hard to visualize. Therefore we select the 100 variables with the most flagged cells, yielding the cellmap in Figure \ref{fig:prostate}. The flagged cells are colored red when the observed value (the gene expression level) is higher than predicted, and blue when it is lower than predicted. Unflagged cells are colored yellow. \begin{figure}[!ht] \centering \includegraphics[width=0.55\textwidth] {cellmap_prostate.pdf} \vskip-0.2cm \caption{Prostate data: cellmap of the genes with the largest number of flagged cells.} \label{fig:prostate} \end{figure} The cellmap clearly shows that the bottom rows, corresponding to the test set, behave quite differently from the others. Indeed, it turns out that the test set was obtained by a different laboratory. 
This suggests to align the genetic data of the test set with that of the training set by some form of standardization, before applying a model fitted on the training data to predict the response variable on the test data. \subsection{Video data} \label{sec:video} For our second example we analyze a video of a parking lot, filmed by a static camera. The raw video can be found on {\it http://imagelab.ing.unimore.it/visor} in the category \textit{Videos for human action recognition in videosurveillance}. It was originally analyzed by \cite{Ballan:Video} using sophisticated computer vision technology. The video is 23 seconds long and consists of 230 Red/Green/Blue (RGB) frames of 640 by 480 pixels, so each frame corresponds with 3 matrices of size $640 \times 480$. In the video we see two men coming from opposite directions, meeting in the center where they talk, and then running off one behind the other. Figure \ref{fig:videosum} shows 3 frames from the video. The men move through the scene, so they can be considered as outliers. Therefore every frame (case) is contaminated, but only in a minority of pixels (cells). We treat the video as a dataset $\bX$ with 230 row vectors $\bx_i$ of length $921,600 = 640\cdot 480 \cdot 3$, and we want to carry out a PCA based on the robust covariance matrix between the $921,600$ variables. When dealing with datasets this large one has to be careful with memory management, as a covariance matrix between these variables has nearly $10^{12}$ entries which is far too many to store in RAM memory. Therefore, we proceed as follows: \begin{figure}[!htb] \centering \vspace{0.3cm} \includegraphics[width=1.0\textwidth] {Fig9_jpg.pdf} \caption{Frames 60, 100 and 200 of the video data.} \label{fig:videosum} \end{figure} \begin{enumerate} \item Wrap the 230 data values of each RGB pixel (column) $X_j$ which yields the wrapped data matrix $\bX^*$ and its centered version $\bZ^* = \bX^*-\boldsymbol{\overline{x^*}}\;$. 
\item Compute the first $k=3$ loadings of $\Cov(\bX^*) = \frac{n}{n-1}\PM(\bZ^*)$\;. We cannot actually compute or store this covariance matrix, so instead we perform a truncated singular value decomposition (SVD) of $\bZ^*$ with $k=3$ components, which is mathematically equivalent. For this we use the efficient function {\it propack:svd()} from the R package {\it svd} with option {\it neig=3}, yielding the loading row vectors $\bv_j$ for $j = 1,2,3$. \item Compute the 3-dimensional robust scores $\bt_i$ by projecting the {\it original} data on the robust loadings obtained from the {\it wrapped} data, i.e. $\bt_i = (\bx_i-\boldsymbol{\overline{x^*}}) (\bv_1^T,\bv_2^T,\bv_3^T)\,$. \end{enumerate} The classical PCA result can be obtained by carrying out steps 2 and 3 on $\bZ = \bX-\boldsymbol{\overline{x}}\;$ without any wrapping. We also want to compare with other robust methods. For the Spearman method we first replace each column $X_j$ by its ranks, i.e. $R_{ij}$ is the rank of $x_{ij}$ among all $x_{hj}$ with $h=1,\ldots,n$. We also compute $\hs_j = \MAD(X_j)$. Then we transform each $x_{ij}$ to $\,(R_{ij} - \ave_h(R_{hj}))\hs_j/ \std_h (R_{hj})\,$ yielding a matrix whose columns have mean zero and standard deviation $\hs_j$ to which we again apply step 2. Another method is to transform the data as in \eqref{eq:wrapx} but using Huber's $\psi$ function $\psi_b(z) = [z]_{-b}^{b}$ with the same $b=1.5$ as in wrapping. \begin{figure}[!htb] \centering \vspace{0.3cm} \includegraphics[width=1\textwidth] {Fig10_with_Huber_jpg.pdf} \caption{First loading vector of the video data, for classical PCA (upper left), Spearman correlation (upper right), Huber's $\psi$ (lower left), and wrapping (lower right).} \label{fig:loadings} \end{figure} Figure \ref{fig:loadings} shows the first loading vector $\bv_1$ displayed as an image, for all 4 methods considered. Positive loadings are shown in red, negative ones in blue, and loadings near zero look white. 
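The three-step procedure above only needs a truncated SVD of the wrapped, centered matrix: the right singular vectors of $\bZ^*$ are the eigenvectors of $\Cov(\bX^*)$, so the huge covariance matrix is never formed. A compact Python/NumPy sketch, using a full SVD for clarity where {\it propack:svd()} would be used in R (the helper name is ours):

```python
import numpy as np

def robust_pca(X, Xw, k=3):
    """Loadings from the wrapped data Xw, scores and fits for the original data X."""
    m = Xw.mean(axis=0)
    # right singular vectors of the centered wrapped data = eigenvectors of Cov(X*)
    _, _, Vt = np.linalg.svd(Xw - m, full_matrices=False)
    V = Vt[:k].T                      # d x k robust loading vectors v_1..v_k
    T = (X - m) @ V                   # scores: project the ORIGINAL data on robust loadings
    fit = T @ V.T + m                 # rank-k reconstruction of each case (frame)
    return T, V, X - fit              # residuals feed the outlier mask
```

In the video application \texttt{Xw} would come from wrapping each of the 921,600 pixel columns and $k=3$; on exactly rank-$k$ data with \texttt{Xw = X} the residuals vanish, which is a convenient sanity check.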
For wrapping the loadings basically describe the background, whereas for classical PCA they are affected by the moving parts (mainly the men and some leaves) that are outliers in this setting. The Spearman loadings resemble those of the classical method, whereas those with Huber's $\psi$ are in between. Similar conclusions hold for the second and third loading vectors (not shown). We can now compute a fit to each frame. For wrapping this is $\,\boldsymbol{\hat{x_i}} = \bt_i\, (\bv_1^T,\bv_2^T,\bv_3^T)^T + \boldsymbol{\overline{x^*}}\,$. The residual of the frame is then $\br_i=\bx_i-\boldsymbol{\hat{x_i}}\;$ whose 921,600 components (pixels) we can normalize by their scales. This allows us to keep those pixels of the frame where the absolute normalized residuals exceed a threshold, and turn the other pixels grey. For wrapping, this procedure yields a new video which only contains the men. This method has thus succeeded in accurately separating the movements from the background. \begin{figure}[!htb] \centering \includegraphics[width=0.84\textwidth] {Fig11_with_Huber.pdf} \caption{Residuals of the video data, for classical PCA (upper left), Spearman correlation (upper right), Huber's $\psi$ (lower left), and wrapping (lower right).} \label{fig:mask} \end{figure} The lower right panel of Figure \ref{fig:mask} shows the result for the central part of frame 100. The corresponding computation for classical PCA is shown in the upper left panel, which has separated the men less well: many small elements of the background are marked as outlying, whereas parts of the man on the left are missing. We conclude that in this dataset wrapping is the most robust, classical PCA the least, and the other methods are in between. Note that the entire analysis of this huge dataset of size 1.6 Gb in R took about two minutes on a laptop for wrapping (the times for the other three methods were similar). 
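The fit-and-threshold step for a batch of frames can be sketched as follows (a hypothetical Python illustration; the cutoff value and the per-pixel scale estimates passed in are assumptions, not taken from the paper):

```python
import numpy as np

def frame_residual_mask(X, V, mstar, scales, cutoff=3.0):
    """Fit each frame (row of X) with the k robust loadings in V,
    normalize every pixel's residual by its scale, and flag the
    pixels whose absolute normalized residual exceeds the cutoff."""
    T = (X - mstar) @ V           # k-dimensional scores t_i
    Xhat = T @ V.T + mstar        # rank-k fit of each frame
    R = X - Xhat                  # residuals r_i
    return np.abs(R / scales) > cutoff
```

Pixels where the mask is true would be kept, and the remaining pixels turned grey, as in the video described above.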
This is much faster than one would expect from the computation times in Table \ref{tab:times}, which are quadratic in the dimension since they calculate the entire covariance matrix. Of course, in real-time situations one would estimate the robust loadings on an initial set of, say, 100 frames and then process new images while they are recorded, which is very fast as it only requires a matrix multiplication. In parallel with this the robust loadings can be updated from time to time. \section{Software availability} The wrapping transform is implemented in the R package {\it cellWise} \citep{cellWise2019} on CRAN, which now also provides the faster version of DDC used in the first example. The package contains two vignettes with examples. The video data of the second example, its analysis and the video with results can be downloaded from\linebreak {\it https://wis.kuleuven.be/stat/robust/software}\,. \section{Conclusions} \label{sec:concl} Multivariate data often contain outlying (anomalous) values, so one needs robust methods that can detect and accommodate such outliers. The underlying assumption is that the variables are roughly Gaussian for the most part, with some possible outliers that do not follow any model and could be anywhere. (If necessary some variables can be transformed first, e.g. by taking their logarithms.) For multivariate data in low dimensions, say up to 20, there exist robust scatter matrix estimators such as the minimum covariance determinant (MCD) method that can withstand many rowwise outliers, even those that are not visible in the marginal distributions. We recommend using such high-breakdown methods when the dimension allows it. But in higher dimensions these methods would require infeasible computation time to achieve the same degree of robustness, and then we need to resort to other methods.
It is not easy to construct robust methods that simultaneously satisfy the independence property, yield positive semidefinite matrices, and scale well with the dimension. We achieve this by transforming the data first, after which the usual methods based on product moments are applied. Based on statistical properties such as the influence function, the breakdown value and efficiency we selected a particular transform called wrapping. It leaves over 86\% of the data intact under normality, which preserves partial information about the data distribution, granularity, and the shape of the relation between variables. Wrapping performs remarkably well in simulation. It is especially robust against cellwise outliers, where it outperforms typical rowwise robust methods. This made it possible to construct a faster version of the DetectDeviatingCells method. The examples show that the wrapping approach can deal with very high dimensional data.\\ \noindent {\bf Supplementary materials.} These consist of a text with the proofs referenced in the paper, and an R script that illustrates the approach and reproduces the examples.\\ \noindent {\bf Funding.} This research has been supported by projects of Internal Funds KU Leuven.
\section{Acknowledgement} The author thanks Nikesh Koirala, Nuh Gedik and Liang Fu for useful discussions and MIT for hospitality during the visit where this work was initiated. \renewcommand{\theequation}{A-\arabic{equation}} \renewcommand{\thesection}{S\arabic{section}} \renewcommand{\thetable}{S\arabic{table}} \renewcommand{\thefigure}{S\arabic{figure}} \setcounter{equation}{0} \section{Appendix} In order to obtain Eq.(\ref{imaginary_tk_22}), we use Eq.(\ref{6}) in (\ref{selfen}) and obtain \begin{equation} T_{\vec k}= -\frac{1}{2}\sum_{{\vec k}^\prime} V_{\vert \vec k-\vec k^\prime\vert} e^{i(\phi_S^\prime-\phi_S)} \frac{\alpha k^\prime+ T_{\vec k^\prime}}{\vert G_{\vec k^\prime}\vert}\,f^{+}_{\vec k^\prime}~~~~~~~~ \label{selfen_supp} \end{equation} After some algebra and a shift of variables $\phi^\prime \to \phi^\prime+\phi$ on the r.h.s. of Eq.(\ref{selfen_supp}), the imaginary part $Im\{T_{\vec k}\}$ can be found as \begin{eqnarray} &~&Im\{T_{\vec k}\}=-\frac{1}{2}\sum_{{\vec k}^\prime} v_{kk^\prime}(\phi^\prime) e^{-i\phi^\prime} \nonumber \\ &\times&\Biggl\{\Bigl[\frac{\alpha k^\prime+ Re\{T_{k^\prime}(\phi^\prime+\phi)\}}{\vert G_{k^\prime}(\phi^\prime+\phi)\vert}-\frac{\alpha k^\prime+ Re\{T_{k^\prime}(-\phi^\prime+\phi)\}}{\vert G_{k^\prime}(-\phi^\prime+\phi)\vert}\Bigr] \nonumber \\ &+&i\Bigl[\frac{Im\{T_{k^\prime}(\phi^\prime+\phi)\}}{\vert G_{k^\prime}(\phi^\prime+\phi)\vert}+\frac{Im\{T_{k^\prime}(-\phi^\prime+\phi)\}}{\vert G_{k^\prime}(-\phi^\prime+\phi)\vert}\Bigr]\Biggr\} \label{selfen_supp_im} \end{eqnarray} where $v_{kk^\prime}(\phi^\prime)$ is essentially the same as $V_{\vert \vec k-\vec k^\prime\vert}$ with the replacement $\phi^\prime \to \phi^\prime+\phi$. We also introduced a slightly new notation, $G_{k^\prime}(\phi)$ and $T_{k^\prime}(\phi)$, separating the radial and the azimuthal angular dependences in order to independently analyze the shifted angular dependence of $G_{\vec k}$ and $T_{\vec k}$.
On the r.h.s. of Eq.(\ref{selfen_supp_im}) we now ignore $Re\{T_{\vec k^\prime}\}$ as compared to $\alpha k^\prime$, and define two new operators ${\cal L}_{\phi^\prime}^\pm$ by their action on an arbitrary function $h(\phi)$ \begin{eqnarray} {\cal L}_{\phi^\prime}^\pm \, h(\phi)=h(\phi+\phi^\prime) \pm h(\phi-\phi^\prime) \end{eqnarray} and write Eq.(\ref{selfen_supp_im}) in a more suggestive form as \begin{eqnarray} Im\{T_k(\phi)\}&=&-\frac{1}{2}\sum_{\vec k^\prime} v_{kk^\prime}(\phi^\prime) \,\nonumber \\ \times \Biggl\{\sin\phi^\prime \,{\cal L}^-_{\phi^\prime}&\Bigl[&\frac{\alpha k^\prime+Re\{T_{k^\prime}(\phi)\}}{\vert {\vec G}_{k^\prime}(\phi)\vert}\Bigr] \nonumber \\ &+&\cos\phi^\prime {\cal L}^+_{\phi^\prime}\Bigl[\frac{Im\{T_{k^\prime}(\phi)\}}{\vert {\vec G}_{k^\prime}(\phi)\vert}\Bigr]\Biggr\} \label{imaginary_tk_2} \end{eqnarray} Eq.(\ref{imaginary_tk_2}) must be solved self-consistently for $T_{\vec k}$ with a known interaction $v_{k k^\prime}(\phi^\prime)$. We claim that a general result not depending on the details of the interactions is more illustrative than the full self-consistent solution of Eq.(\ref{imaginary_tk_2}). Firstly, if the interaction is sufficiently weaker than the SOC, we can ignore the $T_{\vec k}$ dependent terms on the r.h.s. of Eq.(\ref{imaginary_tk_2}). Secondly, the ${\cal L}^+_{\phi^\prime}$ dependent term is approximately a renormalization of the $Im\{T_k(\phi)\}$ on the l.h.s. and brings a $k$-dependent overall factor. This detail can be neglected for a simple result which retains only the most essential properties of the solution. We are then left with the ${\cal L}^-_{\phi^\prime}$ dependent term only. The result is \begin{equation} Im\{T_k(\phi)\}=-\frac{1}{2}\sum_{\vec k^\prime} v_{kk^\prime}(\phi^\prime) \sin\phi^\prime \,{\cal L}^-_{\phi^\prime}\Bigl[\frac{\alpha k^\prime}{\vert G_{k^\prime}(\phi)\vert}\Bigr] ~~ \label{imtk} \end{equation} which is Eq.(\ref{imaginary_tk_22}).
This indicates that a nonzero $Im\{T_k(\phi)\}$ is the result of the anisotropy in $G_{k^\prime}(\phi)$. Since the source of anisotropy is the hexagonal warping in the context of this work, Eq.(\ref{imtk}) builds the connection between the spin canting anomaly and the hexagonal warping as explained below Eq.(\ref{imaginary_tk_22}). The angular translation operators in ${\cal L}^-_{\phi^\prime}$ act on the HW part in $\vert G_{k^\prime}(\phi)\vert$. Replacing the ${\cal L}^-_{\phi^\prime}$ with its leading term $2\phi^\prime \partial/\partial \phi$, we define \begin{equation} {\cal S}_k=\frac{3}{2}\sum_{\vec k^\prime}\phi^\prime \sin\phi^\prime {k^\prime}^4 v_{k k^\prime}(\phi^\prime) \label{Sk} \end{equation} Eq.(\ref{Sk}) is then used in Eq.(\ref{imaginary_tk_4}).
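The replacement of ${\cal L}^-_{\phi^\prime}$ by its leading term can be checked numerically for any smooth test function: to first order in $\phi^\prime$, $h(\phi+\phi^\prime)-h(\phi-\phi^\prime)\approx 2\phi^\prime\,\partial h/\partial\phi$. A minimal Python check (the test function here is arbitrary):

```python
import numpy as np

def L_minus(h, phi, dphi):
    # action of the angular operator: h(phi + phi') - h(phi - phi')
    return h(phi + dphi) - h(phi - dphi)

# For small phi', L^- h(phi) ~ 2 phi' dh/dphi: compare the exact
# difference with the leading term for a smooth test function.
h = np.cos
phi, dphi = 0.7, 1e-3
exact = L_minus(h, phi, dphi)
leading = 2.0 * dphi * (-np.sin(phi))   # 2 phi' h'(phi), with h'(phi) = -sin(phi)
```

The discrepancy is of third order in $\phi^\prime$, consistent with keeping only the leading term.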
\section{Introduction} In the context of this report, Chern-Simons gravities (CSG) and gravitational Chern-Simons (GCS) densities are distinct but not unrelated objects. They both result from the definition of Chern-Simons (CS) densities of the non-Abelian (nA) Yang-Mills (YM) fields. The CSG consist of some superposition of $usual$ gravitational Lagrangians displaying all orders of the Riemann curvature, including the $0$-th - the cosmological constant. The GCS are the direct gravitational analogues of the nA CS densities, and like these, they can be employed in conjunction with $usual$ gravitational Lagrangians. CS gravity (CSG) in $2+1$ dimensions was proposed in \cite{Witten:1988hc} and was subsequently extended to $2n+1$ dimensions in \cite{Chamseddine:1989nu,Chamseddine:1990gk}. Gravitational CS densities in $2+1$ dimensions first appeared~\footnote{In these references the CS density is used to generate mass in a (Topological) non-Abelian field theory, and in the gravitational case the GCS density is added to the usual gravity.} in~\cite{Deser:1982vy,Deser:1981wh} and were extended to odd dimensions in \cite{Mardones:1990qc,Zanelli:2012px}. Since the construction of both CSG's and GCS densities employs the nA CS densities, it follows that these are defined in odd dimensions only. The main aim here is to extend these definitions to cover also even dimensions. Indeed, such examples are present in the literature both for CSG~\cite{Chamseddine:1990gk,MacDowell:1977jt}, and, for GCS density~\cite{Jackiw:2003pm,Araneda:2016iiy} in $3+1$ dimensions. For CSG systems in $3+1$ dimensions, two such proposals are made, in \cite{MacDowell:1977jt} and in \cite{Chamseddine:1990gk}. In the second of these, in~\cite{Chamseddine:1990gk}, a Higgs-like~\footnote{The scalar used in \cite{Chamseddine:1990gk}, and in the rest of this presentation, is not a matter field. We refer to it as Higgs because of its provenance $via$ the dimensional descent of a non-Abelian (nA) density.
The gravitational degrees of freedom it yields is technically related to the nA Higgs field as the spin-connection is related to the nA curvature.} scalar is employed, which is contained in what is proposed in the present report. GCS densities in $3+1$ dimensions are introduced in \cite{Jackiw:2003pm} and \cite{Araneda:2016iiy}, which are employed for the purpose of modifying the usual (Einstein) gravity. In \cite{Jackiw:2003pm}, the (gravitational) Pontryagin density is employed, while in \cite{Araneda:2016iiy} both the Pontryagin and the Euler densities are employed. Definitions of GCS densities are made in this spirit in what follows here. The pivotal point in our considerations is the exploitation of (what are referred to here as) Higgs--Chern-Simons (HCS) densities, which are defined in all, odd and even, dimensions. These result from the dimensional descent of Chern-Pontryagin (CP) densities from some even dimension down to any even or odd dimension. They are introduced in \cite{Tchrakian:2010ar,Radu:2011zy} and in Appendix A of \cite{Tchrakian:2015pka}. These HCS densities will be employed instead of the usual CS densities, to construct gravitational models in all dimensions which might be described as Higgs--Chern-Simons gravities (HCSG), and also to construct gravitational Higgs--Chern-Simons densities (GHCS) in all even dimensions, and in $4p-3$ odd dimensions. It is known that GCS densities are absent~\cite{Mardones:1990qc,Zanelli:2012px} in $4p-3$ dimensions, and it will turn out that GHCS densities are also absent in $4p-1$ dimensions. All statements made are illustrated $via$ typical examples, and all calculations are carried out in the Einstein-Cartan formulation of gravity.
In the Einstein-Cartan formulation, the gravitational system can be described in terms of ({\bf a}) the $Vielbein$ fields $e_M^a$ (and their inverses $e^M_a$) which are related to the metric through \[ g_{MN}=e_M^ae_N^b\,\eta_{ab}\ ,\quad g^{MN}=e^M_ae^N_b\,\eta^{ab}\,, \] $\eta_{ab}$ being the flat (Minkowskian or Euclidean) metric, and ({\bf b}) the spin-connection $\om_M^{ab}$ which defines the Riemann curvature \[ R_{MN}=\pa_{[M}\om_{N]}^{ab}+(\om_{[M}\om_{N]})^{ab}\,. \] Since the frame indices $a,b,\dots$ are raised/lowered by a flat metric, and since we are not particularly concerned with the signature of the space(s), we will henceforth not pay any attention to whether the frame index is covariant or contravariant. It is the identification of the spin-connection and the Riemann curvature with the YM connection and curvature in $D$ dimensions, through \be \label{con1} A_{M}=-\frac12\,\om_{M}^{ab}\,L_{ab} \Rightarrow F_{MN}=-\frac12\,R_{MN}^{ab}\,L_{ab}\,,\quad M=1,2,\dots,D\ ,\quad a=1,2,\dots,D \ee that is exploited in the definitions of both GCS densities and CSG gravities. The matrices $L_{ab}$ in \re{con1} are representations of $SO(D)$. In what follows, we will employ the Dirac (Clifford algebraic) representations for $L_{ab}$ \be \label{L} L_{ab}=\ga_{ab}=-\frac14\,[\ga_a,\ga_b]\,, \ee in terms of $\ga_{a}$, the gamma matrices in $D$ dimensions. The Chern-Simons (CS) density expressed in terms of the YM connection $A_M$ and curvature $F_{MN}$ is defined through the one-step descent of the Chern-Pontryagin (CP) density \[ \Om_{\rm CP}= \mbox{Tr}\, F\wedge F\wedge \dots\wedge F=\bnabla\cdot\bOmega\ ,\quad 2n\ {\rm times} \] in some $even$, $D=2n$ dimensions~\cite{Jackiw:1985}. The one-step descent in question is a result of the fact that the CP density $\Om_{\rm CP}=\bnabla\cdot\bOmega$ is a total divergence.
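The representation \re{L} can be realized explicitly. The following Python sketch builds one common Euclidean gamma-matrix set in $D=4$ via Kronecker products (this particular construction is an assumption for illustration; any set satisfying $\{\ga_a,\ga_b\}=2\delta_{ab}$ would do), together with the chiral matrix $\ga_{D+1}$ and the generators $L_{ab}$:

```python
import numpy as np

# the three Pauli matrices and the 2x2 identity
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# Euclidean gamma matrices in D = 4 via Kronecker products, with the
# chiral matrix gamma_{D+1}; they satisfy {gamma_a, gamma_b} = 2 delta_ab.
gammas = [np.kron(s1, s1), np.kron(s2, s1), np.kron(s3, s1), np.kron(I2, s2)]
gamma5 = np.kron(I2, s3)

def L(a, b):
    # the SO(D) generators of the Dirac representation:
    # L_ab = gamma_ab = -[gamma_a, gamma_b]/4
    return -0.25 * (gammas[a] @ gammas[b] - gammas[b] @ gammas[a])
```

Direct matrix multiplication confirms the Clifford relation, the anticommutation of $\ga_{D+1}$ with each $\ga_a$, and the antisymmetry $L_{ab}=-L_{ba}$.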
The CS density is defined as any one component of the vector-valued density $\bOmega$, say $\Omega_D$, $i.e.$ $\Omega_{\rm CS}=\Omega_D$, which now depends only on the $D-1$ coordinates $x_\mu\ ,\quad\mu=1,2,\dots,D-1$. Hence, the CS density thus defined necessarily exists in some $odd$ dimension $2n-1$. The CS densities are by construction $gauge\ variant$ but their variational equations turn out to be $gauge\ covariant$, and display other interesting features prominent among which is their gauge transformation properties in the case of nA fields, leading to their exploitation in quantum field theory~\cite{Deser:1982vy,Deser:1981wh}. These aspects of CS theory will not be pursued here. Instead, our aim here is to exploit the non-Abelian (nA) CS densities to construct gravitational-CS (GCS) densities and Chern-Simons gravities (CSG). Already at this stage it is clear that the passage from YM to gravity prescribed by \re{con1} is problematic in the context of gravitational CS (GCS) densities and CS gravities (CSG), since these are defined in $D-1$ dimensions with coordinates $x_\mu\ ,\quad\mu=1,2,\dots,D-1$, while the frame indices run over $a=1,2,\dots,D$ instead of $D-1$. This discrepancy is corrected in each case respectively, GCS and CSG, by sharpening the prescription \re{con1}. Following this prescription, it is clear that GCS densities and CSG systems can be defined only in {\bf odd} dimensions. Our aim in this presentation is to propose GCS densities and CSG systems in all dimensions, namely in both {\bf odd} and {\bf even} dimensions. To this end, one starts from a version of the Chern-Pontryagin (CP) densities that is defined in odd and even dimensions. These are the Higgs--CP (HCP) densities resulting from the dimensional descent of a CP density in some even dimension, down to some odd or even residual dimension. 
The Higgs field in this case is a relic of the YM connection in the higher dimension and these HCP densities are $total\ divergences$ like the CP densities. The dimensional reduction of gauge fields has a long history~\cite{Forgacs:1979zs,Schwarz:1981mb,Kapetanakis:1992hf}. The calculus of dimensional reduction used in \cite{Tchrakian:2010ar,Radu:2011zy,Tchrakian:2015pka} is an extended version of that of Ref.~\cite{Schwarz:1981mb}. In most applications of this calculus, the descent was carried out on the Yang-Mills action/energy density in higher dimensions. Application to the descent of Chern-Pontryagin densities was first carried out in~\cite{Sherry:1984ky}, applied to the third and fourth CP densities in $6$ and $8$ dimensions down to $3$ dimensions, yielding the monopole (topological) charge densities of two extended Yang-Mills--Higgs (YMH) theories on $\R^3$. Soon after, in \cite{OBrien:1988whk}, the fourth CP density in $8$ dimensions was dimensionally reduced down to $4$ dimensions, yielding the monopole~\footnote{In \cite{OBrien:1988whk}, the descent was performed both on the $4$-th CP density and the $p=2$ YM~\cite{Tchrakian:1984gq} system to yield a YMH theory supporting ``instantons'' on $\R^4$.} (topological) charge density of a YMH theory on $\R^4$. Subsequently this formulation was extended to all even and odd dimensions in Refs.~\cite{Tchrakian:2010ar,Radu:2011zy} and in Appendix A of Ref~\cite{Tchrakian:2015pka}. The $total\ divergence$ property of the HCP densities enables, $via$ the standard one-step descent, the definition of CS densities in {\bf all} dimensions. We refer to these as Higgs--CS (HCS) densities resulting from the one-step descent - from $D$ to $D-1$ dimensions. In $3+1$ dimensions in particular, two such HCS densities were employed in Refs.~\cite{Navarro-Lerida:2013pua,Navarro-Lerida:2014rwa} in $SO(5)$ and $SU(3)$ YMH models.
In arbitrary dimensions, the HCS densities are introduced in \cite{Tchrakian:2010ar,Radu:2011zy,Tchrakian:2015pka}. Recently, some HCS densities were given independently in \cite{Szabo:2014zua}. While in many cases the HCS densities of \cite{Szabo:2014zua} agree with ours~\cite{Tchrakian:2010ar,Radu:2011zy,Tchrakian:2015pka}, they differ most markedly in that they are defined in $odd$ dimensions only, while in our case $all$, even and odd, dimensions are included. The reason for this is that in \cite{Szabo:2014zua} it is the CS density~\footnote{Dimensional reduction being a calculus of symmetry imposition, it is unsafe to carry it out on a $gauge\ variant$ density. That in some examples this may not be problematic~\cite{Romanov:1978ur}, happens to be true.} in the (higher) odd dimensions which is subjected to dimensional reduction, and then by $2$ or by some other even number of dimensions. Thus in the case of \cite{Szabo:2014zua}, only odd dimensional HCS densities are defined. In our case by contrast, it is the $gauge\ invariant$ CP density that is subjected to dimensional reduction by any number of dimensions, resulting in HCS densities in all, even and odd, dimensions. It turns out that in even residual dimensions, the HCS density thus obtained is $gauge\ invariant$ like the CP density in the bulk. It is the HCS densities that are employed in constructing the gravitational HCS (GHCS) densities and the HCS gravities (HCSG) in both odd and even dimensions, applying some variants of the prescription \re{con1}. It turns out that the construction of HCSG systems is unique, in the sense that in {\bf odd} dimensions where CSG systems exist already before the introduction of the Higgs field, this CSG system is embedded in the corresponding HCSG system. The situation with the construction of the GHCS density is rather more $ad\ hoc$, such that in {\bf odd} dimensions where both GCS and GHCS densities can be constructed, the two results are different.
We have thus relegated the subject of GCS and GHCS to an Appendix. The presentation is organised as follows. In Section {\bf 2} the main building blocks, namely the CS densities for $d=3,5,7$ and the HCS densities for $d=3,4,5,6,7$, with $d=D-1$, are listed. In Section {\bf 3} some CS gravities (CSG) and HCS gravities (HCSG) are presented. In Section {\bf 4} some gravitational CS densities (GCS) and gravitational HCS (GHCS) densities are presented. The reason for the chosen order of Sections {\bf 3} and {\bf 4} is that in the context adopted here, the GCS (and GHCS) densities are objects which should be employed to modify the more fundamental CSG (and HCSG) systems. \section{Chern-Simons and Higgs--Chern-Simons densities} In this Section we present the usual Chern-Simons (CS) densities, which are defined in odd dimensions only, and the Higgs--Chern-Simons densities which are defined in all dimensions. They are defined for arbitrary gauge group, but the choice of gauge group appropriate for the passage from Yang-Mills to gravity is specified. \subsection{The $usual$ Chern-Simons densities in odd dimensions} The $usual$ Chern-Simons (CS) density in $d$ dimensions results from the one-step descent of the Chern-Pontryagin (CP) density in $D=d+1$ dimensions, $D=2n$ being even. Since our final aim is to transition from Yang-Mills to gravity, the gauge group of the non-Abelian field is fixed by the prescription \re{con1}, where $L_{ab}$ takes its values in the algebra of $SO(D)$ ($D=2n$) with $a=1,2,\dots, 2n$. The corresponding gravitational density is constructed by evaluating the trace in the CS formula. Since $D$ is even, $L_{ab}$ are represented by the Dirac matrices $\ga_a$, where these are augmented by the chiral matrix $\ga_{D+1}$. One has then the option of including $\ga_{D+1}$ in the trace of the nA CS density. For the sake of illustration, we state the CS densities in $d=3,5,7$.
Including $\ga_{D+1}$ in the trace these are \bea \Omega_{\rm CS}^{(3)}&=&\vep^{\la\mu\nu}\mbox{Tr}\,\ga_5 A_{\la}\left[F_{\mu\nu}-\frac23A_{\mu}A_{\nu}\right]\label{CS32}\\ \Omega_{\rm CS}^{(5)}&=&\vep^{\la\mu\nu\rho\si}\mbox{Tr}\,\ga_7 A_{\la}\left[F_{\mu\nu}F_{\rho\si}-F_{\mu\nu}A_{\rho}A_{\si}+ \frac25A_{\mu}A_{\nu}A_{\rho}A_{\si}\right]\label{CS52} \\ \Omega_{\rm CS}^{(7)}&=&\vep^{\la\mu\nu\rho\si\tau\ka} \mbox{Tr}\,\ga_9A_{\la}\bigg[F_{\mu\nu}F_{\rho\si}F_{\tau\ka} -\frac45F_{\mu\nu}F_{\rho\si}A_{\tau}A_{\ka}-\frac25 F_{\mu\nu}A_{\rho}A_{\si}F_{\tau\ka}\nonumber\\ &&\qquad\qquad\qquad\qquad\qquad\qquad +\frac45F_{\mu\nu}A_{\rho}A_{\si}A_{\tau}A_{\ka}-\frac{8}{35} A_{\mu}A_{\nu}A_{\rho}A_{\si}A_{\tau}A_{\ka}\bigg]\,,\label{CS72} \eea This choice is appropriate for the construction of CS gravities~\cite{Witten:1988hc,Chamseddine:1989nu,Chamseddine:1990gk} (CSG). It results in Euler type gravitational densities~\cite{Obukhov:1995eq}. The other choice is to exclude $\ga_{D+1}$ from the trace. The resulting CS densities \bea \hat\Omega_{\rm CS}^{(3)}&=&\vep^{\la\mu\nu}\mbox{Tr}\, A_{\la}\left[F_{\mu\nu}-\frac23A_{\mu}A_{\nu}\right]\label{CS31}\\ \hat\Omega_{\rm CS}^{(5)}&=&\vep^{\la\mu\nu\rho\si}\mbox{Tr}\, A_{\la}\left[F_{\mu\nu}F_{\rho\si}-F_{\mu\nu}A_{\rho}A_{\si}+ \frac25A_{\mu}A_{\nu}A_{\rho}A_{\si}\right]\label{CS51} \\ \hat\Omega_{\rm CS}^{(7)}&=&\vep^{\la\mu\nu\rho\si\tau\ka} \mbox{Tr}\,A_{\la}\bigg[F_{\mu\nu}F_{\rho\si}F_{\tau\ka} -\frac45F_{\mu\nu}F_{\rho\si}A_{\tau}A_{\ka}-\frac25 F_{\mu\nu}A_{\rho}A_{\si}F_{\tau\ka}\nonumber\\ &&\qquad\qquad\qquad\qquad\qquad\qquad +\frac45F_{\mu\nu}A_{\rho}A_{\si}A_{\tau}A_{\ka}-\frac{8}{35} A_{\mu}A_{\nu}A_{\rho}A_{\si}A_{\tau}A_{\ka}\bigg]\,,\label{CS71} \eea are appropriate for the construction of gravitational CS densities, and result in Pontryagin type gravitational densities~\cite{Obukhov:1995eq}.
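The distinction between the two traces can be made concrete numerically: with $\ga_{D+1}$ inserted, the elementary trace $\mbox{Tr}\,\ga_5\,\ga_a\ga_b\ga_c\ga_d$ is totally antisymmetric in $(a,b,c,d)$, $i.e.$ proportional to $\vep_{abcd}$ (Euler type), while without $\ga_5$ it reduces to combinations of Kronecker deltas (Pontryagin type). A small check in $D=4$, using one explicit Euclidean gamma-matrix representation (an assumption of this sketch):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)
g = [np.kron(s1, s1), np.kron(s2, s1), np.kron(s3, s1), np.kron(I2, s2)]
g5 = np.kron(I2, s3)        # the chiral matrix gamma_{D+1}

def tr4(a, b, c, d, chiral):
    # Tr(gamma5 g_a g_b g_c g_d) if chiral, else Tr(g_a g_b g_c g_d)
    M = g[a] @ g[b] @ g[c] @ g[d]
    return np.trace(g5 @ M) if chiral else np.trace(M)
```

With the chiral insertion the trace has magnitude $4$ on a permutation of $(1,2,3,4)$, flips sign under a transposition, and vanishes on any repeated index, exactly the behavior of $\vep_{abcd}$; without it the trace is nonzero precisely on paired indices, as for the delta-type identity.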
\subsection{The $Higgs$ Chern-Simons densities in all dimensions} The definitions of the Higgs--Chern-Simons (HCS) densities are, however, more general. Thus, for example, a HCS density in $d$ dimensions is derived from the Higgs--Chern-Pontryagin (HCP) density in $D=d+1$ dimensions, $D$ being even {\bf or} odd. The HCP density in question itself arises from the dimensional descent of the CP density in some even dimension $N\ge d+1$. The HCP density in $D$ dimensions employed, say $\Om_{\rm HCP}^{D,N}$, is descended from a CP density $\Om_{\rm CP}$ in even $N$ dimensions. It may thus be useful to denote the HCS density obtained $via$ the one-step descent as \[ \Om_{\rm HCS}^{(d,N)}\ ,\quad d=D-1\,. \] A detailed description of the HCP and HCS densities is given in \cite{Tchrakian:2010ar,Tchrakian:2015pka}. In the notation used below, the (square matrix valued) Higgs scalar $\F$ has dimension $L^{-1}$, as does also the constant $\eta$, which is the inverse of the radius of the sphere over which the descent is carried out. We list two such HCS densities in $d=3$, arrived at from the one-step descents from the HCP densities in $D=4$, each of them descended from the CP densities in $6$ and $8$ dimensions respectively, \bea \Omega^{(3,6)}_{\rm HCS}&=& -2\eta^2\Omega_{\rm CS}^{(3)}-\vep^{\mu\nu\la}\mbox{Tr}\,\ga_5D_{\la}\F\left(\F\,F_{\mu\nu}+F_{\mu\nu}\, \F\right)\,.\label{HCS36}\\ \Omega^{(3,8)}_{\rm HCS}&=&6\eta^4\Omega_{\rm CS}^{(3)}-\vep^{\mu\nu\la}\,\mbox{Tr}\,\ga_5\,\bigg\{ 6\,\eta^2\left(\F\,D_{\la}\F-D_{\la}\F\,\F\right)\,F_{\mu\nu}\nonumber\\ &&\hspace{20mm}-\bigg[\left(\F^2\,D_{\la}\F\,\F-\F\,D_{\la}\F\,\F^2\right)-2\left(\F^3\,D_{\la}\F-D_{\la}\F\,\F^3\right)\bigg]F_{\mu\nu} \bigg\}\,.\label{HCS38} \eea Note that the leading term in both \re{HCS36} and \re{HCS38} is the CS density \re{CS32}.
The HCS density in $d=5$, arrived at from the one-step descent of the HCP density in $D=6$, itself descended from the CP density in $8$ dimensions, is \bea \Omega^{(5,8)}_{\rm HCS}&=&2\eta^2\Om_{\rm CS}^{(5)}+\vep^{\mu\nu\rho\si\la}\,\mbox{Tr}\,\ga_7\bigg[ D_{\la}\F\left(\F F_{\mu\nu}F_{\rho\si}+F_{\mu\nu}\F F_{\rho\si}+F_{\mu\nu}F_{\rho\si}\F\right)\bigg]\label{HCS58} \eea and the HCS density in $d=7$, arrived at from the one-step descent of the HCP density in $D=8$, itself descended from the CP density in $10$ dimensions, is \bea \Omega^{(7,10)}_{\rm HCS}&=&\eta^2\Om_{\rm CS}^{(7)} +\vep^{\mu\nu\rho\si\tau\la\ka}\,\mbox{Tr}\,\ga_9D_{\ka}\F\,F_{\mu\nu}F_{\rho\si}(F_{\tau\la}\,\F+\F\,F_{\tau\la})\,.\label{HCS710} \eea Note that the leading term in \re{HCS58} is the CS density \re{CS52}, and that in \re{HCS710} it is the CS density \re{CS72}. Thus, in all odd dimensions where both a CS and a HCS density exist, the leading term in the HCS density $\Omega^{(d,N)}_{\rm HCS}$ in (odd) $d$ dimensions, pertaining to the HCP density descended from the CP density in (even) $N$ dimensions, is the CS density $\Omega_{\rm CS}^{(d)}$. The situation is, as expected, entirely different for HCS densities in even dimensions, where there are no $usual$ CS densities. In those cases the HCS densities are expressed {\bf entirely} in terms of the Higgs scalar $\F$, its covariant derivative $D_\mu\F$, and of course the curvature $F_{\mu\nu}$.
Here, we display only the HCS densities in $d=4$, arrived at from the one-step descents of the HCP densities in $D=5$, each of them descended from the CP densities in $6$ and $8$ dimensions respectively, \bea \Omega^{(4,6)}_{\rm HCS}&=&\vep^{\mu\nu\rho\si}\,\mbox{Tr}\ F_{\mu\nu}\,F_{\rho\si}\,\F\label{HCS46}\\ \Omega^{(4,8)}_{\rm HCS}&=&\vep^{\mu\nu\rho\si}\,\mbox{Tr}\bigg[ \F\left(\eta^2\,F_{\mu\nu}F_{\rho\si}+\frac29\,\F^2\,F_{\mu\nu}F_{\rho\si}+\frac19\,F_{\mu\nu}\F^2F_{\rho\si}\right) \nonumber\\ &&\qquad\qquad\qquad-\frac29 \left(\F D_{\mu}\F D_{\nu}\F-D_{\mu}\F\F D_{\nu}\F+D_{\mu}\F D_{\nu}\F\F\right)F_{\rho\si}\bigg]\,.\label{HCS48} \eea Finally, we select the gauge group appropriate for the purpose of transiting from Yang-Mills to gravity. As stated at the outset by \re{con1}, this group is $SO(D)$ ($D=2n$), the orthogonal group of the non-Abelian field defining the Chern-Pontryagin (CP) density from which the Chern-Simons (CS) density is derived, and, in the case of Higgs-CS, the gauge group of the Higgs-CP density from which the HCS is derived. In the case of CS densities the Dirac matrix representations, $e.g.$ \re{L}, are employed, such that the spin-connection and the $SO(D)$ YM connections are identified, $A_\mu^{ab}=\om_\mu^{ab}$. The situation is somewhat more involved in the case of Higgs-CS (HCS) densities, in which case we have HCS densities both in odd dimensions, $e.g.$ \re{HCS36}, \re{HCS38}, \re{HCS58}, \re{HCS710} and \re{CS32}, and in even dimensions, $e.g.$ \re{HCS46} and \re{HCS48}. Here, $D$ is even for the HCS in odd dimensions but it is odd when the HCS is in even dimensions. Besides, in this case the multiplicity of the Higgs scalar $\F$ must also be chosen~\footnote{These choices coincide with those for which monopoles on $\R^D$ are constructed~\cite{Tchrakian:2010ar}.}.
For HCS densities in odd dimensions, $i.e.$ with even $D$, \re{con1} is augmented by the choice of Higgs multiplet, \be \label{evenDoddd} A_{\mu}=-\frac12\,A_{\mu}^{ab}\,\ga_{ab}\ ,\quad {\rm and}\quad\F=2\f^a\,\ga_{a,D+1}\,, \ee while for HCS densities in even dimensions, $i.e.$ with $D$ odd, \be \label{evendoddD} A_{\mu}=-\frac12\,A_{\mu}^{ab}\,\Si_{ab}\ ,\quad {\rm and}\quad\F=2\f^a\,\Si_{a,D+1}\,, \ee where $\Si^{(\pm)}_{ab}$ are one or other chiral representations of $SO(D)$, \be \label{sigma} \Si^{(\pm)}_{ab}=\frac12\left(\eins\pm\ga_{D+1}\right)\,\ga_{ab}\ ,\quad a=1,2,\dots D\,. \ee It is in order to remark that for odd $d$ only the CS densities $\Om_{\rm CS}^{(d)}$, \re{CS32}-\re{CS72}, appear in the HCS densities $\Om_{\rm HCS}^{(d,N)}$, \re{HCS36}-\re{HCS710}. The CS densities $\hat\Om_{\rm CS}^{(d)}$, \re{CS31}-\re{CS71}, do not appear in the latter. \section{Chern-Simons gravity (CSG) and Higgs-CS gravity (HCSG)} Both the CSG and the HCSG result in gravitational systems consisting of the superposition of $p$-Einstein-Hilbert densities \re{pEH}, which we present here to be self-contained and to fix the notation. In terms of the spin-connection $\om_M^{ab}$, the covariant derivative of (some frame-vector valued field) $\f^a$ is defined as \be D_M\f^a=\pa_M\f^a+\om_M^{ab}\f^b\label{cov} \ee and employing further the $vielbein$ field $e_M^a$ there follow the definitions of the gravitational curvature and torsion \bea R_{\mu\nu}^{ab}&=&(D_{[\mu}D_{\nu]})^{ab}=\pa_\mu\om_\nu^{ab} -\pa_\nu\om_\mu^{ab}+\om_\mu^{ac}\om_\nu^{cb}-\om_\nu^{ac}\om_\mu^{cb}\,\label{Rcurv}\\ C_{\mu\nu}^a&=&D_{[\mu}e_{\nu]}^a=\pa_{\mu}e_{\nu}^a-\pa_{\nu}e_{\mu}^a+\om_\mu^{ac}e_\nu^c-\om_\nu^{ac}e_\mu^c\,,\label{tor} \eea with \be \label{index1} \mu=1,2,\dots,d\ ;\quad a=1,2,\dots,d\,. \ee To define the $p$-Einstein-Hilbert~\footnote{Aka.
Lovelock gravity.} ($p$-EH) Lagrangians, we split the indices on the Levi-Civita symbols as follows \[ \vep^{\mu_1\mu_2\dots \mu_{2p}\mu_{2p+1}\dots \mu_d}\ \quad{\rm and}\quad\vep_{a_1a_2\dots a_{2p}a_{2p+1}\dots a_d} \] such that in $d$-dimensional spacetime the Lagrangians are \be \label{pEH} {\cal L}^{(p,d)}_{\rm EH}= \vep^{\mu_1\mu_2\dots \mu_{2p}\mu_{2p+1}\dots \mu_d}\,e_{\mu_{2p+1}}^{a_{2p+1}}e_{\mu_{2p+2}}^{a_{2p+2}}\dots e_{\mu_d}^{a_d} \vep_{a_1a_2\dots a_{2p}a_{2p+1}\dots a_d}\,R_{\mu_1\mu_2}^{a_1a_2}R_{\mu_3\mu_4}^{a_3a_4}\dots R_{\mu_{2p-1}\mu_{2p}}^{a_{2p-1}a_{2p}} \ee For $d=2p$ this is a total divergence, while for $p=0$ it is the cosmological constant term. For $p=1$ it is the usual Einstein-Hilbert (EH) Lagrangian in $d$-dimensions, for $p=2$ it is the usual Gauss-Bonnet Lagrangian in $d$-dimensions, $etc$. The definitions of \re{pEH} include the Levi-Civita symbol in both the frame indices and the coordinate indices. Thus it is appropriate to adopt the definitions \re{CS32}-\re{CS72} for the CS densities since in that case evaluating the traces will result in some superpositions of (usual) $p$-EH Lagrangians. In this respect, the choice of \re{HCS36}-\re{HCS710} for the HCS densities is the appropriate one. Examples of CSG and HCSG systems are given in the next Subsections, respectively. \subsection{Chern-Simons gravity (CSG): odd dimensions} As remarked earlier, the prescription \re{con1} for transiting from YM to gravity in $d$ dimensions yields a frame-index $a=1,2,\dots d+1$ which is defective. This defect is overcome by splitting $a$ as $a=(\al,d+1)=(\al,D)$, with $\al=1,2,\dots, d$, as will be described in the following Subsections.
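As a quick illustrative check of the $p=0$ member of \re{pEH} (our own numerical sketch, not part of the original derivation): with no curvature factors, the contraction of the two Levi-Civita symbols with $d$ vielbein factors gives $d!\,\det(e)$, i.e. the volume (cosmological-constant) term.

```python
import math
from itertools import permutations

import numpy as np

# Verify numerically, for d = 3, that
#   eps^{mu_1...mu_d} eps_{a_1...a_d} e_{mu_1}^{a_1}...e_{mu_d}^{a_d} = d! det(e),
# the p = 0 (cosmological-constant) member of the p-EH family.
def levi_civita(d):
    """Rank-d Levi-Civita symbol as a dense array."""
    eps = np.zeros((d,) * d)
    for p in permutations(range(d)):
        sign = 1
        for i in range(d):
            for j in range(i + 1, d):
                if p[i] > p[j]:
                    sign = -sign
        eps[p] = sign
    return eps

d = 3
rng = np.random.default_rng(1)
e = rng.standard_normal((d, d))          # vielbein e[mu, a], random for the test
eps = levi_civita(d)
contraction = np.einsum('ijk,abc,ia,jb,kc->', eps, eps, e, e, e)
assert np.isclose(contraction, math.factorial(d) * np.linalg.det(e))
```

The same contraction pattern, with $2p$ vielbein factors replaced by curvature 2-forms, produces the higher $p$-EH members.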
In this case the prescription \re{con1}, or the first member of \re{evenDoddd}, is refined as follows, \bea A_{\mu}&=&-\frac12\,\om_{\mu}^{\al\bt}\,\ga_{\al\bt}+\ka\,e_{\mu}^{\al}\,\ga_{\al D}\Rightarrow F_{\mu\nu}= -\frac12\left(R_{\mu\nu}^{\al\bt}-\ka^2\,e_{[\mu}^{\al}\,e_{\nu]}^{\bt}\right)\ga_{\al\bt}+\ka\,C_{\mu\nu}^{\al}\ga_{\al D}\label{oddd} \eea where $C_{\mu\nu}^{\al}=D_{[\mu}e_{\nu]}^{\al}$ is the torsion. Clearly, $\al$ is now the frame-index with the correct range. The constant $\ka$ in \re{oddd} has dimension $L^{-1}$ to compensate for the difference in the dimensions of the connection and the $Vielbein$. Substituting \re{oddd} in $\Om^{n}_{\rm CS}$, \re{CS32}-\re{CS72}, yields the CSG models in $d=3,5,7$, \bea {\cal L}_{\rm CSG}^{(3)}&=& -\ka\,\vep^{\mu\nu\la}\vep_{abc}\left(R_{\mu\nu}^{ab}-\frac23\,\ka^2e_{\mu}^ae_{\nu}^b\right)e_{\la}^c\label{3csg}\\ {\cal L}_{\rm CSG}^{(5)}&=& \ka\,\vep^{\mu\nu\rho\si\la}\vep_{abcde}\left(\frac34\,R_{\mu\nu}^{ab}\,R_{\rho\si}^{cd}-\ka^2\,R_{\mu\nu}^{ab}\,e_{\rho}^ce_{\si}^d +\frac35\,\ka^4e_{\mu}^ae_{\nu}^be_{\rho}^ce_{\si}^d\right)e_{\la}^e\label{5csg}\\ {\cal L}_{\rm CSG}^{(7)}&=& -\ka\,\vep^{\mu\nu\rho\si\tau\ka\la}\vep_{abcdefg}\bigg(\frac18\,R_{\mu\nu}^{ab}\,R_{\rho\si}^{cd}\,R_{\tau\ka}^{ef} -\frac14\,\ka^2\,R_{\mu\nu}^{ab}\,R_{\rho\si}^{cd}\,e_{\tau}^ee_{\ka}^f\nonumber\\ &&\qquad\qquad\qquad\qquad+\frac{3}{10}\,\ka^4\,R_{\mu\nu}^{ab}\,e_{\rho}^ce_{\si}^de_{\tau}^ee_{\ka}^f -\frac17\,\ka^6e_{\mu}^ae_{\nu}^be_{\rho}^ce_{\si}^de_{\tau}^ee_{\ka}^f\bigg)e_{\la}^g\label{7csg} \eea Each of these is a linear sum of all the $p$-Einstein-Hilbert (EH) Lagrangians ${\cal L}_{\rm EH}^{(p,d)}$ defined in the given dimension $d$ (the $p=0$ member being the cosmological constant term).
In the notation of Appendix {\bf A}, \bea {\cal L}_{\rm CSG}^{(3)}&=&-\ka\,\left[\tau_{(1)}{\cal L}_{\rm EH}^{(1,3)}-\tau_{(0)}\ka^2{\cal L}_{\rm EH}^{(0,3)} \right]\label{csg3x}\\ {\cal L}_{\rm CSG}^{(5)}&=&\ka\,\left[\tau_{(2)}{\cal L}_{\rm EH}^{(2,5)}-\tau_{(1)}\ka^2{\cal L}_{\rm EH}^{(1,5)} +\tau_{(0)}\ka^4{\cal L}_{\rm EH}^{(0,5)} \right]\label{csg5x}\\ {\cal L}_{\rm CSG}^{(7)}&=& -\ka\,\left[\tau_{(3)}{\cal L}_{\rm EH}^{(3,7)}-\tau_{(2)}\ka^2{\cal L}_{\rm EH}^{(2,7)} +\tau_{(1)}\ka^4{\cal L}_{\rm EH}^{(1,7)}-\tau_{(0)}\ka^6{\cal L}_{\rm EH}^{(0,7)} \right]\label{csg7x} \eea where the dimensionless constants $\tau_{(p)}$ can be read off \re{3csg}, \re{5csg} and \re{7csg}. \subsection{Higgs--Chern-Simons gravity (HCSG): all dimensions} The HCS densities, displayed in Subsection {\bf 2.2}, involve both the YM and the Higgs fields. The passage of the YM-Higgs (YMH) system to gravity is prescribed by \re{evenDoddd} in odd dimensions, and \re{evendoddD} in even. We will henceforth refer to the case of odd $d=D-1$, namely to the prescription \re{oddd}, since for even $d$ the appropriate prescription can be read off \re{oddd} by formally replacing $(\ga_{\al\bt},\ga_{\al D})$ with $(\Si_{\al\bt},\Si_{\al D})$, the latter defined by \re{sigma}. As in the previous Subsection, the index $a=(\al,D)$ is split such that $\al$ now is the frame-index, and the refined version of the second member of \re{evenDoddd} we apply is \bea 2^{-1}\F&=&(\f^{\al}\,\ga_{\al,D+1}+\f\,\ga_{D,D+1})\Rightarrow\nonumber\\ &\Rightarrow& 2^{-1}D_{\mu}\F=(D_{\mu}\f^{\al}-\ka\,e_\mu^\al\,\f)\ga_{\al,D+1}+(\pa_{\mu}\f+\ka\,e_\mu^\al\,\f^\al)\ga_{D,D+1} \label{higgsevenD2} \eea where \be \label{gravcov} D_{\mu}\f^{\al}=\pa_{\mu}\f^{\al}+\om_{\mu}^{\al\bt}\f^{\bt} \ee is the gravitational covariant derivative. We employ \re{oddd} and \re{higgsevenD2} to calculate the traces in the HCS formulas in Subsection {\bf 2.2}.
Here, we display only the pair of HCSG (gravitational) systems arising from HCS densities \re{HCS36}-\re{HCS38} in $d=3$, and the pair arising from \re{HCS46}-\re{HCS48} in $d=4$. The pair in $d=3$ is \bea {\cal L}_{\rm HCSG}^{(3,6)}&=&\vep^{\la\mu\nu}\vep_{\al\bt\ga}\Bigg\{2\eta^2\ka\,\left(e_{\la}^{\ga}\,R_{\mu\nu}^{\al\bt} -\frac23\ka^2e_{\mu}^{\al}e_{\nu}^{\bt}e_{\la}^{\ga}\right)\nonumber\\ &&-\bigg[2(R_{\mu\nu}^{\al\bt}-\ka^2\,e_{[\mu}^{\al}e_{\nu]}^{\bt})\left[\f^{\ga}(\pa_{\la}\f+\ka\,e_{\la}^{\del}\f^\del) -\f (D_{\la}\f^{\ga}-\ka\, e_{\la}^\ga\f)\right]\nonumber\\ &&\qquad\qquad\qquad\qquad\qquad\qquad-4\ka\f^{\al}(D_{\la}\f^{\bt}-\ka\,e_\la^\bt\f)C_{\mu\nu}^{\ga}\bigg]\Bigg\} \label{HCSG36} \\ {\cal L}_{\rm HCSG}^{(3,8)}&=& -3\eta^2{\cal L}_{\rm HCSG}^{(3,6)}-12\vep^{\la\mu\nu}\vep_{\ga\al\bt}\left[\eta^2-(|\f^{\del}|^2+\f^2)\right]\cdot\nonumber\\ &&\qquad\qquad\qquad\cdot\bigg[\left[\f^{\ga}(\pa_{\la}\f+\ka\,e_{\la}^{\del}\f^\del) -\f (D_{\la}\f^{\ga}-\ka\, e_{\la}^\ga\f)\right]\nonumber\\ &&\qquad\qquad\qquad\qquad\qquad\qquad-4\ka\f^{\al}(D_{\la}\f^{\bt}-\ka\,e_\la^\bt\f)C_{\mu\nu}^{\ga}\bigg] \label{HCSG38} \eea and the pair in $d=4$ is \bea {\cal L}_{\rm HCSG}^{(4,6)}&=& -\vep^{\mu\nu\rho\si}\vep_{\al\bt\ga\del}\,\f\left[R_{\mu\nu}^{\al\bt}R_{\rho\si}^{\ga\del} -4\ka^2\,e_{\rho}^{\ga}e_{\si}^{\del}R_{\mu\nu}^{\al\bt}+4\ka^4\,\,e_{\mu}^{\al}e_{\nu}^{\bt}e_{\rho}^{\ga}e_{\si}^{\del}\right] \nonumber\\ &&\qquad+2\ka\vep^{\mu\nu\rho\si}\vep_{\al\bt\ga\del}\left(R_{\mu\nu}^{\al\bt} -\ka^2\,e_{[\mu}^{\al}\,e_{\nu]}^{\bt}\right)C_{\rho\si}^{\ga}\f^{\del}\label{HCSG46}\\ {\cal L}_{\rm HCSG}^{(4,8)}&=&\left[\eta^2-\frac13(|\f^{\al}|^2+\f^2)\right]{\cal L}_{\rm HCSG}^{(4,6)}-\nonumber\\ &&-\frac23\vep^{\mu\nu\rho\si}\vep_{\al\bt\ga\del}\left[\left(R_{\mu\nu}^{\al\bt} -\ka^2\,e_{[\mu}^{\al}\,e_{\nu]}^{\bt}\right)\f_{\rho}^{\ga}(\f\f_{\si}^{\del}-2\f^{\del}\f_{\si}) +\frac23C_{\mu\nu}^{\al}\f^{\bt}\f_{\rho}^{\ga}\f_{\si}^{\del}\right] \label{HCSG48} \eea where we have
used an abbreviated notation \[ \f_\mu=\pa_\mu\f\ ,\quad\f_\mu^\al=D_\mu\f^\al\ , \quad\mu=1,2,\dots d\ ,\quad \al=1,2,\dots d\,. \] Some concluding remarks are now in order. We observe the following qualitative properties of the listed HCSG (gravitational) systems: \begin{itemize} \item In the pair of models in $d=3$, namely \re{HCSG36}-\re{HCSG38}, the leading term is the usual Einstein-Hilbert Lagrangian ${\cal L}_{\rm EH}^{(1,3)}$, $viz.$ \re{3csg} or \re{csg3x}. By contrast, in \re{HCSG46} and \re{HCSG48}, no purely gravitational Lagrangians ${\cal L}_{\rm EH}^{(p,d)}$ appear without the presence of the frame-vector field $\f^\al$ and the scalar $\f$. This is not surprising, since there exist no CS densities in even dimensions. \item The models ${\cal L}_{\rm HCSG}^{(d,N)}$ pertaining to higher values of $N$ in the HCS densities $\Om_{\rm HCS}^{(d,N)}$ from which they follow feature the Lagrangians with lower $N$ nested inside. \item All HCSG models, in odd and even dimensions, feature the torsion term explicitly. This, together with the fact that they feature the gravitational covariant derivative \re{gravcov}, means that these models can sustain non-zero torsion. Whether or not torsion-free solutions may exist must be checked in each case. \item The frame-vector field $\f^\al$ and the scalar $\f$ are relics of the Higgs scalar in the Yang-Mills--Higgs systems giving rise to the HCSG models. Thus we would expect that these are gravitational coordinates and not matter fields that might result in hairy solutions. Accordingly, we would expect that these models support only black hole solutions, and not regular ones. \end{itemize} \section{Gravitational CS (GCS) densities} Gravitational CS (GCS) densities, as their name suggests, are not gravitational models like the CSG and HCSG models discussed above. They are the analogues of the non-Abelian (nA) CS densities, which are employed in various applications of nA gauge theories.
Like the latter, the GCS densities are designed to find application in the same way, in gravitational theories. Together with the CSG models discussed above, GCS densities are derived from the nA CS densities by applying the prescription \re{con1}, but not in its refined version \re{oddd}. As a result the frame indices $a=1,2,\dots,D$, $D=d+1$, in the resulting gravitational density have the wrong range, exceeding the required range $\al=1,2,\dots,d$. This defect is corrected by introducing a (rather arbitrary) truncation, which consists of setting some components of the spin-connection $\om_{\mu}^{ab}=(\om_{\mu}^{\al\bt},\om_{\mu}^{\al,D})$ equal to zero by hand according to \be \label{trunc} \om_{\mu}^{\al,D}=0\quad\Rightarrow\quad R_{\mu\nu}^{\al,D}=0\,. \ee The resulting density is expressed exclusively in terms of the components of the (gravitational) connection and curvature $(\om_\mu^{\al\bt},R_{\mu\nu}^{\al\bt})$, such that now the frame-indices $\al$ transform with the required group $SO(d)$ and not $SO(D)$. This is adopted as the definition of the gravitational Chern-Simons (GCS) density. As in Section {\bf 3}, one has again the choice~\footnote{Recall that previously in the derivation of the CSG models, the choice of \re{CS32}-\re{CS72} was made since the Levi-Civita symbol with frame indices, which results from the presence of the chiral matrix $\ga_{D,D+1}$ under the trace, was required for the description of gravitational systems. Here, we have no such constraint.} of opting for the definitions \re{CS32}-\re{CS72}, or \re{CS31}-\re{CS71}, for the nA CS densities, which prior to implementing the truncation \re{trunc} are the CS densities for gauge group $SO(D)$. A further important distinction from the nA case arises here in the gravitational case when the choice \re{CS31}-\re{CS71} is made for the nA CS densities.
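The mechanism recalled in the footnote above, namely that the Levi-Civita symbol with frame indices results from the chiral matrix under the trace, can be made concrete in the lowest even dimension $2n=2$ (our own toy check, with $\ga_1=\si_1$, $\ga_2=\si_2$, chiral matrix $\si_3$):

```python
import numpy as np

# In 2 dimensions, with gamma_ab = (1/2)[gamma_a, gamma_b], the traces are
#   Tr(gamma_ab)          = 0            (no frame Levi-Civita symbol)
#   Tr(gamma_ab sigma_3)  = 2i eps_ab    (chiral matrix produces eps_ab)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)   # chiral matrix in 2d
gamma = [s1, s2]
eps = np.array([[0, 1], [-1, 0]])                 # eps_ab, 2 dimensions
for a in range(2):
    for b in range(2):
        gab = 0.5 * (gamma[a] @ gamma[b] - gamma[b] @ gamma[a])
        assert abs(np.trace(gab)) < 1e-12
        assert abs(np.trace(gab @ s3) - 2j * eps[a, b]) < 1e-12
```

The same pattern, with longer antisymmetrised gamma products, is what distinguishes the two definitions of the nA CS densities in higher dimensions.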
As a result of gamma-matrix identities~\footnote{The identity in question, in $2n$ dimensions, is \[ \ga_{a_1a_2\dots a_nb_1b_2\dots b_{n}} =\delta_{a_1a_2\dots a_n}^{b_1b_2\dots b_{n}}\eins+\vep_{a_1a_2\dots a_nb_1b_2\dots b_{n}}\ga_{2n+1} \] where $\ga_{a_1a_2\dots a_nb_1b_2\dots b_{n}}$ is the totally antisymmetrised product of $2n$ gamma matrices in $2n$ dimensions, and $\ga_{2n+1}$ is the chiral matrix. }, it turns out that substituting \re{con1}-\re{L} in \re{CS31}-\re{CS71}, these traces vanish identically in all $4p-3$ dimensions. As a result, with this choice gravitational CS (GCS) densities can be constructed {\bf only} in $4p-1$, and {\bf not} in all odd dimensions. The choice of \re{CS32}-\re{CS72} for the nA CS densities, on the other hand, is not subject to this obstacle and it affords the definition of GCS densities in all odd dimensions. In this case, however, the resulting GCS density will feature the Levi-Civita symbol with frame indices, which, subject to the truncation \re{trunc}, collapses. Succinctly stated, GCS densities thus constructed exist in $4p-1$ dimensions only.
Applying the prescription in $d=3,5,7$, \be \label{prescr1} A_{\mu}=-\frac12\,\om_{\mu}^{ab}\,\ga_{ab}\ \,,\quad a=1,2,\dots,d+1\,, \ee namely by evaluating the traces in \re{CS31}-\re{CS71} and then implementing the truncation \re{trunc}, yields \bea \hat\Om_{\rm GCS}^{{(3)}}&=&-\frac{1}{2\cdot2!}\,\vep^{\la\mu\nu}\del_{\al\bt}^{\bar\al\bar\bt}\ \om_{\la}^{\al\bt}\left[R_{\mu\nu}^{\bar\al\bar\bt}-\frac23\left(\om_{\mu}\om_{\nu}\right)^{\bar\al\bar\bt}\right]\label{GCS31}\\ \hat\Om_{\rm GCS}^{{(5)}}&=&0\label{GCS51}\\ \hat\Om_{\rm GCS}^{{(7)}}&=&\frac{1}{2\cdot6!}\,\vep^{\la\mu\nu\rho\si\tau\ka}\hat\del_{\al\bt\ga\del}^{\bar\al\bar\bt\bar\ga\bar\del} \,\om_{\la}^{\al\bt}\bigg[R_{\mu\nu}^{\ga\del}R_{\rho\si}^{\bar\al\bar\bt}R_{\tau\ka}^{\bar\ga\bar\del} -\frac45R_{\mu\nu}^{\ga\del}R_{\rho\si}^{\bar\al\bar\bt}\left(\om_{\tau}\om_{\ka}\right)^{\bar\ga\bar\del}-\frac25 R_{\mu\nu}^{\ga\del}\left(\om_{\rho}\om_{\si}\right)^{\bar\al\bar\bt}R_{\tau\ka}^{\bar\ga\bar\del}\nonumber\\ &&\qquad\qquad\qquad\quad +\frac45R_{\mu\nu}^{\ga\del}\left(\om_{\rho}\om_{\si}\right)^{\bar\al\bar\bt}\left(\om_{\tau}\om_{\ka}\right)^{\bar\ga\bar\del} -\frac{8}{35}\left(\om_{\mu}\om_{\nu}\right)^{\ga\del}\left(\om_{\rho}\om_{\si}\right)^{\bar\al\bar\bt} \left(\om_{\tau}\om_{\ka}\right)^{\bar\ga\bar\del}\bigg]\label{GCS71} \eea $etc.$, where the symbol $\hat\del_{\al\bt\ga\del}^{\bar\al\bar\bt\bar\ga\bar\del}$ in \re{GCS71} is \be \label{symbol} \hat\del_{\al\bt\ga\del}^{\bar\al\bar\bt\bar\ga\bar\del}=\frac19\,\del_{\al\bt\ga\del}^{\bar\al\bar\bt\bar\ga\bar\del} +\frac14\,\del_{\al\bt}^{\ga\del}\del_{\bar\al\bar\bt}^{\bar\ga\bar\del}\,. \ee \subsection{Gravitational Higgs-CS (GHCS) densities} The main purpose of constructing GHCS densities would be to supply GCS densities in both odd and even dimensions, possibly including the $4p-3$ dimensions which were absent in the Higgs-free case above. Thus here too we employ the Higgs-CS (HCS) densities presented in Section {\bf 2.2}.
In Section {\bf 3}, where CS and HCS gravities were constructed, it turned out that in all odd dimensions the leading term in the HCS gravity (HCSG) was the CS gravity (CSG). In that case the choice of CS densities \re{CS32}-\re{CS72}, displaying the chiral matrix $\ga_{D+1}=(\ga_{d+2})$, was made with the aim of generating a gravitational model, which coincided with \re{HCS36}-\re{HCS710}, the defining expressions of the HCS densities in odd dimensions. If we invoke the same criterion here as in the construction of HCS gravities (HCSG), namely that the leading terms in the GHCS densities in $4p-1$ dimensions be the GCS densities, $e.g.$ $\hat\Om_{\rm CS}^{(d)}$ given by \re{CS31} and \re{CS71} in $d=3,7$, then this is achieved by deforming \re{HCS36}-\re{HCS38} and \re{HCS710}, by removing the chiral matrix under the trace, $by\ hand$. The corresponding consideration in $d=4p-3$ dimensions fails to yield a nontrivial result. We know the GCS density $\hat\Om_{\rm GCS}^{(5)}$ vanishes, $cf.$ \re{GCS51}.
To illustrate these points, consider the examples of proposed (deformed) HCS densities in $d=3,5$ \bea \hat\Omega^{(3,6)}_{\rm HCS}&=&-2\eta^2\hat\Omega_{\rm CS}^{(3)} -\vep^{\mu\nu\la}\mbox{Tr}\,D_{\la}\F\left(\F F_{\mu\nu}+F_{\mu\nu}\F\right)\,.\label{hatHCS36}\\ \hat\Omega^{(5,8)}_{\rm HCS}&=&2\eta^2\hat\Om_{\rm CS}^{(5)}+\vep^{\mu\nu\rho\si\la}\,\mbox{Tr}\, D_{\la}\F\left(\F F_{\mu\nu}F_{\rho\si}+F_{\mu\nu}\F F_{\rho\si}+F_{\mu\nu}F_{\rho\si}\F\right)\label{hatHCS58} \eea Applying the prescription \be \label{prescr2} A_{\mu}= -\frac12\,\om_{\mu}^{ab}\,\ga_{ab}\ ,\quad {\rm and}\quad\F=2\f^a\,\ga_{a,d+2}\,,\quad a=1,2,\dots,d+1\,, \ee and then implementing the truncation \re{trunc}, we find the two GHCS densities \bea \hat\Om_{\rm GHCS}^{(3,6)}&=&-2\eta^2\hat\Om_{\rm GCS}^{{(3)}}+4\vep^{\la\mu\nu}\,\f^\al R_{\mu\nu}^{\al\bt}D_\la\f^\bt\label{GHCS36}\\ \hat\Om_{\rm GHCS}^{(5,8)}&=&0+0\label{GHCS58} \eea As we see from \re{GHCS58}, the gravitational HCS (GHCS) densities vanish in $4p-3$ dimensions, just as the gravitational CS (GCS) densities do, for the same technical reason. (Not deforming the HCS density by removing the chiral matrix from under the trace does not change the situation. In that case the Levi-Civita symbol in the frame indices in $D=d+1$ dimensions appears, which vanishes when the truncation \re{trunc} is implemented.) Concerning the construction of GHCS densities in even dimensions, we employ the HCS densities $\Om_{\rm HCS}^{(4,6)}$ and $\Om_{\rm HCS}^{(4,8)}$ given by \re{HCS46}-\re{HCS48} in $d=4$. The prescription applied here is also \re{prescr2}, but with $\ga_{ab}$ formally replaced by $\Si_{ab}$, $cf.$ \re{sigma}, followed by the truncation \re{trunc}.
The result is \bea \hat\Om_{\rm GHCS}^{(4,6)}&=& -\frac14\,\vep^{\mu\nu\rho\si}\,R_{\mu\nu}^{\al\bt}\,R_{\rho\si}^{\al\bt}\,\f\label{GHCS46}\\ \hat\Om_{\rm GHCS}^{(4,8)}&=& -\vep^{\mu\nu\rho\si}\, R_{\mu\nu}^{\al\bt}\left\{\left[\frac18\left(1-\frac{1}{3}|\f^a|^2\right)R_{\rho\si}^{\al\bt} +\frac13\,\f_{\rho\si}^{\al\bt}\right]\f+\frac43\f^{\al}D_{\rho}\f^{\bt}\,\pa_{\si}\f\right\}\label{GHCS48} \eea where we have used the abbreviated notation \[ \f_{\mu\nu}^{\al\bt}=D_{[\mu}\f^\al D_{\nu]}\f^\bt\ , \quad{\rm and}\quad \f=\f^5\,. \] In even dimensions there are no exclusions like those in odd dimensions, and GHCS densities like \re{GHCS46} and \re{GHCS48} exist in all even dimensions. Some concluding remarks are in order here. \begin{itemize} \item In odd dimensions, gravitational CS (GCS) densities and gravitational HCS (GHCS) densities are defined only in $4p-1$ and not in $4p-3$ dimensions. \item In $4p-1$ dimensions, the leading term in the GHCS density is the GCS density. \item Gravitational HCS (GHCS) densities can be defined in all even dimensions, where no GCS densities exist. \item As in the case of HCS gravities (HCSG), the frame-vector field $\f^\al$ and the scalar $\f$ are gravitational degrees of freedom. \item If employed as CS densities to modify a gravitational model, the GHCS densities would be applied to the HCSG gravitational models described in Section {\bf 3}, which are described by the same gravitational fields. This is because the fields $\f^a=(\f^\al,\f)$ are gravitational degrees of freedom and their dynamics is given naturally by the HCSG models in the given dimension. \end{itemize} \section{Summary} An illustrative presentation of Chern-Simons gravities in all dimensions, and of gravitational Chern-Simons densities in all even and in $4p-1$ odd dimensions, is given.
These ``Chern-Simons densities'' are one-step descendants of (Higgs--)Chern-Pontryagin densities defined in all dimensions, which result from the dimensional descent of a Chern-Pontryagin density in some (higher) even dimension. A distinction is made between CS gravitational systems and the CS densities, and each is presented separately, followed by comments, in its own Section. \bigskip \noindent {\bf Acknowledgements}: My deepest gratitude to Eugen Radu for his unstinting support in preparing this report. My thanks to Friedrich Hehl for having introduced me to the Einstein-Cartan formulation, and to Ruben Manvelyan for helpful extended discussions. Thanks to Jorge Zanelli for helpful correspondence. \begin{small}
\section*{Abstract} Exciton diffusion length plays a vital role in the function of opto-electronic devices. Oftentimes, the domain occupied by an organic semiconductor is subject to surface measurement error. In many experiments, photoluminescence over the domain is measured and used as the observation data to estimate this length parameter in an inverse manner based on the least squares method. However, the result is sometimes found to be sensitive to the surface geometry of the domain. In this paper, we employ a random function representation for the uncertain surface of the domain. After non-dimensionalization, the forward model becomes a diffusion-type equation over a domain whose geometric boundary is subject to small random perturbations. We propose an asymptotic-based method as an approximate forward solver whose accuracy is justified both theoretically and numerically. It only requires solving several deterministic problems over a fixed domain. Therefore, for the same accuracy requirements we tested here, the running time of our approach is more than one order of magnitude smaller than that of directly solving the original stochastic boundary-value problem by the stochastic collocation method. In addition, from numerical results, we find that the correlation length of the randomness is important in determining whether a 1D reduced model is a good surrogate for the 2D model. \medskip {{\bf Subject class[2000]} {34E05, 35C20, 35R60, 58J37, 65C99}} \par {{\bf Keywords:} exciton diffusion, random domain, asymptotic methods, uncertainty quantification, organic semiconductor.} \section{Introduction} \noindent From a practical perspective, measurement error or insufficient data in many problems inevitably introduces uncertainty, which however has been overlooked for a long time.
In materials science, recent advances in manufacturing have reduced the device dimension from macroscopic/mesoscopic scales to the nanoscale, at which uncertainty becomes important \cite{Bejan:2000}. In the field of organic opto-electronics, such as organic light-emitting diodes (LEDs) and organic photovoltaics, a surge of interest has occurred over the past few decades, due to major advancements in material design, which led to a significant boost in the materials performance \cite{PopeSwenberg:1999, MyersXue:2012, SuLanWei:2012}. These materials are carbon-based compounds with other elements like N, O, H, S, and P, and can be classified into small molecules, oligomers, and polymers with atomic mass units ranging from several hundreds to at least several thousands and conjugation lengths ranging from a few nanometers to hundreds of nanometers \cite{Forrest:2004, MyersXue:2012}. At the electronic level, the exciton, a bound electron-hole pair, is the elementary energy carrier, which does not carry net electric charge. The characteristic distance that an exciton travels during its lifetime is defined as the exciton diffusion length, which plays a critical role in the function of opto-electronic devices. A small diffusion length in organic photovoltaics limits the dissociation of excitons into free charges \cite{TeraoSasabeAdachi:2007, MenkeLuhmanHolmes:2012}, while a large diffusion length in organic LEDs may limit luminous efficiency if excitons diffuse to non-radiative quenching sites \cite{Antoniadis:1994}. Generally, there are two types of experimental methods to measure the exciton diffusion length: photoluminescence quenching measurement, including steady-state and time-resolved photoluminescence surface quenching, time-resolved photoluminescence bulk quenching, and exciton-exciton annihilation \cite{Linetal:2013}, and photocurrent spectrum measurement \cite{PetterssonRomanInganas:1999}.
Exciton generation, diffusion, dissociation, recombination, exciton-exciton annihilation, and exciton-environment interaction are the typical underlying processes. Accordingly, two types of models are used to describe exciton diffusion, either differential equation based or stochastic process based. The connections between these models are systematically discussed in \cite{Chen:2016}. We focus on the differential equation model in this paper. The device used in the experiment includes two layers of organic materials. One layer of material is called the donor and the other is called the acceptor or quencher, due to the difference in their chemical properties. A typical bilayer structure is illustrated in Figure \ref{fig:dom}. These materials are thin films with thicknesses ranging from tens of nanometers to hundreds of nanometers along the $x$ direction and in-plane dimensions up to the macroscopic scale. Under the illumination of solar light, excitons are generated in the donor layer, and then diffuse. Due to the exciton-environment interaction, some excitons die out and emit photons which contribute to the photoluminescence. The donor-acceptor interface serves as the absorbing boundary while other boundaries serve as reflecting boundaries due to the tailored properties of the donor and the acceptor. As derived in \cite{Chen:2016}, such a problem can be modeled by a diffusion-type equation with appropriate boundary conditions, which will be introduced in \secref{sec:model}. Since the donor-acceptor interface is not exposed to the air/vacuum and the resolution of the surface morphology is limited by the resolution of atomic force microscopy, this interface is subject to an uncertainty with amplitude around $1\;$nm. At first glance, this uncertainty does not seem to affect the observation very much since its amplitude is much smaller than the film thickness.
However, in some scenarios \cite{Linetal:2013}, the fitted exciton diffusion lengths are sensitive to the uncertainty, which may mislead a chemist in deciding which material should be used for a specific device. Therefore, it is desirable to understand the quantitative effect of such an uncertainty on the exciton diffusion length and provide a reliable estimation method to select appropriate models for organic materials with different crystalline orders. Uncertainty quantification is an emerging research field that addresses these issues \cite{Xiu:09,LeMaitre:2010,Smith:2013}. Due to the complex nature of the problems considered here, finding analytical solutions is almost impossible, so numerical methods are very important to study these solutions. Here we give a brief introduction to existing numerical methods, which can be classified into non-intrusive sampling methods and intrusive methods. The Monte Carlo (MC) method is the most popular non-intrusive method \cite{glasserman:03}. For randomness in partial differential equations (PDEs), one first generates $N$ random samples, and then solves the corresponding deterministic problems to obtain solution samples. Finally, one estimates the statistical information by ensemble averaging. The MC method is easy to implement, but the convergence rate is merely $O(\frac{1}{\sqrt{N}})$. Later on, quasi-Monte Carlo methods \cite{Caflisch:98} and multilevel Monte Carlo methods \cite{Giles:08} have been developed to speed up the MC method. Stochastic collocation (SC) methods exploit the smoothness of PDE solutions with respect to the random variables and use certain quadrature points and weights to compute solution realizations \cite{Xiu:05,Babuska:07,Webster:08}. Exponential convergence can be achieved for smooth solutions, but the number of quadrature points increases exponentially fast as the number of random variables increases, known as the {\it curse of dimensionality}.
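The $O(1/\sqrt{N})$ Monte Carlo rate quoted above can be illustrated with a toy computation (our own example; the quantity of interest $Q(\theta)=e^{\theta}$ with $\theta\sim U(-1,1)$ is a hypothetical stand-in for an expensive PDE solve, with exact mean $\sinh(1)$):

```python
import numpy as np

# Monte Carlo estimate of E[exp(theta)], theta ~ U(-1,1); exact value sinh(1).
# The sampling error decays like O(1/sqrt(N)): a 100-fold increase in N
# shrinks the error by roughly a factor of 10.
rng = np.random.default_rng(0)
exact = np.sinh(1.0)
errors = {}
for N in (10**2, 10**4, 10**6):
    samples = np.exp(rng.uniform(-1.0, 1.0, size=N))
    errors[N] = abs(samples.mean() - exact)
```

For a random-domain PDE, each "sample" would instead require one deterministic boundary-value solve, which is what makes plain MC expensive and motivates the faster alternatives discussed next.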
Sparse grids were introduced to reduce the number of quadrature points to some extent \cite{Griebel:04}. For high-dimensional PDEs with randomness, however, the sparse grid method is still very expensive. In intrusive methods, solutions of the random PDEs are represented by certain basis functions, e.g., orthogonal polynomials. Typical examples are the Wiener chaos expansion (WCE) and polynomial chaos expansion (PCE) methods. Then, a Galerkin method is used to derive a coupled deterministic PDE system to compute the expansion coefficients. The WCE was introduced by Wiener in \cite{Wierner:38}. However, it did not receive much attention until Cameron and Martin provided the convergence analysis in \cite{cameron:47}. In the past two decades, many efficient methods have been developed based on WCE or PCE; see \cite{Ghanem:91,Xiu:03,Xiu:2006,babuska:04,WuanHou:06} and references therein. When dealing with relatively small input variability and outputs that do not exhibit high nonlinearity, perturbation-type methods are most frequently used, where the random solutions are expanded via Taylor series around their mean and truncated at a certain order \cite{Matthies:2005,Dambrine2016SINUM}. Typically, at most a second-order expansion is used because the resulting system of equations becomes complicated beyond the second order. An intrinsic limitation of the perturbation methods is that the magnitude of the uncertainties should be small. Similarly, one can also choose the operator expansion method to solve random PDEs. In the Neumann expansion method, we expand the inverse of the stochastic operator in a Neumann series and truncate it at a certain order. This type of method often strongly depends on the underlying operator and is typically limited to static problems \cite{yamazaki:1988,Xiu:09}. In this paper, we employ a diffusion-type equation with appropriate boundary conditions as the forward model and the exciton diffusion length is extracted in an inverse manner.
Surface roughness is treated as a random function. After nondimensionalization, the forward model becomes a diffusion-type equation on a domain whose geometric boundary is subject to small perturbations. Therefore, we propose an asymptotic-based method as the forward solver, with its accuracy justified both analytically and numerically. It only requires solving several deterministic problems over the regular domain without randomness. The efficiency of our approach is demonstrated by comparison with the SC method as the forward solver. Of experimental interest, we find that the correlation length of the randomness is the key parameter in determining whether a 1D surrogate is sufficient for the forward modeling. Precisely, the larger the correlation length, the more accurate the 1D surrogate. This explains why the 1D surrogate works well for organic semiconductors with high crystalline order. The rest of the paper is organized as follows. In \secref{sec:model}, a diffusion-type equation is introduced as the forward model and the exciton diffusion length is extracted by solving an inverse problem. The domain mapping method and the asymptotic-based method are introduced in \secref{sec:method}, with simulation results presented in \secref{sec:result}. Conclusions are drawn in \secref{sec:conclusion}. \section{Model}\label{sec:model} \noindent In this section, we introduce a diffusion-type equation over the random domain as the forward model; the extraction of the exciton diffusion length is done by solving an inverse problem. \subsection{Forward model: A diffusion-type equation over the random domain} \noindent Consider a thin layer of donor located over the two-dimensional domain $\set{(x,z): x\in (h(z,\omega),d), z\in (0,L)}$, where $L\gg d$. Refer to Figure \ref{fig:dom}.
The donor-acceptor interface, $\Gamma$, is described by $x=h(z,\omega)$, a random field with period $L$: \begin{equation} \label{eqn:randominterface} h(z,\omega) = \bar{h}\sum_{k=1}^{K}\lambda_{k}\theta_k(\omega)\phi_k(z), \end{equation} where $\{\theta_k\}$ are i.i.d. random variables, $\phi_k(z) = \sin(2k\pi\frac{z}{L})$, and $\lambda_{k}>0$ are eigenvalues that control the decay speed of the physical modes $\phi_k(z)$. In principle, one could also add the cosine modes in the basis functions $\set{\phi_k}$. We here only use the sine modes for simplicity. In the experiment, $\bar{h}\sim 1\;$nm due to the surface roughness limited by the resolution of atomic force microscopy. The thickness $d$ varies between $10\sim100\;$nm in a series of devices. Therefore, the dimensionless parameter characterizing the ratio between the measurement uncertainty and the film thickness, $$\epsilon = \bar{h}/d,$$ ranges over $ [0.01, 0.1].$ So, it is assumed that the amplitude $\bar{h}\ll d$ in our models. The in-plane dimensions of the donor layer are of the order of centimeters in the experiment, but we choose $L\sim 100\;$nm and set up the periodic boundary condition along the $z$ direction based on the following two reasons. First, the current work treats the exciton diffusion length as a homogeneous macroscopic quantity, which is a good approximation for ordered structures. For example, small molecules are the simplest and can form crystal structures under careful fabrication conditions \cite{DirksenRing:1991, Rodetal:2013}. Second, the light intensity and hence the exciton generation density is a single-variable function depending on $x$ only. \begin{figure}[htbp] \includegraphics[width=0.5\textwidth]{domain.pdf} \caption{The donor-acceptor bilayer device with film thickness $d$ along the $x$ direction and in-plane dimension $L$ along the $z$ direction under the illumination of sunlight. One realization of the donor-acceptor interface with uncertainty is described by $x=h(z)$.
$G(x)$ is the normalized exciton generation density, which depends on $x$ only and is a decreasing function due to the photon absorption in the donor layer.} \label{fig:dom} \end{figure} Define the domain $\mathcal{D}_\epsilon:=\set{(x,z): x\in (h(z,\omega),d), z\in (0,L)}$. The diffusion-type equation reads \begin{numcases} { \label{eqn:rPDE2d} } {\sigma^2} \left( u_{xx}(x,z)+ u_{zz}(x,z)\right) - u(x,z) + G(d-x) = 0, & $(x,z)\in \mathcal{D}_\epsilon$ \label{eqn:rPDE2d1} \\ u_x(d, z) = 0, ~\ ~ u(h(z,\omega), z) = 0, & $ 0<z<L$ \label{eqn:rPDE2d2} \\ u(x, z) = u(x, z+L), & $ h(z,\omega) < x <d$. \label{eqn:rPDE2d3} \end{numcases} Here ${\sigma}$ is the exciton diffusion length, an unknown parameter, and the ${\sigma^2}$ term in \eqref{eqn:rPDE2d1} describes the exciton diffusion. Exciton-environment interaction makes some excitons emit phonons and die out, which is described by the term $-u$ in \eqref{eqn:rPDE2d1}. The normalized exciton generation function $G$ is $\mathbf{R}^+$-valued and smooth on $\mathbf{R}^+\cup\set{0}$. By solving the Maxwell equations over the layered device, one finds that $G(x)$ is a combination of exponential functions which decay away from $0$ \cite{Born:1965}. The boundary $x=d$ serves as the reflective boundary, so a homogeneous Neumann boundary condition is used there, while $x=h(z,\omega)$ serves as the absorbing boundary, so a homogeneous Dirichlet boundary condition is used in \eqref{eqn:rPDE2d2}. A periodic boundary condition is imposed along the $z$ direction in \eqref{eqn:rPDE2d3}. It is not difficult to see that the solution $u$ to \eqref{eqn:rPDE2d} is strictly positive in $\mathcal{D}_\epsilon$ by the maximum principle. The (normalized) photoluminescence is computed by the formula \begin{equation} \label{eqn:PL2d} \textmd{I}[{\sigma},d]=\frac{1}{L}\int_{0}^L\int_{h(z,\omega)}^d u(x,z)\,\mathrm{d}x\,\mathrm{d}z.
\end{equation} If the interface $\Gamma$ is random but entirely flat, i.e., $h(z,\omega)=\xi(\omega)$ for some random variable $\xi$, then the domain is a rectangle $(\xi(\omega),d)\times(0,L)$. Notice that in \eqref{eqn:rPDE2d}, $G$ is a function of $x$ only. Then, \eqref{eqn:rPDE2d} actually reduces to the following 1D problem \begin{numcases} { \label{eqn:rPDE1d}} {\sigma^2} u_{xx}(x) - u(x) + G(d-x) = 0,\ \ x\in (\xi,d) \\ u_x(d) = 0, \quad u(\xi) = 0. \end{numcases} For the 1D model \eqref{eqn:rPDE1d}, since $u$ is independent of $z$, the photoluminescence defined by \eqref{eqn:PL2d} reduces to \begin{equation} \label{eqn:rPL1d} \textmd{I}({\sigma},d)= \int_{\xi}^d u(x) \,\mathrm{d} x. \end{equation} This is why the normalizing factor $1/L$ is used in \eqref{eqn:PL2d}. Due to its simple analytical formula, the 1D model given by \eqref{eqn:rPDE1d} and \eqref{eqn:rPL1d} has been widely used to fit experimental data for photoluminescence measurements \cite{Linetal:2013} and photocurrent measurements \cite{Guideetal:2013}. Since the roughness of the interface is taken into account, problem \eqref{eqn:rPDE2d} with the random interface $\Gamma$ is viewed as a generalized and more realistic model. The 1D model \eqref{eqn:rPDE1d} retains the uncertainty of the boundary but fails to include the spatial variation of the donor-acceptor interfacial layer. We are interested in identifying under which condition the 1D model can be viewed as a good surrogate for the 2D model and how this condition can be related to the properties of organic semiconductors. \subsection{Inverse problem: Extraction of exciton diffusion length} \noindent In the experiment, photoluminescence data $\{\wt{\textmd{I}}_i\}_{i=1}^N$ are measured for a series of bilayer devices with different thicknesses $\{d_i\}_{i=1}^N$. Here $i$ denotes the $i$-th observation in the experiment, with $d_i$ the thickness of the donor layer.
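To make the 1D surrogate concrete, the boundary value problem \eqref{eqn:rPDE1d} and the photoluminescence \eqref{eqn:rPL1d} can be sketched with a short finite-difference solve. This is a minimal illustration only, not the production solver: the generation profile $G$, the parameter values, and the helper names below are assumptions chosen for the example.

```python
import numpy as np

def solve_1d(sigma, d, xi, G, n=200):
    """Finite differences for sigma^2 u'' - u + G(d - x) = 0 on (xi, d),
    with u(xi) = 0 (absorbing) and u'(d) = 0 (reflective, via ghost node)."""
    x = np.linspace(xi, d, n + 1)
    h = x[1] - x[0]
    c = sigma**2 / h**2
    A = np.zeros((n + 1, n + 1))
    b = -G(d - x)                                  # right-hand side
    A[0, 0], b[0] = 1.0, 0.0                       # Dirichlet: u(xi) = 0
    for i in range(1, n):
        A[i, i - 1], A[i, i], A[i, i + 1] = c, -2.0 * c - 1.0, c
    A[n, n - 1], A[n, n] = 2.0 * c, -2.0 * c - 1.0  # Neumann: u'(d) = 0
    return x, np.linalg.solve(A, b)

def trapezoid(f, x):                               # I = int_xi^d u dx
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

G = lambda s: np.exp(-s)                           # illustrative profile
x, u = solve_1d(sigma=2.0, d=10.0, xi=0.1, G=G)
I = trapezoid(u, x)
```

As in the continuous model, the discrete solution is positive by the maximum principle, so the computed photoluminescence is positive.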
${\sigma}$ is the unknown parameter, and the optimal ${\sigma}$ is expected to reproduce the experimental data $\{d_i, \wt{\textmd{I}}_i\}_{i=1}^N$ in a proper sense. To achieve this, we propose the following minimization problem in the sense of mean square error \begin{equation} \label{min1} \min_{\sigma} ~J({\sigma}) =\frac{1}{N}\sum_{i=1}^N\ \left( \mathbb{E}_\omega [ \textmd{I}({\sigma}, d_i) ]- \wt{\textmd{I}}_i \right)^2. \end{equation} We use Newton's method to solve \eqref{min1} for ${\sigma}$. Given ${\sigma}^{(0)}$, for $n=1,2,\ldots,$ until convergence, we iterate \begin{equation} \label{estimate_sigma} {\sigma}^{(n)} = {\sigma}^{(n-1)} - \alpha_n \dfrac{\frac{\partial}{\partial {\sigma}} J({\sigma}^{(n-1)})} {\frac{\partial^2}{\partial {\sigma^2}}J({\sigma}^{(n-1)})}. \end{equation} Here $\alpha_n\in (0,1]$ is given by a line search \cite{Nocedal:1999}. Details are given in Appendix \ref{sec:Newton}. \section{Methods for solving the forward model}\label{sec:method} \noindent In the photoluminescence experiment, the surface roughness is very small compared to the film thickness, i.e., $\bar{h}\sim 1\;$nm and $10\le d\le 100\;$nm. Based on this observation, we propose an asymptotic-based method for solving the diffusion-type equation over the random domain. For comparison, we first describe the domain mapping approach \cite{Xiu:2006}. \subsection{Domain mapping method}\label{sec:gpc} \noindent To handle the random domain $\mathcal{D}_\epsilon$, we introduce the following transformation \[ \tilde{y} = \frac{x - h(z,\omega)}{d-h(z,\omega)}, \quad \tilde{z} = z/L, \] so that $\mathcal{D}_\epsilon$ becomes the unit square $\mathcal{D}_{\textrm{s}}=(0,1)\times(0,1)$. Under this change of variables, Eq.
\eqref{eqn:rPDE2d} becomes the following PDE with random coefficients (we still write $y$ and $z$ for $\wt{y}$ and $\wt{z}$, respectively) \begin{equation}\label{eqn:PDE2d} {\sigma^2}\mathcal{L} u - u + g(y,z,\omega) =0, \quad (y, z) \in \mathcal{D}_{\textrm{s}}, \end{equation} where, for each random element $\omega$ in the probability space, the spatial differential operator is \begin{equation}\label{L} \begin{split} \mathcal{L}:=& \frac{(1-y)^2(h')^2+1}{(d-h)^2} \partial_{yy} +\frac{1}{L^2} \partial_{zz} -\frac{2}{L}\frac{(1-y)h'}{(d-h)}\partial_{yz} \\ & -2\frac{(1-y)(h')^2}{(d-h)^2}\partial_y -\frac{(1-y)h''}{(d-h)}\partial_y, \end{split} \end{equation} and \begin{equation}\label{g} g(y,z,\omega):=G((1-y)(d-h(z,\omega))). \end{equation} The boundary conditions are \begin{equation}\label{eqn:bc2d} \begin{split} \partial_y u(1,z) = 0, \quad u(0,z)= 0, \quad z\in (0,1), \\ u(y,z) = u(y,z+1), \quad y\in (0,1). \end{split} \end{equation} The photoluminescence defined in \eqref{eqn:PL2d} is then transformed into \begin{equation} \label{eqn:PL2D} \textmd{I}({\sigma}, d)= \int_{0}^1\int_{0}^1 u(y,z)(d-h(z,\omega))\text{d}y\text{d}z. \end{equation} \begin{remark} In 1D, the change of variables $y = \frac{x - \xi}{d-\xi}$ likewise transforms \eqref{eqn:rPDE1d} into a differential equation with random coefficients over the unit interval, \begin{equation}\label{eqn:PDE1d} {\sigma^2}\mathcal{L}_1 u(y) - u(y) + G((1-y)(d-\xi)) = 0, \quad y\in (0,1), \end{equation} with \begin{equation}\label{L1d} \mathcal{L}_1:= \frac{1}{(d-\xi)^2} d_{yy} \end{equation} and the boundary conditions \begin{equation}\label{eqn:bc1d} u_y(1) = 0, \quad u(0) = 0. \end{equation} Accordingly, the photoluminescence can be written as \begin{equation} \label{eqn:PL1d} \textmd{I}({\sigma}, d)=(d-\xi)\int_{0}^1 u(y)\text{d}y. \end{equation} \end{remark} \begin{remark} The generation term in \eqref{g} depends on both $y$ and $z$ after the change of variables.
We expect some dimensional effect on the estimation of ${\sigma}$, which will be carefully examined in \secref{sec:result}. \end{remark} \subsection{Finite difference method for the model problem} \noindent We use a finite difference method to discretize the forward model \eqref{eqn:PDE2d} developed in \secref{sec:gpc}. We partition the domain $\mathcal{D}_{\textrm{s}}=[0,1]\times[0,1]$ into $(N_y+1)\times (N_z+1)$ grid points with mesh sizes $h_y=\frac{1}{N_y}$ and $h_z=\frac{1}{N_z}$. Denote by $u_{i,j}$ the numerical approximation of $u(y_i,z_j)$, where $y_i=(i-1)h_y$ and $z_j=(j-1)h_z$ with $i=1,...,N_y+1$ and $j=1,...,N_z+1$, respectively. For the discretization in space, we use a second-order, centered-difference scheme \cite{morton:2005}. We introduce the difference operators \[ D_{0}^{y}u_{i,j}=\frac{u_{i+1,j}-u_{i-1,j}}{2h_y}, \quad D_{-}^{y}u_{i,j}=\frac{u_{i,j}-u_{i-1,j}}{h_y}, \quad D_{+}^{y}u_{i,j}=\frac{u_{i+1,j}-u_{i,j}}{h_y}. \] The operators $D_{0}^{z}$, $D_{-}^{z}$, and $D_{+}^{z}$ are defined similarly. For each $\omega \in \Omega$ and each interior mesh point $(i,j)$ with $2 \leqslant i \leqslant N_y, 2 \leqslant j \leqslant N_z$, we discretize the forward model \eqref{eqn:PDE2d} as \begin{align} &\sigma^2 \frac{(1-y_i)^2(h')^2+1}{(d-h)^2}D_+^y D_-^y u_{i,j} + \frac{\sigma^2 }{L^2}D_+^z D_-^z u_{i,j} - \frac{2\sigma^2 }{L}\frac{(1-y_i)h'}{(d-h)}D_0^y D_0^z u_{i,j}\nonumber \\ & - \left(2\sigma^2 \frac{(1-y_i)(h')^2}{(d-h)^2} + \sigma^2 \frac{(1-y_i)h''}{(d-h)} \right) D_0^y u_{i,j} - u_{i,j} =- g(y_i,z_j,\omega), \label{eqn:PDE2d_FDM} \end{align} where $h$, $h'$, and $h''$ are evaluated at $z_j$. We then discretize the boundary conditions \eqref{eqn:bc2d} on $\partial\mathcal{D}_{\textrm{s}}$. The Dirichlet boundary condition on $y=0$ gives $u_{1,j}=0$, $1 \leqslant j \leqslant N_z+1$.
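As a quick sanity check on the stencils above, the second-order accuracy of $D_0$ and $D_+D_-$ can be verified on a smooth function; the test function $\sin$ and the evaluation point are arbitrary choices for this illustration.

```python
import numpy as np

# centered first difference D0 and second difference D+D- on f = sin
f, df, d2f = np.sin, np.cos, lambda x: -np.sin(x)
x0, hs = 0.7, (0.1, 0.05)
e1 = [abs((f(x0 + h) - f(x0 - h)) / (2 * h) - df(x0)) for h in hs]
e2 = [abs((f(x0 + h) - 2 * f(x0) + f(x0 - h)) / h**2 - d2f(x0)) for h in hs]
# halving h should divide a second-order error by roughly 4
r1, r2 = e1[0] / e1[1], e2[0] / e2[1]
```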
For the Neumann boundary condition on $y=1$, we introduce ghost nodes at $(y_{N_y+2},z_{j})$ and obtain the second-order accurate finite difference approximation $\frac{u_{N_y+2,j}-u_{N_y,j}}{2h_y}=0$. The values $u_{N_y+2,j}$ at the ghost nodes are then eliminated by combining this relation with Eq. \eqref{eqn:PDE2d_FDM}. Finally, the periodic boundary condition along the $z$ direction gives $u_{i,N_z+1}=u_{i,1}$. We thus solve a system of $N_y(N_z+1)$ linear equations for $\{u_{i,j}\}$ with $2 \leqslant i \leqslant N_y+1$ and $1 \leqslant j \leqslant N_z+1$. The equations have a regular structure, each involving at most nine unknowns. Thus the corresponding system matrix is sparse and the system can be solved efficiently using existing numerical solvers. After obtaining $\{u_{i,j}\}$, we use the 2D trapezoidal quadrature rule to compute the photoluminescence $\textmd{I}({\sigma}, d)$ defined in \eqref{eqn:PL2D}. In this paper, we choose the sparse-grid based SC method \cite{Griebel:04,Webster:08} to discretize the stochastic dimension in Eq. \eqref{eqn:PDE2d}. The expectation of $u(y,z,\omega)$ is then approximated by \begin{equation} \mathbb{E}[u(y,z,\omega)] \approx \sum_{q=1}^{Q} u(y,z,s_q)w_q, \label{eqn:sparse_grid} \end{equation} where $s_q$ are the sparse-grid quadrature points, $w_q$ are the corresponding weights, and $Q$ is the number of sparse-grid points. Other functionals of $u(y,z,\omega)$ can be computed in the same way. When the solution $u(y,z,\omega)$ is smooth in the stochastic dimension, the SC method provides very accurate results. \subsection{An asymptotic-based method}\label{sec:asymptotic} \noindent If we rewrite Eq. \eqref{eqn:rPDE2d} in nondimensionalized form with the change of variables $\tilde{x} = x/d$ and $\tilde{z} = z/L$, the domain $\mathcal{D}_\epsilon$ becomes \[ \mathcal{D}_{\textrm{s}, \epsilon}:=\set{(x,z) \in (\epsilon\tilde{h}(z,\omega),1)\times(0,1)}, \] where $\epsilon= \bar{h}/d$.
When $\epsilon = 0$, $\mathcal{D}_{\textrm{s}, \epsilon}$ becomes $\mathcal{D}_{\textrm{s}, 0}=\mathcal{D}_{\textrm{s}}=(0,1)\times(0,1)$. Here \begin{equation}\label{eqn:htilde} \tilde{h}(z,\omega) = \sum_{k=1}^{K} \lambda_k\theta_k(\omega)\phi_k(z), \end{equation} where $K$ is the number of modes in the interface model. As discussed in \secref{sec:model}, $\epsilon\sim 0.01-0.1$. Therefore, it is meaningful to derive the asymptotic equations as $\epsilon\rightarrow 0$. For ease of description, we list the main results below. The main idea is: (1) we rewrite Eq. \eqref{eqn:rPDE2d} over $\mathcal{D}_{\textrm{s}, \epsilon}$; (2) with appropriate extension/restriction of solutions on the fixed domain $\mathcal{D}_{\textrm{s}}$, we obtain a Taylor series with each term satisfying a PDE of the same type, with the boundary condition involving lower order terms; (3) we apply the inverse transform to each term and change the domain $\mathcal{D}_{\textrm{s}}$ back to $\mathcal{D}_0=(0,d)\times(0,L)$. The detailed derivation can be found in Appendix \ref{sec:derivation} for completeness. Interested readers can find a systematic study of asymptotic expansions for more general problems in \cite{Chen:2017}. The asymptotic expansion over the fixed domain $\mathcal{D}_0$ is of the form \begin{equation} \label{series} w_\epsilon(x,z)=\sum_{n=0}^\infty \epsilon^n w_{n}(x,z) \quad \text{for }(x,z)\in \overline{\mathcal{D}_0}. \end{equation} The equation for each $w_n$ can be derived in a sequential manner. Only the first three terms are listed here; more details are included in Appendix \ref{sec:derivation}.
The leading term $w_0(x,z)$ is the solution to the boundary value problem \begin{equation}\label{eqn:w0} \begin{cases} &\sigma^2\partial_{xx}w_0+\sigma^2\partial_{zz}w_0-w_0 +G(d-x)=0 \quad \text{in } \mathcal{D}_0,\\ &\partial_x w_0(d,z)=0, \\ &w_{0}(0,z)=0, \quad \text{for } 0\leqslant z\leqslant L,\\ &w_0(x,z+L)=w_0(x,z), \quad \text{for } 0\leqslant x\leqslant d, \end{cases} \end{equation} and $w_1(x,z,\omega)$ solves \begin{equation}\label{eqn:w1} \begin{cases} &\sigma^2\partial_{xx}w_1+\sigma^2\partial_{zz}w_1-w_1 =0 \quad \text{in } \mathcal{D}_0,\\ &\partial_x w_1(d,z,\omega)=0, \\ &w_{1}(0,z,\omega)=-d\tilde{h}(z,\omega)\partial_x w_{0}(0,z), \quad \text{for } 0\leqslant z\leqslant L,\\ &w_1(x,z+L,\omega)=w_1(x,z,\omega), \quad \text{for } 0\leqslant x\leqslant d. \end{cases} \end{equation} $w_2(x,z,\omega)$ is the solution to the following boundary value problem \begin{equation}\label{eqn:w2} \begin{cases} &\sigma^2\partial_{xx}w_2+\sigma^2\partial_{zz}w_2-w_2 =0 \quad \text{in } \mathcal{D}_0,\\ &\partial_x w_2(d,z,\omega)=0, \quad \text{for } 0\leqslant z\leqslant L,\\ &w_{2}(0,z,\omega)=-d\tilde{h}(z,\omega)\partial_x w_{1}(0,z,\omega)+\frac{(d\tilde{h}(z,\omega))^2}{2\sigma^2}G(d), \ \text{for } 0\leqslant z\leqslant L,\\ &w_2(x,z+L,\omega)=w_2(x,z,\omega), \quad \text{for } 0\leqslant x\leqslant d. \end{cases} \end{equation} \begin{remark} As demonstrated in Eqs. \eqref{eqn:w0}, \eqref{eqn:w1}, and \eqref{eqn:w2}, the asymptotic expansion in \eqref{series} requires a sequential construction from lower order terms to higher order terms, and the partial derivatives of lower order terms appear in the boundary conditions for higher order terms. Numerically, we use the second-order finite difference scheme for \eqref{eqn:w0}, \eqref{eqn:w1}, and \eqref{eqn:w2}. For the boundary conditions, we use the one-sided Beam-Warming scheme to discretize $\partial_x w_{0}(0,z)$ and $\partial_x w_{1}(0,z,\omega)$, so the overall numerical schemes are still of second order accuracy.
\end{remark} Define $v^{[n]}=\sum_{k=0}^n \epsilon^k w_k$. Note that $w_0$ is a function of $(x,z)$ only. The zeroth order approximation of the photoluminescence is \begin{equation} \label{I0} \textmd{I}[u]\approx\textmd{I}[v^{[0]}] = \frac1L\int_{\mathcal{D}_\epsilon} w_0(x,z)\,\mathrm{d} x\mathrm{d} z\approx\frac1L\int_{\mathcal{D}_0} w_0(x,z)\,\mathrm{d} x\mathrm{d} z=:\textmd{I}_0[v^{[0]}], \end{equation} and so \begin{equation} \label{EI0} \mathbb{E}[\textmd{I}[u]]\approx\mathbb{E}\left[\textmd{I}_0[v^{[0]}]\right]=\textmd{I}_0[w_0]. \end{equation} For $k=1,2,\ldots, K$, let $w_{1,k}(x,z)$ be the solution to \eqref{eqn:w1} with $\phi_k(z)$ in place of $\tilde{h}(z,\omega)$: \begin{equation}\label{eqn:w1k} \begin{cases} &\sigma^2\partial_{xx}w_{1,k}+\sigma^2\partial_{zz}w_{1,k}-w_{1,k} =0 \quad \text{in } \mathcal{D}_0,\\ &\partial_x w_{1,k}(d,z)=0, \\ &w_{1,k}(0,z)=-d\phi_k(z)\partial_x w_{0}(0,z), \quad \text{for } 0\leqslant z\leqslant L,\\ &w_{1,k}(x,z+L)=w_{1,k}(x,z), \quad \text{for } 0\leqslant x\leqslant d. \end{cases} \end{equation} Then by linearity, the solution $w_1$ to \eqref{eqn:w1} with $\tilde{h}$ given by \eqref{eqn:htilde} can be expressed as \begin{equation}\label{eqn:KLu1} w_1(x,z,\omega)=\sum_{k=1}^K \lambda_k\theta_k(\omega)w_{1,k}(x,z).
\end{equation} Hence the first order approximation of the photoluminescence becomes \begin{equation} \label{I1} \begin{split} \textmd{I}[u]&\approx\textmd{I}[v^{[1]}]=\frac1L\int_{\mathcal{D}_\epsilon}v^{[1]}(x,z,\omega)\,\mathrm{d} x\mathrm{d} z\approx\frac1L\int_{\mathcal{D}_0}v^{[1]}(x,z,\omega)\,\mathrm{d} x\mathrm{d} z\\ &=\frac1L\int_{\mathcal{D}_0} \left[w_0(x,z) +\epsilon w_1(x,z,\omega)\right]\,\mathrm{d} x\mathrm{d} z \\ &=\frac1L\int_{\mathcal{D}_0} w_{0}(x,z)\,\mathrm{d} x\mathrm{d} z+ \frac \epsilon L\sum_{k=1}^K \lambda_k\theta_k(\omega)\int_{\mathcal{D}_0} w_{1,k}(x,z)\,\mathrm{d} x\mathrm{d} z \\ &=\textmd{I}_0[w_0]+\epsilon\sum_{k=1}^K \lambda_k\theta_k(\omega) \textmd{I}_0[w_{1,k}] =:\textmd{I}_1[v^{[1]}], \end{split} \end{equation} and so \begin{equation} \label{EI1} \mathbb{E}[\textmd{I}[u]]\approx\mathbb{E}\left[\textmd{I}_1[v^{[1]}]\right] =\textmd{I}_0[w_0]+\epsilon\sum_{k=1}^K\lambda_k\mathbb{E}[\theta_k] \textmd{I}_0[w_{1,k}]. \end{equation} Next, we consider the second order approximation of the photoluminescence. Since $\tilde{h}$ and $w_1$ are given by \eqref{eqn:htilde} and \eqref{eqn:KLu1}, the boundary condition for $w_2$ at $x=0$ can be written as \[ w_2=\sum_{j,k=1}^K \lambda_j\lambda_k\theta_j\theta_k\left(-d\phi_j\partial_xw_{1,k}+\frac{G(d)}{2\sigma^2}d^2\phi_j\phi_k\right). \] Introduce $w_{2,j,k}(x,z)$ as the solution to the boundary value problem \eqref{eqn:w2} with the boundary condition at $x=0$ replaced by \[ w_{2,j,k}=-d\phi_j\partial_xw_{1,k}+\frac{G(d)}{2\sigma^2}d^2\phi_j\phi_k. 
\] Then \[ w_2(x,z,\omega)=\sum_{j,k=1}^K \lambda_j\lambda_k\theta_j(\omega)\theta_k(\omega) w_{2,j,k}(x,z), \] and consequently, the second order approximation of the photoluminescence is \begin{equation} \label{I2} \begin{split} \textmd{I}[u]\approx~&\textmd{I}[v^{[2]}]=\frac1L\int_{\mathcal{D}_\epsilon} v^{[2]}(x,z,\omega)\,\mathrm{d} x\mathrm{d} z\\ \approx~&\frac1L\int_{\mathcal{D}_0} v^{[2]}(x,z,\omega)\,\mathrm{d} x\mathrm{d} z-\frac \epsilon{2L}\int_0^L v^{[2]}(0,z,\omega)h(z,\omega)\,\mathrm{d} z\\ \approx~&\frac1L\int_{\mathcal{D}_0} [w_0+\epsilon w_1+\epsilon^2 w_2]\,\mathrm{d} x\mathrm{d} z -\frac \epsilon{2L}\int_0^L [w_0+\epsilon w_1](0,z,\omega)h(z,\omega)\,\mathrm{d} z\\ =~&\frac1L\int_{\mathcal{D}_0} [w_0+\epsilon w_1+\epsilon^2 w_2]\,\mathrm{d} x\mathrm{d} z+\frac {\epsilon^2}{2L}\int_0^L [d\tilde{h}(z,\omega)]^2\partial_x w_0(0,z)\,\mathrm{d} z\\ =~&\textmd{I}_0[w_0]+\epsilon\sum_{k=1}^K\lambda_k\theta_k(\omega) \textmd{I}_0[w_{1,k}]+\epsilon^2\sum_{j,k=1}^K \lambda_j\lambda_k\theta_j(\omega) \theta_k(\omega) \textmd{I}_0[w_{2,j,k}]\\ &+\frac {\epsilon^2d^2}{2L}\sum_{j,k=1}^K \lambda_j\lambda_k\theta_j(\omega)\theta_k(\omega)\int_0^L \phi_j(z)\phi_k(z)\partial_x w_0(0,z)\,\mathrm{d} z\\ =:&\textmd{I}_2[v^{[2]}], \end{split} \end{equation} and we have \begin{equation} \label{EI2} \begin{split} \mathbb{E}[\textmd{I}[u]]\approx\mathbb{E}\left[\textmd{I}_2[v^{[2]}]\right]=&\textmd{I}_0[w_0]+ \epsilon\sum_{k=1}^K\lambda_k\mathbb{E}[\theta_k] \textmd{I}_0[w_{1,k}]+\epsilon^2\sum_{j,k=1}^K \lambda_j\lambda_k\mathbb{E}[\theta_j\theta_k] \textmd{I}_0[w_{2,j,k}]\\ &+\frac {\epsilon^2d^2}{2L}\sum_{j,k=1}^K \lambda_j\lambda_k\mathbb{E}[\theta_j\theta_k]\int_0^L \phi_j(z)\phi_k(z)\partial_x w_0(0,z)\,\mathrm{d} z. \end{split} \end{equation} In general, $w_n$ can be written as the sum of $K^n$ functions, each of which solves a deterministic problem. The approximation accuracy of a finite series in \eqref{series} is given by the following theorem. 
The proof can be found in \cite{Chen:2017}. \begin{theorem}\label{theorem} Assume $\mathcal{D}_0\subset\mathcal{D}_\epsilon\subset\mathcal{D}_{\epsilon_0}$ with $\epsilon\in [0,\epsilon_0]$ and $\partial\mathcal{D}_0\in C^{\infty}$. Also assume $G\in C^{\infty}(\overline{\mathcal{D}_{\epsilon_0}})$ and $h\in C^{\infty}(\partial\mathcal{D}_0)$. Then, $\forall n, m\geqslant 0$, \begin{equation}\label{eqn:vn err} \bigl\| v^{[n]}(\omega) -u(\omega) \bigr\|_{H^m(\mathcal{D}_0)} = \mathcal{O} (\epsilon^{n+1}) \quad \mathbb{P}-\textrm{a.e.\ } \omega\in\Omega, \end{equation} where $u$ is the solution to \eqref{eqn:rPDE2d} and $v^{[n]}=\sum_{k=0}^n \epsilon^k w_k$. \end{theorem} To proceed, let us recall the definition of Bochner spaces. \begin{definition} Given a real number $p\geqslant 1$ and a Banach space $X$, the Bochner space is \[ L_{\mathbb{P}}^p(\Omega,X) = \{u:\Omega\rightarrow X \; | \; \norm{u}_{L_{\mathbb{P}}^p(\Omega,X)} \textrm{ is finite}\} \] with \[ \norm{u}_{L_{\mathbb{P}}^p(\Omega,X)} : = \begin{cases} &\left( \int_{\Omega} \norm{u(\cdot,\omega)}_X^p \,\mathrm{d}\mathbb{P}(\omega)\right)^{1/p}, \quad p<\infty \\ &\mathrm{ess\;sup}_{\omega\in\Omega} \norm{u(\cdot,\omega)}_X, \quad p=\infty. \end{cases} \] \end{definition} \begin{proposition}\label{proposition} If $h\in L_{\mathbb{P}}^\infty(\Omega,C^1(\partial \mathcal{D}_0))$, then each $w_n$, $n\geqslant 0$, belongs to $L_{\mathbb{P}}^2(\Omega,H^1(\mathcal{D}_0))$ and hence \[ \bigl\| v^{[n]} -u \bigr\|_{L_{\mathbb{P}}^2(\Omega,H^1(\mathcal{D}_0))} = \mathcal{O} (\epsilon^{n+1}). \] \end{proposition} \begin{proof} From Theorem \ref{theorem}, for $m=1$, we have \[ \bigl\| v^{[n]}(\omega) -u(\omega) \bigr\|_{H^1(\mathcal{D}_0)} = \mathcal{O} (\epsilon^{n+1}) \quad \mathbb{P}-\textrm{a.e.\ } \omega\in\Omega. \] Moreover, each $w_n$, $n\geqslant 0$, satisfies the same type of elliptic equation as \eqref{eqn:w0}, with a boundary condition depending only on $w_k$, $k\leqslant n-1$.
By the Lax-Milgram theorem, we have $w_n \in L_{\mathbb{P}}^2(\Omega,H^1(\mathcal{D}_0)),\; n\geqslant 0$. Therefore, $v^{[n]}\in L_{\mathbb{P}}^2(\Omega,H^1(\mathcal{D}_0)),\; n\geqslant 0$ and the desired result is obtained. \end{proof} A direct consequence of Proposition \ref{proposition} is \begin{equation} \label{expection} \norm{\mathbb{E}(v^{[n]}) - \mathbb{E}(u)}_{H^1(\mathcal{D}_0)} = \mathcal{O} (\epsilon^{n+1}). \end{equation} Based on the above assertions, we have \begin{corollary} \label{corollary} For \eqref{EI0}, \eqref{EI1}, and \eqref{EI2}, we have the following approximation errors \begin{eqnarray} \label{expection0} \left\lvert\mathbb{E}\left[\textmd{I}_0[v^{[0]}]\right] - \mathbb{E}\left[\textmd{I}[u]\right]\right\rvert = \mathcal{O} (\epsilon^{1}), \\ \label{expection1} \left\lvert\mathbb{E}\left[\textmd{I}_1[v^{[1]}]\right] - \mathbb{E}\left[\textmd{I}[u]\right]\right\rvert = \mathcal{O} (\epsilon^{2}), \\ \label{expection2} \left\lvert\mathbb{E}\left[\textmd{I}_2[v^{[2]}]\right] - \mathbb{E}\left[\textmd{I}[u]\right]\right\rvert = \mathcal{O} (\epsilon^{3}). \end{eqnarray} \end{corollary} In summary, by using the asymptotic expansion solution, we circumvent the difficulty of sampling the random function and solving PDEs on irregular domains for each sample. In our approach, there are no statistical errors or numerical quadrature errors as in the Monte Carlo (MC), SC, and polynomial chaos expansion (PCE) methods. However, our method is applicable only for small perturbations of the random interface, for which a small $n$ is sufficient in practice. The computational cost depends on the approximation order $n$ and the number of modes $K$ used to represent the random interface, and grows proportionally to $K^n$. \section{Numerical Results}\label{sec:result} \noindent In this section, we numerically investigate the accuracy and efficiency of the asymptotic-based method in computing the photoluminescence, and its efficiency in estimating the exciton diffusion length.
In addition, we study the validity of the diffusion-type model, i.e., under which condition the 1D model can be viewed as a good surrogate for the 2D model. \subsection{Accuracy and efficiency of the asymptotic-based method} Consider the forward model defined by Eq. \eqref{eqn:rPDE2d} over $\mathcal{D}_\epsilon:=\set{(x,z): x\in (h(z,\omega),d), z\in (0,L)}$. Recall that the random interface $h(z,\omega)$ between the donor and the acceptor is parameterized by $h(z,\omega) = \bar{h}\sum_{k=1}^{K} \lambda_{k} \theta_k(\omega)\sin(2k\pi\frac{z}{L})$, where $\theta_k(\omega)$ are i.i.d. uniform random variables and $K$ is the number of random variables in the model. We first solve \eqref{eqn:PDE2d} over the fixed domain $\mathcal{D}_{\textrm{s}}=(0,1)\times(0,1)$ in the domain mapping method using the SC method. Note that the spatial differentiation operator in \eqref{L} depends on the random variables in a highly nonlinear fashion, which makes the Wiener chaos expansion (WCE) and PCE methods extremely difficult to apply. In the asymptotic-based method, we solve \textit{deterministic} boundary value problems \eqref{eqn:w0}, \eqref{eqn:w1}, and \eqref{eqn:w2} over the \textit{fixed} domain $\mathcal{D}_0=(0,d)\times(0,L)$, respectively. Recall that in the asymptotic-based method, $\epsilon= \bar{h}/d$ and the random interface becomes $\tilde{h}(z,\omega)=\sum_{k=1}^{K} \lambda_k\theta_k(\omega)\sin(2k\pi z)$. In our simulation, the random interface $h(z,\omega)$ is parameterized by $K=5$ random variables. The accuracy of the asymptotic-based method is verified by two numerical tests. In the first test, $\theta_k\sim U(0,1)$, while in the second one $\theta_k\sim U(-1,1)$. To compute the reference solution, we employ the finite difference method to discretize the spatial dimension of Eq. \eqref{eqn:PDE2d} with a mesh size $H=\frac{1}{128}$, and use the sparse-grid based SC method to discretize the stochastic dimension. We choose level six sparse grids with 903 quadrature points.
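The collocation expectation \eqref{eqn:sparse_grid} can be illustrated with a full-tensor Gauss-Legendre rule standing in for the sparse grid; the integrand below is a smooth stand-in for the solution functional, not the actual PDE solve, and all names are chosen for the example.

```python
import numpy as np
from itertools import product

def expectation_collocation(f, dim, n=5):
    """E[f(theta)] for theta ~ U(-1,1)^dim via tensor Gauss-Legendre
    collocation: E[f] ~ sum_q f(s_q) w_q, cf. the sparse-grid formula."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    weights = weights / 2.0                    # U(-1,1) has density 1/2
    total = 0.0
    for idx in product(range(n), repeat=dim):
        s_q = nodes[list(idx)]                 # collocation point
        w_q = np.prod(weights[list(idx)])      # product weight
        total += f(s_q) * w_q
    return total

f = lambda th: np.exp(np.sum(th))              # exact E[f] = sinh(1)^dim
approx = expectation_collocation(f, dim=2)
exact = np.sinh(1.0) ** 2
```

A sparse grid replaces the full tensor product to tame the growth of $Q$ with the stochastic dimension, but the expectation is assembled from point solves in exactly this way.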
After obtaining solutions at all quadrature points, we compute the expectation of the photoluminescence, which provides a very accurate reference solution. In the asymptotic-based method, we use the finite difference method to discretize the spatial dimension of the boundary value problems \eqref{eqn:w0}, \eqref{eqn:w1k}, and \eqref{eqn:w2} for $w_{2,j,k}$ with a mesh size $H=\frac{1}{64}$. The expectations $\mathbb{E}[\theta_k]$ in \eqref{EI1} and $\mathbb{E}[\theta_j\theta_k]$ in \eqref{EI2} can easily be computed beforehand. Therefore, given the approximate solutions $w_0$, $w_{1,k}$, and $w_{2,j,k}$, we immediately obtain the different order approximations of the expectation of the photoluminescence. This provides significant computational savings over the SC method. For $\epsilon=2^{-i}$, $i=2,...,7$, Figure \ref{fig:EstEDL_EX1_AccuAsympMethodFB4U01} shows the approximation accuracy of the asymptotic-based method. In Figure \ref{fig:EstEDL_EX1_AccuAsympMethodFB4U01a}, $\theta_k \sim U(0,1)$. The approximate expectations of the photoluminescence obtained using the zeroth, first, and second order approximations are shown as the lines with circles, stars, and triangles, with convergence rates 1.21, 1.99, and 3.81, respectively. In Figure \ref{fig:EstEDL_EX1_AccuAsympMethodFB4U01b}, $\theta_k\sim U(-1,1)$. In this case, $\mathbb{E}[\theta_k]=0$, so the zeroth and first order approximations produce the same results. The second order approximation provides a better result. The corresponding convergence rates are 1.82, 1.82, and 3.06, respectively. These results confirm the theoretical estimates in Corollary \ref{corollary}.
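The second-order behavior of the first order approximation can also be reproduced on a stripped-down, deterministic 1D analog of \eqref{eqn:rPDE2d}: solve the problem on the perturbed interval $(\epsilon a, d)$ and compare its photoluminescence with that of $v^{[1]}=w_0+\epsilon w_1$, where $w_0$ and $w_1$ come from two deterministic solves on $(0,d)$ and $w_1(0)=-a\,\partial_x w_0(0)$, as in \eqref{eqn:w1}. All parameter values and the source profile below are illustrative assumptions.

```python
import numpy as np

def solve(sigma, a, b, left, src, n=800):
    """sigma^2 u'' - u + src(x) = 0 on (a, b), u(a) = left, u'(b) = 0."""
    x = np.linspace(a, b, n + 1)
    h = x[1] - x[0]
    c = sigma**2 / h**2
    A = np.zeros((n + 1, n + 1))
    rhs = -src(x)
    A[0, 0], rhs[0] = 1.0, left
    for i in range(1, n):
        A[i, i - 1], A[i, i], A[i, i + 1] = c, -2.0 * c - 1.0, c
    A[n, n - 1], A[n, n] = 2.0 * c, -2.0 * c - 1.0   # ghost-node Neumann
    return x, np.linalg.solve(A, rhs)

def integral(u, x):
    return float(np.sum(0.5 * (u[1:] + u[:-1]) * np.diff(x)))

sigma, d, a = 0.7, 1.0, 0.5                   # boundary perturbed to eps*a
src = lambda x: np.exp(-(d - x))              # illustrative G(d - x)
x0, w0 = solve(sigma, 0.0, d, 0.0, src)       # leading term w0 on (0, d)
h0 = x0[1] - x0[0]
dw0 = (-3 * w0[0] + 4 * w0[1] - w0[2]) / (2 * h0)   # one-sided dx w0(0)
_, w1 = solve(sigma, 0.0, d, -a * dw0, lambda x: 0.0 * x)  # correction w1
errs = []
for eps in (0.08, 0.04, 0.02, 0.01):
    xe, ue = solve(sigma, eps * a, d, 0.0, src)      # perturbed "truth"
    errs.append(abs(integral(ue, xe) - integral(w0 + eps * w1, x0)))
# errs should shrink like O(eps^2), i.e. by about 4x per halving of eps
```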
\begin{figure}[h] \centering \subfigure[$U(0,1)$]{\label{fig:EstEDL_EX1_AccuAsympMethodFB4U01a} \includegraphics[width=0.49\linewidth]{EstEDL_EX1_AccuAsympMethodFB5U01}}% \subfigure[$U(-1,1)$]{\label{fig:EstEDL_EX1_AccuAsympMethodFB4U01b} \includegraphics[width=0.49\linewidth]{EstEDL_EX1_AccuAsympMethodFB5U-11}}% \caption{\small Convergence results of the asymptotic-based method with the zeroth, first, and second order approximations. (a) $\theta_k \sim U(0,1)$. The slopes of the zeroth, first and second order approximation results are 1.21, 1.99, and 3.81, respectively; (b) $\theta_k \sim U(-1,1)$. The slopes of the zeroth, first and second order approximation results are 1.82, 1.82, and 3.06, respectively.}\label{fig:EstEDL_EX1_AccuAsympMethodFB4U01} \end{figure} We conclude this subsection with a discussion of the computational time of our method. In these two tests, on average it takes 164.5 seconds to compute one reference expectation of the photoluminescence. If we choose a low-level SC method to compute the expectation of the photoluminescence, it takes 27.3 seconds to compute one expectation with accuracy comparable to our asymptotic-based method. However, our method with the second order approximation takes only 1.56 seconds to obtain one result. We achieve an $18\times$ speedup over the SC method. Generally, the speedup ratio is problem-dependent. A higher speedup is expected if one solves a problem where the random interface is parameterized by high-dimensional random variables. \subsection{Estimation of the exciton diffusion length} In this section, we estimate the exciton diffusion length in an inverse manner with the asymptotic-based method as the forward solver. Since only limited photoluminescence data from experiments are available, we solve the forward model \eqref{eqn:rPDE2d} to generate data in our numerical tests.
Specifically, given the exciton diffusion length $\sigma$, the exciton generation function $G$, the in-plane dimension $L$, and the parametrization of the random interface $h(z,\omega)$, we solve Eq. \eqref{eqn:rPDE2d} for a series of thicknesses $\{d_i\}$, and calculate the corresponding expectations of the photoluminescence data $\{\tilde{\textmd{I}}_i\}$ according to Eq. \eqref{eqn:PL2d}. Therefore, $\{ d_i, \tilde{\textmd{I}}_i\}$ serves as the ``experimental'' data. We then solve the minimization problem \eqref{min1} based on our numerically generated data $\{ d_i, \tilde{\textmd{I}}_i\}$ to recover the prescribed exciton diffusion length $\sigma$ in the presence of randomness; this ``exact'' value, denoted by $\sigma_{exact}$, will be used for comparison later. We fix $L=4$ in all our numerical tests since it is found that the minimizer is not sensitive to the in-plane dimension $L$. We show the convergence history of the exciton diffusion lengths for various $\epsilon$ in Figure \ref{fig:EstEDL_EX2_DiffusionLengthSigmaCase}, where the photoluminescence data are generated with $\sigma=5$, $\sigma=10$, and $\sigma=20$, respectively. Here the relative error is defined as $E^{n,\epsilon}=|\frac{\sigma_{exact}- \sigma^{n,\epsilon}}{\sigma_{exact}}|$, where $n$ is the iteration number, $\sigma_{exact}$ is the ``exact'' exciton diffusion length, and $\sigma^{n,\epsilon}$ is the numerical result defined in Eq. \eqref{estimate_sigma}. To show more details about the accuracy of our asymptotic-based method, in Tables \ref{ConvergenceOfDiffusionLengthSigmaCase1}, \ref{ConvergenceOfDiffusionLengthSigmaCase2}, and \ref{ConvergenceOfDiffusionLengthSigmaCase3}, we list the relative errors of our method used for plotting Figures \ref{fig:EstEDL_EX2_DiffusionLengthSigmaCase1}, \ref{fig:EstEDL_EX2_DiffusionLengthSigmaCase2}, and \ref{fig:EstEDL_EX2_DiffusionLengthSigmaCase3}.
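The update \eqref{estimate_sigma} can be sketched with difference-quotient derivatives of $J$. The forward map, data, and starting point below are toy assumptions, and this sketch takes the full step $\alpha_n=1$ instead of a line search.

```python
import numpy as np

def newton_fit(sigma0, J, h=1e-4, tol=1e-10, max_iter=50):
    """Newton iteration sigma <- sigma - J'/J'' with difference-quotient
    derivatives (full step, alpha_n = 1; no line search in this sketch)."""
    s = sigma0
    for _ in range(max_iter):
        d1 = (J(s + h) - J(s - h)) / (2 * h)          # J'(s)
        d2 = (J(s + h) - 2 * J(s) + J(s - h)) / h**2  # J''(s)
        step = d1 / d2
        s -= step
        if abs(step) < tol:
            break
    return s

# toy smooth forward map standing in for I(sigma, d_i)
forward = lambda s, d: d * np.tanh(s / d)
d_i = np.array([10.0, 20.0, 40.0])
sigma_true = 7.0
data = forward(sigma_true, d_i)                       # synthetic "experiment"
J = lambda s: np.mean((forward(s, d_i) - data) ** 2)  # misfit as in (min1)
sigma_hat = newton_fit(5.0, J)
```

In the actual computation, $J$ wraps the asymptotic-based forward solver and $\alpha_n$ is given by the line search of \cite{Nocedal:1999}.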
In all numerical tests, we choose the same termination criterion $|{\sigma}^{(n)}-{\sigma}^{(n-1)}|<10^{-4}$ in Newton's method. Our asymptotic-based method performs well in estimating the exciton diffusion length. In general, the smaller the amplitude of the random interface, the more accurate the estimated exciton diffusion length and the smaller the iteration count. Additionally, for larger exciton diffusion lengths $\sigma_{exact}$, faster convergence of the optimization is observed. \begin{figure}[h] \centering \subfigure[$\sigma=5$]{\label{fig:EstEDL_EX2_DiffusionLengthSigmaCase1} \includegraphics[width=0.3\linewidth]{EstEDL_EX2_DiffusionLengthSigma4}}% \subfigure[$\sigma=10$]{\label{fig:EstEDL_EX2_DiffusionLengthSigmaCase2} \includegraphics[width=0.3\linewidth]{EstEDL_EX2_DiffusionLengthSigma10}}% \subfigure[$\sigma=20$]{\label{fig:EstEDL_EX2_DiffusionLengthSigmaCase3} \includegraphics[width=0.3\linewidth]{EstEDL_EX2_DiffusionLengthSigma19}} \caption{\small Convergence history of the exciton diffusion length for various $\epsilon$, measured in the relative error defined as $E^{n,\epsilon}=|\frac{\sigma_{exact}- \sigma^{n,\epsilon}}{\sigma_{exact}}|$ with $n$ the iteration number. The ``exact'' data is obtained by the 2D model (Eqs. \eqref{eqn:rPDE2d} and \eqref{eqn:PL2d}) with a prescribed $\sigma$. (a) $\sigma=5$; (b) $\sigma=10$; (c) $\sigma=20$.}\label{fig:EstEDL_EX2_DiffusionLengthSigmaCase} \end{figure} \begin{table}[h!]
\centering \begin{tabular}{|c |c |c |c |c |c |c| } \hline $n$ & $ \epsilon = 0.01 $ & $\epsilon = 0.02 $ &$\epsilon = 0.04 $ &$\epsilon = 0.08 $ & $\epsilon = 0.16 $ & $\epsilon = 0.32 $ \\ \hline 1 & 2.711349 & 2.691114 & 2.620542 & 2.409153 & 2.132821 & 1.789203 \\ 2 & 0.033009 & 0.011998 & 0.048469 & 0.182973 & 1.100014 & 0.640850 \\ 3 & 0.000480 & 0.000238 & 0.000147 & 0.039389 & 0.915513 & 0.610645 \\ 4 & 0.000033 & 0.000318 & 0.001678 & 0.005017 & 0.313381 & 0.549289 \\ 5 & 0.000034 & 0.000317 & 0.001679 & 0.005194 & 0.037634 & 0.402861 \\ 6 & & & & & 0.030379 & 0.383125 \\ 7 & & & & & 0.030328 & 0.183904 \\ 8 & & & & & & 0.002054 \\ \hline \end{tabular} \caption{Relative errors $E^{n,\epsilon}=|\frac{\sigma_{exact}- \sigma^{n,\epsilon}}{\sigma_{exact}}|$ for iteration number $n=1,2,3,...$, and various $\epsilon$. The prescribed $\sigma$ is 5. An empty entry indicates that the iteration has already converged.} \label{ConvergenceOfDiffusionLengthSigmaCase1} \end{table} \begin{table}[h!] \centering \begin{tabular}{|c |c |c |c |c |c |c| } \hline $n$ & $ \epsilon = 0.01 $ & $\epsilon = 0.02 $ &$\epsilon = 0.04 $ &$\epsilon = 0.08 $ & $\epsilon = 0.16 $ & $\epsilon = 0.32 $ \\ \hline 1 & 0.677755 & 0.689595 & 0.737990 & 0.908029 & 0.414439 & 1.476995 \\ 2 & 0.392867 & 0.408261 & 0.476915 & 0.160001 & 0.084646 & 0.691197 \\ 3 & 0.089478 & 0.093146 & 0.092276 & 0.030613 & 0.029504 & 0.487247 \\ 4 & 0.006387 & 0.007161 & 0.008500 & 0.008827 & 0.027198 & 0.158495 \\ 5 & 0.000066 & 0.000377 & 0.002147 & 0.008453 & 0.027194 & 0.034154 \\ 6 & 0.000033 & 0.000340 & 0.002115 & 0.008453 & & 0.021585 \\ 7 & & & & & & 0.021471 \\ \hline \end{tabular} \caption{Relative errors $E^{n,\epsilon}=|\frac{\sigma_{exact}- \sigma^{n,\epsilon}}{\sigma_{exact}}|$ for iteration number $n=1,2,3,...$, and various $\epsilon$.
The prescribed $\sigma$ is 10.} \label{ConvergenceOfDiffusionLengthSigmaCase2} \end{table} \begin{table}[h!] \centering \begin{tabular}{|c |c |c |c |c |c |c| } \hline $n$ & $ \epsilon = 0.01 $ & $\epsilon = 0.02 $ &$\epsilon = 0.04 $ &$\epsilon = 0.08 $ & $\epsilon = 0.16 $ & $\epsilon = 0.32 $ \\ \hline 1& 0.108867 & 0.109572 & 0.113283 & 0.126632 & 0.161406 & 0.237664 \\ 2& 0.007695 & 0.008023 & 0.009952 & 0.016784 & 0.031044 & 0.040370 \\ 3& 0.000070 & 0.000360 & 0.002080 & 0.008108 & 0.019861 & 0.024093 \\ 4& 0.000031 & 0.000320 & 0.002038 & 0.008059 & 0.019782 & 0.023936 \\ 5& & & & & 0.019782 & 0.023936 \\ \hline \end{tabular} \caption{Relative errors $E^{n,\epsilon}=|\frac{\sigma_{exact}- \sigma^{n,\epsilon}}{\sigma_{exact}}|$ for iteration number $n=1,2,3,...$, and various $\epsilon$. The prescribed $\sigma$ is 20.} \label{ConvergenceOfDiffusionLengthSigmaCase3} \end{table} \subsection{Validation of the diffusion-type model} We are now in a position to validate the diffusion-type model for estimating the exciton diffusion length. We are interested in identifying under which conditions the 1D model can be viewed as a good surrogate for the 2D model, and how these conditions relate to the properties of organic semiconductors. Again, only limited photoluminescence data from experiments are available, so we solve the forward model to generate data in our numerical tests. Specifically, given the exciton diffusion length $\sigma$, the exciton generation function $G$, and the parametrization of the random interface $h(\omega)$, we solve Eq. \eqref{eqn:rPDE1d} for a series of thicknesses $\{d_i\}$ and calculate the corresponding expectations of the photoluminescence data $\{\tilde{\textmd{I}}_i\}$ according to Eq. \eqref{eqn:rPL1d}. The pairs $\{ d_i, \tilde{\textmd{I}}_i\}$ then serve as the ``experimental'' data generated by the 1D model.
We then solve the minimization problem \eqref{min1} based on our numerically generated data $\{ d_i, \tilde{\textmd{I}}_i\}$ to estimate the ``exact'' exciton diffusion length $\sigma$ in the presence of randomness, denoted by $\sigma_{exact}$, which will be used for comparison. In our numerical tests, we use the 1D model \eqref{eqn:rPDE1d} with $\sigma=5$ and $\sigma=10$ to generate photoluminescence data, taking $d_i=10i$, $i=1,...,10$, $\bar{h}=1$, and $\epsilon=\bar{h}/d_i$. We use $K=10$ random variables to parameterize the random interface and set $\lambda_{k}=k^{\beta}$, where $\beta\leqslant 0$ controls the decay rate of $\lambda_{k}$. The random interface therefore takes the form \begin{equation}\label{eqn:ex3_interface} h(z,\omega) = \bar{h} \sum_{k=1}^{10} k^\beta \theta_k(\omega)\sin(2k\pi\frac{z}{L}) \end{equation} with $\theta_k(\omega)\sim U[-1,1]$. Figure \ref{fig:CorrelationFunctionV2} plots the covariance function of the random interface defined by Eq. \eqref{eqn:ex3_interface} for $\beta=0$ and $\beta=-2$. It is clear that the smaller the $\beta$, the larger the correlation length. \begin{figure}[h] \centering \subfigure[$\beta=0$]{\label{fig:CorrelationFunctionV2a} \includegraphics[width=0.49\linewidth]{Consis_EX3_U-11_CovarianceFunction_beta0}}% \subfigure[$\beta=-2$]{\label{fig:CorrelationFunctionV2b} \includegraphics[width=0.49\linewidth]{Consis_EX3_U-11_CovarianceFunction_beta-2}}% \caption{\small The covariance function of the random interface defined by Eq. \eqref{eqn:ex3_interface} for different $\beta$. (a) $\beta=0$; (b) $\beta=-2$.}\label{fig:CorrelationFunctionV2} \end{figure} The convergence history of the exciton diffusion length for various $\beta$ is plotted in Figure \ref{fig:Consis_EX3_U-11_DiffusionLengthSigma510}, where the photoluminescence data are generated by the 1D model (Eqs. \eqref{eqn:rPDE1d} and \eqref{eqn:rPL1d}) with $\sigma=5$ and $\sigma=10$.
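The qualitative link between $\beta$ and the correlation length can be checked directly from the expansion above: since the $\theta_k$ are i.i.d. $U[-1,1]$ with $\mathbb{E}[\theta_k^2]=1/3$ and are mutually independent, the covariance of $h$ is an explicit finite sum. A minimal numerical check (the evaluation points below are arbitrary illustrative choices):

```python
import numpy as np

L, K, hbar = 4.0, 10, 1.0   # in-plane dimension, number of modes, mean thickness

def cov(beta, z1, z2):
    # Cov[h(z1), h(z2)] for h(z, omega) = hbar * sum_k k^beta * theta_k * sin(2*pi*k*z/L)
    # with independent theta_k ~ U[-1, 1], so E[theta_k^2] = 1/3.
    k = np.arange(1, K + 1)
    return hbar**2 / 3.0 * np.sum(
        k**(2.0 * beta) * np.sin(2 * np.pi * k * z1 / L) * np.sin(2 * np.pi * k * z2 / L))

def corr(beta, z1, z2):
    # normalized correlation between two interface heights
    return cov(beta, z1, z2) / np.sqrt(cov(beta, z1, z1) * cov(beta, z2, z2))
```

For a fixed separation, e.g. $z_1=0.3$ and $z_2=0.8$, the correlation is much larger for $\beta=-2$ than for $\beta=0$, consistent with Figure \ref{fig:CorrelationFunctionV2}.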
Again, the relative error is defined as $E^{n,\beta}=|\frac{\sigma_{exact}- \sigma^{n,\beta}}{\sigma_{exact}}|$, where $n$ is the iteration number, $\sigma_{exact}$ is the ``exact'' exciton diffusion length, and $\sigma^{n,\beta}$ is the numerical result defined in Eq. \eqref{estimate_sigma}. Note that $\sigma^{n,\beta}$ also depends implicitly on $\epsilon$, but we omit this dependence for convenience. Tables \ref{Consis_EX3_SigmaCase5} and \ref{Consis_EX3_SigmaCase10} list the relative errors underlying Figure \ref{fig:Consis_EX3_U-11_DiffusionLengthSigma510}. The same termination criterion $|{\sigma}^{(n)}-{\sigma}^{(n-1)}|<10^{-4}$ is used here. The numerical exciton diffusion length obtained by our method converges to the reference one with a relative error of less than $1\%$ when $\beta\leqslant -1$. \begin{figure}[h] \centering \subfigure[$\sigma=5$]{\label{fig:Consis_EX3_U-11_DiffusionLengthSigma510Case1} \includegraphics[width=0.49\linewidth]{Consis_EX3_U-11_DiffusionLengthSigma5new}}% \subfigure[$\sigma=10$]{\label{fig:Consis_EX3_U-11_DiffusionLengthSigma510Case2} \includegraphics[width=0.49\linewidth]{Consis_EX3_U-11_DiffusionLengthSigma10new}}% \caption{\small Convergence history of the exciton diffusion length for various $\beta$, measured in the relative error defined as $E^{n,\beta}=|\frac{\sigma_{exact}- \sigma^{n,\beta}}{\sigma_{exact}}|$ with $n$ the iteration number. The ``exact'' data is obtained by the 1D model (Eqs. \eqref{eqn:rPDE1d} and \eqref{eqn:rPL1d}) with a prescribed $\sigma$. (a) $\sigma=5$; (b) $\sigma=10$.}\label{fig:Consis_EX3_U-11_DiffusionLengthSigma510} \end{figure} \begin{table}[h!]
\centering \begin{tabular}{|c |c |c |c |c |c |} \hline $n$ & $ \beta = 0 $ & $\beta = -0.5 $ &$\beta = -1.0$ &$\beta = -1.5 $ & $\beta = -2.0$ \\ \hline 1 & 0.647340& 0.808503& 0.829974& 0.832905& 0.833375\\ 2 & 0.565447& 0.804754& 0.801511& 0.798962& 0.798645\\ 3 & 0.347626& 0.797750& 0.765892& 0.759374& 0.758543\\ 4 & 0.049548& 0.785281& 0.718355& 0.707335& 0.705927\\ 5 & 0.100595& 0.764404& 0.648943& 0.630438& 0.628049\\ 6 & 0.086369& 0.731288& 0.530314& 0.492754& 0.487726\\ 7 & 0.086302& 0.679544& 0.205261& 0.005156& 0.044032\\ 8 & & 0.593736& 0.029160& 0.000781& 0.000829\\ 9 & & 0.415006& 0.004952& 0.000777& 0.000410\\ 10& & 0.210863& 0.004758& & 0.000410\\ 11& & 0.015892& & & \\ 12& & 0.011628& & & \\ \hline \end{tabular} \caption{Relative errors $E^{n,\beta}=|\frac{\sigma_{exact}- \sigma^{n,\beta}}{\sigma_{exact}}|$ for iteration number $n=1,2,3,...$, and various $\beta$. The prescribed $\sigma$ is 5.} \label{Consis_EX3_SigmaCase5} \end{table} \begin{table}[h!] \centering \begin{tabular}{|c |c |c |c |c |c |} \hline $n$ & $ \beta = 0 $ & $\beta = -0.5 $ &$\beta = -1.0$ &$\beta = -1.5 $ & $\beta = -2.0$ \\ \hline 1& 0.420520& 0.322303& 0.307239& 0.305058& 0.304686\\ 2& 0.182350& 0.081737& 0.067669& 0.065660& 0.065316\\ 3& 0.076150& 0.014086& 0.005175& 0.003874& 0.003647\\ 4& 0.061968& 0.009871& 0.001699& 0.000493& 0.000281\\ 5& 0.061776& 0.009857& 0.001690& 0.000483& 0.000272\\ 6& 0.061776& & & & \\ \hline \end{tabular} \caption{Relative errors $E^{n,\epsilon}=|\frac{\sigma_{exact}- \sigma^{n,\beta}}{\sigma_{exact}}|$ for iteration number $n=1,2,3,...$, and various $\beta$. The prescribed $\sigma$ is 10.} \label{Consis_EX3_SigmaCase10} \end{table} Our numerical results show that a faster decay of the eigenvalues $\lambda_{k}$ leads to a better agreement between the results of the 1D model and the 2D model. The smaller the $\beta$, the better the agreement. On the other hand, the smaller the $\beta$, the larger the correlation length. 
Therefore, the larger the correlation length, the better the agreement. Our study sheds some light on how to select a model that is as simple as possible, without loss of accuracy, for describing exciton diffusion in organic materials. In the chemistry community, it is known that under careful fabrication conditions \cite{DirksenRing:1991, Rodetal:2013}, organic semiconductors, including small molecules and polymers, can form crystal structures, which have large correlation lengths. As a consequence, exciton diffusion in these materials can be well described by the 1D model \cite{PetterssonRomanInganas:1999, Linetal:2013, Tamai:2015}. For organic materials with low crystalline order, i.e., small correlation lengths, however, our results suggest that the 1D model is not a good surrogate for the higher-dimensional models. \section{Conclusion} \label{sec:conclusion} In this paper, we model exciton diffusion by a diffusion-type equation with appropriate boundary conditions over a random domain. The exciton diffusion length is extracted by minimizing the mean-square error between the experimental data and the model-generated data. Since the measurement uncertainty of the domain boundary is much smaller than the device thickness, we propose an asymptotic-based method as the forward solver. Its accuracy is justified both analytically and numerically, and its efficiency is demonstrated by comparison with the SC method as the forward solver. Moreover, we find that the correlation length of the randomness is the key parameter determining whether a 1D surrogate is sufficient for the forward modeling. The discussion here focuses on the photoluminescence experiment. For the photocurrent experiment, from the modeling perspective, the forward model is the same but the objective function is different.
An exciton contributes either to the photoluminescence or to the photocurrent, so the photocurrent is defined as the difference between a constant (the total exciton contribution) and the photoluminescence \cite{Chen:2016}. Therefore, the proposed method can be applied straightforwardly with very little modification. \medskip \noindent \textbf{Acknowledgment.} We thank Professor Carlos J. Garc\'{i}a-Cervera and Professor Thuc-Quyen Nguyen for stimulating discussions. Part of the work was done when J. Chen was visiting the Department of Mathematics, City University of Hong Kong; he would like to thank the department for its hospitality. J. Chen acknowledges the financial support of the National Natural Science Foundation of China via grant 21602149. L. Lin and X. Zhou acknowledge the financial support of Hong Kong GRF (109113, 11304314, 11304715). Z. Zhang acknowledges the financial support of Hong Kong RGC grants (27300616, 17300817) and the National Natural Science Foundation of China via grant 11601457. \section*{References} \bibliographystyle{amsplain} \input{1016n.bbl} \newpage
\section{Mapping to the extended Ising model and Exact solutions} We start from the extended quantum Ising model with longer-range interactions in a transverse field, with the Hamiltonian \begin{equation} H=\sum_{n=1}^{N_f}\sum_{j=1}^L\left(\frac{{J}_n^x}{2}\sigma_j^x\sigma_{j+n}^x\!+\frac{{J}_n^y}{2}\sigma_j^y\sigma_{j+n}^y\right)\!\!\prod_{l=j+1}^{j+n-1}\!\!\!\sigma_l^z+\!\sum_{j=1}^L\frac{\mu}{2}\sigma_j^z,\label{eq1} \end{equation} where $\sigma_j^{x,y,z}$ are the Pauli matrices for the spin at site $j$, and $L$ (assumed even) is the total number of sites. By the Jordan-Wigner transformation \begin{equation} c_1=-\sigma_1^+=-(\sigma^x_1+i\sigma^y_1)/2,\hspace*{0.2in}c_j=-\sigma_j^+\prod_{i=1}^{j-1}\sigma^z_i, \end{equation} we obtain a spinless fermion Hamiltonian with longer-range pairing and hopping terms, which conserves the fermion parity $(-1)^{N_p}$ of the fermion number \begin{equation} N_p=\sum_{j=1}^Lc_j^\dag c_j. \end{equation} It reads $H=H_{{o}}+H_{{b}}$, where the open chain part is \begin{align} H_{{o}}=&\sum_{n=1}^{N_f}\sum_{j=1}^{L-n}\left(\frac{{J}_n^+}{2}c^\dag_jc_{j+n}+\frac{{J}_n^-}{2}c^\dag_jc_{j+n}^\dag+\textrm{h.c.}\right)\nonumber\\ &-\sum_{j=1}^L\mu\left(c^\dag_jc_j-\frac{1}{2}\right), \label{open} \end{align} and the boundary part reads \begin{equation} H_{{b}}=\frac{(-1)^{N_p}}{2}\sum_{n=1}^{N_f}\!\sum_{j=L-n+1}^L\!\!\!\!\!({J}_n^+c^\dag_jc_{j+n}+{J}_n^-c^\dag_jc_{j+n}^\dag+\textrm{h.c.}),\label{eq4} \end{equation} with ${J}_n^\pm\equiv {J}_n^x\pm {J}_n^y$. Thus, for a definite even fermion parity $(-1)^{N_p}=1$, this extended Kitaev fermion chain \cite{Alecce2017} has the antiperiodic boundary condition $c_{j+L}=-c_j$. Here we choose all the hopping and pairing parameters to be real, which makes the Hamiltonian preserve time-reversal symmetry and belong to the {BDI} class ($\mathbf{Z}$ type) characterized by winding numbers \cite{Chiu2016,Li2016}.
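The canonical anticommutation relations implied by this Jordan-Wigner transformation can be verified by brute force on a few sites; a minimal check (three sites, dense $8\times8$ matrices):

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def site(op, j, L):
    # operator acting as `op` on site j and as identity elsewhere
    facs = [I2] * L
    facs[j] = op
    return reduce(np.kron, facs)

L = 3
sp = (sx + 1j * sy) / 2          # sigma^+
c = []
for j in range(L):
    # c_j = -sigma_j^+ prod_{i<j} sigma_i^z  (string of sigma^z to the left)
    string = reduce(np.matmul, [site(sz, l, L) for l in range(j)],
                    np.eye(2**L, dtype=complex))
    c.append(-string @ site(sp, j, L))

def anti(A, B):
    return A @ B + B @ A
```

The operators built this way satisfy $\{c_i,c_j^\dagger\}=\delta_{ij}$ and $\{c_i,c_j\}=0$, i.e., they are canonical fermions.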
For the thermodynamic limit $L\gg N_f\geq1$, we use the Fourier transformation, \begin{equation} c_j=\frac{1}{\sqrt{L}}\sum_{q}\exp({-iqj})\;c_q, \end{equation} to express the Bogoliubov-de Gennes Hamiltonian as \begin{equation} H=\sum_{q}(c_q^\dag,c_{-q})\mathcal{H}_q\left(\begin{array}{c}c_q\\c_{-q}^\dag\end{array}\right), \end{equation} where the complete set of wavevectors is $q=2\pi m/L$ with \begin{equation} m=-\frac{L-1}{2},-\frac{L-3}{2},\cdots,\frac{L-3}{2},\frac{L-1}{2}. \end{equation} Here, we can write \begin{equation} \mathcal{H}_q=\frac{1}{2}\bm{r}(q)\cdot \bm{\sigma}, \end{equation} with the vector $\bm{r}(q)=(0,y(q),z(q))$ in the auxiliary two-dimensional $y$-$z$ space, \begin{align} &y(q)=\sum_{n=1}^{N_f}{J}_n^-\sin(nq),\\ &z(q)=\sum_{n=1}^{N_f}{J}_n^+\cos(nq)-\mu, \end{align} and $\bm{\sigma}=(\sigma^x,\sigma^y,\sigma^z)$. Using the Bogoliubov transformation \begin{equation} c_q=\cos\frac{\Theta}{2}\eta_q+i\sin\frac{\Theta}{2}\eta_{-q}^\dag, \end{equation} with $\tan\Theta=y(q)/z(q)$, we can diagonalize the Hamiltonian as \begin{equation} {H}=\sum_q\epsilon_q\left(\eta_q^\dag\eta_q-\frac{1}{2}\right), \end{equation} and obtain the ground state \begin{equation} |\mathcal{G}\rangle=\prod_{q}[\cos\frac{\Theta}{2}+i\sin\frac{\Theta}{2}\eta_q^{\dag}\eta_{-q}^\dag]|0\rangle, \end{equation} where the energy spectra are \begin{equation} \epsilon_q=\pm\frac{1}{2}\sqrt{y(q)^2+z(q)^2}. \end{equation} In Fig.~\ref{fig:S4}, we plot the energy spectra for $L = 200$ and trajectories of winding vectors for four different extended Kitaev fermion chain models \cite{Alecce2017} considered in the main text. 
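The dispersion relation above is simply the eigenvalue $\pm\frac{1}{2}|\bm{r}(q)|$ of the $2\times2$ Bloch Hamiltonian $\mathcal{H}_q=\frac{1}{2}\bm{r}(q)\cdot\bm{\sigma}$. A minimal numerical cross-check, using the parameter set of Fig.~\ref{fig:S4}(a):

```python
import numpy as np

Jp = np.array([1.0, 2.0, 2.0])   # J_n^+, n = 1..N_f (parameters of Fig. S4(a))
Jm = np.array([1.0, 2.0, 2.0])   # J_n^-
mu = 1.0

def epsilon(q):
    # epsilon_q = (1/2) * sqrt(y(q)^2 + z(q)^2)
    n = np.arange(1, len(Jp) + 1)
    y = np.dot(Jm, np.sin(n * q))
    z = np.dot(Jp, np.cos(n * q)) - mu
    return 0.5 * np.hypot(y, z)

sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def hq(q):
    # Bloch Hamiltonian H_q = (y(q) sigma^y + z(q) sigma^z) / 2
    n = np.arange(1, len(Jp) + 1)
    y = np.dot(Jm, np.sin(n * q))
    z = np.dot(Jp, np.cos(n * q)) - mu
    return 0.5 * (y * sy + z * sz)
```

Diagonalizing `hq(q)` reproduces $\pm\epsilon_q$ for every $q$.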
\begin{figure*}[t] \centering \includegraphics[width=0.98\textwidth]{FigS1.eps}\\ \caption{(color online) (a-d) Energy spectra for $L=200$ and (e-h) trajectories of winding vectors for an extended Kitaev fermion chain with parameters: (a,e) ${J}_1^+={J}_1^-=1$, ${J}_{2}^+={J}_{2}^-=2$, ${J}_{3}^+={J}_{3}^-=2$ ($N_f=3$); (b,f) ${J}_1^+={J}_1^-=0.1$, ${J}_{2}^+={J}_{2}^-=0.21$, ${J}_{3}^+={J}_{3}^-=0.44$, ${J}_4^+={J}_4^-=0.9$, ${J}_{5}^+={J}_{5}^-=2$ ($N_f=5$); (c,g) ${J}_1^+={J}_1^-=0.1$, ${J}_{2}^+={J}_{2}^-=0.21$, ${J}_{3}^+={J}_{3}^-=-0.74$, ${J}_4^+={J}_4^-=0.9$ ($N_f=4$); and (d,h) ${J}_{2}^+={J}_{2}^-=2.4$, $J_3^+=2$, $J_3^-=-2$ ($N_f=3$). }\label{fig:S4} \end{figure*} \section{Winding numbers} For the {BDI} symmetry class of Kitaev fermion chains, the winding number in the auxiliary momentum space serves as a $\mathbf{Z}$ topological invariant \cite{Chiu2016,Zhang2015}, a fundamental concept in geometric topology. The winding number of the closed loop in the auxiliary $y$-$z$ plane around the origin can be written as \begin{equation} \nu=\frac{1}{2\pi}\oint\frac{ydz-zdy}{|\bm{r}|^2}. \end{equation} Via the substitution $\zeta(q)\equiv\exp(iq)$, we can rewrite these functions in the complex plane as \begin{equation} y(q)=\sum_{n=1}^{N_f}\frac{{J}^-_n{(\zeta^n-\zeta^{-n})}}{2i}\equiv Y(\zeta), \end{equation} and \begin{equation} z(q)=\sum_{n=1}^{N_f}\frac{{J}^+_n{(\zeta^n+\zeta^{-n})}}{2}-\mu\equiv Z(\zeta).
\end{equation} By defining the complex characteristic function \begin{eqnarray} g(\zeta)&\equiv& Z(\zeta)+iY(\zeta)\\ &=&\sum_{n=1}^{N_f}({J}_n^x\zeta^n+{J}_n^y\zeta^{-n})-\mu,\label{eq5} \end{eqnarray} we obtain the winding number by calculating the logarithmic residue of $g(\zeta)$ in accordance with Cauchy's argument principle \cite{Ahlfors1953} \begin{equation} \nu=\frac{1}{2\pi i}\oint_{{}_{|\zeta|=1}}\!\!\!\!\!\!\!\!\!d\zeta\;\frac{g'(\zeta)}{g(\zeta)}=\mathcal{N}-\mathcal{P}, \end{equation} where, in the complex region $|\zeta|<1$, $\mathcal{N}$ is the number of zeros of $g(\zeta)$ and $\mathcal{P}$ is the number of poles. Two special cases are worth noting: if ${J}_n^y=0$ $\forall n$, we have \begin{equation} g(\zeta)=\sum_{n=1}^{N_f}J_n^x\zeta^n-\mu, \end{equation} and only zeros exist; while if ${J}_n^x=0$ $\forall n$, $g(\zeta)$ has a pole of order $N_f$ at $\zeta=0$ (assuming $J^y_{N_f}\neq0$), and only poles need to be counted there. \section{Majorana zero modes} We can write the open-chain Hamiltonian (\ref{open}) in terms of Majorana fermion operators \begin{equation} a_j=c_j^\dag+c_j,\hspace{0.2in}b_j=i(c^\dag_j-c_j), \end{equation} with relations $\{a_i,a_j\}=\{b_i,b_j\}=2\delta_{ij}$, $\{a_i,b_j\}=0$, as \begin{equation} H_{{o}}=-\frac{i}{2}\sum_{n=1}^{N_f}\sum_{j=1}^{L-n}({J}_n^xb_{j}a_{j+n}+{J}_n^yb_{j+n}a_{j})+\frac{i\mu}{2}\sum_{j=1}^La_{j}b_{j}. \end{equation} We assume an ansatz wave function as a linear combination of the Majorana operators $a_j$ \cite{Fendley2012}: \begin{equation} \phi=\sum_{j=1}^{L}\alpha_ja_j, \end{equation} and impose the condition $[H,\phi]=0$ for the existence of Majorana zero modes \cite{Sarma2015,Elliott2015}. The coefficients are then given by the recursion relations \begin{equation} \sum_{n=1}^{N_f}({J}_n^x\alpha_{j+n}+{J}_n^y\alpha_{j-n})-\mu\alpha_j=0, \end{equation} for $j=N_f+1,N_f+2,\cdots,L-N_f$. These recursion equations can be solved in terms of the solutions of the characteristic equation $g(\zeta)=0$ \cite{Niu2012}, with $g(\zeta)$ given in Eq.~(\ref{eq5}).
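The argument-principle formula can be evaluated numerically as the total phase winding of $g(\zeta)$ along the unit circle, and cross-checked against a direct root count when only zeros are present. A minimal sketch (the parameter values are the $J_n^y=0$ example of Fig.~\ref{fig:S4}(a) with $\mu=1$):

```python
import numpy as np

def winding_number(Jx, Jy, mu, M=20001):
    # nu = (1/2 pi i) oint g'(zeta)/g(zeta) d zeta, computed as the total phase
    # winding of g(e^{iq}) = sum_n (Jx_n e^{inq} + Jy_n e^{-inq}) - mu, q in [0, 2 pi]
    q = np.linspace(0.0, 2.0 * np.pi, M)
    zeta = np.exp(1j * q)
    n = np.arange(1, len(Jx) + 1)
    g = (zeta[:, None]**n) @ np.asarray(Jx) + (zeta[:, None]**(-n)) @ np.asarray(Jy) - mu
    phase = np.unwrap(np.angle(g))
    return int(round((phase[-1] - phase[0]) / (2.0 * np.pi)))
```

For $J^x=(1,2,2)$, $J^y=0$, $\mu=1$, the cubic $g(\zeta)=2\zeta^3+2\zeta^2+\zeta-1$ has exactly one root inside the unit disk, so $\nu=1$; for $\mu$ large the winding vanishes by Rouch\'e's theorem; and for a pure-$J^y$ model the pole at $\zeta=0$ makes the winding negative.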
If $\mathcal{N}\geq\mathcal{P}$, Majorana zero modes at the left end must satisfy $|\alpha_{L}|\rightarrow0$ in the thermodynamic limit $L\gg1$, so only the zeros $\{\zeta_{l}\}$ in the range $|\zeta|<1$ need to be considered. Thus, we have $\mathcal{N}$ independent solutions \begin{equation} \alpha_{j}=\sum_{l=1}^{\mathcal{N}}\omega_{l}(\zeta_l)^j, \end{equation} with undetermined coefficients $\{\omega_l\}$, and for $j\leq \mathcal{P}$ we have the $\mathcal{P}$ constraint conditions \begin{eqnarray} \sum_{n=1}^{N_f}{J}_n^x\alpha_{j+n}-\mu\alpha_j+\sum_{n=1}^{j-1}{J}_n^y\alpha_{j-n}=0.\label{cc} \end{eqnarray} Thus, we have $(\mathcal{N}-\mathcal{P})$ independent normalized left zero modes $\phi_{\textrm{L}}^1,...,\phi_{\textrm{L}}^{(\mathcal{N}-\mathcal{P})}$ with coefficients $\{\alpha_j^1\},...,\{\alpha_j^{(\mathcal{N}-\mathcal{P})}\}$, where orthogonal Majorana zero modes can be obtained by Gram--Schmidt orthogonalization with the conditions $\{\phi^i,{\phi^j}^\dag\}=2\delta_{ij}$. These considerations also hold for linear combinations of the Majorana operators $\{b_j\}$ of the form \begin{equation} \psi^i=\sum_{j=1}^{L}\beta_j^i b_j, \end{equation} and \begin{equation} \beta_j^i=\alpha_{L-j+1}^i, \end{equation} because Majorana zero modes appear in pairs \cite{Kitaev2001}. For the other case, $\mathcal{N}<\mathcal{P}$, we consider right Majorana zero modes, which require $|\alpha_{1}|\rightarrow0$ for $L\gg1$ and the characteristic equation $\bar{g}(\zeta)=g(1/\zeta)=0$, with $\bar{\mathcal{N}}$ zeros and $\bar{\mathcal{P}}$ poles in $|\zeta|<1$. One finds that \begin{equation} \mathcal{N}+\bar{\mathcal{N}}=\bar{\mathcal{P}}+\mathcal{P}, \end{equation} and there are $(\mathcal{P}-\mathcal{N})$ right Majorana zero modes $\phi_{\textrm{R}}^1,\phi_{\textrm{R}}^2,\cdots,\phi_{\textrm{R}}^{(\mathcal{P}-\mathcal{N})}$.
Therefore, we derive that in the thermodynamic limit $L\gg N_f\geq1$, the number of Majorana zero modes at each end of the extended Kitaev open chain, denoted $\mathcal{M}_{0}$, equals the absolute value of the winding number: \begin{eqnarray} \mathcal{M}_{0}=|\mathcal{N}-\mathcal{P}|=|\nu|. \end{eqnarray} We note that degenerate solutions of the Majorana zero modes may occur for special choices of parameters; these cases can be avoided by perturbing the characteristic function. Moreover, when the coefficients $\{\alpha_j\}$ are not real, the zero modes $\phi$ and $\psi$, with the conditions $\{\phi^i,{\phi^j}^\dag\}=\{\psi^i,{\psi^j}^\dag\}=2\delta_{ij}$ and $\{\phi^i,{\psi^j}^\dag\}=\{\phi^i,{\psi^j}\}=0$, are not Majorana operators \cite{Lepori2017}. Fortunately, for $\mathcal{N}\geq\mathcal{P}$, left and right Majorana zero modes can be combined into $(\mathcal{N}-\mathcal{P})$ fermion modes $d^1,d^2,\cdots,d^{(\mathcal{N}-\mathcal{P})}$ with \begin{equation} d^i={(\phi^i_{\textrm{L}}+i\psi^i_{\textrm{R}})}/{2}, \end{equation} which commute with the Hamiltonian in the thermodynamic limit. Conversely, for $\mathcal{P}\geq\mathcal{N}$, there exist $(\mathcal{P}-\mathcal{N})$ fermion zero modes with operators $\bar{d}^1,\bar{d}^2,\cdots,\bar{d}^{(\mathcal{P}-\mathcal{N})}$, where \begin{equation} \bar{d}^i={(\phi^i_{\textrm{R}}+i\psi^i_{\textrm{L}})}/{2}. \end{equation} Our discussion also provides an effective method for finding the distribution of Majorana zero modes by locating the zeros and poles of the characteristic function $g(\zeta)$ in momentum space. Moreover, topological phase transitions occur when the parameters admit zeros of the characteristic function on the critical contour $|\zeta|=1$; see Sec.~\ref{VI} for details.
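This zero-mode counting can be checked against direct diagonalization of the open chain. The sketch below builds the single-particle Bogoliubov-de Gennes matrix for the nearest-neighbor ($N_f=1$) case; the overall normalization of the matrix is irrelevant for the count of zero eigenvalues, which come in $\pm$ pairs by particle-hole symmetry:

```python
import numpy as np

def bdg_open_chain(L, Jp, Jm, mu):
    # BdG matrix of the open chain for N_f = 1 in the (c, c^dagger) basis:
    # A is the hopping/chemical-potential block, B the antisymmetric pairing block.
    S = np.diag(np.ones(L - 1), 1)              # S[j, j+1] = 1
    A = 0.5 * Jp * (S + S.T) - mu * np.eye(L)
    B = 0.5 * Jm * (S - S.T)
    return np.block([[A, B], [-B, -A]])

def zero_modes(L, Jp, Jm, mu, tol=1e-8):
    # number of (near-)zero single-particle eigenvalues
    ev = np.linalg.eigvalsh(bdg_open_chain(L, Jp, Jm, mu))
    return int(np.sum(np.abs(ev) < tol))
```

At the sweet spot $J_1^+=J_1^-$, $\mu=0$ (winding number $\nu=1$, since $g(\zeta)=J_1^x\zeta$ has its zero at the origin), the count is $2$, i.e., one exact Majorana mode per end; for $|\mu|$ large ($\nu=0$) the spectrum is gapped.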
\section{Quantum Fisher information of topological states} Given a generator $\mathcal{O}$ with respect to the parameter $t$, the quantum Fisher information of the pure ground state $|\mathcal{G}\rangle$ can be written as \cite{BRAUNSTEIN1994,Pezze2009,Giovannetti2011,Ma2011} \begin{eqnarray} F_Q[\mathcal{O},|\mathcal{G}\rangle]\ =\ 4(\Delta\mathcal{O})^2\ =\ 4(\langle\mathcal{O}^2\rangle_{\mathcal{G}}-\langle\mathcal{O}\rangle_{\mathcal{G}}^2). \end{eqnarray} For critical systems with $L$ sites, we consider the quantum Fisher information density of the form \begin{eqnarray} f_Q[\mathcal{O},|\mathcal{G}\rangle]=\frac{F_{Q}[\mathcal{O},|\mathcal{G}\rangle]}{L}, \end{eqnarray} and the violation of the inequality $f_Q\leq\kappa$ signals $(\kappa+1)$-partite entanglement ($1\leq\kappa\leq L-1$). For instance, we consider a Kitaev chain, which is a tight-binding model with tunneling strength $J$ and superconducting pairing $\Delta$ \cite{Kitaev2001}: \begin{equation} H=\sum_{j=1}^{L-1}\left(\frac{\Delta}{2}c_jc_{j+1}-\frac{J}{2}c_j^\dag c_{j+1}+\textrm{h.c.}\right)-{\mu}\sum_{j=1}^L\left(n_j-\frac{1}{2}\right), \end{equation} with the fermion number operator $n_j\equiv c_j^\dag c_j$. For $J=\Delta$ and zero chemical potential $\mu=0$, we have one Majorana zero mode at each end, and the Hamiltonian may be written, in terms of Majorana operators and the Dirac fermion operators \begin{equation} d_{j,1}=(b_j+ia_{j+1})/2, \end{equation} in the diagonal form \begin{eqnarray} H=i\frac{J}{2}\sum_{j=1}^{L-1}b_{j}a_{j+1}=\sum_{j=1}^{L-1}J\left(d_{j,1}^\dag d_{j,1}-\frac{1}{2}\right),\label{ge} \end{eqnarray} where we have a winding number $\nu=1$.
Here, to detect multipartite entanglement, one needs to choose a pair of nonlocal generators \cite{Pezze2017} \begin{eqnarray} \mathcal{O}_{\nu=1}=\sum_{j=1}^L\sigma^{x}_j/2,\hspace*{0.2in}\mathcal{O}_{\nu=1}^{(\textrm{st})}=\sum_{j=1}^L(-)^j\sigma^{x}_j/2.\ \ \ \ \end{eqnarray} Using the Jordan-Wigner transformation in the form \begin{equation} -\sigma^{x}_j=c_j^\dag \exp\left({i\pi\sum_{l=1}^{j-1}c_l^\dag c_l}\right)+\exp\left({-i\pi\sum_{l=1}^{j-1}c_l^\dag c_l}\right)c_j, \end{equation} the quantum Fisher information density of the ground state of the Kitaev chain can be written in terms of longitudinal spin-spin correlation functions: \begin{align} &f_{Q}[\mathcal{O}_{\nu=1},|\mathcal{G}\rangle]=1+\sum_{r=1}^{L-1}C_{\nu=1}(r),\\ &f_{Q}[\mathcal{O}_{\nu=1}^{(\textrm{st})},|\mathcal{G}\rangle]=1+\sum_{r=1}^{L-1}(-)^rC_{\nu=1}(r), \end{align} with respect to the generators $\mathcal{O}_{\nu=1}$ and $\mathcal{O}_{\nu=1}^{(\textrm{st})}$, respectively. Here, we have used the fact that $\langle\sigma_j^{x}\rangle_{\mathcal{G}}=0$ and considered a closed chain for $L\gg1$. Moreover, the $x$-directional longitudinal correlation function can be written as \begin{equation} C_{\nu=1}(r)=\left\langle\prod_{l=i}^{j-1}(- ib_{l}a_{l+1})\right\rangle_{\!\!\!\mathcal{G}} =\left\langle\prod_{l=i}^{j-1}(1-2d_{l,1}^\dag d_{l,1})\right\rangle_{\!\!\!\mathcal{G}}, \end{equation} which represents the average of the Majorana parity from site $i$ to site $j$ ($j-i=r$) and does not include the edge modes. For $J>0$, we have \begin{equation} \langle d_{l,1}^\dag d_{l,1}\rangle_{\mathcal{G}}=0, \end{equation} so the Majorana zero modes give \begin{equation} f_{Q}[\mathcal{O}_{\nu=1},|\mathcal{G}\rangle] = L, \end{equation} which signals maximal $L$-partite entanglement with the generator $\mathcal{O}_{\nu=1}$.
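The statement $f_Q=L$ can be reproduced for a small chain: on the spin side, the relevant maximally entangled ground state is a GHZ-type superposition of the two fully $x$-polarized states (this GHZ form is the assumption of the sketch below), for which the variance of $\mathcal{O}_{\nu=1}=\sum_j\sigma_j^x/2$ saturates the bound:

```python
import numpy as np
from functools import reduce

L = 6
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

# collective generator O = sum_j sigma_j^x / 2
O = 0.5 * sum(reduce(np.kron, [sx if k == j else I2 for k in range(L)])
              for j in range(L))

# GHZ-type state in the x basis: (|+...+> + |-...->)/sqrt(2)
plus = np.array([1.0, 1.0]) / np.sqrt(2.0)
minus = np.array([1.0, -1.0]) / np.sqrt(2.0)
ghz = (reduce(np.kron, [plus] * L) + reduce(np.kron, [minus] * L)) / np.sqrt(2.0)

mean = ghz @ (O @ ghz)                            # <O> = 0 by symmetry
fQ = 4.0 * (ghz @ (O @ O @ ghz) - mean**2) / L    # quantum Fisher information density
```

Since $\langle\mathcal{O}^2\rangle=(L/2)^2$ and $\langle\mathcal{O}\rangle=0$, one finds $f_Q=4(L/2)^2/L=L$, the maximal value.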
By contrast, for $J<0$, we have \begin{equation} \langle d_{l,1}^\dag d_{l,1}\rangle_{\mathcal{G}}=1, \end{equation} so the edge Majorana zero modes give \begin{equation} f_{Q}[\mathcal{O}_{\nu=1}^{(\textrm{st})},|\mathcal{G}\rangle] = L, \end{equation} with respect to the generator $\mathcal{O}_{\nu=1}^{(\textrm{st})}$. Therefore, the choice between the generator $\mathcal{O}_{\nu=1}$ and the staggered generator $\mathcal{O}_{\nu=1}^{(\textrm{st})}$ depends on the sign of the direct interaction between the chain ends, as discussed in Ref.~\cite{Kitaev2001}. These results also hold for the open chain, because the correlation function does not include the fermion edge modes. For the other case, we choose $J=-\Delta$ and $\mu=0$, where the winding number is $\nu=-1$. Then the quantum Fisher information density $f_Q$ of the ground state $|\mathcal{G}\rangle$ with respect to the generators \begin{eqnarray} \mathcal{O}_{\nu=-1}=\sum_{j=1}^L\sigma^{y}_j/2,\hspace*{0.2in}\mathcal{O}_{\nu=-1}^{(\textrm{st})}=\sum_{j=1}^L(-)^j\sigma^{y}_j/2,\ \ \ \ \end{eqnarray} can detect the symmetry-protected topological order and Majorana zero modes with $\nu=-1$.
\begin{figure*}[t] \centering \includegraphics[width=0.97\textwidth]{FigA1.eps}\\ \caption{(color online) The staggered string correlation functions $(-)^rC_\nu(r)$ versus the normalized distance $r/L$ for the extended Kitaev fermion chain with a system size $L=600$, third neighbor interactions ($N_f=3$) and nonzero parameters: ${J}_1^+={J}_1^-=1$, ${J}_{2}^+={J}_{2}^-=2$, ${J}_{3}^+={J}_{3}^-=2$.}\label{fig:A1} \end{figure*} The interchange between the quantum phases with positive and negative winding numbers $\nu=\pm1$ \begin{align} \mathcal{O}_{\nu=1}^{(\textrm{st})}\leftrightarrow\mathcal{O}_{\nu=-1}^{(\textrm{st})},&\hspace*{0.2in}\mathcal{O}_{\nu=1}\leftrightarrow\mathcal{O}_{\nu=-1}\\ {f}_Q[\mathcal{O}_{\nu=1}^{(\textrm{st})}]\leftrightarrow{f}_Q[\mathcal{O}_{\nu=-1}^{(\textrm{st})}],&\hspace*{0.2in}{f}_Q[\mathcal{O}_{\nu=1}]\leftrightarrow{f}_Q[\mathcal{O}_{\nu=-1}] \end{align} can be realized by a phase redefinition $c_j \rightarrow \pm i c_j$. Another interchange between the staggered operator $\mathcal{O}_{\nu=1}^{(\textrm{st})}$ and the operator $\mathcal{O}_{\nu=1}$, for the positive and negative signs of the interaction between Dirac fermions localized at the chain ends, respectively, \begin{align} \mathcal{O}_{\nu=1}^{(\textrm{st})}\leftrightarrow\mathcal{O}_{\nu=1},&\hspace*{0.2in}\mathcal{O}^{(\textrm{st})}_{\nu=-1}\leftrightarrow\mathcal{O}_{\nu=-1}\\ {f}_Q[\mathcal{O}_{\nu=1}^{(\textrm{st})}]\leftrightarrow{f}_Q[\mathcal{O}_{\nu=1}],&\hspace*{0.2in}{f}_Q[\mathcal{O}_{\nu=-1}^{(\textrm{st})}]\leftrightarrow{f}_Q[\mathcal{O}_{\nu=-1}] \end{align} can be realized by a Hermitian conjugate transformation $c_j \rightarrow c_j^\dag$. 
\begin{figure}[t] \centering \includegraphics[width=0.47\textwidth]{FigS2.eps}\\ \caption{(color online) Dual quantum Fisher information density $f_Q$ of the ground state $|\mathcal{G}\rangle$ with respect to the dual generators $\mathcal{O}_{\nu}$ and $\mathcal{O}_{\nu}^{(\textrm{st})}$ as a function of $L$ for the extended Kitaev fermion chain with longer-range interactions and with nonzero parameters: ${J}_1^+={J}_1^-=0.1$, ${J}_{2}^+={J}_{2}^-=0.21$, ${J}_{3}^+={J}_{3}^-=-0.74$, ${J}_4^+={J}_4^-=0.9$ ($N_f=4$), in different topological phases. (a) For $\mu=1$, the winding number $\nu=1$, and the fitting nontrivial scaling topological index $\lambda_{\nu=1}=0.9837$. (b) For $\mu=0.6$, $\nu=3$, and $\lambda_{\nu=3}=0.9941$. (c) For $\mu=0$, $\nu=2$, and $\lambda_{\nu=2}^{(\textrm{st})}=1.0051$. (d) For $\mu=4$, $\nu=4$, and $\lambda_{\nu=4}^{(\textrm{st})}=0.9933$. }\label{fig:S1} \end{figure} Generally for $\mu\neq0$, we can calculate the longitudinal correlation function by defining \begin{eqnarray} A_l=c_l^\dag+c_l=a_l,\hspace*{0.2in}B_{l}=c_l^\dag-c_l=-ib_l. \end{eqnarray} The correlation functions in the $x$ and $y$ directions can be written as \begin{align} C_{\nu=1}(r)&=\langle\mathcal{G}|B_iA_{i+1}...A_{j-1}B_{j-1}A_j|\mathcal{G}\rangle,\\ C_{\nu=-1}(r)&=-\langle\mathcal{G}|A_iB_{i+1}...B_{j-1}A_{j-1}B_j|\mathcal{G}\rangle, \end{align} where $j-i=r$. 
Using Wick's theorem, we can write the $x$-directional spin correlation function into a determinant of size $r$ \cite{BAROUCH1971} \begin{eqnarray} C_{\nu=1}(r)=\left|\begin{array}{c c c c} G_{-1}&G_{-2}&\cdots&G_{-r}\\ G_0&G_{-1}&\cdots&G_{-r+1}\\ G_{1}&G_0&\cdots&G_{-r+2}\\ \vdots&\vdots&\vdots&\vdots\\ G_{r-2}&G_{r-3}&\cdots&G_{-1} \end{array}\right|, \end{eqnarray} and similarly, we have the $y$-directional spin correlation function as \begin{eqnarray} C_{\nu=-1}(r)=\left|\begin{array}{c c c c} G_{1}&G_{0}&\cdots&G_{-r+2}\\ G_2&G_{1}&\cdots&G_{-r+3}\\ G_{3}&G_2&\cdots&G_{-r+4}\\ \vdots&\vdots&\vdots&\vdots\\ G_{r}&G_{r-1}&\cdots&G_{1} \end{array}\right|, \end{eqnarray} where we have \begin{align} G_{-r}\equiv\langle\mathcal{G}|B_iA_{i+r}|\mathcal{G}\rangle \end{align} and $\langle\mathcal{G}|A_iA_{j}|\mathcal{G}\rangle=\langle\mathcal{G}|B_iB_{j}|\mathcal{G}\rangle=\delta_{ij}$. \begin{table*}[t] \begin{tabular}{c|c c |c c| c c| c c} \hline \hline $~~~\mu~~~$ & $\lambda_{\nu=1}^{(\textrm{st})}$ & $\lambda_{\nu=1}$ & $\lambda_{\nu=2}^{(\textrm{st})}$ & $\lambda_{\nu=2}$ & $\lambda_{\nu=3}^{(\textrm{st})}$ & $\lambda_{\nu=3}$ & $\lambda_{\nu=4}^{(\textrm{st})}$ & $\lambda_{\nu=4}$\\ \hline 1& $4.8\times10^{-7}$ & \textcolor{blue}{0.9837}& $-2.0\times10^{-6}$& $2.1\times10^{-6}$ & $-8.0\times10^{-7}$& $4.4\times10^{-5}$ & $1.9\times10^{-6}$& $5.2\times10^{-7}$\\ 0.6& $-8.6\times10^{-8}$ & $8.0\times10^{-8}$& $-1.3\times10^{-7}$& $3.3\times10^{-8}$& $-6.9\times10^{-8}$ & \textcolor{blue}{0.9941}& $7.4\times10^{-7}$& $-3.5\times10^{-8}$\\ $0$ & $5.8\times10^{-14}$ & $-6.7\times10^{-14}$ & $3.1\times10^{-14}$ & $1.5\times10^{-14}$ & $6.1\times10^{-14}$ & $-5.5\times10^{-14}$ & \textcolor{blue}{1.0051}& $2.5\times10^{-13}$ \\ $-1$ & $9.5\times10^{-14}$ & $-2.4\times10^{-13}$ & \textcolor{blue}{0.9933} & $2.1\times10^{-14}$ & $-2.2\times10^{-14}$ & $-1.6\times10^{-13}$ & $3.3\times10^{-14}$& $3.8\times10^{-14}$ \\ \hline \hline \end{tabular} \caption{Fitting of the 
scaling coefficients $\lambda_{\nu}$ and $\lambda_{\nu}^{(\textrm{st})}$ with respect to the dual generators $\mathcal{O}_{\nu}$ and $\mathcal{O}_{\nu}^{(\textrm{st})}$, respectively, for the different topological phases for the extended Kitaev fermion chain with parameters ${J}_1^+={J}_1^-=0.1$, ${J}_{2}^+={J}_{2}^-=0.21$, ${J}_{3}^+={J}_{3}^-=-0.74$, ${J}_4^+={J}_4^-=0.9$ ($N_f=4$), and chain length up to $L=2000$. The four essentially non-zero scaling coefficients are shown in blue font, and all four are close to $1$.}\label{tab:1} \end{table*} \section{Duality Transformation} A duality transformation connects different but equivalent mathematical descriptions of a system or a state of matter through a change of variables \cite{FRADKIN1978,Smacchia2011,Feng2007,Qin2017}. For example, an Ising chain with an external field $h$ has a self-duality symmetry, mapping between the ordered and disordered phases, expressed as \begin{align} H_{\textrm{Ising}}=\sum_j(\sigma_j^x\sigma_{j+1}^x+h\sigma_j^z)=h\sum_j(s_j^xs_{j+1}^x+h^{-1}s_j^z) \end{align} with the duality transformation \begin{align} s_j^x=\prod_{k\leq j}\sigma_k^z,\hspace{0.2in}s_j^z=\sigma_j^x\sigma_{j+1}^x, \hspace{0.2in}s_j^y=-is_j^zs_j^x, \end{align} where $\sigma$ and $s$ satisfy the same Pauli algebra. By this duality transformation, the cluster Ising model \cite{Smacchia2011,Cui2013} can be mapped to an anisotropic $XY$ model \begin{align} H_{\textrm{cluster}}&=\sum_j(\sigma_{j-1}^x\sigma_{j}^z\sigma_{j+1}^x+h\sigma_j^z)\\ &=\sum_j(-s_j^ys_{j+1}^y+hs_j^xs_{j+1}^x),\label{dI} \end{align} whose ordered phase characterizes the symmetry-protected topological phase associated with the $\mathbf{Z}_2\times\mathbf{Z}_2$ symmetry of the cluster Ising model. 
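As a quick consistency check, the defining relations of the dual spin variables above can be verified numerically on a small chain. The sketch below is a minimal NumPy example (the three-site chain and the helper names are ours, purely for illustration); it confirms the relation $s_j^x s_{j+1}^x=\sigma_{j+1}^z$ that underlies the Hamiltonian mapping, and that $s_j^x$ and $s_j^z$ anticommute on the same dual site:

```python
import numpy as np

I2, sx, sz = np.eye(2), np.array([[0., 1.], [1., 0.]]), np.diag([1., -1.])

def embed(ops_by_site, L):
    """Tensor product over an L-site chain; `ops_by_site` maps site -> 2x2 op."""
    out = np.array([[1.0]])
    for s in range(1, L + 1):
        out = np.kron(out, ops_by_site.get(s, I2))
    return out

L = 3
s1x = embed({1: sz}, L)           # s_1^x = sigma_1^z
s2x = embed({1: sz, 2: sz}, L)    # s_2^x = sigma_1^z sigma_2^z
s1z = embed({1: sx, 2: sx}, L)    # s_1^z = sigma_1^x sigma_2^x

# s_j^x s_{j+1}^x = sigma_{j+1}^z, and the dual operators anticommute on site
print(np.allclose(s1x @ s2x, embed({2: sz}, L)))   # -> True
print(np.allclose(s1x @ s1z + s1z @ s1x, 0.0))     # -> True
```

The first relation is exactly what turns the transverse-field term $h\sigma_j^z$ into the dual Ising coupling $h\,s_{j-1}^x s_j^x$.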
Therefore, as shown in \cite{Smacchia2011,Cui2013}, this symmetry-protected topological phase can be characterized by the \emph{nonlocal} string correlation function \cite{Venuti2005}, which equals a \emph{local} correlator in the dual lattice of the Ising model, of the form \begin{align} (-)^rC_{\nu=2}(r)&=(-)^r\langle s_j^ys_{j+r}^y\rangle_{\mathcal{G}}\\ &=(-)^r\left\langle\sigma_j^x\sigma_{j+1}^y\left(\prod_{k=2}^{r-1}\sigma_{j+k}^z\right)\sigma_{j+r}^y\sigma_{j+r+1}^x\right\rangle_{\mathcal{G}},\label{unlocal} \end{align} from site $j$ to $(j+r)$ in the dual lattice. It is shown in Ref.~\cite{Cobanera2011} that the Jordan-Wigner transformation mapping between a one-dimensional spin-$\frac{1}{2}$ model and a free fermion chain can also be regarded as a duality transformation within the bond-algebraic approach. Through the Jordan-Wigner transformation, the cluster Ising model corresponds to an extended Kitaev chain with a $\textbf{Z}_4$ symmetry. Thus, the self-duality properties of the Ising model (\ref{dI}) can help to study topological phases and multipartite entanglement in the symmetry-protected phase with a winding number $\nu=2$ in the extended Kitaev chain. Generally, we find that for the extended Kitaev chain the string correlation function can be written as a spin correlation function of the dual spin operators arising from the self-duality symmetry of the extended Ising model. The duality transformation for topological phases with a winding number $\nu=2$ can be written as \begin{align} &\mathbb{Z}_j^{{(2)}}=\sigma_j^x\sigma_{j+1}^x,\hspace*{0.2in}\mathbb{X}_j^{{(2)}}=\prod_{l=1}^{j}\sigma_l^z,\\ &\mathbb{Y}_j^{{(2)}}=-i\mathbb{Z}_j^{{(2)}}\mathbb{X}_j^{{(2)}}=-\left(\prod_{l=1}^{j-1}\sigma_l^z\right)\sigma_j^y\sigma_{j+1}^x \end{align} which implies that \begin{equation} \mathbb{X}_j^{{(2)}} \mathbb{X}_{j+1}^{{(2)}}=\sigma_{j+1}^z. 
\end{equation} Therefore, the duality transformation connects two Ising models as \begin{eqnarray} \sum_{j=1}^L\sigma_j^x\sigma_{j+1}^x+\mu\sigma_j^z=\sum_{j=1}^L\mathbb{Z}_j^{{(2)}}+\mu\mathbb{X}_j^{{(2)}}\mathbb{X}_{j+1}^{{(2)}}. \end{eqnarray} The spin correlation function with dual $y$-directional spin operators between sites $i$ and $j{=i+r}$ equals the string correlation function: \begin{align} {C_{\nu=2}(r)}=\left\langle\mathbb{Y}_i^{{(2)}}\mathbb{Y}_j^{{(2)}}\right\rangle_{\mathcal{G}}=\left\langle\prod_{l=i}^{j-1}\sigma_l^{x}\sigma_{l+1}^{z}\sigma_{l+2}^x\right\rangle_{\!\!\!\mathcal{G}}. \end{align} Similarly, the duality transformation for topological phases with $\nu=-2$ can be written as \begin{align} &{\mathbb{Z}}_j^{{(-2)}}=\sigma_j^y\sigma_{j+1}^y,\hspace*{0.2in}{\mathbb{Y}}_j^{{(-2)}}=\prod_{l=1}^{j}\sigma_l^z,\\ &{\mathbb{X}}_j^{{(-2)}}=-i{\mathbb{Y}}_j^{{(-2)}}{\mathbb{Z}}_j^{{(-2)}}=-\left(\prod_{l=1}^{j-1}\sigma_l^z\right)\sigma_j^x\sigma_{j+1}^y \end{align} which implies that \begin{equation} {\mathbb{X}}_j^{{(-2)}} {\mathbb{X}}_{j+1}^{{(-2)}}=\sigma_{j+1}^z, \end{equation} and \begin{equation} \sum_{j=1}^L\sigma_j^y\sigma_{j+1}^y+\mu\sigma_j^z=\sum_{j=1}^L\mathbb{Z}_j^{{(-2)}}+\mu\mathbb{Y}_j^{{(-2)}}\mathbb{Y}_{j+1}^{{(-2)}}. \end{equation} The dual $x$-directional correlation function between sites $i$ and $j{=i+r}$ equals the string correlation function \begin{align} {C_{\nu=-2}(r)}=\left\langle{\mathbb{X}}_i^{{(-2)}}{\mathbb{X}}_j^{{(-2)}}\right\rangle_{\mathcal{G}} =\left\langle\prod_{l=i}^{j-1}\sigma_l^{y}\sigma_{l+1}^{z}\sigma_{l+2}^y\right\rangle_{\!\!\!\mathcal{G}}. \end{align} We can therefore define the dual spin operators as \begin{eqnarray}\left\{ \begin{array}{ll} \tau_j^{{(2)}}=\mathbb{Y}_j^{{(2)}},&\textrm{for}\ \nu=2,\\ \tau_j^{{(-2)}}=\mathbb{X}_j^{{(-2)}},&\textrm{for}\ \nu=-2. \end{array}\right. 
\end{eqnarray} The duality transformation for $\nu=3$ can be written as \begin{align} &\mathbb{Z}_j^{{(3)}}=\sigma_j^x\sigma_{j+1}^z\sigma_{j+2}^x,\hspace*{0.2in}\mathbb{X}_j^{{(3)}}=\sigma_{j+1}^x,\\ &\mathbb{Y}_j^{{(3)}}=-i\mathbb{Z}_j^{{(3)}}\mathbb{X}_j^{{(3)}}=\sigma_j^x\sigma_{j+1}^y\sigma_{j+2}^x \end{align} which implies that \begin{equation} \mathbb{X}_j^{{(3)}}\mathbb{Z}_{j+1}^{{(3)}}\mathbb{X}_{j+2}^{{(3)}}=\sigma_{j+2}^z. \end{equation} The duality transformation for $\nu=-3$ can be written as \begin{align} &\mathbb{Z}_j^{{(-3)}}=\sigma_j^y\sigma_{j+1}^z\sigma_{j+2}^y,\hspace*{0.2in}\mathbb{Y}_j^{{(-3)}}=\sigma_{j+1}^y,\\ &\mathbb{X}_j^{{(-3)}}=-i\mathbb{Y}_j^{{(-3)}}\mathbb{Z}_j^{{(-3)}}=\sigma_j^y\sigma_{j+1}^x\sigma_{j+2}^y \end{align} which implies that \begin{equation} \mathbb{Y}_j^{{(-3)}}\mathbb{Z}_{j+1}^{{(-3)}}\mathbb{Y}_{j+2}^{{(-3)}}=\sigma_{j+2}^z. \end{equation} Thus, we can define the dual spin operators as \begin{eqnarray}\left\{ \begin{array}{ll} \tau_j^{{(3)}}=\mathbb{Y}_j^{{(3)}},&\textrm{for}\ \nu=3,\\ \tau_j^{{(-3)}}=\mathbb{X}_j^{{(-3)}},&\textrm{for}\ \nu=-3. \end{array}\right. \end{eqnarray} Generally, the form of the string correlation functions and dual spin operators depends on the parity of the winding number \cite{Fidkowski2011}. We first consider odd winding numbers with $p>1$. For positive odd winding numbers $\nu=2p-1$, we have \begin{align} &\mathbb{Z}_j^{(2p-1)}=\sigma_j^x\left(\prod_{l=1}^{2p-3}\sigma_{j+l}^z\right)\sigma_{j+2p-2}^x,\\ &\mathbb{X}_j^{(2p-1)}=\left(\prod_{l=1}^{p-2}\sigma_{j+2l-1}^x\sigma_{j+2l}^y\right)\sigma_{j+2p-3}^x,\\ &\mathbb{Y}_j^{(2p-1)}=\sigma_j^x\left(\prod_{l=1}^{p-1}\sigma_{j+2l-1}^y\sigma_{j+2l}^x\right), \end{align} which implies \begin{equation} \mathbb{X}_j^{(2p-1)}\left(\prod_{l=1}^{2p-3}\mathbb{Z}_{j+l}^{(2p-1)}\right)\mathbb{X}_{j+2p-2}^{(2p-1)}=\sigma_{j+2p-2}^z. 
\end{equation} For negative odd winding numbers $\nu=1-2p$, we have \begin{align} &{\mathbb{Z}}_j^{(1-2p)}=\sigma_j^y\left(\prod_{l=1}^{2p-3}\sigma_{j+l}^z\right)\sigma_{j+2p-2}^y,\\ &{\mathbb{Y}}_j^{(1-2p)}=\left(\prod_{l=1}^{p-2}\sigma_{j+2l-1}^y\sigma_{j+2l}^x\right)\sigma_{j+2p-3}^y,\\ &{\mathbb{X}}_j^{(1-2p)}=\sigma_j^y\left(\prod_{l=1}^{p-1}\sigma_{j+2l-1}^x\sigma_{j+2l}^y\right), \end{align} which implies \begin{equation} \mathbb{Y}_j^{(1-2p)}\left(\prod_{l=1}^{2p-3}\mathbb{Z}_{j+l}^{(1-2p)}\right)\mathbb{Y}_{j+2p-2}^{(1-2p)}=\sigma_{j+2p-2}^z. \end{equation} Thus, we can define the dual spin operators as \begin{eqnarray} \left\{ \begin{array}{ll} \tau_j^{(2p-1)}=\mathbb{Y}_j^{(2p-1)},&\textrm{for}\ \nu=2p-1,\\ \tau_j^{(1-2p)}=\mathbb{X}_j^{(1-2p)},&\textrm{for}\ \nu=1-2p. \end{array}\right. \end{eqnarray} We then consider the even winding numbers with $p>1$: For positive even winding numbers $\nu=2p$, we have \begin{align} &\mathbb{Z}_j^{{(2p)}}=\sigma_j^x\left(\prod_{l=1}^{2p-2}\sigma_{j+l}^z\right)\sigma_{j+2p-1}^x,\\ &\mathbb{X}_j^{{(2p)}}=\left(\prod_{k=1}^{j}\sigma_k^z\right)\left(\prod_{l=1}^{p-1}\sigma_{j+2l-1}^y\sigma_{j+2l}^x\right)\\ &\mathbb{Y}_j^{{(2p)}}=-\left(\prod_{k=1}^{j-1}\sigma_k^z\right)\left(\prod_{l=1}^{p}\sigma_{j+2l-2}^y\sigma_{j+2l-1}^x\right) \end{align} which implies \begin{equation} \mathbb{X}_j^{{(2p)}}\left(\prod_{l=1}^{2p-2}\mathbb{Z}_{j+l}^{{(2p)}}\right)\mathbb{X}_{j+2p-1}^{{(2p)}}=\sigma_{j+2p-1}^z. 
\end{equation} For negative even winding numbers $\nu=-2p$, we have \begin{align} &{\mathbb{Z}}_j^{{(-2p)}}=\sigma_j^y\left(\prod_{l=1}^{2p-2}\sigma_{j+l}^z\right)\sigma_{j+2p-1}^y,\\ &{\mathbb{Y}}_j^{{(-2p)}}=\left(\prod_{k=1}^{j}\sigma_k^z\right)\left(\prod_{l=1}^{p-1}\sigma_{j+2l-1}^x\sigma_{j+2l}^y\right)\\ &{\mathbb{X}}_j^{{(-2p)}}=-\left(\prod_{k=1}^{j-1}\sigma_k^z\right)\left(\prod_{l=1}^{p}\sigma_{j+2l-2}^x\sigma_{j+2l-1}^y\right) \end{align} which implies \begin{equation} \mathbb{Y}_j^{{(-2p)}}\left(\prod_{l=1}^{2p-2}\mathbb{Z}_{j+l}^{{(-2p)}}\right)\mathbb{Y}_{j+2p-1}^{{(-2p)}}=\sigma_{j+2p-1}^z. \end{equation} Thus, we can write the dual spin operators as \begin{eqnarray}\left\{ \begin{array}{ll} \tau_j^{{(2p)}}=\mathbb{Y}_j^{{(2p)}},&\textrm{for}\ \nu=2p,\\ \tau_j^{{(-2p)}}=\mathbb{X}_j^{{(-2p)}},&\textrm{for}\ \nu=-2p. \end{array}\right. \end{eqnarray} \begin{figure}[t] \centering \includegraphics[width=0.46\textwidth]{FigS3.eps}\\ \caption{(color online) Quantum Fisher information density $f_Q[\mathcal{O}_\nu^{(\textrm{st})},|\mathcal{G}\rangle]$ as a function of $L$ for the extended Kitaev fermion chain with nonzero parameters ${J}_1^+={J}_1^-=1$, ${J}_{2}^+={J}_{2}^-=2$, ${J}_{3}^+={J}_{3}^-=2$ ($N_f=3$) on the contour between different topological phases for (a) $\mu=5$, (b) $\mu=\sqrt{3}-1$, (c) $\mu=-1$, and (d) $\mu=-\sqrt{3}-1$. 
The scaling coefficients $\lambda_\nu^{(\textrm{st})}$ are shown in Tab.~\ref{tab:2}.}\label{fig:S2} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.467\textwidth]{FigS4.eps}\\ \caption{(color online) Quantum Fisher information density $f_Q$ of the ground state $|\mathcal{G}\rangle$ with respect to the dual generators $\mathcal{O}_{\nu}$ and $\mathcal{O}_{\nu}^{(\textrm{st})}$ as a function of $L$ for the extended Kitaev fermion chain when $\mu=1$ with nonzero parameters: (a) ${J}_1^+={J}_1^-=1$ ($N_f=1$); (b) ${J}_2^+={J}_2^-=1$ ($N_f=2$); (c) ${J}_3^+={J}_3^-=1$ ($N_f=3$); and (d) ${J}_4^+={J}_4^-=1$ ($N_f=4$). The scaling coefficients $\lambda_{\nu}$ and $\lambda_{\nu}^{(\textrm{st})}$ are shown in Tab.~\ref{tab:3}. }\label{fig:S3} \end{figure} \section{Quantum Fisher information density and string correlation functions} For higher winding numbers $\nu=\pm2,\pm3,\cdots$, the quantum Fisher information with respect to the dual generators \begin{equation} {\mathcal{O}_\nu}=\sum_{j=1}^{M}\tau_j^{{(\nu)}},\hspace*{0.2in}{\mathcal{O}_\nu^{(\textrm{st})}}=\sum_{j=1}^{M}(-)^j\tau_j^{{(\nu)}} \end{equation} can be written as \begin{align} &{F_{Q}[\mathcal{O}_{\nu},|\mathcal{G}\rangle]}=M+M\sum_{r=1}^{M-1}\langle\tau_i^{{{(\nu)}}}\tau_{i+r}^{{{(\nu)}}}\rangle_{\mathcal{G}}\\ &{F_{Q}[\mathcal{O}^{(\textrm{st})}_{\nu},|\mathcal{G}\rangle]}=M+M\sum_{r=1}^{M-1}(-)^{r}\langle\tau_i^{{{(\nu)}}}\tau_{i+r}^{{{(\nu)}}}\rangle_{\mathcal{G}} \end{align} where $(\tau_{j}^{{(\nu)}})^2=\mathbb{I}$, with $\mathbb{I}$ the identity, and we let \begin{equation} M\equiv L-|\nu|+1. 
\end{equation} In the thermodynamic limit $L\gg N_f\geq1$, we can obtain the dual quantum Fisher information density as \begin{align} &{f_{Q}[\mathcal{O}_{\nu},|\mathcal{G}\rangle]}=\frac{{F_{Q}[\mathcal{O}_{\nu},|\mathcal{G}\rangle]}}{L}=1+\sum_{r=1}^{L-|\nu|}{C_\nu}(r),\\ &{f_{Q}[\mathcal{O}^{(\textrm{st})}_{\nu},|\mathcal{G}\rangle]}=\frac{{F_{Q}[\mathcal{O}^{(\textrm{st})}_{\nu},|\mathcal{G}\rangle]}}{L}=1+\sum_{r=1}^{L-|\nu|}{(-)^{r}C_\nu}(r), \label{scf} \end{align} where $M\simeq L$ since $|\nu|\leq N_f$, and \begin{eqnarray} C_{\nu}(r)\equiv\langle\tau_i^{{(\nu)}}\tau_{i+r}^{{(\nu)}}\rangle_{\mathcal{G}} \end{eqnarray} is the so-called string correlation function \cite{Venuti2005,Cui2013} from site $i$ to $j=i+r$ in the dual lattice. The string correlation function has been shown to reveal hidden order protected by a $\mathbf{Z}$ symmetry in many topological systems \cite{Venuti2005,Cui2013,Feng2007,Smacchia2011}. It is convenient to rewrite the string correlation function in terms of Majorana operators and the fermion operators \begin{equation} d_{l,\nu}=(b_l+ia_{l+\nu})/{2},\hspace{0.2in}d_{l,\nu}^\dag=(b_l-ia_{l+\nu})/{2} \end{equation} as \begin{equation}\label{cf} C_{\nu}(r)=\left\langle\prod_{l=i}^{j-1}(-ib_{l}a_{l+\nu})\right\rangle_{\!\!\!\mathcal{G}}\!=\left\langle\prod_{l=i}^{j-1}(1-2d_{l,\nu}^\dag d_{l,\nu})\right\rangle_{\!\!\!\mathcal{G}}. \end{equation} Usually, the string correlation function is written in terms of Pauli matrices as \begin{eqnarray} {C_{\nu}(r)}=\left\langle\prod_{l=i}^{j-1}\Big(\sigma_l^{\alpha}\sigma_{l+|\nu|}^{\alpha}\prod_{k=l+1}^{l+|\nu|-1}\sigma_k^z\Big)\right\rangle_{\!\!\!\mathcal{G}}, \end{eqnarray} where $\alpha=x$ for positive $\nu$, and $\alpha=y$ for negative $\nu$. 
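The two sums above are straightforward to evaluate once the string correlations are known. A minimal sketch of the staggered dual QFI density follows (Python; the correlation functions passed in are hypothetical stand-ins, not the model's actual $C_\nu(r)$):

```python
import numpy as np

def fq_staggered(C, L, nu):
    """Dual QFI density f_Q[O_nu^(st)] = 1 + sum_{r=1}^{L-|nu|} (-1)^r C_nu(r).
    `C` is a hypothetical function returning the string correlation C_nu(r)."""
    rs = np.arange(1, L - abs(nu) + 1)
    return 1.0 + float(np.sum((-1.0) ** rs * np.array([C(r) for r in rs])))

# Long-range staggered order C(r) = (-1)^r * c0 gives linear growth in L:
print(fq_staggered(lambda r: (-1) ** r * 0.5, L=100, nu=1))            # -> 50.5
# Exponentially decaying correlations give an L-independent constant:
print(round(fq_staggered(lambda r: (-1) ** r * 3.0 ** (-r), 100, 1), 6))  # -> 1.5
```

The two toy inputs illustrate precisely the dichotomy exploited in the next paragraph: $f_Q\propto L$ in the ordered (topological) case versus $f_Q=\mathrm{const}$ when $C_\nu(r)$ decays exponentially.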
The interchange between the quantum phases with positive and negative winding numbers $\nu=\pm n$ ($n$ is a positive integer), \begin{align} {\mathcal{O}_{\nu=n}^{(\textrm{st})}}\leftrightarrow{\mathcal{O}_{\nu=-n}^{(\textrm{st})}},&\hspace*{0.2in}{\mathcal{O}_{\nu=n}}\leftrightarrow{\mathcal{O}_{\nu=-n}}\\ {{f}_Q[\mathcal{O}_{\nu=n}^{(\textrm{st})}]}\leftrightarrow{{f}_Q[\mathcal{O}_{\nu=-n}^{(\textrm{st})}]},&\hspace*{0.2in}{{f}_Q[\mathcal{O}_{\nu=n}]}\leftrightarrow{{f}_Q[\mathcal{O}_{\nu=-n}]} \end{align} can be realized by a phase redefinition $c_j \rightarrow \pm i c_j$. Another interchange, between the staggered operators $\mathcal{O}_{\nu}^{(\textrm{st})}$ and the operators $\mathcal{O}_{\nu}$, corresponding to positive and negative signs of the interaction between the Dirac fermions localized at the chain ends, respectively, \begin{align} {\mathcal{O}_{\nu=n}^{(\textrm{st})}}\leftrightarrow{\mathcal{O}_{\nu=n}},&\hspace*{0.2in}{\mathcal{O}^{(\textrm{st})}_{\nu=-n}}\leftrightarrow{\mathcal{O}_{\nu=-n}}\\ {{f}_Q[\mathcal{O}_{\nu=n}^{(\textrm{st})}]}\leftrightarrow{{f}_Q[\mathcal{O}_{\nu=n}]},&\hspace*{0.2in}{{f}_Q[\mathcal{O}_{\nu=-n}^{(\textrm{st})}]}\leftrightarrow{{f}_Q[\mathcal{O}_{\nu=-n}]} \end{align} can be realized by the Hermitian conjugation $c_j \rightarrow c_j^\dag$. Following the calculations in the previous sections, we can write the string correlation function as a determinant of size $r-|\nu|+1$: \begin{equation} {C_{\nu}(r)}=\left|\begin{array}{c c c c} G_{-\nu}&G_{-\nu-1}&\cdots&G_{-r}\\ G_{1-\nu}&G_{-\nu}&\cdots&G_{1-r}\\ \vdots&\vdots&\vdots&\vdots\\ G_{r-2\nu}&G_{r-2\nu-1}&\cdots&G_{-\nu} \end{array}\right| \end{equation} for positive $\nu$ and \begin{equation} {C_{\nu}(r)}=\left|\begin{array}{c c c c} G_{-\nu}&G_{-\nu-1}&\cdots&G_{-r-2\nu}\\ G_{1-\nu}&G_{-\nu}&\cdots&G_{1-r-2\nu}\\ \vdots&\vdots&\vdots&\vdots\\ G_{r}&G_{r-1}&\cdots&G_{-\nu} \end{array}\right| \end{equation} for negative $\nu$. 
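Numerically, the determinant representation for positive $\nu$ amounts to building a Toeplitz matrix and taking its determinant; the entry rule $M_{ab}=G_{a-\nu-b}$ is read off from the first determinant above. The input function $G$ in the sketch below is a hypothetical stand-in for the actual contractions:

```python
import numpy as np

def string_correlation_det(G, r, nu):
    """C_nu(r) as a Toeplitz determinant of size r - nu + 1 with entries
    M[a, b] = G_{a - nu - b} (positive nu).  `G` maps an integer k to the
    contraction G_k = <B_i A_{i+k}>; here it is a dummy test function."""
    size = r - nu + 1
    M = np.array([[G(a - nu - b) for b in range(size)] for a in range(size)])
    return np.linalg.det(M)

# Hypothetical check with nu = 1 and G_k = -delta_{k,-1} (fully dimerized
# chain): the matrix is -I and C(r) = (-1)^(size).
G_dimer = lambda k: -1.0 if k == -1 else 0.0
print(round(string_correlation_det(G_dimer, 4, 1), 6))   # -> 1.0
print(round(string_correlation_det(G_dimer, 3, 1), 6))   # -> -1.0
```

For $\nu=1$ this reduces to the $r\times r$ determinant for $C_{\nu=1}(r)$ given earlier.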
Because the string correlation function decays exponentially with the distance $r$ when the hidden $\mathbf{Z}$ symmetry is broken (see, for example, Fig.~\ref{fig:A1}), the quantum Fisher information density as a function of $L$ has the scaling form in the thermodynamic limit \begin{align} {f}_{Q}[\mathcal{O}_\nu, |\mathcal{G}\rangle]&\simeq1+\gamma_\nu L^{\lambda_\nu},\\ {f}_{Q}[\mathcal{O}^{(\textrm{st})}_\nu, |\mathcal{G}\rangle]&\simeq1+\gamma_\nu^{(\textrm{st})} L^{\lambda_\nu^{(\textrm{st})}}. \end{align} The density grows linearly, \begin{equation} \lambda_\nu\ \textrm{or}\ \lambda_\nu^{(\textrm{st})}\simeq1, \end{equation} in the topological quantum phase with winding number $\nu$, and is constant, \begin{equation} \lambda_\nu\ \textrm{and}\ \lambda_\nu^{(\textrm{st})}\simeq0, \end{equation} in the other phases; see Fig.~\ref{fig:S1} for an example. Thus, the scaling coefficient $\lambda_\nu$ or $\lambda_\nu^{(\textrm{st})}$ obtained by numerical calculations can identify the topological phases with higher winding numbers; see the numerical results in Tab.~\ref{tab:1}. 
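In practice the scaling coefficient is extracted by a fit; a minimal sketch (assuming $f_Q-1>0$ so that the log-log regression is well defined; the synthetic data are ours, for illustration only):

```python
import numpy as np

def fit_scaling_exponent(Ls, fQs):
    """Extract lambda from f_Q(L) ~ 1 + gamma * L**lambda by a linear fit
    of log(f_Q - 1) versus log(L); returns (lambda, gamma)."""
    x = np.log(np.asarray(Ls, dtype=float))
    y = np.log(np.asarray(fQs, dtype=float) - 1.0)
    lam, log_gamma = np.polyfit(x, y, 1)
    return lam, np.exp(log_gamma)

# Synthetic data mimicking a topological phase (lambda = 1, gamma = 0.5):
Ls = np.arange(100, 2001, 100)
lam, gamma = fit_scaling_exponent(Ls, 1.0 + 0.5 * Ls)
print(round(lam, 6), round(gamma, 6))   # -> 1.0 0.5
```

A fitted $\lambda$ close to $1$ then flags the topological phase, while $\lambda\simeq0$ (i.e. $f_Q-1$ independent of $L$) flags the trivial ones, matching the fitted values reported in Tab.~\ref{tab:1}.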
\begin{table}[b] \begin{tabular}{c|c c c} \hline \hline $\mu$ & {$\lambda_{\nu=1}^{(\textrm{st})}$} & ${\lambda_{\nu=2}^{(\textrm{st})}}$ & ${\lambda_{\nu=3}^{(\textrm{st})}}$ \\ \hline 6\footnote{Inside topological phases.}& $2.8\times10^{-5}$ & $-4.3\times10^{-7}$& $-1.6\times10^{-6}$\\ $3$ & \textcolor{blue}{0.9965}& $9.4\times10^{-14}$& $2.5\times10^{-13}$\\ $0$ &$-4.2\times10^{-14}$ &$1.4\times10^{-13}$ & \textcolor{blue}{1.0047}\\ $-2$ & $-5.6\times10^{-7}$ & \textcolor{blue}{0.9957} & $2.9\times10^{-7}$\\ \hline 5\footnote{On the critical contour between phases.}& $\textcolor{blue}{0.7492}$ & $4.1\times10^{-7}$& $-1.9\times10^{-6}$\\ $\sqrt{3}-1$ & $\textcolor{blue}{0.5054}$& $-2.8\times10^{-3}$& $\textcolor{blue}{0.5165}$\\ $-1$ &$6.8\times10^{-5}$ &\textcolor{blue}{0.7518} & \textcolor{blue}{0.7547}\\ $-\sqrt{3}-1$ & $1.0\times10^{-3}$ &\textcolor{blue}{0.5088}& $-5.6\times10^{-4}$\\ \hline \hline \end{tabular} \caption{Fitting of the scaling coefficients $\lambda_{\nu}^{(\textrm{st})}$ of the dual quantum Fisher information density ${f}_Q[\mathcal{O}_{\nu}^{(\textrm{st})},|\mathcal{G}\rangle]$ inside different topological phases and on the critical contour between phases for the extended Kitaev fermion chain with nonzero parameters ${J}_1^+={J}_1^-=1$, ${J}_{2}^+={J}_{2}^-=2$, ${J}_{3}^+={J}_{3}^-=2$ ($N_f=3$), and chain length up to $L=2000$. The nine essentially non-zero scaling coefficients are shown in blue font. 
}\label{tab:2} \end{table} \begin{table*}[t] \begin{tabular}{c|c c |c c| c c| c c} \hline \hline ~~~~~~~~$g(\zeta)$~~~~~~~~ & ~~~~~~~{$\lambda_{\nu=1}^{(\textrm{st})}$}~~~~~~~ & ~~~~~~~{$\lambda_{\nu=1}$}~~~~~~~ & ~~~~~~~{$\lambda_{\nu=2}^{(\textrm{st})}$}~~~~~~~ & ~~~~~~~{$\lambda_{\nu=2}$}~~~~~~~ & ~~~~~~~{$\lambda_{\nu=3}^{(\textrm{st})}$}~~~~~~~ & ~~~~~~~{$\lambda_{\nu=3}$}~~~~~~~ & ~~~~~~~{$\lambda_{\nu=4}^{(\textrm{st})}$}~~~~~~~ & ~~~~~~~{$\lambda_{\nu=4}$}~~~~~~~\\ \hline $\zeta-1$& \textcolor{blue}{0.7506} & $<10^{-5}$& $<10^{-5}$& $<10^{-3}$ & $<10^{-5}$& $<10^{-4}$ & $<10^{-4}$& $<10^{-4}$\\ $\zeta^2-1$& \textcolor{blue}{0.5072} & \textcolor{blue}{0.5072}& \textcolor{blue}{0.5040}& $<10^{-4}$ & $<10^{-3}$ & $<10^{-16}$& $<10^{-4}$& $<10^{-16}$\\ $\zeta^3-1$& \textcolor{blue}{0.2873} & 0.0043 & $<10^{-3}$ & \textcolor{blue}{0.2441} & \textcolor{blue}{0.2809} & $<10^{-16}$ & $<10^{-3}$ & $<10^{-16}$ \\ $\zeta^4-1$ & \textcolor{blue}{0.1313} & \textcolor{blue}{0.1313} & \textcolor{blue}{0.0950} & \textcolor{blue}{0.0950} & \textcolor{blue}{0.0745} & $<10^{-16}$ & \textcolor{blue}{0.1223} & $<10^{-16}$ \\ \hline \hline \end{tabular} \caption{Fitting of the scaling coefficients $\lambda_{\nu}$ and $\lambda_{\nu}^{(\textrm{st})}$ with respect to the dual generators $\mathcal{O}_{\nu}$ and $\mathcal{O}_{\nu}^{(\textrm{st})}$, respectively, on the critical contour between phases for the extended Kitaev fermion chain with characteristic functions $g(\zeta)$ and chain length up to $L=2000$. The thirteen essentially non-zero scaling coefficients are shown in blue font.}\label{tab:3} \end{table*} \section{Topological phase transitions and Half-integer winding numbers with zeros on the critical contour}\label{VI} For completeness, we discuss the case when zeros of the characteristic equation appear on the contour $|\zeta|=1$, and interpret the physical implications of half-integer winding numbers therein. 
We can find that the topological phase transitions occur at the critical points satisfying \begin{eqnarray} g(\zeta)=\sum_{n=1}^{N_f}({J}_n^x\zeta^n+{J}_n^y\zeta^{-n})-\mu=0\label{supeq} \end{eqnarray} for $|\zeta|=1$. For example, we choose the parameters of the extended Kitaev fermion chain as ${J}_1^+={J}_1^-=1$, ${J}_{2}^+={J}_{2}^-=2$, ${J}_{3}^+={J}_{3}^-=2$ ($N_f=3$), and calculate the real critical values of the chemical potential $\mu$ at which the topological phase transitions occur: for $\zeta=1$, $\mu=5$; for $\zeta=-1$, $\mu=-1$; for \begin{equation} \zeta=\exp\{\pm i\arccos[(-\sqrt{3}-1)/4]\}, \end{equation} $\mu=\sqrt{3}-1$; and for \begin{equation} \zeta=\exp\{\pm i\arccos[(\sqrt{3}-1)/4]\}, \end{equation} $\mu=-\sqrt{3}-1$. For another example, we consider the parameters of the extended Kitaev fermion chain as $J_{2}^+=J_{2}^-=\lambda$, $J_1^+=1$, $J_1^-=-1$, $\mu=1$, and vary the value of $\lambda$. Solving the characteristic equation \begin{eqnarray} g(\zeta)=\lambda\zeta^2+\zeta^{-1}-1=0, \end{eqnarray} we obtain the transition points: for $\zeta=1$, $\lambda=0$; for $\zeta=-1$, $\lambda=2$; for \begin{equation} \zeta=\exp\{\pm i\arccos[(1-\sqrt{5})/4]\}, \end{equation} $\lambda=(-\sqrt{5}-1)/2$; and for \begin{equation} \zeta=\exp\{\pm i\arccos[(1+\sqrt{5})/4]\}, \end{equation} $\lambda=(\sqrt{5}-1)/2$. We then consider the critical behavior of quantum states at these transition points. From the viewpoint of geometric topology, we consider the closed Kitaev chain with $\Delta=J$ and assume anti-periodic boundary conditions $c_{j+L}=-c_{j}$. 
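The critical values quoted in the first example can be reproduced numerically. The sketch below assumes (consistently with the quoted solutions) that for $J_n^+=J_n^-$ the characteristic function reduces to $g(\zeta)=\sum_n J_n^+\zeta^n-\mu$; on $\zeta=e^{i\theta}$ one locates the zeros of $\operatorname{Im}g$ and reads off $\mu=\operatorname{Re}\sum_n J_n^+e^{in\theta}$ there:

```python
import numpy as np

def critical_mu(J, n_grid=400001):
    """Real values of mu at which g(zeta) = sum_n J_n zeta^n - mu has a zero
    on |zeta| = 1.  On the circle zeta = e^{i theta}, a zero requires
    Im g = 0; the corresponding mu is Re sum_n J_n e^{i n theta}."""
    theta = np.linspace(0.0, np.pi, n_grid)
    h = sum(Jn * np.exp(1j * (n + 1) * theta) for n, Jn in enumerate(J))
    im = h.imag
    roots = [0.0, np.pi]                     # Im g always vanishes at 0, pi
    for j in np.where(np.sign(im[:-1]) != np.sign(im[1:]))[0]:
        if 0 < j < n_grid - 2:               # interior sign change of Im g
            t = theta[j] - im[j] * (theta[j + 1] - theta[j]) / (im[j + 1] - im[j])
            roots.append(t)
    mus = [sum(Jn * np.cos((n + 1) * t) for n, Jn in enumerate(J)) for t in roots]
    return sorted(set(float(np.round(m, 6)) for m in mus))

# N_f = 3 example with J_1 = 1, J_2 = 2, J_3 = 2:
print(critical_mu([1.0, 2.0, 2.0]))
# -> [-2.732051, -1.0, 0.732051, 5.0]
```

This reproduces the four transition points $\mu=5$, $\mu=-1$, $\mu=\sqrt{3}-1\approx0.732$ and $\mu=-\sqrt{3}-1\approx-2.732$ obtained analytically above.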
If \begin{equation} \Delta=-{\mu}=-1, \end{equation} the characteristic function becomes \begin{eqnarray} g(\zeta)=\zeta-1, \end{eqnarray} and the winding number can be calculated as a Cauchy principal value. Setting $\zeta=e^{i\theta}$, \begin{align} \nu&=\frac{1}{2\pi i}\,\mathrm{P.V.}\!\oint_{|\zeta|=1}\frac{d\zeta}{\zeta-1} =\frac{1}{2\pi i}\lim_{\varepsilon\rightarrow0}\int_{\varepsilon}^{2\pi-\varepsilon}\!\!d\theta\;\frac{ie^{i\theta}}{e^{i\theta}-1}\nonumber\\ &=\frac{1}{2\pi i}\lim_{\varepsilon\rightarrow0}\,i(\pi-\varepsilon)=\frac{1}{2}. \end{align} In this case one obtains only massive Dirac edge modes \cite{Viyuela2016} for the open Kitaev chain. Moreover, taking the boundary term of the closed chain into account, we can write the Hamiltonian in terms of Majorana fermion operators as \begin{eqnarray} i{H}=\sum_{j=1}^{L}a_jb_j+\sum_{j=1}^{L-1}b_ja_{j+1}+(-1)^{N_p}b_La_1, \end{eqnarray} for which \begin{equation} \phi=\frac{1}{\sqrt{L}}\sum_{j=1}^L a_j,\hspace{0.2in}\psi=\frac{1}{\sqrt{L}}\sum_{j=1}^L b_j, \end{equation} are a pair of zero modes (clearly not edge modes) for even $N_p$, whereas no zero mode exists for odd parity. Therefore, the half-integer winding number signals a critical situation in which the existence of a Majorana zero mode depends on the fermion parity $(-1)^{N_p}$ once the boundary Hamiltonian is taken into account. Generally, it can be inferred that if there is an even number of zeros on the contour, the winding number remains an integer for either fermion parity. In Fig.~\ref{fig:S2}, we plot the quantum Fisher information density as a function of $L$ in critical cases for the extended Kitaev fermion chain with ${J}_1^+={J}_1^-=1$, ${J}_{2}^+={J}_{2}^-=2$, ${J}_{3}^+={J}_{3}^-=2$ ($N_f=3$), and present the scaling coefficients $\lambda_{\nu}^{(\textrm{st})}$ in Tab.~\ref{tab:2}. 
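The principal-value prescription can be mimicked numerically by accumulating the phase of $g(e^{i\theta})$ along the contour with a small cut at $\theta=0$, where the zero may sit. A sketch (our own discretization, with simple test functions) reproduces winding numbers $0$, $1$, and the half-integer $1/2$ for a zero on the contour:

```python
import numpy as np

def winding_number(g, eps=1e-6, n_grid=200000):
    """nu = (1/2pi) * total phase change of g(e^{i theta}) along |zeta| = 1,
    with the contour cut at theta = 0 +/- eps so that a zero of g at
    zeta = 1 is handled in the principal-value sense."""
    theta = np.linspace(eps, 2 * np.pi - eps, n_grid)
    phase = np.angle(g(np.exp(1j * theta)))
    # wrap each phase increment into (-pi, pi] before summing
    jumps = np.angle(np.exp(1j * np.diff(phase)))
    return np.sum(jumps) / (2 * np.pi)

print(round(winding_number(lambda z: z - 2.0), 4))   # -> 0.0  (zero outside)
print(round(winding_number(lambda z: z - 0.5), 4))   # -> 1.0  (zero inside)
print(round(winding_number(lambda z: z - 1.0), 4))   # -> 0.5  (zero on contour)
```

The last case is exactly the $g(\zeta)=\zeta-1$ example above, with the discrete sum converging to $1/2-\varepsilon/(2\pi)$.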
Then, we plot in Fig.~\ref{fig:S3} the quantum Fisher information density as a function of $L$ for an extended Kitaev fermion chain with characteristic functions: \begin{itemize} \item[(a)] $g(\zeta)=\zeta-1$, \item[(b)] $g(\zeta)=\zeta^2-1$, \item[(c)] $g(\zeta)=\zeta^3-1$, \item[(d)] $g(\zeta)=\zeta^4-1$, \end{itemize} where the zeros are on the contour $|\zeta|=1$ given $\mu=1$. The scaling coefficients $\lambda_{\nu}$ and $\lambda_{\nu}^{(\textrm{st})}$ are shown in Tab.~\ref{tab:3}. We note that our discussion does not apply to the Dirac sector of the topological phase diagram of the extended Kitaev chain with genuinely long-range terms, which would carry a half-integer winding number \cite{Vodola2014,Viyuela2016,Vodola2015,Lepori2017}: for finite chain length $L$, the boundary conditions (anti-periodic or periodic) would truncate the long-range hopping and pairing terms, and the thermodynamic limit $L\gg N_f\geq1$ could not be satisfied. \section{Characterization of topological phases in a Kitaev honeycomb model via dual multipartite entanglement} The Kitaev honeycomb model (i.e., a two-dimensional spin model on a hexagonal lattice with direction-dependent interactions between adjacent lattice sites) is an analytically solvable model with topological quantum phase transitions at zero temperature \cite{Kitaev2006a}. The Hamiltonian is \begin{align}\label{honey} H_{\textrm{hc}}=-\sum_{\kappa=x,y,z}\!J_\kappa\!\sum_{\langle ij\rangle_\kappa}\sigma_i^\kappa\sigma_j^\kappa, \end{align} where $\langle ij\rangle_\kappa$ denotes the nearest-neighbor bonds in the $\kappa$-direction. At each site, we define four Majorana operators $a^\alpha$, with $\alpha=0,x,y,z$, satisfying $(a^\alpha)^\dag=a^\alpha$, $\{a^\alpha,a^\beta\}=2\delta_{\alpha\beta}$, and $a^xa^ya^za^0 = 1$, and write the Pauli operators as \begin{equation} \sigma_j^\kappa=ia_j^\kappa a_j, \end{equation} with $\kappa=x,y,z$ and $a_j^0\equiv a_j$. 
The Hamiltonian is then rewritten with \begin{align} \hat{u}_{{\langle ij\rangle}_\kappa}\equiv i a^\kappa_ia^\kappa_j \end{align} as \begin{equation} H_{\textrm{hc}}=\frac{i}{2}\sum_{\langle ij\rangle_\kappa}J_{\kappa}\hat{u}_{{\langle ij\rangle}_\kappa} a_ia_j, \end{equation} where the factor $\frac{1}{2}$ is due to each bond being counted twice in the summation. We have $\hat{u}_{{\langle ij\rangle}_\kappa}^2=1$ and $[H_{\textrm{hc}},\hat{u}_{{\langle ij\rangle}_\kappa}]=0$. Here we take $\hat{u}_{{\langle ij\rangle}_\kappa}=1$ for all bonds, because this vortex-free configuration has the lowest energy \cite{Kitaev2006a,Lieb1994}. The system size is $N=2LM$, and at first, we set $M=L$. \begin{figure}[t] \centering \includegraphics[width=0.38\textwidth]{FigS5.eps}\\ \caption{(color online) (a) A graphic representation of the Kitaev honeycomb model with two sublattices (empty and full circles). There are three types of bonds labeled by $x,y,z$. (b) The equivalent brick-wall lattice with three rows ($m=1,2,3$). (c) A single-chain representation of the two-leg spin ladder. }\label{fig:S5} \end{figure} Using the Fourier transformation, the Hamiltonian in the momentum representation is \cite{Chen2016} \begin{align} H_{\textrm{hc}}=\sum_{\bm{q}}(a_{-\bm{q},1},a_{-\bm{q},2})\ \mathcal{H}_{\bm{q}}\left(\begin{array}{c}a_{\bm{q},1}\\a_{\bm{q},2}\end{array}\right), \end{align} where $\bm{q}=(q_1,q_2)$ is the momentum vector and the Bloch matrix $\mathcal{H}_{\bm{q}}$ is \begin{align} \mathcal{H}_{\bm{q}}=-\Delta_{\bm{q}}\sigma^x-\epsilon_{\bm{q}}\sigma^y=\left(\begin{array}{c c}0&i\Upsilon_{\bm{q}}\\-i\Upsilon^*_{\bm{q}}&0\end{array}\right), \end{align} with \begin{align} \Upsilon_{\bm{q}}&=\epsilon_{\bm{q}}+i\Delta_{\bm{q}},\\ \epsilon_{\bm{q}}&=J_x\cos q_1 +J_y\cos q_2 +J_z,\\ \Delta_{\bm{q}}&=J_x\sin q_1 +J_y\sin q_2. 
\end{align} Choosing the coordinate axes along the $\bm{n}_1$ and $\bm{n}_2$ directions as shown in Fig.~\ref{fig:S5}(a), the momentum components $q_1=\bm{q}\cdot\bm{n}_1$ and $q_2=\bm{q}\cdot\bm{n}_2$ take the values \begin{align} q_{1,2}=\frac{2l\pi}{L}, \hspace*{0.2in}l=-\frac{L-1}{2},\cdots,\frac{L-1}{2}. \end{align} Using the Bogoliubov transformation \begin{align} D_{\bm{q},1}=u_{\bm{q}}a_{\bm{q},1}+v_{\bm{q}}a_{\bm{q},2},\hspace*{0.2in}D_{\bm{q},2}=v^*_{\bm{q}}a_{\bm{q},1}-u^*_{\bm{q}}a_{\bm{q},2} \end{align} with $u_{\bm{q}}=1/\sqrt{2}$ and $v_{\bm{q}}=i\Upsilon_{\bm{q}}/(\sqrt{2}|\Upsilon_{\bm{q}}|)$, the Hamiltonian is diagonalized as \begin{align} H_{\textrm{hc}}=\sum_{\bm{q}}|\Upsilon_{\bm{q}}|(1-2D^\dag_{\bm{q},2}D_{\bm{q},2}), \end{align} where we have used $\{D_{\bm{q},\mu},D^\dag_{\bm{q}',\mu'}\}=\delta_{\bm{q},\bm{q}'}\delta_{\mu,\mu'}$, $D^2_{\bm{q},\mu}=0$, and $D^\dag_{\bm{q},1}D_{\bm{q},1}=1-D^\dag_{\bm{q},2}D_{\bm{q},2}$. The ground state is \begin{align} |\mathcal{G}\rangle=\prod_{\bm{q}}D^\dag_{\bm{q},2}|0\rangle \end{align} and the energy gap is $2\min_{\bm{q}}\{|\Upsilon_{\bm{q}}|\}$. Then, we consider positive bonds, $J_{x,y,z}>0$, and focus on the $J_x+J_y+J_z=1$ parametric plane. As presented in Fig.~\ref{fig:S6}(a), in the region where $J_x\leq J_y+J_z$, $J_y\leq J_z+J_x$ and $J_z\leq J_x+J_y$, there is a gapless phase B with non-Abelian excitations, and in the other regions, there are three gapped phases with Abelian anyon excitations \cite{Kitaev2006a} \begin{align} A_x:\ J_x\geq J_y+J_z,\\ A_y:\ J_y\geq J_z+J_x,\\ A_z:\ J_z\geq J_x+J_y. \end{align} \begin{figure}[t] \centering \includegraphics[width=0.47\textwidth]{FigS6.eps}\\ \caption{(color online) (a) The phase diagram of the Kitaev honeycomb model in the $J_x+J_y+J_z=1$ plane. (b) Quantum Fisher information density in the dual lattice as a function of $L$ for the two-leg spin ladder. 
The scaling coefficients are $\lambda_x^{(\textrm{st})}\simeq0.9992$ for $J_{x,y,z}=0.6,0.2,0.2$, $\lambda_x^{(\textrm{st})}\simeq0.7508$ for $J_{x,y,z}=0.5,0.25,0.25$, and $\lambda_x^{(\textrm{st})}<10^{-12}$ for $J_{x,y,z}=0.4,0.3,0.3$. (c) Scaling topological index $\lambda_x^{(\textrm{st})}$ with different values of $J_{x,y,z}$ in the $J_x+J_y+J_z=1$ plane versus the system size $2L$ up to 400. }\label{fig:S6} \end{figure} Following \cite{Feng2007}, we consider a two-leg spin ladder of the Kitaev honeycomb model and relabel all the sites along a special path [as shown in Fig.~\ref{fig:S5}(c)] and express the Hamiltonian with the third-nearest-neighbor couplings \cite{Feng2007} \begin{align} H_{\textrm{2l}}=-\sum_{j=1}^L(J_x\sigma_{2j-1}^x\sigma_{2j}^x+J_y\sigma_{2j}^y\sigma_{2j+3}^y+J_z\sigma_{2j}^z\sigma_{2j+1}^z). \end{align} By considering the duality transformation introduced in \cite{Feng2007} \begin{align} &\sigma_j^x=\check{s}^x_{j-1}\check{s}^x_{j},\hspace*{0.2in}\sigma_j^z=\prod_{k=j}^{2L}\check{s}^z_k,\\ &\sigma_j^y=-i\sigma_j^z\sigma_j^x=\check{s}^x_{j-1}\check{s}^y_{j}\prod_{k=j+1}^{2L}\check{s}^z_k, \end{align} we obtain an anisotropic $XY$ spin chain with a transverse field in the dual space \begin{align} H_{\textrm{2l}}=-\sum_{j=1}^L(J_x\check{s}^x_{2j}\check{s}^x_{2j+2}\!+J_y{W}_{j}\check{s}^y_{2j}\check{s}^y_{2j+2}\!+J_z\check{s}^z_{2j}), \end{align} where \begin{align} {W}_{j}=\check{s}^x_{2j-1}\check{s}^z_{2j+1}\check{s}^x_{2j+3} \end{align} is the plaquette operator in the dual lattice and a good quantum number \cite{Feng2007}. We have ${W}_{j}=-1$ ($\pi$-flux phase \cite{Lieb1994}) for the ground state. 
We consider the inverse duality transformation \begin{align} &\check{s}^x_j=\prod_{k=1}^{j}\sigma_k^x,\hspace{0.2in}\check{s}_j^z=\sigma^z_{j}\sigma^z_{j+1},\\ &\check{s}_j^y=-i\check{s}_j^z\check{s}_j^x=\sigma^z_{j+1}\sigma^y_{j}\prod_{k=1}^{j-1}\sigma_k^x \end{align} and the spin correlation function in the dual lattice \begin{align} C_x(r)\equiv\langle \check{s}^x_{2i}\check{s}^x_{2j}\rangle_{\mathcal{G}}=\left\langle\prod_{k=2i+1}^{2j}\!\!\!\sigma_{k}^x\right\rangle_{\!\!\mathcal{G}} \end{align} where $r=j-i$. It is shown in Ref.~\cite{Feng2007} that the string correlation order \begin{align} \lim_{r\rightarrow\infty}(-)^rC_x(r)\neq0 \end{align} in the phase $A_x$ ($J_x\geq J_y+J_z$) and vanishes in the other regions. Similarly, with respect to the dual generator \begin{align} \mathcal{O}_x^{(\textrm{st})}=\sum_{j=1}^{L}(-)^j\check{s}^x_{2j}, \end{align} the quantum Fisher information density in the dual lattice is \begin{align} f_Q[\mathcal{O}_x^{(\textrm{st})},|\mathcal{G}\rangle]&\equiv1+\sum_{r=1}^{L-1}(-)^rC_x(r)\\ &\simeq1+\gamma_x^{(\textrm{st})} L^{\lambda_x^{(\textrm{st})}}. \end{align} In the gapped phase $A_x$, the dual QFI density grows linearly with $L$, \begin{align} \lambda_x^{(\textrm{st})}\simeq1, \end{align} while it is constant, \begin{align} \lambda_x^{(\textrm{st})}\simeq0, \end{align} in the other regions; see Fig.~\ref{fig:S6}(b,c) for examples. Moreover, the gapped phases $A_y$ and $A_z$ as shown in Fig.~\ref{fig:S6}(a) can be obtained by the substitutions $J_x\rightarrow J_y\rightarrow J_z\rightarrow J_x$ and $J_x\rightarrow J_z\rightarrow J_y\rightarrow J_x$, respectively. Therefore, the scaling coefficient of the dual quantum Fisher information density in the dual lattice can identify the three gapped phases $A_x$, $A_y$ and $A_z$ with Abelian anyon excitations. 
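The gapped/gapless distinction between the $A$ phases and the $B$ phase can also be cross-checked directly from the Bloch spectrum $|\Upsilon_{\bm{q}}|$ derived earlier. A minimal sketch follows (our own discretization; the momentum grid is chosen commensurate with the Dirac points so the gapless case registers as an exact zero):

```python
import numpy as np

def honeycomb_gap(Jx, Jy, Jz, n=601):
    """Bulk gap 2*min_q |Upsilon_q| of the vortex-free Kitaev honeycomb
    model, with Upsilon_q = eps_q + i*Delta_q evaluated on an n x n grid
    of (q1, q2) in [-pi, pi]^2."""
    q1, q2 = np.meshgrid(np.linspace(-np.pi, np.pi, n),
                         np.linspace(-np.pi, np.pi, n))
    eps = Jx * np.cos(q1) + Jy * np.cos(q2) + Jz
    delta = Jx * np.sin(q1) + Jy * np.sin(q2)
    return 2.0 * np.min(np.abs(eps + 1j * delta))

# Gapped A_z phase (J_z >= J_x + J_y) versus gapless B phase:
print(round(honeycomb_gap(0.2, 0.2, 0.6), 3))       # -> 0.4
print(round(honeycomb_gap(1 / 3, 1 / 3, 1 / 3), 3)) # -> 0.0
```

The first value agrees with the analytic bound $2(J_z-J_x-J_y)$ in the $A_z$ region, while at the isotropic point the spectrum touches zero at the Dirac momenta $q_1=-q_2=2\pi/3$.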
Generally, we consider the equivalent brick-wall lattice of the Kitaev honeycomb model as shown in Fig.~\ref{fig:S5}(b) and rewrite the Hamiltonian (\ref{honey}) as \begin{align} H_{\textrm{hc}}=-\sum_{j=1}^{L}\sum_{m=1}^M(&J_x\sigma_{2j-1,m}^x\sigma_{2j,m}^x+J_y\sigma_{2j,m}^y\sigma_{2j+3,m+1}^y\nonumber\\ &+J_z\sigma_{2j,m}^z\sigma_{2j+1,m}^z). \end{align} In the two-dimensional limit $M\rightarrow \infty$, the above results for the two-leg spin ladder, in which string correlation functions and the dual quantum Fisher information density detect topological phase transitions, can also be extended to the general two-dimensional lattice by transforming the second index $m$ to momentum space \cite{Feng2007,Zhang2018}.
\section{The pentagon-wheel cocycle} \hangindent=-6.5cm\hangafter=-3% {\unitlength=1mm \begin{picture}(0,0)(-32,2.5) \put(65,0){$\boldsymbol{\gamma}_5 = {}$} \put(85,0){ {\unitlength=0.3mm \begin{picture}(55,53)(5,-5) \put(27.5,8.5){\circle*{3}} \put(0,29.5){\circle*{3}} \put(-27.5,8.5){\circle*{3}} \put(-17.5,-23.75){\circle*{3}} \put(17.5,-23.75){\circle*{3}} \qbezier(27.5,8.5)(0,29.5)(0,29.5) \qbezier(0,29.5)(-27.5,8.5)(-27.5,8.5) \qbezier(-27.5,8.5)(-17.5,-23.75)(-17.5,-23.75) \qbezier(-17.5,-23.75)(17.5,-23.75)(17.5,-23.75) \qbezier(17.5,-23.75)(27.5,8.5)(27.5,8.5) \put(0,0){\circle*{3}} \qbezier(27.5,8.5)(0,0)(0,0) \qbezier(0,29.5)(0,0)(0,0) \qbezier(-27.5,8.5)(0,0)(0,0) \qbezier(-17.5,-23.75)(0,0)(0,0) \qbezier(17.5,-23.75)(0,0)(0,0) \end{picture} } } \put(95,0){${}+\dfrac{5}{2}$} \put(114,0){ {\unitlength=0.4mm \begin{picture}(50,30)(0,-4) \put(12,0){\circle*{2.5}} \put(-12,0){\circle*{2.5}} \put(25,15){\circle*{2.5}} \put(-25,15){\circle*{2.5}} \put(-25,-15){\circle*{2.5}} \put(25,-15){\circle*{2.5}} \put(-12,0){\line(1,0){24}} \put(-25,15){\line(1,0){50}} \put(-25,-15){\line(1,0){50}} \put(-25,-15){\line(0,1){32}} \put(25,15){\line(0,-1){32}} \qbezier(25,15)(12,0)(12,0) \qbezier(-25,15)(-12,0)(-12,0) \qbezier(-25,-15)(-12,0)(-12,0) \qbezier(25,-15)(12,0)(12,0) \put(-12.5,17){\oval(25,10)[t]} \put(12.5,-17){\oval(25,10)[b]} \put(0,2){\line(0,1){11}} \put(0,-2){\line(0,-1){11}} \end{picture} }% } \end{picture}% }% \label{FigPentagon}% Now consider the pentagon\/-\/wheel cocycle $\boldsymbol{\gamma}_5 \in \ker \Id$, see~\cite{JNMP17}. 
By orienting both graphs in $\boldsymbol{\gamma}_5$ (i.e.\ by shifting the vertex labelling by $+1 = m-1$, adding two edges to the sinks $\mathsf{0}$,\ $\mathsf{1}$, and keeping only those oriented graphs out of $1024 = 2^{\text{\#edges}}$ which are built from $\xleftarrow{}\bullet\xrightarrow{}$) and skew\/-\/symmetrizing with respect to $\mathsf{0} \rightleftarrows \mathsf{1}$, we obtain $91$ parameters for Kontsevich graphs on $2$ sinks, $6$ internal vertices, and $12$ edges ($=6$ ordered pairs). We take the sum $\mathcal{Q}$ of these $91$ bi-\/vector graphs (or skew differences of Kontsevich graphs) with their undetermined coefficients, and for the set of tri\/-\/vector graphs occurring in $\schouten{\mathcal{P},\mathcal{Q}}$, we generate all the possibly needed tri\/-\/vector ``Leibniz'' graphs with $\schouten{\mathcal{P},\mathcal{P}}$ inside.\footnote{% The algorithm from~\cite[\S1.2]{JPCS17} produces 41031 Leibniz graphs in $\nu=3$ iterations and 56509 at~$\nu\geqslant7$.} This yields 41031 such Leibniz graphs, which, with undetermined coefficients, provide the ansatz for the r.-h.s.\ of the factorization problem \label{EqFactor} $\schouten{\mathcal{P},\mathcal{Q}(\mathcal{P})} = \Diamond\bigl(\mathcal{P},\schouten{\mathcal{P},\mathcal{P}}\bigr)$. This gives us an inhomogeneous system of 463,344 linear algebraic equations for both the coefficients in $\mathcal{Q}$ and~$\Diamond$. In its l.-h.s., we fix the coefficient of one bi\/-\/vector graph\footnote{This is done because it is anticipated that, counting the number of ways to obtain a given bi\/-\/vector while orienting the nonzero cocycle~$\boldsymbol{\gamma}_5$, none of the coefficients in a solution~$\mathcal{Q}_5$ vanishes.} by setting it to~${\mathbf{+2}}$. 
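The Kontsevich graphs in the tables below are stored in a machine-readable format. Under our reading of the encoding used in~\cite{JPCS17} (an assumption; see the cited references for the authoritative definition), the twelve integers in each row list, for the internal vertices $\mathsf{2},\ldots,\mathsf{7}$ in turn, the targets of the two ordered outgoing edges. A small decoder sketch:

```python
def decode_kontsevich_graph(encoding, sinks=2, internal=6):
    """Decode a target-list encoding into a list of directed edges.

    Assumed format (cf. the machine format of the cited references):
    for each internal vertex v = sinks, ..., sinks + internal - 1 in
    turn, the next two integers are the targets of its two ordered
    outgoing (Left, Right) edges.
    """
    targets = [int(tok) for tok in encoding.split()]
    assert len(targets) == 2 * internal, "need two targets per internal vertex"
    edges = []
    for i, v in enumerate(range(sinks, sinks + internal)):
        for t in (targets[2 * i], targets[2 * i + 1]):
            assert 0 <= t < sinks + internal and t != v  # in range, no loops
            edges.append((v, t))
    return edges

edges = decode_kontsevich_graph("0 1 2 4 2 5 3 6 4 7 2 4")
print(edges[:4])  # [(2, 0), (2, 1), (3, 2), (3, 4)]
```

Such a decoder makes it straightforward to cross-check out-degrees and the absence of self-loops for every row of the tables.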
\begin{claim}For~$\boldsymbol{\gamma}_5$, the factorization problem $\schouten{\mathcal{P},\mathcal{Q}(\mathcal{P})} = \Diamond(\mathcal{P},\schouten{\mathcal{P},\mathcal{P}})$ has a solution $(\mathcal{Q}_5, \Diamond_5)$\textup{;} the sum $\mathcal{Q}_5$ of $167$ Kontsevich graphs \textup{(}on $m=2$ sinks $\mathsf{0},\mathsf{1}$ and $n=6$ internal vertices $\mathsf{2}$\textup{,} $\ldots$\textup{,} $\mathsf{7}$\textup{)} with integer coefficients is given in the table below \footnote{% The analytic formula of degree\/-\/six nonlinear differential polynomial $\mathcal{Q}_5(\mathcal{P})$ is given in App.~\ref{AppFormula}. The encoding of $ 691$ Leibniz tri\/-\/vector graphs containing the Jacobiator $\schouten{\mathcal{P},\mathcal{P}}$ for the Poisson structure $\mathcal{P}$ that occur in the r.-h.s.\ $\Diamond(\mathcal{P}, \schouten{\mathcal{P},\mathcal{P}})$ is available at \texttt{https://rburing.nl/Q5d5.txt}. The machine format to encode such graphs (with one tri\/-\/valent vertex for the Jacobiator) is explained in~\cite{JPCS17} (see also~\cite{f16,cpp}).}% \end{claim} {\tiny\centerline \begin{tabular}{l|r|} 0 1 2 4 2 5 3 6 4 7 2 4 & $10$\\ 0 1 2 4 2 5 2 6 4 7 3 4 & $-10$\\ 0 3 1 4 2 5 6 7 2 4 3 4 & $10 $\\ 0 3 4 5 1 2 6 7 2 3 3 4 & $-10$\\ 0 3 1 4 2 5 2 6 4 7 3 4 & $10 $\\ 0 3 4 5 1 2 4 6 3 7 2 3 & $-10$\\ 0 3 1 4 2 5 3 6 4 7 2 4 & $-10$\\ 0 3 4 5 1 2 2 6 3 7 3 4 & $-10$\\ 0 3 1 4 5 6 2 3 5 7 2 5 & $-10$\\ 0 3 4 5 2 6 4 7 1 2 4 6 & $10 $\\ 0 3 4 5 1 6 2 4 5 7 2 5 & $10 $\\ 0 3 4 5 2 6 4 6 1 7 2 4 & $-10$\\ 0 3 4 5 2 6 4 7 2 7 1 4 & $-10$\\ 0 3 4 5 1 6 2 4 3 7 2 3 & $10 $\\ 0 3 4 5 2 6 6 7 1 3 2 3 & $-10$\\ 0 3 4 5 2 6 2 7 1 3 3 6 & $10 $\\ 0 3 4 5 1 6 4 7 2 3 2 3 & $-10$\\ 0 3 4 5 1 5 2 6 2 7 4 5 & $10 $\\ 0 3 4 5 1 6 2 7 2 3 3 4 & $10 $\\ 0 3 4 5 1 5 2 6 4 7 2 5 & $10 $\\ 0 3 4 5 1 2 4 6 4 7 2 4 & $-10$\\ 0 3 1 4 2 5 2 6 2 7 2 3 & $-10$\\ 0 3 1 4 2 5 3 6 3 7 2 3 & $-10 \end{tabular} \hskip 10pt \begin{tabular}{l|r|} 0 3 4 5 1 2 2 6 2 7 2 4 & $-10$\\ 0 3 1 4 5 
6 2 3 3 7 2 3 & $-10$\\ 0 3 4 5 2 6 2 7 1 2 2 6 & $10$\\ 0 1 2 4 2 5 2 6 2 7 2 3 & $\mathbf{2}$\\ 0 1 2 4 2 5 2 6 3 7 3 4 & $-5$\\ 0 1 2 4 2 5 3 6 3 7 2 4 & $5 $\\ 0 1 2 4 2 5 2 6 3 7 4 5 & $-5 $\\ 0 1 2 4 2 5 2 6 4 7 3 5 & $-5 $\\ 0 3 1 4 5 6 2 7 5 7 2 3 & $5 $\\ 0 3 4 5 5 6 6 7 2 7 1 2 & $5 $\\ 0 3 1 4 2 5 6 7 2 4 3 6 & $5 $\\ 0 3 4 5 1 2 6 7 2 7 3 4 & $-5 $\\ 0 3 1 4 2 5 2 6 3 7 4 5 & $5 $\\ 0 3 4 5 1 2 4 6 2 7 3 5 & $-5 $\\ 0 3 1 4 2 5 2 6 4 7 3 5 & $5 $\\ 0 3 4 5 1 2 4 6 3 7 2 5 & $-5 $\\ 0 3 4 5 1 2 6 7 2 3 4 6 & $5 $\\ 0 3 1 4 2 5 6 7 2 7 3 4 & $5 $\\ 0 3 4 5 1 2 2 6 4 7 3 5 & $5 $\\ 0 3 1 4 2 5 3 6 2 7 4 5 & $-5 $\\ 0 3 4 5 1 2 2 6 3 7 4 5 & $5 $\\ 0 3 1 4 2 5 3 6 4 7 2 5 & $-5 $\\ 0 3 4 5 2 6 6 7 1 2 3 4 & $5 $ \end{tabular} \hskip 10pt \begin{tabular}{l|r|} 0 3 1 4 5 6 2 3 2 7 4 5 & $5 $\\ 0 3 4 5 2 6 4 7 1 2 3 6 & $5 $\\ 0 3 1 4 5 6 2 3 5 7 2 4 & $-5 $\\ 0 3 4 5 1 2 6 7 2 4 4 6 & $-5 $\\ 0 3 1 4 2 5 6 7 2 3 2 6 & $-5 $\\ 0 3 1 4 5 6 2 3 5 7 2 3 & $-5 $\\ 0 3 4 5 2 6 4 7 1 2 2 6 & $5 $\\ 0 3 1 4 2 5 6 7 2 3 3 4 & $5 $\\ 0 3 4 5 1 2 6 7 2 3 2 4 & $-5 $\\ 0 3 1 4 2 5 3 6 4 7 2 3 & $-5 $\\ 0 3 4 5 1 2 2 6 3 7 2 4 & $-5 $\\ 0 3 1 4 2 5 6 7 2 3 3 6 & $-5 $\\ 0 3 4 5 1 2 6 7 2 4 2 6 & $-5 $\\ 0 3 4 5 1 2 6 7 2 4 3 4 & $-5 $\\ 0 3 1 4 2 5 6 7 2 3 2 4 & $5 $\\ 0 3 4 5 1 2 4 6 3 7 2 4 & $-5 $\\ 0 3 1 4 2 5 2 6 4 7 2 3 & $-5 $\\ 0 1 2 4 2 5 6 7 2 7 3 4 & $-5 $\\ 0 1 2 4 2 5 3 6 2 7 4 5 & $5 $\\ 0 1 2 4 2 5 3 6 4 7 2 5 & $5 $\\ 0 1 2 4 2 5 3 6 2 7 3 5 & $5 $\\ 0 1 2 4 2 5 3 6 3 7 2 5 & $5 $\\ 0 3 4 5 1 2 4 6 2 7 4 5 & $-5$ \end{tabular} \hskip 10pt \begin{tabular}{l|r|} 0 3 1 4 2 5 2 6 3 7 2 5 & $5 $\\ 0 3 4 5 1 2 4 6 4 7 2 5 & $-5 $\\ 0 3 1 4 2 5 2 6 2 7 3 5 & $5 $\\ 0 3 1 4 5 6 2 6 3 7 2 3 & $-5 $\\ 0 3 4 5 2 6 4 7 2 7 1 2 & $-5 $\\ 0 3 1 4 5 6 2 3 2 7 3 4 & $-5 $\\ 0 3 4 5 2 6 6 7 1 2 2 3 & $-5 $\\ 0 3 1 4 5 6 2 3 3 7 2 4 & $-5 $\\ 0 3 4 5 2 6 2 7 1 2 3 6 & $5 $\\ 0 3 1 4 2 5 3 6 2 7 3 5 & $-5 $\\ 0 3 4 5 1 2 2 6 4 7 2 5 & $5 $\\ 0 3 1 4 2 5 3 6 3 7 2 5 & $-5 $\\ 
0 3 4 5 1 2 2 6 2 7 4 5 & $5 $\\ 0 3 4 5 5 6 6 7 1 2 2 6 & $-5 $\\ 0 3 1 4 5 6 2 6 2 7 2 3 & $5 $\\ 0 1 2 4 2 5 2 6 2 7 3 4 & $-5 $\\ 0 1 2 4 2 5 2 6 3 7 2 5 & $-5 $\\ 0 1 2 4 2 5 2 6 2 7 3 5 & $-5 $\\ 0 3 4 5 2 6 6 7 1 2 4 6 & $5 $\\ 0 3 1 4 5 6 2 3 2 7 2 5 & $-5 $\\ 0 3 4 5 1 2 4 6 4 7 2 3 & $-5 $\\ 0 3 1 4 2 5 2 6 2 7 3 4 & $5 $\\ \multicolumn{2}{c|}{(\textit{see next page})} \end{tabular}} } \twocolumn \begin{minipage}{\textwidth} {\tiny\centerline \begin{tabular}{l|r|} 0 3 4 5 1 2 2 6 4 7 3 4 & $-5 $\\ 0 3 1 4 2 5 3 6 2 7 2 4 & $-5 $\\ 0 3 1 4 5 6 2 3 3 7 2 5 & $-5 $\\ 0 3 4 5 2 6 2 7 1 2 4 6 & $5 $\\ 0 3 1 4 5 6 2 7 3 7 2 3 & $-5 $\\ 0 3 4 5 2 6 6 7 2 7 1 2 & $-5 $\\ 0 3 1 4 2 5 3 6 3 7 2 4 & $-5 $\\ 0 3 4 5 1 2 2 6 2 7 3 4 & $-5 $\\ 0 3 1 4 2 5 2 6 3 7 3 4 & $5 $\\ 0 3 4 5 1 2 4 6 2 7 2 3 & $-5 $\\ 0 3 4 5 1 6 2 7 5 7 2 4 & $-5 $\\ 0 3 4 5 2 6 4 6 1 7 2 5 & $-5 $\\ 0 3 4 5 1 6 2 7 2 5 4 6 & $5 $\\ 0 3 4 5 1 6 4 7 2 5 2 3 & $-5 $\\ 0 3 4 5 1 6 2 6 2 7 4 5 & $5 $\\ 0 3 4 5 1 6 2 7 2 7 3 4 & $5 $\\ 0 3 4 5 2 6 6 7 1 7 2 3 & $-5 $\\ 0 3 4 5 1 5 6 7 2 3 2 4 & $5 $\\ 0 3 4 5 2 6 4 6 1 7 2 3 & $-5$ \end{tabular} \hskip 10pt \begin{tabular}{l|r|} 0 3 4 5 1 5 6 7 2 4 2 6 & $5 $\\ 0 3 4 5 2 6 2 7 1 5 3 6 & $5 $\\ 0 3 4 5 1 6 2 6 3 7 2 4 & $5 $\\ 0 3 4 5 2 6 2 6 1 7 3 4 & $-5 $\\ 0 3 4 5 2 6 4 7 1 5 2 6 & $-5 $\\ 0 3 4 5 1 6 2 7 2 5 3 4 & $5 $\\ 0 3 4 5 1 6 4 7 2 5 2 6 & $5 $\\ 0 3 4 5 1 6 4 7 2 7 2 3 & $-5 $\\ 0 3 4 5 1 6 4 6 2 7 2 5 & $5 $\\ 0 3 4 5 1 6 2 7 3 5 2 4 & $-5 $\\ 0 3 4 5 2 5 6 7 1 4 2 6 & $-5 $\\ 0 3 4 5 2 6 4 7 2 7 1 3 & $-5 $\\ 0 3 4 5 2 5 6 7 1 3 2 6 & $-5 $\\ 0 3 4 5 2 6 6 7 1 7 2 4 & $5 $\\ 0 3 4 5 1 6 2 4 5 7 2 3 & $5 $\\ 0 3 4 5 2 6 6 7 2 7 1 4 & $-5 $\\ 0 3 4 5 1 6 2 4 3 7 2 5 & $5 $\\ 0 3 4 5 2 6 2 7 1 3 4 6 & $5 $\\ 0 3 4 5 2 6 6 7 1 3 2 4 & $-5$ \end{tabular} \hskip 10pt \begin{tabular}{l|r|} 0 3 4 5 1 6 2 7 2 3 4 6 & $-5 $\\ 0 3 4 5 1 5 2 6 4 7 2 3 & $5 $\\ 0 3 4 5 1 5 2 6 2 7 3 4 & $-5 $\\ 0 3 4 5 1 6 4 7 2 3 2 6 & $-5 $\\ 0 3 4 5 1 6 2 4 2 7 
4 5 & $-5 $\\ 0 3 4 5 1 6 2 7 2 7 2 4 & $-5 $\\ 0 3 4 5 1 6 2 4 5 7 2 4 & $5 $\\ 0 3 4 5 2 6 2 6 1 7 2 4 & $-5 $\\ 0 3 4 5 1 5 2 6 4 7 2 4 & $5 $\\ 0 3 4 5 1 6 2 7 2 3 2 4 & $5 $\\ 0 3 4 5 1 6 2 4 2 7 3 4 & $5 $\\ 0 3 4 5 1 6 2 6 2 7 2 4 & $-5 $\\ 0 3 4 5 1 6 2 4 3 7 2 4 & $5 $\\ 0 3 4 5 2 6 2 7 1 5 2 6 & $5 $\\ 0 3 4 5 2 6 6 7 1 3 2 6 & $-5 $\\ 0 3 4 5 2 6 2 7 1 3 2 6 & $5 $\\ 0 3 4 5 1 6 4 7 2 3 2 4 & $-5 $\\ 0 3 4 5 1 5 2 6 2 7 2 4 & $-5 $\\ 0 3 4 5 1 6 4 7 2 7 2 4 & $5$ \end{tabular} \hskip 10pt \begin{tabular}{l|r|} 0 3 4 5 1 6 2 4 2 7 2 5 & $5 $\\ 0 3 4 5 1 6 4 6 2 7 2 4 & $5 $\\ 0 3 4 5 1 6 2 4 2 7 2 3 & $5 $\\ 0 3 4 5 2 6 4 7 5 7 1 2 & $5 $\\ 0 3 1 4 5 6 2 6 3 7 2 5 & $5 $\\ 0 3 4 5 2 5 6 7 1 2 4 6 & $-5 $\\ 0 3 1 4 5 6 2 7 3 5 2 6 & $5 $\\ 0 3 4 5 2 5 6 7 1 2 3 6 & $-5 $\\ 0 3 1 4 5 6 2 7 3 5 2 4 & $5 $\\ 0 3 4 5 2 6 6 7 3 7 1 2 & $5 $\\ 0 3 1 4 5 6 2 7 3 7 2 4 & $5 $\\ 0 3 4 5 5 6 6 7 1 2 2 3 & $5 $\\ 0 3 1 4 5 6 2 6 2 7 3 4 & $5 $\\ 0 3 4 5 1 2 2 6 4 7 2 4 & $-5 $\\ 0 3 1 4 2 5 3 6 2 7 2 3 & $-5 $\\ 0 3 4 5 2 6 6 7 1 2 2 6 & $5 $\\ 0 3 1 4 5 6 2 3 2 7 2 3 & $-5 $\\ 0 3 4 5 1 2 4 6 2 7 2 4 & $-5 $\\ 0 3 1 4 2 5 2 6 3 7 2 3 & $-5$ \end{tabular}} } \smallskip \begin{rem} To establish the formula for the morphism ${\rm O\mathaccent "017E\relax {r}}$ that would be universal with respect to all cocycles $\gamma \in \ker \Id$, we are accumulating a sufficient number of pairs ($\Id$-\/cocycle $\gamma$, $\partial_\mathcal{P}$-\/cocycle $\mathcal{Q}$), in which $\mathcal{Q}$ is built exactly from graphs that one obtains from orienting the graphs in~$\gamma$. 
Let us remember that not only nontrivial cocycles (e.g., $\boldsymbol{\gamma}_3$,\ $\boldsymbol{\gamma}_5$,\ or $\boldsymbol{\gamma}_7$ from~\cite{JNMP17}, cf.~\cite{DolgushevRogersWillwacher,WillwacherGRT}) but also $\Id$-\/trivial ones, like $\delta_6$ on p.~\pageref{ExDifferential}, or even the `zero' non\/-\/oriented graphs are suited for this purpose: e.g., a unique ${\rm O\mathaccent "017E\relax {r}}(w_4)(\mathcal{P})\equiv 0$ constrains~${\rm O\mathaccent "017E\relax {r}}$. In every such case, the respective $\partial_\mathcal{P}$-\/cocycle is obtained\footnote{The actually found $\partial_\mathcal{P}$-\/cocycle $\mathcal{Q}$ might differ from the value ${\rm O\mathaccent "017E\relax {r}}(\gamma)$ by $\partial_\mathcal{P}$-\/trivial or improper terms, i.e.\ $\mathcal{Q} = {\rm O\mathaccent "017E\relax {r}}(\gamma) + \partial_\mathcal{P}({\EuScript X}) + \nabla(\mathcal{P},\schouten{\mathcal{P},\mathcal{P}})$ for some vector field ${\EuScript X}$ realized by Kontsevich graphs and for some ``Leibniz'' bi\/-\/vector graphs $\nabla$ vanishing identically at every Poisson structure~$\mathcal{P}$.} by solving the factorization problem $\schouten{\mathcal{P},\mathcal{Q}(\mathcal{P})} \doteq 0$ via $\schouten{\mathcal{P},\mathcal{P}} = 0$. The formula of the orientation morphism ${\rm O\mathaccent "017E\relax {r}}$ will be the object of another paper. \end{rem} {\small \noindent\textbf{Acknowledgements.} The authors thank M.~Kontsevich and T.~Willwacher for recalling the existence of the orientation morphism~${\rm O\mathaccent "017E\relax {r}}$. 
A.V.K.\ thanks the organizers of the international workshop SQS'17 (July~31 -- August~5, 2017 at JINR Dubna, Russia) for discussions.% \footnote{As soon as the expression of $167$ Kontsevich graph coefficients in $\mathcal{Q}_5$ via the $91$ integer parameters was obtained, the linear system in the factorization $\schouten{\mathcal{P}, \mathcal{Q}_5(\mathcal{P})} = \Diamond(\mathcal{P}, \schouten{\mathcal{P},\mathcal{P}})$ for the pentagon\/-\/wheel flow $\dot{\mathcal{P}} = \mathcal{Q}_5(\mathcal{P})$ was solved independently by A. Steel (Sydney) using the Markowitz pivoting run in \textsc{Magma}. The flow components $\mathcal{Q}_5$ of all the known solutions $(\mathcal{Q}_5, \Diamond_5)$ match identically. (For the flow $\dot{\mathcal{P}} = \mathcal{Q}_5(\mathcal{P}) = {\rm O\mathaccent "017E\relax {r}}(\boldsymbol{\gamma}_5)(\mathcal{P})$, uniqueness is not claimed for the operator $\Diamond$ in the r.-h.s.\ of the factorization.)% } } \end{minipage}% ]
\section{Introduction} \label{section1} In this paper we consider the Arratia flow $\{x(u,\cdot),\; u\in\mbR\}$, which is an ordered family of standard Brownian motions starting from every point of the real line such that for any $u,v\in\mbR$ the joint quadratic variation of $x(u,\cdot)$ and $x(v,\cdot)$ is given by \[ \jqv{x(u,\cdot)}{x(v,\cdot)}_t=\int\limits_0^t \1_{\{0\}}(x(u,s)-x(v,s))\,ds,\quad t\ges 0, \] where $\1_{\{0\}}$ stands for the indicator function of the set $\{0\}$. This flow was constructed by R.~A.~Arratia~\cite{Arratia} as a weak limit of families of coalescing simple random walks and can be informally described as a system of Brownian particles any two of which move independently until they meet, after which they coalesce and move together. In~\cite{Harris} T.~E.~Harris considered a generalisation of the Arratia flow, in which the indicator function $\1_{\{0\}}$ is replaced by a non-negative definite function $\Gamma$, which is called the covariance function of the flow, and proved its existence under certain conditions on $\Gamma$. In the same paper T.~E.~Harris proved that for the Arratia flow $\{x(u,\cdot),\; u\in\mbR\}$ for any time $t>0$ and interval $[u_1;u_2]\subset\mbR$ the set $x([u_1;u_2],t)$ is almost surely finite. From this it follows that for any time $t>0$ and interval $[u_1;u_2]$ the number \[ \nu_t([u_1;u_2]):=\#\; x([u_1;u_2],t) \] of elements of the set $x([u_1;u_2],t)$ is almost surely finite (for a different proof see the monograph~\cite{Dorogovtsev2007} of A.~A.~Dorogovtsev). R.~Tribe and O.~Zaboronski~\cite{TribeZaboronski} proved that for any $t>0$ the random point process $x(\mbR,t)$ is Pfaffian and found its kernel; based on some of their formulae, the distribution of $\nu_t([0;u])$ was found in~\cite{Fomichov}. 
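Although the analysis below is exact, the basic quantity $\nu_t([0;u])$ is easy to approximate by a crude Monte Carlo sketch (ours, not from the paper): start particles on a fine grid in $[0;u]$, evolve them by independent Gaussian steps, glue trajectories whenever their order would invert (a discrete stand-in for instantaneous coalescence), and count the distinct endpoints. The sample mean should then sit near $\E\nu_t([0;u])=1+u/\sqrt{\pi t}$, up to discretization bias:

```python
import math
import random

def simulate_clusters(u=10.0, t=1.0, n_particles=200, n_steps=300, seed=1):
    """Crude Euler approximation of the Arratia flow restricted to [0, u]:
    independent Gaussian increments plus coalescence enforced by merging
    particles whenever their order would invert.  Returns the number of
    distinct endpoints, i.e. an approximation of nu_t([0; u])."""
    rng = random.Random(seed)
    dt = t / n_steps
    pos = [u * k / (n_particles - 1) for k in range(n_particles)]
    for _ in range(n_steps):
        pos = [x + rng.gauss(0.0, dt ** 0.5) for x in pos]
        # the flow is order preserving: an inversion means the two
        # trajectories crossed between steps, so glue them together
        for i in range(1, len(pos)):
            if pos[i] < pos[i - 1]:
                pos[i] = pos[i - 1]
        pos = sorted(set(pos))  # merged particles move as one from now on
    return len(pos)

trials = [simulate_clusters(seed=s) for s in range(60)]
mean_nu = sum(trials) / len(trials)
print(f"mean nu ~ {mean_nu:.2f}  (theory: {1 + 10.0 / math.sqrt(math.pi):.2f})")
```

The time-discretization misses some crossings, so the simulated count is biased slightly upward; refining `n_steps` and the grid shrinks the bias.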
Earlier, for Harris flows, a necessary and sufficient condition for the coalescence of particles and an estimate for the mean number of clusters were obtained by H.~Matsumoto in~\cite{Matsumoto}. For the Arratia flow the large deviation principle and the law of the iterated logarithm for the size of the cluster containing the point zero were established by A.~A.~Dorogovtsev and O.~V.~Ostapenko~\cite{DorogovtsevOstapenko} and by A.~A.~Dorogovtsev, A.~V.~Gnedin and M.~B.~Vovchanskii~\cite{DorogovtsevGnedinVovchanskii}, respectively. Since the covariance of any two particles in Harris flows depends only on the distance between them, such flows are stationary with respect to the spatial variable. In~\cite{Glinyanaya}, under the assumption that the covariance function converges to zero at infinity, their ergodicity with respect to the spatial variable was established and an estimate for the strong mixing coefficient was found. In this paper we prove the following central limit theorem for $\nu_t([0;n])$ as $n\to\infty$. \begin{theorem} \label{theorem1} For any $t>0$ \[ \dfrac{\nu_t([0;n])-\E\nu_t([0;n])}{\sqrt{n}} \Longrightarrow \mcN(0;\sigma_t^2),\quad n\to\infty, \] where $\sigma_t^2:=\dfrac{3-2\sqrt{2}}{\sqrt{\pi t}}$. \end{theorem} Furthermore, we also obtain an estimate for the rate of this convergence by proving the following inequality of the Berry--Esseen type. \begin{theorem} \label{theorem2} For any $n\ges 1$ \[ \sup_{z\in\mbR} \abs{\Prob{\dfrac{\nu_t([0;n])-\E\nu_t([0;n])}{\sqrt{n}}\les z}-\int\limits_{-\infty}^z \dfrac{1}{\sqrt{2\pi\sigma_t^2}} e^{-r^2/2\sigma^2_t}\,dr}\les Cn^{-1/2}(\log n)^2. \] \end{theorem} Let us note that due to the scaling invariance of the Arratia flow (e.~g., see~\cite[subsection~2.3]{TribeZaboronski}) \begin{equation} \label{equation0} x(\cdot,\cdot)\stackrel{d}{=}\dfrac{1}{\ve} x(\ve \cdot,\ve^2 \cdot),\quad \ve>0, \end{equation} from Theorem~\ref{theorem1} the following corollary can be deduced. 
\begin{corollary} The following convergence in distribution takes place: \[ \sqrt[4]{t} \cdot \nu_t([0;1])-\dfrac{1}{\sqrt[4]{t} \cdot \sqrt{\pi}} \Longrightarrow \mcN(0;\sigma^2),\quad t\to 0+, \] where $\sigma^2=\dfrac{3-2\sqrt{2}}{\sqrt{\pi}}$. \end{corollary} The main part of this paper consists of two sections. In Section~\ref{section2} we establish the asymptotic behaviour of the variance and all moments of $\nu_t([0;u])$, and in Section~\ref{section3} we give the proofs of Theorems~\ref{theorem1} and~\ref{theorem2}. \section{Asymptotics of the variance and moments of $\nu_t([0;u])$} \label{section2} In this section we establish the asymptotic behaviour of the variance and all moments of $\nu_t([0;u])$. Our proof is based on the results of R.~Tribe and O.~Zaboronski~\cite{TribeZaboronski}, and we refer the reader to that paper for the definitions of the objects we use in this section. In the paper~\cite{TribeZaboronski} the authors proved that for any time $t>0$ the clusters of the Arratia flow form a Pfaffian point process with the kernel \[ K_t(u,v)=\dfrac{1}{\sqrt{t}} K\left(\dfrac{u}{\sqrt{t}}, \dfrac{v}{\sqrt{t}}\right), \] where \[ K(u,v)= \begin{pmatrix} -F''(v-u) & -F'(v-u)\\ F'(v-u) & \sign(v-u) \cdot F(\abs{v-u}) \end{pmatrix} \] with the function $F$ given by \[ F(z):=\dfrac{1}{\sqrt{\pi}} \int\limits_z^{+\infty} e^{-r^2/4}\,dr,\quad z>0. \] In particular, this means that for any $n\ges 1$ the $n$th factorial moment (for factorial moments we will use the notation $a^{[n]}=a(a-1) \ldots (a-n+1)$, $a\in\mbZ_+$) of the number $N_t([0;u])$ of particles of the Arratia flow which at time $t>0$ are found in the interval $[0;u]$ is given by \[ \E N_t^{[n]}([0;u])=\int\limits_0^u \stackrel{n}{\ldots} \int\limits_0^u \rho_t^{(n)}(v_1,\ldots,v_n)\,dv_1 \ldots dv_n, \] where $\rho_t^{(n)}$ is the $n$-point density, which admits the following representation: \[ \rho_t^{(n)}(v_1,\ldots,v_n)=\mathrm{Pf}\left[K_t(v_i,v_j),\; i,j=1,\ldots,n\right],\quad v_1,\ldots,v_n\in\mbR. 
\] To obtain the expressions for the moments of $\nu_t([0;u])$ it remains to note that \begin{equation} \label{equation} \nu_t([0;u])\stackrel{d}{=} N_t([0;u])+1, \end{equation} which can be easily proved with the help of the dual flow (e.~g., see~\cite{TothWerner}, \cite{Dorogovtsev}, \cite[subsection~2.2]{TribeZaboronski}). Recall that for fixed time $t_0>0$ the dual flow is a system $\{y(u,t),\; u\in\mbR,\; 0\les t\les t_0\}$ of coalescing Brownian motions in backward time starting from every point of the real line characterised by the property that its trajectories do not intersect those of the particles of the restriction $\{x(u,t),\; u\in\mbR,\; 0\les t\les t_0\}$ of the Arratia flow to the time interval $[0;t_0]$. It is known that $\{y(u,t),\; u\in\mbR,\; 0\les t\les t_0\}$ agrees in distribution with $\{x(u,t),\; u\in\mbR,\; 0\les t\les t_0\}$, and equality~\eqref{equation} follows from the fact that the set $y(\mbR,t)$ coincides with the set of points of discontinuity of the mapping $x(\cdot,t) \colon \mbR \rightarrow \mbR$. \begin{proposition} For any $t>0$ and $u>0$ we have \[ \Var\nu_t([0;u])=-\dfrac{4}{\pi}+\dfrac{3u}{\sqrt{\pi t}}+ \dfrac{4}{\pi}e^{-u^2/2t}-\dfrac{2}{\pi}\int\limits_0^{u/\sqrt{t}} e^{-z^2/4}\,dz-\dfrac{4u}{\pi\sqrt{t}}\int\limits_0^{u/\sqrt{t}}e^{-z^2/2}\,dz. \] \end{proposition} \begin{proof} First of all, let us note that \[ \E N_t([0;u])=\E N_t^{[1]}([0;u])=\int\limits_0^u \rho_t^{(1)}(v)\,dv= \dfrac{u}{\sqrt{\pi t}}, \] and so \begin{equation} \label{equation1} \E\nu_t([0;u])=1+\dfrac{u}{\sqrt{\pi t}}. 
\end{equation} Moreover, on the one hand, \begin{equation} \label{equation2} \E N_t^{[2]}([0;u])=\E\nu_t^2([0;u])-3\E\nu_t([0,u])+2, \end{equation} and, on the other hand, \begin{equation} \label{equation3} \E N_t^{[2]}([0;u])=\int\limits_0^u \int\limits_0^u \rho^{(2)}_t(v_1,v_2)\,dv_1dv_2, \end{equation} where (for notational simplicity here and below for antisymmetric matrices we omit their entries below the diagonal) \begin{gather*} \rho^{(2)}_t(v_1,v_2)= \mathrm{Pf} \left[ \begin{matrix} 0 & \dfrac{1}{\sqrt{\pi t}} & -\dfrac{v_2-v_1}{2\sqrt{\pi} \cdot t} e^{-(v_2-v_1)^2/4t} & \dfrac{1}{\sqrt{\pi t}} e^{-(v_2-v_1)^2/4t}\\ & 0 & -\dfrac{1}{\sqrt{\pi t}} e^{-(v_2-v_1)^2/4t} & \dfrac{\sign (v_2-v_1)}{\sqrt{\pi t}} \cdot \int\limits_{\abs{v_2-v_1}/\sqrt{t}}^{+\infty} e^{-v^2/4}\,dv\\ & & 0 & \dfrac{1}{\sqrt{\pi t}}\\ & & & 0 \end{matrix} \right] = \\ =\dfrac{1}{\pi t}\left(1+\dfrac{\abs{v_2-v_1}}{2\sqrt{t}} \cdot e^{-(v_2-v_1)^2/4t} \cdot \int\limits_{\abs{v_2-v_1}/\sqrt{t}}^{+\infty} e^{-v^2/4}\,dv-e^{-(v_2-v_1)^2/2t}\right). \end{gather*} Therefore, computing the integral in~\eqref{equation3} by integrating by parts (several times) and using~\eqref{equation1} and~\eqref{equation2}, we obtain \begin{equation} \label{equation4} \E\nu_t^2([0;u])=1-\dfrac{4}{\pi}+\dfrac{5u}{\sqrt{\pi t}}+\dfrac{u^2}{\pi t}+ \dfrac{4}{\pi}e^{-u^2/2t}-\dfrac{2}{\pi}\int\limits_0^{u/\sqrt{t}} e^{-z^2/4}\,dz-\dfrac{4u}{\pi\sqrt{t}}\int\limits_0^{u/\sqrt{t}}e^{-z^2/2}\,dz. \end{equation} Finally, using~\eqref{equation1} and~\eqref{equation4}, we arrive at the desired result. \end{proof} \begin{corollary} \label{corollary5} The following assertions hold true: \begin{gather*} \Var\nu_t([0;u])\sim (3-2\sqrt{2}) \cdot \dfrac{u}{\sqrt{\pi t}},\quad u\to +\infty \text{ or } t\to 0+,\\ \Var\nu_t([0;u])\sim (3-\dfrac{2}{\sqrt{\pi}}) \cdot \dfrac{u}{\sqrt{\pi t}},\quad u\to 0+ \text{ or } t\to +\infty. 
\end{gather*} \end{corollary} \begin{theorem} For any $k\ges 1$ we have \[ \E\nu_t^k([0;u])\sim \left(\dfrac{u}{\sqrt{\pi t}}\right)^k,\quad u\to +\infty \text{ or } t\to 0+. \] \end{theorem} \begin{proof} Due to the scaling invariance~\eqref{equation0} of the Arratia flow it is enough to prove the corresponding assertion for $t\to 0+$. To do it, we will use induction. For $k=1$ the assertion follows from~\eqref{equation1}. Now suppose that it holds true for all $k'\les k-1$. Then from~\eqref{equation} it follows that \[ \lim_{t\to 0+} t^{k/2}\E\nu_t^k([0;u])=\lim_{t\to 0+} t^{k/2}\E N_t^{[k]}([0;u]), \] provided that the limit on the right-hand side exists. However, \[ t^{k/2}\E N_t^{[k]}([0;u])=\int\limits_0^u \stackrel{k}{\ldots} \int\limits_0^u \mathrm{Pf}\left[\sqrt{t} \cdot K_t(v_i,v_j),\; i,j=1,\ldots,k\right]\,dv_1 \ldots dv_k, \] and the Pfaffian on the right-hand side converges as $t\to 0+$ to the Pfaffian \[ \mathrm{Pf} \left[ \begin{matrix} 0 & 1/\sqrt{\pi} & 0 & 0 & 0 & \ldots & 0 & 0 & 0\\ {} & 0 & 0 & 0 & 0 & \ldots & 0 & 0 & 0\\ {} & {} & 0 & 1/\sqrt{\pi} & 0 & \ldots & 0 & 0 & 0\\ {} & {} & {} & 0 & 0 & \ldots & 0 & 0 & 0\\ {} & {} & {} & {} & 0 & \ldots & 0 & 0 & 0\\ {} & {} & {} & {} & {} & \ddots & \vdots & \vdots & \vdots\\ {} & {} & {} & {} & {} & {} & 0 & 0 & 0\\ {} & {} & {} & {} & {} & {} & {} & 0 & 1/\sqrt{\pi}\\ {} & {} & {} & {} & {} & {} & {} & {} & 0\\ \end{matrix} \right] =\left(\dfrac{1}{\sqrt{\pi}}\right)^k. \] Thus, by the dominated convergence theorem we obtain \[ \lim_{t\to 0+} t^{k/2}\E N_t^{[k]}([0;u])=\left(\dfrac{u}{\sqrt{\pi}}\right)^k. \] The theorem is proved. 
\end{proof} \section{Proof of the main results} \label{section3} \begin{proof}[Proof of Theorem~\ref{theorem1}] Fixing arbitrary $t>0$, let us note that for any $u_1<u_2<u_3$ we have \begin{equation} \label{equation5} \nu_t([u_1;u_3])+1=\nu_t([u_1;u_2])+\nu_t([u_2;u_3]), \end{equation} since on the right-hand side the cluster containing the point $x(u_2,t)$ is taken into account twice due to the almost sure continuity of the random mapping $x(\cdot,t) \colon \mbR \rightarrow \mbR$ at the point $u_2$. From~\eqref{equation5} it follows that for all $n\ges 1$ \begin{equation} \label{equation6} \nu_t([0;n])-\E\nu_t([0;n])=\sum_{k=1}^n \eta_k, \end{equation} where \[ \eta_k:=\nu_t([k-1;k])-\E\nu_t([k-1;k]),\quad k\ges 1. \] Since the stochastic process $\{x(u,t)-u,\; u\in\mbR\}$ is strictly stationary, so is the sequence $\{\eta_n,\; n\ges 1\}$. Now to this sequence we would like to apply the following theorem. \begin{theorem} \label{theorem7} \textup{\cite[Theorem~18.5.3]{IbragimovLinnik}} Let $\{X_n,\; n\ges 1\}$ be a strictly stationary sequence of centered random variables with finite variance such that \[ \Var\sum_{k=1}^n X_k\longrightarrow +\infty,\quad n\to +\infty, \] and for some $\delta>0$ \[ \E\abs{X_1}^{2+\delta}<+\infty \] and \[ \sum_{n=1}^\infty \left(\alpha^X(n)\right)^{\delta/(2+\delta)}<+\infty, \] where $\alpha^X$ is its strong mixing coefficient: \begin{gather*} \alpha^X(n):=\sup\{\abs{\mbP(AB)-\mbP(A)\mbP(B)} \mid A\in \sigma(X_j,\; j\les k),\\ B\in\sigma(X_j,\; j\ges k+n),\; k\in\mbZ\},\quad n\in\mbZ, \end{gather*} with $\sigma(\mcA)$ standing for the $\sigma$-field generated by the set $\mcA$ of random variables. Then the series \[ \E X_1^2+2\sum_{k=2}^\infty \E X_1X_k \] is absolutely convergent and, provided that its sum $\sigma^2$ is strictly positive, the following convergence in distribution takes place: \[ \dfrac{1}{\sqrt{n}} \sum_{k=1}^n X_k \Longrightarrow \mcN(0,\sigma^2),\quad n\to\infty. 
\] \end{theorem} \begin{remark} Note that $\sigma^2$ permits the representation \[ \sigma^2=\lim_{n\to \infty} \dfrac{1}{n} \Var\sum_{k=1}^n X_k, \] since \[ \dfrac{1}{n} \Var\sum_{k=1}^n X_k=\dfrac{1}{n} \E\left(\sum_{k=1}^n X_k\right)^2=\dfrac{1}{n} \sum_{i,j=1}^n \E X_iX_j=\E X_1^2+2\sum_{k=2}^n \dfrac{n-k}{n} \E X_1X_k, \] and, if the series $\sum \E X_1X_k$ is absolutely convergent, by the dominated convergence theorem \[ \lim_{n\to\infty} \sum_{k=2}^n \dfrac{n-k}{n} \E X_1X_k=\sum_{k=2}^\infty \E X_1X_k-\lim_{n\to\infty} \sum_{k=2}^n \dfrac{k}{n} \E X_1X_k=\sum_{k=2}^\infty \E X_1X_k. \] \end{remark} Now let us verify that the conditions of this theorem are satisfied for the sequence $\{\eta_n,\; n\ges 1\}$. First, we note that all absolute moments of $\eta_1$ are finite, since such are those of $\nu_t([0;1])$. Second, from equality~\eqref{equation6} and Corollary~\ref{corollary5} we get \[ \dfrac{1}{n}\Var\sum_{k=1}^n \eta_k=\dfrac{1}{n}\Var\nu_t([0;n])\longrightarrow \dfrac{3-2\sqrt{2}}{\sqrt{\pi t}}>0,\quad n\to\infty, \] and so in particular \[ \Var\sum_{k=1}^n \eta_k\longrightarrow +\infty,\quad n\to\infty. \] Third, it is easy to check that for the strong mixing coefficient $\alpha^\eta$ of the sequence $\{\eta_n,\; n\ges 1\}$ we have \[ \alpha^\eta(n)\les \alpha(n),\quad n\ges 1, \] where \begin{gather*} \alpha(n):=\sup\{\abs{\mbP(AB)-\mbP(A)\mbP(B)},\; A\in\sigma(x(u,t)-u,\; u\les h),\\ B\in\sigma(x(u,t)-u,\; u\ges h+n),\; h\in\mbR\}. \end{gather*} In~\cite{Glinyanaya} it was proved that for $n\ges 1$ large enough \[ \alpha(n)\les 2\sqrt{\dfrac{2}{\pi t}} \int\limits_n^{+\infty} e^{-r^2/2t}\,dr. 
\] Therefore, using the standard estimate for the tails of the Gaussian distribution, we obtain that for $n\ges 1$ large enough \[ \alpha^\eta(n)\les 2\sqrt{\dfrac{2}{\pi t}} \int\limits_n^{+\infty} e^{-r^2/2t}\,dr\les \dfrac{2}{n}\sqrt{\dfrac{2}{\pi t}} e^{-n^2/2t}, \] and so for all $\delta>0$ \[ \sum_{n=1}^\infty \left(\alpha^\eta(n)\right)^{\delta/(2+\delta)}<+\infty. \] Thus, applying Theorem~\ref{theorem7} to the sequence $\{\eta_n,\; n\ges 1\}$ and using equality~\eqref{equation6} finishes the proof. \end{proof} \begin{proof}[Proof of Theorem~\ref{theorem2}] The proof is based on the following theorem. \begin{theorem} \textup{\cite[Theorem~2]{Tikhomirov}} Let $\{X_n,\; n\ges 1\}$ be a strictly stationary sequence of centered random variables with finite variance such that for some $\delta\in (0,1]$ \[ \E\abs{X_1}^{2+\delta}<+\infty \] and for some constants $K>0$ and $\beta>0$ \[ \alpha^X(n)\les Ke^{-\beta n},\quad n\ges 1. \] Then there exists a constant $A=A(K,\beta,\delta)>0$ such that \[ \sup_{z\in\mbR} \abs{\Prob{\frac 1{\sigma_n} \sum_{k=1}^n X_k\les z}- \dfrac{1}{\sqrt{2\pi}} \int\limits_{-\infty}^z e^{-r^2/2}\,dr}\les An^{-\delta/2}(\log n)^{1+\delta},\quad n\ges 1, \] where \[ \sigma_n^2=\E\left(\sum_{k=1}^n X_k\right)^2. \] \end{theorem} Applying this theorem to the sequence $\{\eta_n,\; n\ges 1\}$ defined above and using equality~\eqref{equation6}, we obtain the desired result. \end{proof}
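Both asymptotic regimes in Corollary~\ref{corollary5} can be confirmed numerically from the closed-form variance of the Proposition, writing the two Gaussian integrals via the error function, $\int_0^x e^{-z^2/4}\,dz=\sqrt{\pi}\,\mathrm{erf}(x/2)$ and $\int_0^x e^{-z^2/2}\,dz=\sqrt{\pi/2}\,\mathrm{erf}(x/\sqrt{2})$. An illustrative sketch with $t=1$ (not part of the proofs):

```python
import math

def var_nu(x):
    """Var nu_t([0; u]) from the Proposition, with t = 1 and x = u,
    the integrals rewritten in terms of math.erf."""
    pi = math.pi
    return (-4 / pi
            + 3 * x / math.sqrt(pi)
            + (4 / pi) * math.exp(-x * x / 2)
            - (2 / pi) * math.sqrt(pi) * math.erf(x / 2)
            - (4 * x / pi) * math.sqrt(pi / 2) * math.erf(x / math.sqrt(2)))

pi = math.pi
# large-u regime: Var ~ (3 - 2*sqrt(2)) * u / sqrt(pi)
x = 1e4
assert abs(var_nu(x) / ((3 - 2 * math.sqrt(2)) * x / math.sqrt(pi)) - 1) < 0.01
# small-u regime: Var ~ (3 - 2/sqrt(pi)) * u / sqrt(pi)
x = 1e-3
assert abs(var_nu(x) / ((3 - 2 / math.sqrt(pi)) * x / math.sqrt(pi)) - 1) < 0.01
print("both asymptotics of Corollary 5 confirmed numerically")
```

The same closed form also reproduces the limiting slope $(3-2\sqrt{2})/\sqrt{\pi t}$ used as $\sigma_t^2$ in Theorem~\ref{theorem1}.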
\section{Introduction} Topological Data Analysis (TDA)~\cite{carlsson,eh}, which clarifies the geometric features of data from the viewpoint of topology, has developed rapidly in this century, both in theory and in applications. Persistent homology and its persistence diagram (PD) \cite{elz,zc} are important tools in TDA. Persistent homology enables us to capture multiscale topological features effectively and quantitatively. Fast software for computing persistent homology has been developed \cite{dipha,phat}, and many applications have been achieved in areas such as materials science \cite{Hiraoka28062016,granular,PhysRevE.95.012504}, sensor networks \cite{sensor}, the evolution of viruses~\cite{virus}, and so on. From the viewpoint of data analysis, a PD has some significant properties: translation and rotation invariance, multiscalability, and robustness to noise. PDs are considered to be compact descriptors for complicated geometric data. The $q$th homology $H_q$ encodes $q$-dimensional geometric structures of data such as connected components ($q=0$), rings ($q=1$), cavities ($q=2$), etc. The $q$th persistent homology encodes the information about $q$-dimensional geometric structures together with their scale. A PD, a multiset\footnote{A multiset is a set with a multiplicity attached to each point.} in $\mathbb{R}\times(\mathbb{R} \cup \{\infty\})$, is used to summarize this information. Each point in a PD is called a birth-death pair; it represents a homological structure in the data, whose scale is encoded on the x- and y-axes. \begin{figure}[hbtp] \centering \includegraphics[width=0.5\hsize]{amorphous.pdf} \caption{The 1st PD for the atomic configuration of amorphous silica in \cite{Hiraoka28062016}, reproduced from the simulation data. The data was provided by Dr. Nakamura. 
} \label{fig:amorphous-silica} \end{figure} A typical workflow of data analysis with persistent homology is as follows: \begin{enumerate} \item Construct a filtration from the data \begin{itemize} \item A typical input is a point cloud, a finite set of points in $\mathbb{R}^n$, and a typical filtration is an alpha filtration \end{itemize} \item Compute the PD from the filtration \item Analyze the PD to investigate the geometric features of the data \end{enumerate} In the last part of the above workflow, we often want to inversely reconstruct, in the original input data, a geometric structure such as a ring or a cavity corresponding to each birth-death pair on the PD. Such inverse analysis is practically important for the use of PDs. For example, consider the 1st PD shown in Fig.~\ref{fig:amorphous-silica}, computed from the atomic configuration of amorphous silica obtained by a molecular dynamics simulation \cite{Hiraoka28062016}. In this PD, there are some characteristic bands $C_P, C_T, C_O, B_O$, and these bands correspond to typical geometric structures in amorphous silica. To analyze the PD more deeply, we want to reconstruct the rings corresponding to such birth-death pairs in the original data. In that paper, optimal cycles, one such inverse analysis method, are effectively used to clarify these typical structures. \begin{figure}[htbp] \centering \includegraphics[width=\hsize]{optcyc_one_hole.pdf} \caption{A simplicial complex with one hole.} \label{fig:optcyc_one_hole} \end{figure} A representative cycle of a generator of the homology vector space carries such information, but it is not unique, and we want to find a better cycle to understand the homology generator for the analysis of a PD. For example, Fig.~\ref{fig:optcyc_one_hole}(a) has one homology generator in $H_1$, and the cycles $z_1$, $z_2$, and $z_3$ shown in Fig.~\ref{fig:optcyc_one_hole} (b), (c), and (d) carry the same homological information. However, we consider that $z_3$ is the best for understanding the homology.
Optimization problems on homology are used to find such a representative cycle. We can find the ``tightest'' representative cycle under a certain formalization. Such optimization problems have been widely studied under various settings \cite{optimal-Day,Erickson2005,Chen2011}, and two concepts, optimal cycles \cite{Escolar2016} and volume optimal cycles \cite{voc}, have been successfully applied to persistent homology. The optimal cycle minimizes the size of the cycle, while the volume optimal cycle minimizes the internal volume of the cycle. Both methods give a tightest cycle, in different senses. Volume optimal cycles for persistent homology were proposed in \cite{voc} under a restriction on the dimension: we can use them only for the $(n-1)$-th persistent homology embedded in $\mathbb{R}^n$. Under this restriction, however, there is an efficient computation algorithm using Alexander duality. In this paper, we generalize the concept of volume optimal cycles to any persistent homology and show a computation algorithm. The idea in \cite{voc} cannot be applied to find a volume optimal ring (a volume optimal cycle for $q=1$) in a point cloud in $\mathbb{R}^3$, but our method is applicable to such a case. In that case, optimal cycles are also applicable, but our new algorithm is simpler, faster for large data, and gives us better information. The contributions of this paper are as follows: \begin{itemize} \item The concept of volume optimal cycles is proposed to identify good representatives of generators in persistent homology. This is useful for understanding a persistence diagram. \begin{itemize} \item The concept was already proposed in \cite{voc} in a strongly limited sense regarding dimension, and this paper generalizes it. \item Optimal cycles are also usable for the same purpose, but the algorithm in this paper is easier to implement, faster, and gives better information.
\begin{itemize} \item Especially, the children birth-death pairs shown in Section~\ref{sec:compare} are available only with volume optimal cycles. \end{itemize} \end{itemize} \item Mathematical properties of volume optimal cycles are clarified. \item Effective computation algorithms for volume optimal cycles are proposed. \item The algorithm is implemented, and some examples are computed by the program to show the usefulness of volume optimal cycles. \end{itemize} The rest of this paper is organized as follows. The fundamental ideas, such as persistent homology and simplicial complexes, are introduced in Section~\ref{sec:ph}. In Section~\ref{sec:oc} the idea of optimal cycles is reviewed. Section~\ref{sec:voc} is the main part of the paper: the idea of volume optimal cycles and the computation algorithm in a general setting are presented there, together with some mathematical properties of volume optimal cycles. In Section~\ref{sec:vochd} we show some special properties of the $(n-1)$-th persistent homology in $\mathbb{R}^n$ and a faster algorithm. We also explain tree structures in the $(n-1)$-th persistent homology. In Section~\ref{sec:compare}, we compare volume optimal cycles and optimal cycles. In Section~\ref{sec:example} we show some computational examples produced by the proposed algorithms. In Section~\ref{sec:conclusion}, we conclude the paper. \section{Persistent homology}\label{sec:ph} In this section, we explain some preliminaries about persistent homology and geometric models. Persistent homology is available in various general settings, but we mainly focus on the persistent homology of a filtration of simplicial complexes, especially an alpha filtration given by a point cloud. \subsection{Persistent homology} Let $\mathbb{X} = \{X_t \mid t \in T\}$ be a \textit{filtration} of topological spaces, where $T$ is a subset of $\mathbb{Z}$ or $\mathbb{R}$. That is, $X_t \subset X_{t'}$ holds for every $t \leq t'$.
Then we define the $q$th homology vector spaces $\{H_q(X_t)\}_{t \in T}$ with coefficients in a field $\Bbbk$ and the homology maps $\varphi_s^t : H_q(X_s) \to H_q(X_t)$ for all $s \leq t$ induced by the inclusion maps $X_s \xhookrightarrow{} X_t$. The family $H_q(\mathbb{X}) = (\{H_q(X_t)\}_t, \{ \varphi_s^t\}_{s \leq t})$ is called the $q$th \textit{persistent homology}. The theory of persistent homology enables us to analyze the structure of this family. Under some assumptions, $H_q(\mathbb{X})$ is uniquely decomposed as follows~\cite{elz,zc}: \begin{align*} H_q(\mathbb{X}) = \bigoplus_{i=1}^p I(b_i, d_i), \end{align*} where $b_i \in T, d_i \in T \cup \{\infty\}$ with $b_i < d_i$. Here, $I(b, d) = (U_t, f_s^t)$ consists of a family of vector spaces and linear maps: \begin{align*} U_t&=\left\{ \begin{array}{ll} \Bbbk, &\mbox{if } b \leq t < d, \\ 0, & \mbox{otherwise}, \end{array} \right. \\ f_s^t&:U_s \to U_t \\ f_s^t&=\left\{ \begin{array}{ll} \textrm{id}_\Bbbk, &\mbox{if } b \leq s \leq t < d, \\ 0, & \mbox{otherwise}. \end{array} \right. \end{align*} This means that for each $I(b_i, d_i)$ there is a $q$-dimensional hole in $\mathbb{X}$ that appears at $t = b_i$, persists up to $t < d_i$, and disappears at $t = d_i$. In the case of $d_i = \infty$, the $q$-dimensional hole never disappears in $\mathbb{X}$. $b_i$ is called a \textit{birth time}, $d_i$ is called a \textit{death time}, and the pair $(b_i, d_i)$ is called a \textit{birth-death pair}. When $\mathbb{X}$ is a filtration of finite simplicial/cell/cubical complexes on $T$ with $\#T < \infty$ (we call $\mathbb{X}$ a \textit{finite filtration} under this condition), such a unique decomposition exists.
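This decomposition is what PD software actually computes. As a concrete illustration, the standard column-reduction algorithm over $\Bbbk = \mathbb{Z}_2$ can be sketched in a few lines of Python (this toy code and its index conventions are my own, not the implementation used in the paper):

```python
# Minimal sketch of the standard Z_2 column-reduction algorithm.
# Simplices are indexed 1..K in filtration order; each column stores
# the boundary of a simplex as a set of simplex indices (Z_2 coefficients).

def reduce_boundary(boundary):
    """boundary: {simplex index: set of boundary-face indices (Z_2)}.
    Returns (pairs, essential): the birth-death index pairs and the
    indices of essential (never-dying) classes."""
    reduced = {}       # reduced columns
    low_to_col = {}    # pivot index -> column that owns it
    pairs = []
    for j in sorted(boundary):
        col = set(boundary[j])
        while col and max(col) in low_to_col:
            col ^= reduced[low_to_col[max(col)]]   # Z_2 column addition
        reduced[j] = col
        if col:                        # j kills the class born at low(j)
            low_to_col[max(col)] = j
            pairs.append((max(col), j))
    births = {b for b, _ in pairs}
    essential = [j for j in sorted(boundary)
                 if not reduced[j] and j not in births]
    return pairs, essential

# Filled triangle: vertices 1,2,3, then edges 4,5,6, then the triangle 7.
boundary = {1: set(), 2: set(), 3: set(),
            4: {1, 2}, 5: {2, 3}, 6: {1, 3},
            7: {4, 5, 6}}
pairs, essential = reduce_boundary(boundary)
print(pairs)      # [(2, 4), (3, 5), (6, 7)]
print(essential)  # [1]  (one connected component lives forever)
```

Sorting the resulting pairs by the dimension of the birth simplex yields $D_0(\mathbb{X})$, $D_1(\mathbb{X})$, and so on.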
When we have the unique decomposition, the $q$th \textit{persistence diagram} of $\mathbb{X}$, $D_q(\mathbb{X})$, is defined as the multiset \begin{align*} D_q(\mathbb{X}) = \{(b_i, d_i) \mid i=1,\ldots, p\}, \end{align*} and the 2D scatter plot or the 2D histogram of $D_q(\mathbb{X})$ is often used to visualize the diagram. As preparation, we investigate the detailed algebraic structure of persistent homology. For simplicity, we assume the following condition on $\mathbb{X}$. \begin{cond}\label{cond:ph} Let $X = \{\sigma_1, \ldots, \sigma_K\}$ be a finite simplicial complex. For any $1 \leq k \leq K$, $X_k = \{\sigma_1, \ldots, \sigma_k\}$ is a subcomplex of $X$. \end{cond} Under this condition, \begin{align} \mathbb{X}: \emptyset = X_0 \subset X_1 \subset \cdots \subset X_K = X,\label{eq:ph} \end{align} is a filtration of complexes. For a general finite filtration, we can construct a filtration satisfying Condition~\ref{cond:ph} by ordering all simplices properly. Let $\partial_q: C_q(X) \to C_{q-1}(X)$ be the boundary operator on $C_q(X)$ and $\partial_q^{(k)}: C_q(X_k) \to C_{q-1}(X_k)$ be the boundary operator of $C_q(X_k)$. The cycles $Z_q(X_k)$ and boundaries $B_q(X_k)$ are defined as the kernel of $\partial_q^{(k)}$ and the image of $\partial_{q+1}^{(k)}$, respectively, and the $q$th homology vector spaces are defined by $H_q(X_k) = Z_q(X_k)/B_q(X_k)$. Condition~\ref{cond:ph} says that if $\sigma_k $ is a $q$-simplex, \begin{equation} \label{eq:chain_plus1} \begin{aligned} C_q(X_{k}) & = C_q(X_{k-1})\oplus\left<\sigma_k\right>, \\ C_{q'}(X_{k}) & = C_{q'}(X_{k-1}), \mbox{ for $q' \not = q$}, \end{aligned} \end{equation} holds.
From the decomposition theorem and \eqref{eq:chain_plus1}, for each birth-death pair $(b_i, d_i)$, we can find $z_i \in C_q(X)$ such that \begin{align} &z_i \not \in Z_q(X_{b_i-1}), \label{eq:birth_pre}\\ &z_i \in Z_q(X_{b_i}) = Z_q(X_{b_i-1}) \oplus \left<\sigma_{b_i}\right>, \label{eq:birth_post}\\ &z_i \not \in B_q(X_{k}) \mbox{ for $k < d_i$}, \label{eq:death_pre} \\ &z_i \in B_q(X_{d_i}) = B_q(X_{d_i-1}) \oplus \left<\partial \sigma_{d_i}\right>, \label{eq:death_post} \\ &\{[z_i]_k \mid b_i \leq k < d_i\} \text{ is a basis of } H_q(X_k), \label{eq:ph-basis} \end{align} where $[z]_k = [z]_{B_q(X_k)} \in H_q(X_k)$. \eqref{eq:death_post} holds only if $d_i \not = \infty$. This $[z_i]_k$ is a homology generator that persists from $k={b_i}$ to $k = {d_i-1}$. $\{z_i\}_{i=1}^p$ is called the set of \textit{persistence cycles} for $D_q(\mathbb{X}) = \{(b_i, d_i)\}_{i=1}^p$. An algorithm for computing a PD actually finds persistence cycles from a given filtration. The persistence cycle of $(b_i, d_i)$ is not unique; therefore we want to find a ``good'' persistence cycle to find out the geometric structure corresponding to each birth-death pair. That is the purpose of the volume optimal cycle, which is the main topic of this paper. We remark that the condition \eqref{eq:ph-basis} can be easily proved from (\ref{eq:birth_pre}-\ref{eq:death_post}) and the decomposition theorem, and hence we only need to show (\ref{eq:birth_pre}-\ref{eq:death_post}) to prove that given $\{z_i\}_{i=1}^p$ are persistence cycles. \subsection{Alpha filtration} One of the most commonly used filtrations for data analysis with persistent homology is the alpha filtration~\cite{eh, em}. An alpha filtration is defined from a point cloud, a finite set of points $P = \{x_i \in \mathbb{R}^n\}$. The alpha filtration is a filtration of alpha complexes, which are defined via a Voronoi diagram and a Delaunay triangulation.
The \textit{Voronoi diagram} for a point cloud $P$, which is a decomposition of $\mathbb{R}^n$ into \textit{Voronoi cells} $\{V(x_i) \mid x_i \in P\}$, is defined by \begin{align*} V(x_i) = \{x \in \mathbb{R}^n \mid \|x - x_i\|_2 \leq \|x - x_j\|_2 \text{ for any } j\not = i\}. \end{align*} The \textit{Delaunay triangulation} of $P$, $\textrm{Del}(P)$, which is a simplicial complex whose vertices are the points of $P$, is defined by \begin{align*} \textrm{Del}(P) = \{[x_{i_0} \cdots x_{i_q}] \mid V(x_{i_0}) \cap \cdots \cap V(x_{i_q}) \not = \emptyset\}, \end{align*} where $[x_{i_0} \cdots x_{i_q}]$ is the $q$-simplex whose vertices are $x_{i_0}, \ldots, x_{i_q} \in P$. Under the assumption of general position in the sense of \cite{em}, the Delaunay triangulation is a simplicial decomposition of the convex hull of $P$ and it has good geometric properties. The \textit{alpha complex} $\textrm{Alp}(P, r)$ with radius parameter $r \geq 0$, which is a subcomplex of $\textrm{Del}(P)$, is defined as follows: \begin{align*} \textrm{Alp}(P, r) = \{[x_{i_0} \cdots x_{i_q}] \in \textrm{Del}(P) \mid B_r(x_{i_0}) \cap \cdots \cap B_r(x_{i_q}) \not = \emptyset \}, \end{align*} where $B_r(x)$ is the closed ball with center $x$ and radius $r$. A significant property of the alpha complex is the following homotopy equivalence to the $r$-ball model: \begin{align*} \bigcup_{x_i \in P} B_r(x_i) \simeq |\textrm{Alp}(P, r)|, \end{align*} where $|\textrm{Alp}(P,r)|$ is the geometric realization of $\textrm{Alp}(P, r)$. The \emph{alpha filtration} for $P$ is defined by $\{\textrm{Alp}(P,r)\}_{r\geq 0}$. Figure~\ref{fig:alpha} illustrates an example of a filtration by the $r$-ball model and the corresponding alpha filtration. The 1st PD of this filtration is $\{(r_2, r_5), (r_3, r_4)\}$.
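As a small illustration of the ball-intersection rule above: a Delaunay simplex enters the filtration exactly when $r$ reaches the minimum-enclosing-ball radius of its vertices. The following sketch (my own toy code with an arbitrary point set, not the paper's implementation; exact alpha complex software additionally restricts the balls by Voronoi cells) computes these filtration values in 2D using SciPy's Delaunay triangulation:

```python
# Filtration value of each Delaunay simplex under the ball-intersection
# rule: the minimum-enclosing-ball radius of its vertices (2D, brute force).
import itertools
import numpy as np
from scipy.spatial import Delaunay

def circumcenter(A, B, C):
    ax, ay = A - C
    bx, by = B - C
    d = 2.0 * (ax * by - ay * bx)
    if abs(d) < 1e-12:
        return None                      # collinear triple
    ux = ((ax**2 + ay**2) * by - (bx**2 + by**2) * ay) / d
    uy = ((bx**2 + by**2) * ax - (ax**2 + ay**2) * bx) / d
    return C + np.array([ux, uy])

def min_enclosing_radius(pts):
    """Radius of the smallest ball containing all pts (2D, brute force)."""
    pts = np.asarray(pts, float)
    if len(pts) == 1:
        return 0.0
    # the optimal center is a pair midpoint or a circumcenter of a triple
    centers = [(pts[a] + pts[b]) / 2
               for a, b in itertools.combinations(range(len(pts)), 2)]
    for t in itertools.combinations(range(len(pts)), 3):
        c = circumcenter(*pts[list(t)])
        if c is not None:
            centers.append(c)
    return min(np.linalg.norm(pts - c, axis=1).max() for c in centers)

points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.2, 1.1]])
values = {}
for simplex in Delaunay(points).simplices:
    verts = sorted(int(v) for v in simplex)
    for k in (1, 2, 3):                  # vertices, edges, triangles
        for face in itertools.combinations(verts, k):
            values[face] = min_enclosing_radius(points[list(face)])
# e.g. values[(0,)] == 0.0 and values[(0, 1)] == 0.5 (edge of length 1)
```

Since the minimum-enclosing-ball radius of a face never exceeds that of a simplex containing it, the computed values are monotone and therefore indeed define a filtration.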
Since there are $r_1 < \cdots < r_K$ such that $\textrm{Alp}(P, s) = \textrm{Alp}(P, t)$ for any $r_i \leq s < t < r_{i+1}$, we can treat the alpha filtration as a finite filtration $\textrm{Alp}(P, r_1) \subset \cdots \subset \textrm{Alp}(P, r_K)$. \begin{figure}[htbp] \centering \includegraphics[width=0.8\hsize]{pc-alpha.pdf} \caption{An $r$-ball model and the corresponding alpha filtration. Each red simplex in this figure appears at the radius parameter $r_i$. } \label{fig:alpha} \end{figure} We also mention the weighted alpha complex and its filtration~\cite{weightedalpha}. An alpha complex is topologically equivalent to the union of $r$-balls, while a weighted alpha complex is topologically equivalent to the union of $\sqrt{r^2+\alpha_i}$-balls, where $\alpha_i$ depends on each point. The weighted alpha filtration is useful for studying the geometric structure of a point cloud whose points have their own radii. For example, for the analysis of atomic configurations, the squares of the ionic radii or van der Waals radii are used as the $\alpha_i$. \section{Optimal cycle}\label{sec:oc} First, we discuss optimal cycles on ordinary homology with coefficient field $\Bbbk = \mathbb{Z}_2$. Figure~\ref{fig:optcyc_one_hole}(a) shows a simplicial complex whose 1st homology vector space $H_1$ is isomorphic to $\mathbb{Z}_2$. In Fig.~\ref{fig:optcyc_one_hole}(b), (c), and (d), the cycles $z_1$, $z_2$, and $z_3$ carry the same information about $H_1$. That is, $H_1 = \left<[z_1]\right> = \left<[z_2]\right> = \left<[z_3]\right>$. However, we intuitively consider that $z_3$ is the best representative of the hole in Fig.~\ref{fig:optcyc_one_hole} since $z_3$ is the shortest of these loops.
Since the size of a loop $z = \sum_{\sigma:1-\text{simplex}} \alpha_\sigma\sigma \in Z_1(X)$ is equal to \begin{align*} \#\{\sigma : 1\text{-simplex} \mid \alpha_\sigma \not = 0 \}, \end{align*} which is the $\ell^0$ ``norm''\footnote{ For a finite dimensional $\mathbb{R}$- or $\mathbb{C}$-vector space with basis $\{g_i\}_i$, the $\ell^0$ norm $\|\cdot\|_0$ is defined by $\|\sum_i \alpha_i g_i \|_0 = \# \{i \mid \alpha_i \not = 0 \}$. Mathematically this is not a norm since it is not homogeneous, but in information science and statistics it is called the $\ell^0$ norm. }\footnote{ On a $\mathbb{Z}_2$-vector space, no norm can be defined mathematically, but it is natural to call this the $\ell^0$ norm. } of $z$, we write it as $\|z\|_0$. Here, $z_3$ is the solution of the following problem: \begin{align*} \mbox{minimize } \|z\|_0 ,\mbox{ subject to } z\sim z_1. \end{align*} The minimizing $z$ is called the \textit{optimal cycle} for $z_1$. From the definition of homology, we can rewrite the problem as follows: \begin{equation} \label{eq:optcyc_one_hole} \begin{aligned} \mbox{minimize } &\|z\|_0, \mbox{ subject to:} \\ z &= z_1 + \partial w, \\ w &\in C_2(X). \end{aligned} \end{equation} This completes the formalization of the optimal cycle on a simplicial complex with one hole. \begin{figure}[tbp] \centering \includegraphics[width=0.4\hsize]{optcyc_two_holes.pdf} \caption{A simplicial complex with two holes.} \label{fig:optcyc_two_hole} \end{figure} What about the case where a complex has two or more holes? We consider the example in Fig.~\ref{fig:optcyc_two_hole}. From $z_1$ and $z_2$, we try to find $z_1'$ and $z_2'$ using a similar formalization. If we apply the optimization \eqref{eq:optcyc_one_hole} to each of $z_1$ and $z_2$, then $z_1''$ and $z_2'$ are found. How can we find $z_1'$ from $z_1$ and $z_2$? The problem is the hole represented by $z_2'$; therefore we ``fill'' that hole and solve the minimization problem.
Mathematically, filling the hole corresponds to considering $Z_1(X)/(B_1(X) \oplus \left<z_2'\right>)$ instead of $Z_1(X)/B_1(X)$, and the following optimization problem gives us the required loop $z_1'$: \begin{align*} \mbox{minimize } &\|z\|_0, \mbox{ subject to:} \\ z & = z_1 + \partial w + k z_2, \\ w &\in C_2(X), \\ k & \in \mathbb{Z}_2. \end{align*} When a complex has many holes, we can apply this idea repeatedly to find all optimal cycles. The idea of optimal cycles obviously applies to $q$th homology for any $q$. \subsection{How to compute an optimal cycle}\label{subsec:fast-computation} Finding a basis of a homology vector space is not a difficult problem for a computer: we prepare a matrix representation of the boundary operator and apply a matrix reduction algorithm. Please see \cite{comphom} for the detailed algorithm. Therefore the problem is how to solve the above minimization problem. In general, solving an optimization problem over a $\mathbb{Z}_2$ linear space is difficult; it is a kind of combinatorial optimization problem, and although such problems are well studied, it is well known that they are sometimes hard to solve on a computer. One approach is to use linear programming, as in \cite{sensor-l0-l1}. Since optimization over $\mathbb{Z}_2$ is hard, we use $\mathbb{R}$ as the coefficient field. With $\mathbb{R}$ coefficients, the $\ell^0$ norm still measures the size of a loop, and $\ell^0$ optimization is natural for our purpose. However, $\ell^0$ optimization is also a difficult problem, so we replace the $\ell^0$ norm with the $\ell^1$ norm. It is well known in the fields of sparse sensing and machine learning that $\ell^1$ optimization gives a good approximation of $\ell^0$ optimization. That is, we solve the following optimization problem instead of \eqref{eq:optcyc_one_hole}: \begin{equation} \label{eq:optcyc_one_hole-l1} \begin{aligned} \mbox{minimize } &\|z\|_1, \mbox{ subject to:} \\ z &= z_1 + \partial w, \\ w &\in C_2(X; \mathbb{R}).
\end{aligned} \end{equation} This is a linear programming problem, and we can solve it very efficiently with good solvers such as cplex\footnote{\url{https://www-01.ibm.com/software/commerce/optimization/cplex-optimizer/}} and Clp\footnote{\url{https://projects.coin-or.org/Clp}}. Another approach is to use integer programming, as in \cite{optimal-Day,Escolar2016}. The $\ell^1$ optimization gives a good approximation, but the solution may not be exact. However, if all coefficients are restricted to $0$ or $\pm 1$ in the optimization problem \eqref{eq:optcyc_one_hole-l1}, the $\ell^0$ norm and the $\ell^1$ norm are identical, and this gives a better solution. This restriction on the coefficients has another advantage: we can understand the optimal solution in a more intuitive way. Such an optimization problem is called an integer program. Integer programming is much slower than linear programming, but good solvers are available for integer programming as well. \subsection{Optimal cycle for a filtration} Now, we explain optimal cycles on a filtration for analyzing persistent homology, following \cite{Escolar2016}. We start from the example in Fig.~\ref{fig:optcyc_filtration}. \begin{figure}[htbp] \centering \includegraphics[width=\hsize]{optcyc_filtration.pdf} \caption{A filtration example for optimal cycles.} \label{fig:optcyc_filtration} \end{figure} In the filtration, a hole $[z_1]$ appears at $X_2$ and disappears at $X_3$; another hole $[z_2]$ appears at $X_4$, and $[z_3]$ appears at $X_5$. The 1st PD of the filtration is $\{(2,3), (4,\infty), (5, \infty)\}$. The persistence cycles $z_1, z_2, z_3$ are computable by the persistent homology algorithm, and we want to find $z_3'$ or $z_3''$ to analyze the hole corresponding to the birth-death pair $(5, \infty)$.
The hole $[z_1]$ is already dead at $X_5$ and $[z_2]$ remains alive at $X_5$, so we can find $z_3'$ or $z_3''$ by solving the following optimization problem: \begin{align*} \mbox{minimize } & \|z\|_0 \mbox{ subject to: } \\ z &= z_3 + \partial w + k z_2, \\ w & \in C_2(X_5), \\ k & \in \Bbbk. \end{align*} In this case, $z_3''$ is chosen because $\|z_3'\|_0 > \|z_3''\|_0$. By generalizing this idea, we obtain Algorithm~\ref{alg:optcyc} for finding optimal cycles on a filtration $\mathbb{X}$\footnote{ In fact, in \cite{Escolar2016}, two slightly different algorithms are shown, and this algorithm is one of them. }. Of course, to solve the optimization problem in Algorithm~\ref{alg:optcyc}, we can use the computation techniques shown in Section~\ref{subsec:fast-computation}. \begin{algorithm}[ht] \caption{Computation of optimal cycles on a filtration}\label{alg:optcyc} \begin{algorithmic} \State Compute $D_q(\mathbb{X})$ and persistence cycles $z_1, \ldots, z_n$ \State Choose $(b_i, d_i) \in D_q(\mathbb{X})$ by a user \State Solve the following optimization problem \begin{align*} \mbox{minimize } &\|z\|_1, \mbox{ subject to:} \\ z &= z_i + \partial w + \sum_{j \in T_i} \alpha_j z_j, \\ w & \in C_{q+1}(X_{b_i}), \\ \alpha_j & \in \Bbbk, \\ \text{where } T_i& = \{j \mid b_j < b_i < d_j\}. \end{align*} \end{algorithmic} \end{algorithm} \section{Volume optimal cycle}\label{sec:voc} In this section, we propose volume optimal cycles, a new tool to characterize generators appearing in persistent homology. We present the generalized version of volume optimal cycles and its computation algorithm; the limited version of volume optimal cycles from \cite{voc} will be explained in the next section. We assume Condition~\ref{cond:ph} and consider the filtration $\mathbb{X}: \emptyset = X_0 \subset \cdots \subset X_K = X$. A \textit{persistent volume} for $(b_i, d_i) \in D_q(\mathbb{X})$ is defined as follows.
\begin{definition} $z \in C_{q+1}(X)$ is a persistent volume for $(b_i, d_i) \in D_q(\mathbb{X})$ if $z$ satisfies the following conditions: \begin{align} z &= \sigma_{d_i} + \sum_{\sigma_k \in \mathcal{F}_{q+1}} \alpha_k \sigma_k, \label{eq:vc-1}\\ \tau^*(\partial z) &= 0 \mbox{ for all } \tau \in \mathcal{F}_{q}, \label{eq:vc-2}\\ \sigma_{b_i}^*(\partial z) &\not = 0, \label{eq:vc-3} \end{align} where $\mathcal{F}_{q} = \{ \sigma_k : q\textup{-simplex} \mid b_i < k < d_i \}$, the $\alpha_k \in \Bbbk$ are coefficients indexed by $\sigma_k \in \mathcal{F}_{q+1}$, and $\sigma_k^*$ is the dual basis of the cochain group $C^q(X)$, i.e., $\sigma_k^*$ is the linear map on $C_q(X)$ satisfying $\sigma_k^*(\sigma_j) = \delta_{kj}$ for all $q$-simplices $\sigma_k, \sigma_j$. \end{definition} Note that a persistent volume is defined only if the death time is finite. The \textit{volume optimal cycle} for $(b_i, d_i)$ and the \textit{optimal volume} for the pair are defined as follows. \begin{definition}\label{defn:voc} $\partial \hat{z}$ is the volume optimal cycle and $\hat{z}$ is the optimal volume for $(b_i, d_i) \in D_q(\mathbb{X})$ if $\hat{z}$ is the solution of the following optimization problem: \begin{center} minimize $\|z\|_0$, subject to \eqref{eq:vc-1}, \eqref{eq:vc-2}, and \eqref{eq:vc-3}. \end{center} \end{definition} The following theorem ensures that the optimization problem of the volume optimal cycle always has a solution. \begin{theorem}\label{thm:existence_voc} There is always a persistent volume for any $(b_i, d_i) \in D_q(\mathbb{X})$. \end{theorem} The following theorem ensures that the volume optimal cycle is a good representative of the homology generator corresponding to $(b_i, d_i)$. \begin{theorem}\label{thm:good_voc} Let $\{x_j \mid j=1, \ldots, p\}$ be all persistence cycles for $D_q(\mathbb{X})$. If $z_i$ is a persistent volume for $(b_i, d_i) \in D_q(\mathbb{X})$, then $\{x_j \mid j\not = i\} \cup \{\partial z_i\}$ are also persistence cycles for $D_q(\mathbb{X})$.
\end{theorem} Intuitively speaking, a homology generator dies when the internal volume of a ring, a cavity, etc.\ is filled, and a persistent volume is such an internal volume. The volume optimal cycle minimizes this internal volume instead of the size of the cycle. \begin{proof}[Proof of Theorem \ref{thm:existence_voc}] Let $z_i$ be a persistence cycle satisfying (\ref{eq:birth_pre}-\ref{eq:death_post}). Since \begin{align*} z_i \in B_q(X_{d_i}) \backslash B_q(X_{d_i-1}), \end{align*} we can write $z_i$ as follows: \begin{equation} \begin{aligned} z_i &= \partial (w_0 + w_1), \\ w_0 &= \sigma_{d_i} + \sum_{\sigma_k \in \mathcal{F}_{q+1}} \alpha_k \sigma_k,\\ w_1 &= \sum_{\sigma_k \in \mathcal{G}_{q+1}} \alpha_k \sigma_k, \end{aligned}\label{eq:phbase_decomp} \end{equation} where $\mathcal{G}_{q+1} = \{\sigma_k: (q+1)\textrm{-simplex} \mid k < b_i\}$. Note that the coefficient of $\sigma_{d_i}$ in $w_0$ can be normalized as in \eqref{eq:phbase_decomp}. Now we prove that $w_0$ is a persistent volume. From $z_i \in Z_q(X_{b_i})$ and $\partial w_1 \in C_q(X_{b_i-1})$, we have $\partial w_0 = z_i - \partial w_1 \in C_q(X_{b_i})$, and this means that $\tau^*(\partial w_0) = 0$ for all $\tau \in \mathcal{F}_q$. From $\partial w_1 \in C_q(X_{b_i-1})$, we have $\sigma_{b_i}^*(\partial w_1) = 0$ and therefore $\sigma_{b_i}^*(\partial w_0) = \sigma_{b_i}^*(z_i)$, and the right-hand side is not zero since $z_i \in Z_q(X_{b_i}) \backslash Z_q(X_{b_i-1}) \subset C_q(X_{b_i})\backslash C_q(X_{b_i-1})$. Therefore $w_0$ satisfies all the conditions (\ref{eq:vc-1}-\ref{eq:vc-3}). \end{proof} \begin{proof}[Proof of Theorem \ref{thm:good_voc}] We prove the following claims; the theorem follows from them. \begin{align*} \partial z_i &\in Z_q(X_{b_i}) \backslash Z_q(X_{b_i-1}), \\ \partial z_i &\in B_q(X_{d_i}) \backslash B_q(X_{d_i-1}).
\end{align*} The condition \eqref{eq:vc-2}, $\tau^*(\partial z_i) = 0 \mbox{ for all } \tau \in \mathcal{F}_{q} $, means $\partial z_i \in Z_q(X_{b_i})$. The condition \eqref{eq:vc-3}, $\sigma_{b_i}^*(\partial z_i) \not = 0$, means $\partial z_i \not \in Z_q(X_{b_i-1})$. Since $\partial z_i = \partial \sigma_{d_i} + \sum_{\sigma_k \in \mathcal{F}_{q+1}} \alpha_k \partial \sigma_k,$ and $B_q(X_{d_i}) = B_q(X_{d_i - 1}) \oplus \left< \partial \sigma_{d_i} \right>$, we have $\partial z_i \in B_q(X_{d_i}) \backslash B_q(X_{d_i-1})$, and this finishes the proof. \end{proof} \subsection{Algorithm for volume optimal cycles} To compute volume optimal cycles, we can apply the same strategies as for optimal cycles. Using linear programming with $\mathbb{R}$ coefficients and the $\ell^1$ norm is efficient and gives sufficiently good results; using integer programming is slower, but it gives better results. We now remark on the condition \eqref{eq:vc-3}. In fact, it is impossible to handle this condition by linear/integer programming directly. We need to replace it with $|\sigma_{b_i}^*(\partial z)| \geq \epsilon$ for sufficiently small $\epsilon > 0$, and we need to solve the optimization problem twice, once for $\sigma_{b_i}^*(\partial z) \geq \epsilon$ and once for $\sigma_{b_i}^*(\partial z) \leq -\epsilon$. However, as mentioned later, we can often drop the constraint \eqref{eq:vc-3} when solving the problem, and this fact is useful for faster computation. We can also apply the following heuristic performance improvement to the algorithm for an alpha filtration by using the locality of an optimal volume: the simplices contained in the optimal volume for $(b_i, d_i)$ are contained in a neighborhood of $\sigma_{d_i}$.
Therefore we take a parameter $r > 0$ and use $\mathcal{F}_q^{(r)} = \{\sigma \in \mathcal{F}_q \mid \sigma \subset B_r(\sigma_{d_i}) \}$ instead of $\mathcal{F}_q$ to reduce the size of the optimization problem, where $B_r(\sigma_{d_i})$ is the ball of radius $r$ centered at the centroid of $\sigma_{d_i}$. Obviously, we cannot find a solution if $r$ is too small. In Algorithm~\ref{alg:volopt}, $r$ is chosen by the user, but the computation software can automatically increase $r$ when the optimization problem cannot find a solution. We also use another heuristic for faster computation. To treat the constraint \eqref{eq:vc-3}, we need to apply linear programming twice, once for the positive case and once for the negative case. In many examples, however, the optimized solution automatically satisfies \eqref{eq:vc-3} even if we remove the constraint. There is an example in which this corner-cutting does not work (shown in Section~\ref{subsec:properties-voc}), but it works well in many cases. Thus we first solve the linear program without \eqref{eq:vc-3} and check whether \eqref{eq:vc-3} is satisfied; if it is, we output the solution. Otherwise, we solve the linear program twice with \eqref{eq:vc-3}. The algorithm to compute a volume optimal cycle for an alpha filtration is Algorithm~\ref{alg:volopt}. \begin{algorithm}[h!] \caption{Algorithm for a volume optimal cycle}\label{alg:volopt} \begin{algorithmic} \Procedure{Volume-Optimal-Cycle}{$\mathbb{X}, r$} \State Compute the persistence diagram $D_q(\mathbb{X})$ \State Choose a birth-death pair $(b_i, d_i) \in D_q(\mathbb{X})$ by a user \State Solve the following optimization problem: \begin{align*} \mbox{minimize } &\|z\|_1, \mbox{ subject to:}\\ z &= \sigma_{d_i} + \sum_{\sigma_k \in \mathcal{F}_{q+1}^{(r)}} \alpha_k \sigma_k, \\ \tau^*(\partial z) &= 0 \mbox{ for all } \tau \in \mathcal{F}_{q}^{(r)}.
\\ \end{align*} \If{we find the optimal solution $\hat{z}$} \If{$\sigma_{b_i}^*(\partial \hat{z}) \not = 0$} \State \Return $\hat{z}$ and $\partial \hat{z}$ \Else \State Retry the optimization twice with the additional constraint: \begin{align*} \sigma_{b_i}^*(\partial z) \geq \epsilon \text{ or } \sigma_{b_i}^*(\partial z) \leq -\epsilon \end{align*} \EndIf \Else \State \Return an error message asking the user to choose a larger $r$ \EndIf \EndProcedure \end{algorithmic} \end{algorithm} If your filtration is not an alpha filtration, you may not be able to use the locality technique. In that case, however, the core part of the algorithm still works fine and you can use it. \subsection{Some properties of volume optimal cycles} \label{subsec:properties-voc} In this subsection, we remark on some properties of volume optimal cycles. First, the volume optimal cycle for a birth-death pair is not unique. Figure~\ref{fig:multiple-voc} shows such an example. In this example, $D_1 = \{(1, 5), (3, 4), (2, 6)\}$, and both (b) and (c) are optimal volumes for the birth-death pair $(2, 6)$. In this filtration, any weighted sum of (b) and (c) with weights $\lambda$ and $1-\lambda$ ($0 \leq \lambda \leq 1$) in the sense of chain complexes is a volume optimal cycle of $(2, 6)$ if we use $\mathbb{R}$ as the coefficient field and the $\ell^1$ norm. However, standard linear programming algorithms choose an extreme point solution; hence they choose either $\lambda=0$ or $\lambda=1$, and our algorithm outputs either (b) or (c).
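The $\ell^1$ relaxation used in Algorithm~\ref{alg:optcyc} and Algorithm~\ref{alg:volopt} becomes an ordinary linear program after introducing auxiliary variables $t \geq |z|$. A minimal sketch with \texttt{scipy.optimize.linprog} (the toy complex, its orientations, and all variable names here are my own illustration, not from the paper):

```python
# l1-minimal cycle homologous to a given cycle, as a linear program.
# Toy complex: vertices 0..3; oriented edges e0=[0,1], e1=[0,2], e2=[1,2],
# e3=[1,3], e4=[2,3]; one triangle [1,2,3] with boundary [2,3]-[1,3]+[1,2].
import numpy as np
from scipy.optimize import linprog

B = np.array([[0.0], [0.0], [1.0], [-1.0], [1.0]])   # boundary matrix d_2
z1 = np.array([1.0, -1.0, 0.0, 1.0, -1.0])           # the loop 0-1-3-2-0

n_e, n_t = B.shape
# variables x = (w, t); minimize sum(t) subject to -t <= z1 + B w <= t
c = np.concatenate([np.zeros(n_t), np.ones(n_e)])
A_ub = np.block([[B, -np.eye(n_e)], [-B, -np.eye(n_e)]])
b_ub = np.concatenate([-z1, z1])
bounds = [(None, None)] * n_t + [(0.0, None)] * n_e
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
z_opt = z1 + B @ res.x[:n_t]
# the solver shortcuts through the triangle: z_opt is the 3-edge loop
# 0-1-2-0, with ||z_opt||_1 == 3 instead of 4
```

As remarked above, the solver returns an extreme point of the feasible region, so when several optima exist, one particular representative is picked.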
\begin{figure}[htbp] \centering \includegraphics[width=\hsize]{voc-not-unique.pdf} \caption{An example of non-unique volume optimal cycles.} \label{fig:multiple-voc} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.9\hsize]{voc-wrong.pdf} \caption{An example of the failure of the computation of the volume optimal cycle when the constraint \eqref{eq:vc-3} is removed.} \label{fig:voc-failure} \end{figure} Second, using the example in Fig.~\ref{fig:voc-failure}, we show that the optimization problem for the volume optimal cycle may give a wrong solution if the constraint \eqref{eq:vc-3} is removed. In this example, $(b_1, d_1), (b_2, d_2), (b_3, d_3)$ are the birth-death pairs in the 1st PD, and the volume optimal cycle for $(b_1, d_1)$ is ($\alpha$) in Fig.~\ref{fig:voc-failure}, but the algorithm gives ($\beta$) if the constraint \eqref{eq:vc-3} is removed. \section{Volume optimal cycle on $(n-1)$-th persistent homology}\label{sec:vochd} In this section, we consider a triangulation of a convex set in $\mathbb{R}^n$ and its $(n-1)$-th persistent homology. More precisely, we assume the following conditions. \begin{cond}\label{cond:rn} A simplicial complex $X$ in $\mathbb{R}^n$ satisfies the following conditions. \begin{itemize} \item Any $k$-simplex $(k<n)$ in $X$ is a face of an $n$-simplex \item $|X|$ is convex \end{itemize} \end{cond} For example, an alpha filtration satisfies the above conditions if the point cloud has more than $n$ points and satisfies the general position condition. In addition, we assume Condition~\ref{cond:ph} to simplify the statements of the results and algorithms. The thesis \cite{voc} pointed out that, under these assumptions, the $(n-1)$-th persistent homology is isomorphic, by Alexander duality, to the 0th persistent cohomology of the dual filtration. Using this fact, the thesis defined volume optimal cycles under a formalization different from ours.
The thesis defined a volume optimal cycle as an output of Algorithm~\ref{alg:volopt-hd-compute}. In fact, the two definitions of volume optimal cycles are equivalent on $(n-1)$-th persistent homology. The 0th persistent cohomology is deeply related to the connected components, and we can compute the volume optimal cycle at linear computation cost. The thesis also pointed out that $(n-1)$-th persistent homology has a tree structure, called persistence trees (or PH trees). In this section, we always use $\mathbb{Z}_2$ as the coefficient field of homology since using $\mathbb{Z}_2$ makes the problem easier. The following theorems hold. \begin{theorem}\label{thm:vochd-unique} The optimal volume for $(b_i, d_i) \in D_{n-1}(\mathbb{X})$ is uniquely determined. \end{theorem} \begin{theorem}\label{thm:vochd-tree} If $z_i$ and $z_j$ are the optimal volumes for two different birth-death pairs $(b_i, d_i)$ and $(b_j, d_j)$ in $D_{n-1}(\mathbb{X})$, one of the following holds: \begin{itemize} \item $z_i \cap z_j = \emptyset$, \item $z_i \subset z_j$, \item $z_i \supset z_j$. \end{itemize} Note that we can naturally regard any $z = \sum_{\sigma: n\text{-simplex}} k_{\sigma} \sigma \in C_{n}(X)$ as a subset of the $n$-simplices of $X$, namely $\{\sigma : n\text{-simplex} \mid k_{\sigma} \not = 0\}$, since we use $\mathbb{Z}_2$ as the homology coefficient. \end{theorem} From Theorem~\ref{thm:vochd-tree}, we know that $D_{n-1}(\mathbb{X})$ can be regarded as a forest (i.e.\ a set of trees) by the inclusion relation. The trees are called \textit{persistence trees}. We can compute all optimal volumes and persistence trees on $D_{n-1}(\mathbb{X})$ by the merge-tree algorithm (Algorithm~\ref{alg:volopt-hd-compute}). This algorithm is a modified version of the algorithm in \cite{voc}. To describe the algorithm, we prepare a directed graph $(V, E)$, where $V$ is a set of nodes and $E$ is a set of edges.
In the algorithm, an element of $V$ is an $n$-cell in $X \cup \{\sigma_{\infty}\}$ and an element of $E$ is a directed edge between two $n$-cells, where $\sigma_\infty = \mathbb{R}^n \backslash X$ is the $n$-cell in the one-point compactification space $\mathbb{R}^n \cup \{\infty\} \simeq S^n$. An edge has extra data in $\mathbb{Z}$, and we write the edge from $\sigma$ to $\tau$ with extra data $k$ as $(\sigma \xrightarrow{k} \tau)$. Since the graph is always a forest throughout the whole algorithm, we can always find the root of the tree which contains an $n$-cell $\sigma$ in the graph $(V, E)$ by recursively following edges from $\sigma$. We call this procedure \textproc{Root}($\sigma, V, E$). \begin{algorithm} \caption{Computing persistence trees by the merge-tree algorithm}\label{alg:volopt-hd-compute} \begin{algorithmic} \Procedure{Compute-Tree}{$\mathbb{X}$} \State initialize $V = \{\sigma_\infty\}$ and $E = \emptyset$ \For{$k=K,\ldots,1$} \If{$\sigma_k$ is an $n$-simplex} \State add $\sigma_k$ to $V$ \ElsIf{$\sigma_k$ is an $(n-1)$-simplex} \State let $\sigma_s$ and $\sigma_t$ be the two $n$-cells whose common face is $\sigma_k$ \State $\sigma_{s'} \gets \textproc{Root}(\sigma_s, V, E)$ \State $\sigma_{t'} \gets \textproc{Root}(\sigma_t, V, E)$ \If{$s'=t'$} \State \textbf{continue} \ElsIf{$s'> t'$} \State Add $(\sigma_{t'} \xrightarrow{k} \sigma_{s'})$ to $E$ \Else \State Add $(\sigma_{s'} \xrightarrow{k} \sigma_{t'})$ to $E$ \EndIf \EndIf \EndFor \Return $(V, E)$ \EndProcedure \end{algorithmic} \end{algorithm} The following theorem gives the interpretation of the result of the algorithm in terms of the persistence information. \begin{theorem}\label{thm:vochd-alg} Let $(V, E)$ be a result of Algorithm~\ref{alg:volopt-hd-compute}. Then the following hold.
\begin{enumerate}[(i)] \item $D_{n-1}(\mathbb{X}) = \{(b, d) \mid (\sigma_d \xrightarrow{b} \sigma_s) \in E\}$ \item The optimal volume for $(b, d)$ consists of all descendant nodes of $\sigma_d$ in $(V, E)$ \item The persistence trees are computable from $(V, E)$. That is, $(b_i, d_i)$ is a child of $(b_j, d_j)$ if and only if there are edges $\sigma_{d_i} \xrightarrow{b_i} \sigma_{d_j} \xrightarrow{b_j} \sigma_{s}$. \end{enumerate} \end{theorem} The theorems in this section can be proven from the following facts: \begin{itemize} \item From Alexander duality, for a simplicial complex $X$ in $\mathbb{R}^n$, \begin{align*} H_q(X) \simeq H^{n-q-1}((\mathbb{R}^n\backslash X)\cup\{\infty\}) \end{align*} holds. \begin{itemize} \item $\infty$ is required for the one-point compactification of $\mathbb{R}^n$. \item More precisely, we use the dual decomposition of $X$. \end{itemize} \item By applying the above Alexander duality to a filtration, $(n-1)$-th persistent homology is isomorphic to the $0$-th persistent cohomology of the dual filtration. \item On a cell complex $\bar{X}$, a basis of the $0$-th cohomology vector space is given by \begin{align*} \{ \sum_{\sigma \in C} \sigma^* &\mid C \in \textrm{cc}(\bar{X})\}, \end{align*} where $\textrm{cc}(\bar{X})$ is the connected component decomposition of the 0-cells in $\bar{X}$. \item The merge-tree algorithm traces the change of connectivity in the filtration, and it gives the structure of the 0-th persistent cohomology. \end{itemize} We prove the theorems in Appendix~\ref{sec:pfvochd}. \subsection{Computation cost of the merge-tree algorithm} \label{sec:faster} In the algorithm, we need to find the root from one of its descendant nodes. The naive way to find the root is to follow the graph step by step toward the ancestors. In the worst case, the time complexity of the naive way is $O(N)$, where $N$ is the number of $n$-simplices, and the total time complexity of the algorithm becomes $O(N^2)$.
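The quadratic cost comes entirely from the repeated root lookups. A minimal Python sketch of the \textproc{Root} lookup with path compression illustrates the shortcut idea (the data layout is a hypothetical simplification: in the actual algorithm the original tree edges and their labels are kept, and only extra shortcut pointers are added, so the persistence-tree structure is preserved):

```python
# Hypothetical layout: `parent` maps each n-cell (here just an integer id)
# to the target of its outgoing edge; a root maps to itself.
def root(parent, s):
    # Phase 1: follow edges up to the root.
    r = s
    while parent[r] != r:
        r = parent[r]
    # Phase 2 (path compression): re-point every visited cell directly
    # at the root, so later lookups take amortized near-constant time.
    while parent[s] != r:
        parent[s], s = r, parent[s]
    return r
```

With this compression, $N$ lookups over $N$ cells cost nearly $O(N)$ in total instead of $O(N^2)$.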
The union-find algorithm~\cite{unionfind} is used for a similar data structure, and we can apply the idea of the union-find algorithm. By adding a shortcut path to the root in a similar way to the union-find algorithm, the amortized time complexity is improved to almost constant time\footnote{ More precisely, the amortized time complexity is bounded by the inverse Ackermann function, and it is less than 5 if the data size is less than $2^{2^{2^{2^{16}}}}$. Therefore we can regard the time complexity as constant. }. Using this technique, the total time complexity of Algorithm~\ref{alg:volopt-hd-compute} is $O(N)$. \section{Comparison between volume optimal cycles and optimal cycles}\label{sec:compare} In this section, we compare volume optimal cycles and optimal cycles. In fact, optimal cycles and volume optimal cycles are identical in many cases. However, since we can use optimal volumes in addition to volume optimal cycles, we have more information than optimal cycles provide. One of the most prominent advantages of volume optimal cycles is children birth-death pairs, explained below. \subsection{Children birth-death pairs} In the above section, we showed that there is a tree structure on an $(n-1)$-th persistence diagram computed from a triangulation of a convex set in $\mathbb{R}^n$. Unfortunately, such a tree structure does not exist in the general case. However, in the research of amorphous solids by persistent homology~\cite{Hiraoka28062016}, a hierarchical structure of rings in $\mathbb{R}^3$ is effectively used, and it would be helpful if we could find such a structure on a computer. In \cite{Hiraoka28062016}, the hierarchical structure was found by computing all optimal cycles and searching for multiple optimal cycles which have common vertices. However, computing all optimal cycles or all volume optimal cycles is often expensive, as shown in Section \ref{subsec:performance}, so we require a cheaper method. The optimal volume is available for that purpose.
When the optimal volume for a birth-death pair $(b_i, d_i)$ is $\hat{z} = \sigma_{d_i} + \sum_{\sigma_k \in \mathcal{F}_{q+1}} \hat{\alpha}_k \sigma_k$, the \textit{children birth-death pairs} of $(b_i, d_i)$ are defined as follows: \begin{align*} \{(b_j, d_j) \in D_q(\mathbb{X}) \mid \sigma_{d_j} \in \mathcal{F}_{q+1}, \hat{\alpha}_{d_j} \not = 0 \}. \end{align*} This is easily computable from an optimal volume at low computation cost. Now we remark that if we consider $(n-1)$-th persistent homology in $\mathbb{R}^n$, the children birth-death pairs of $(b_i, d_i) \in D_{n-1}(\mathbb{X})$ are identical to all descendants of $(b_i, d_i)$ in the tree structure. This fact follows from Theorem~\ref{thm:vochd-tree}, and it suggests that we can use children birth-death pairs as a good substitute for the tree structure appearing on $D_{n-1}(\mathbb{X})$ in $\mathbb{R}^n$. The usefulness of children birth-death pairs is demonstrated in Section \ref{sec:example-silica}, the example of amorphous silica. \subsection{Some examples in which volume optimal cycles and optimal cycles are different} We show some differences between optimal cycles and volume optimal cycles on a filtration. In Fig.~\ref{fig:oc-voc-diff-1}, the 1st PD of the filtration is $\{(2, 5), (3, 4)\}$. The optimal cycle of $(3, 4)$ is $z_1$ since $\|z_1\|_1 < \|z_2\|_1$, but the volume optimal cycle is $z_2$. In this example, $z_2$ is better than $z_1$ as a representative of the birth-death pair $(3, 4)$. The example is deeply related to Theorem~\ref{thm:good_voc}. Such a theorem does not hold for optimal cycles, which means that an optimal cycle may give misleading information about a birth-death pair. This is one advantage of volume optimal cycles compared to optimal cycles.
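Extracting the children birth-death pairs defined above amounts to filtering the diagram by the support of the optimal volume. A minimal Python sketch (the data layout — a coefficient map keyed by simplex index, and pairs identified by the index of their death simplex — is our own hypothetical simplification):

```python
# `alpha` maps a simplex index to its coefficient in the optimal volume
# \hat{z}; `diagram` lists birth-death pairs (b_j, d_j) of D_q, where d_j
# is the index of the death simplex of the pair.
def children_pairs(alpha, diagram):
    # keep the pairs whose death simplex appears in the optimal volume
    # with a nonzero coefficient
    return [(b, d) for (b, d) in diagram if alpha.get(d, 0) != 0]
```

This is a single pass over the diagram, which is why the children pairs are so much cheaper to obtain than computing all optimal cycles.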
\begin{figure}[htbp] \centering \includegraphics[width=0.9\hsize]{oc-voc-diff-1.pdf} \caption{A filtration whose optimal cycle and volume optimal cycle are different.} \label{fig:oc-voc-diff-1} \end{figure} In Fig.~\ref{fig:oc-voc-diff-2} and Fig.~\ref{fig:oc-voc-diff-3}, optimal cycles and volume optimal cycles are also different. In Fig.~\ref{fig:oc-voc-diff-2}, the optimal cycle is $z_1$ but the volume optimal cycle is $z_2$. In Fig.~\ref{fig:oc-voc-diff-3}, the optimal cycle for $(3, 4)$ is $z_1$ but the volume optimal cycle is $z_1 + z_2$. \begin{figure}[htbp] \centering \includegraphics[width=0.3\hsize]{oc-voc-diff-2.pdf} \caption{Another filtration whose optimal cycle and volume optimal cycle are different.} \label{fig:oc-voc-diff-2} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.7\hsize]{oc-voc-diff-3.pdf} \caption{Another filtration whose optimal cycle and volume optimal cycle are different.} \label{fig:oc-voc-diff-3} \end{figure} In Fig.~\ref{fig:no-voc}, the 1st PD is $\{(2, \infty)\}$ and we cannot define the volume optimal cycle but can define the optimal cycle. In general, we cannot define the volume optimal cycle for a birth-death pair with infinite death time. If we use an alpha filtration in $\mathbb{R}^n$, such a problem does not occur because a Delaunay triangulation is always acyclic. But if we use another type of filtration, we may not be able to use volume optimal cycles. That may be a disadvantage of volume optimal cycles if we use a filtration other than an alpha filtration, such as a Vietoris-Rips filtration. \begin{figure}[htbp] \centering \includegraphics[width=0.3\hsize]{no-voc.pdf} \caption{A filtration without a volume optimal cycle.} \label{fig:no-voc} \end{figure} One more advantage of volume optimal cycles is the simplicity of the computation algorithm. For the computation of optimal cycles we need to keep track of all persistence cycles, but for volume optimal cycles we need only the birth-death pairs.
Some efficient algorithms implemented in phat and dipha do not keep track of such data, hence we cannot use such software to compute optimal cycles without modification. By contrast, we can use such software for the computation of volume optimal cycles. \section{Examples}\label{sec:example} In this section, we show example results of our algorithm. In all of these examples, we use alpha or weighted alpha filtrations. For all of these examples, optimal volumes and volume optimal cycles are computed on a laptop PC with a 1.2 GHz Intel(R) Core(TM) M-5Y71 CPU and 8GB memory on Debian 9.1. Dipha~\cite{dipha} is used to compute PDs, CGAL\footnote{\url{http://www.cgal.org/}} is used to compute (weighted) alpha filtrations, and Clp~\cite{coin} is used to solve the linear programming problems. Python is used to write the program and pulp\footnote{\url{https://github.com/coin-or/pulp}} is used as the interface to Clp from python. Paraview\footnote{\url{https://www.paraview.org/}} is used to visualize volume optimal cycles. If you want to use the software, please contact us. Homcloud\footnote{\url{http://www.wpi-aimr.tohoku.ac.jp/hiraoka_labo/research-english.html}}, data analysis software with persistent homology developed by our laboratory, provides the algorithms shown in this paper. Homcloud provides easy access to volume optimal cycles. We can visualize the volume optimal cycle of a birth-death pair simply by clicking the pair in a PD on Homcloud's GUI. \subsection{2-dimensional Torus} The first example is a 2-dimensional torus in $\mathbb{R}^3$. 2400 points are randomly scattered on the torus and PDs are computed. Figure~\ref{fig:pd-torus} shows the 1st and 2nd PDs. The 1st PD has two birth-death pairs $(0.001, 0.072)$ and $(0.001, 0.453)$ and the 2nd PD has one birth-death pair $(0.008, 0.081)$ far from the diagonal. These birth-death pairs correspond to generators of $H_1(\mathbb{T}^2) \simeq \Bbbk^2$ and $H_2(\mathbb{T}^2) \simeq \Bbbk$.
\begin{figure}[tbp] \centering \includegraphics[width=0.4\hsize]{torus-pd1.png} \includegraphics[width=0.4\hsize]{torus-pd2.png} \caption{The 1st and 2nd PDs of the point cloud on a torus.} \label{fig:pd-torus} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=0.3\hsize]{torus-w-1.png} \includegraphics[width=0.3\hsize]{torus-w-2.png} \includegraphics[width=0.3\hsize]{torus-w-3.png} \caption{Volume optimal cycles for $(0.001, 0.072)$ and $(0.001, 0.453)$ in $D_1$ and $(0.008, 0.081)$ in $D_2$ on the torus point cloud.} \label{fig:torus-voc} \end{figure} Figure~\ref{fig:torus-voc} shows the volume optimal cycles of these three birth-death pairs computed using Algorithm~\ref{alg:volopt}. Blue lines show volume optimal cycles, red lines show optimal volumes, and black lines show $\sigma_d$ for each birth-death pair $(b, d)$ (we call this simplex the \textit{death simplex}). Black dots show the point cloud. From the figure, we can understand how homology generators appear and disappear in the filtration of the torus point cloud. The computation times are 25 sec, 33 sec, and 7 sec on our laptop PC. By using Algorithm~\ref{alg:volopt-hd-compute}, we can also compute volume optimal cycles in $D_2$. In this example, the computation time of Algorithm~\ref{alg:volopt-hd-compute} is about 2 sec. This is much faster than Algorithm~\ref{alg:volopt}, even though Algorithm~\ref{alg:volopt-hd-compute} computes \emph{all} volume optimal cycles. \subsection{Amorphous silica} \label{sec:example-silica} In this example, we use the atomic configuration of amorphous silica computed by a molecular dynamics simulation as a point cloud and we try to reproduce the result in \cite{Hiraoka28062016}. In this example, we use a weighted alpha filtration whose weights are the radii of the atoms. The number of atoms is 8100: 2700 silicon atoms and 5400 oxygen atoms. Figure~\ref{fig:amorphous-silica} shows the 1st PD. This diagram has four characteristic areas $C_P$, $C_T$, $C_O$, and $B_O$.
These areas correspond to the typical ring structures in the amorphous silica as follows. Amorphous silica consists of silicon atoms and oxygen atoms, and the network structure is built by covalent bonds between silicons and oxygens. $C_P$ has rings whose atoms are \ce{$\cdots$ -Si-O-Si-O- $\cdots$ }, where \ce{-} is a covalent bond between a silicon atom and an oxygen atom. $C_T$ has triangles consisting of \ce{O-Si-O}. $C_O$ has triangles consisting of three oxygen atoms appearing alternately in \ce{$\cdots$-O-Si-O-Si-O-$\cdots$}. $B_O$ has many types of ring structures, but one typical ring is a quadrangle consisting of four oxygen atoms appearing alternately in \ce{$\cdots$-O-Si-O-Si-O-Si-O-$\cdots$}. Figure~\ref{fig:voc-silica} shows the volume optimal cycles for birth-death pairs in $C_P, C_T, C_O$, and $B_O$. In this figure, oxygen (red) and silicon (blue) atoms are also shown in addition to volume optimal cycles, optimal volumes, and death simplices. We can reproduce the result of \cite{Hiraoka28062016} about ring reconstruction. \begin{figure}[tbp] \centering \includegraphics[width=0.24\hsize]{c_p.png} \includegraphics[width=0.24\hsize]{c_t.png} \includegraphics[width=0.24\hsize]{c_o.png} \includegraphics[width=0.24\hsize]{b_o.png} \caption{Volume optimal cycles in amorphous silica in $C_P, C_T, C_O$, and $B_O$ (from left to right).} \label{fig:voc-silica} \end{figure} We also learn that the oxygen atom surrounded by the green circle in this figure is important in determining the death time. The death time of this birth-death pair is determined by the radius of the circumcircle of the black triangle (the death simplex); hence, if the oxygen atom moves away, the death time becomes larger. The oxygen atom is contained in another \ce{$\cdots$ -Si-O-Si-O- $\cdots$} ring structure around the volume optimal cycle (the blue ring).
By the analysis of the optimal volume, we clarify that such an interaction of covalent bond rings determines the death times of birth-death pairs in $C_P$. This analysis is impossible with optimal cycles, and so volume optimal cycles enable us to analyze persistence diagrams more deeply. \begin{figure}[tbp] \centering \includegraphics[width=0.5\hsize]{children.pdf} \caption{Children birth-death pairs. Red circles are children birth-death pairs of the green birth-death pair.} \label{fig:children-bd-pairs} \end{figure} Figure~\ref{fig:children-bd-pairs} shows the children birth-death pairs of the green birth-death pair. The rings corresponding to these children birth-death pairs are subrings of the large ring corresponding to the green birth-death pair. This computation result shows that a ring in $C_P$ has subrings in $C_T$, $C_O$, and $B_O$. The hierarchical structure of these rings was shown in \cite{Hiraoka28062016}. We can easily find such a hierarchical structure by using our new algorithm. The computation time is 3 or 4 seconds for each volume optimal cycle on the laptop PC. The computation time for amorphous silica is much less than that for the 2-torus, even though the number of points in the amorphous silica example is larger than that in the 2-torus example. This is because the locality of volume optimal cycles works very well in the example of amorphous silica. \subsection{Face centered cubic lattice with defects} The last example uses a point cloud of a face centered cubic (FCC) lattice with defects. With this example, we show how to use the persistence trees computed by Algorithm~\ref{alg:volopt-hd-compute}. The point cloud is prepared by constructing a perfect FCC lattice, adding small Gaussian noise to each point, and randomly removing points from the point cloud. \begin{figure}[thbp] \centering \includegraphics[width=0.8\hsize]{fcc-pds.pdf} \caption{(a) The 2nd PD of the perfect FCC lattice with small Gaussian noise. (b) The 2nd PD of the lattice with defects.
}\label{fig:fcc-pd} \end{figure} Figure~\ref{fig:fcc-pd}(a) shows the 2nd PD of the FCC lattice with small Gaussian noise. (i) and (ii) in the figure correspond to the octahedron and tetrahedron cavities in the FCC lattice. In materials science, these cavities are well known as octahedral sites and tetrahedral sites. Figure~\ref{fig:fcc-pd}(b) shows the 2nd PD of the lattice with defects. In the PD, the birth-death pairs corresponding to octahedron and tetrahedron cavities remain ((i) and (ii) in Fig.~\ref{fig:fcc-pd}(b)), but other types of birth-death pairs appear in this PD. These pairs correspond to other types of cavities generated by removing points from the FCC lattice. Figure~\ref{fig:fcc-tree-1}(a) shows a tree computed by Algorithm~\ref{alg:volopt-hd-compute}. Red markers are nodes of the tree, and lines between two markers are edges of the tree, where upper-left nodes are ancestors and lower-right nodes are descendants. The tree means that the largest cavity, corresponding to the upper-left-most node, has subcavities corresponding to the descendant nodes. Figure~\ref{fig:fcc-tree-1}(b) shows the volume optimal cycle of the upper-left-most node, (c) shows the volume optimal cycles of pairs in (i), and (d) shows the volume optimal cycles of pairs in (ii). Using the algorithm, we can study the hierarchical structures of the 2nd PH. \begin{figure}[thbp] \centering \includegraphics[width=0.85\hsize]{fcc-tree-1.pdf} \caption{A persistence tree and related volume optimal cycles. (a) The persistence tree whose root is $(0.68, 1.98)$. (b) The volume optimal cycle of the root pair. (c) The volume optimal cycles of birth-death pairs in (i) which are descendants of the root pair. (d) The volume optimal cycles of birth-death pairs in (ii) which are descendants of the root pair. } \label{fig:fcc-tree-1} \end{figure} \subsection{Computation performance comparison with optimal cycles} \label{subsec:performance} We compare the computation performance between optimal cycles and volume optimal cycles.
We use OptiPers for the computation of optimal cycles for persistent homology, which was provided by Dr. Escolar, one of the authors of \cite{Escolar2016}. OptiPers is written in C++ while our software is mainly written in python, and python is much slower than C++, so the comparison is not entirely fair, but it is still suggestive for the readers. We use two test datasets. One is the atomic configuration of amorphous silica used in the above example; the number of points is 8100. The other is a partial point cloud of the amorphous silica; the number of points is 881. We call these the large data and the small data, respectively. Table~\ref{tab:performance} shows the computation time of optimal cycles/volume optimal cycles for all birth-death pairs in the 1st PD by OptiPers/Homcloud. \begin{table}[thbp] \centering \begin{tabular}{c|cc} & optimal cycles (OptiPers) & volume optimal cycles (Homcloud) \\ \hline the small data & 1 min 17 sec & 3 min 9 sec \\ the large data & 5 hours 46 min & 4 hours 13 min\\ \end{tabular} \caption{Computation time of optimal cycles and volume optimal cycles on the large/small data.} \label{tab:performance} \end{table} For the small data, OptiPers is faster than Homcloud, but, on the contrary, for the large data, Homcloud is faster than OptiPers. This is because the performance improvement technique using the locality of the optimal volume works well for the large data, while for the small data the technique is not as effective and the overhead cost of using python is dominant for Homcloud. This benchmark shows that volume optimal cycles have an advantage in computation time when the input point cloud is large. \section{Conclusion}\label{sec:conclusion} In this paper, we propose the idea of volume optimal cycles to identify good geometric realizations of homology generators appearing in persistent homology. Optimal cycles were proposed for that purpose in \cite{Escolar2016}, but our method is faster for large data and gives better information.
In particular, we can cheaply compute children birth-death pairs from a volume optimal cycle alone. Volume optimal cycles were already proposed under a limitation on the dimension in \cite{voc}, and this paper generalizes the idea. Our idea and algorithm are widely applicable to, and useful for, the analysis of point clouds in $\mathbb{R}^n$ by using (weighted) alpha filtrations. Our method gives us an intuitive understanding of PDs. In~\cite{PDML}, such inverse analysis from a PD to its original data is effectively used to study many kinds of geometric data with machine learning on PDs, and our method is useful for the combination of persistent homology and machine learning. In this paper, we only treat simplicial complexes, but our method is also applicable to cell filtrations and cubical filtrations. Our algorithms will be useful for studying sublevel or superlevel filtrations given by 2D/3D digital images.
\section{Introduction} \label{sec:intro} The \emph{worst-case execution time (WCET)} problem~\cite{WilhelmEtAl:2008,PuschnerBurns:2000} is an important research area within the context of real-time systems. There exist many tools and techniques for static WCET analysis~\cite{GustafssonEtAl:2006,ColinPuaut:2000,FerdinandWilhelm:1999,LiLiangMitra:2007,LiMalik:1997,Ballabriga:2010}, for measurement-based and probabilistic approaches~\cite{BernatEtAl:2003,DavidPuaut:2004}, and alternative approaches that are based on simplified hardware~\cite{PuautPais:2007,KimEtAl:2014,ZimmerEtAl:2014,RochangeEtAl:2014,AxerEtAl:2014,SchoeberlEtAl:2010,KimEtAl:2017}. Recent work also targets the challenging problem of multicore WCET analysis~\cite{ChattopadhyayEtAl:2014,LiMitraEtAl:2009,TanMooney:2007,KastnerEtAl:2012,PellizzoniEtAl:2010,MancusoEtAl:2015}. Although several of the state-of-the-art tools can estimate a safe WCET bound at the function level, there is currently no existing tool that can provide guaranteed optimal WCET values between specific program points. The aim of the KTA tool is to provide such optimal and guaranteed fine-grained analysis. The toolchain is available as open source\footnote{\url{https://github.com/timed-c/kta}}. This short paper gives a brief overview of the key ideas and history behind the KTA tool. Section~\ref{sec:overview} gives an overview of the main use cases and objectives. Section~\ref{sec:architecture} describes the main architectural components, and Section~\ref{sec:futurework} discusses some future research directions. \section{Background and Objectives} \label{sec:overview} The early work on the toolchain started in 2013 during the time when the author of this paper worked at UC Berkeley within the PRET project~\cite{EdwardsLee:2007,LiuEtAl:2012,ZimmerEtAl:2014}.
As part of the vision of a toolchain~\cite{BromanEtAlPretInf:2013}, the objective was to support WCET analysis for the RISC-V instruction set architecture (ISA) and to perform parts of the analysis within the LLVM~\cite{LattnerAdve:2004} toolchain. However, when the author moved to KTH, the focus shifted to low-level analysis at the machine code level. For this purpose, the MIPS ISA was used instead, partially because of the need to support the analysis of off-the-shelf hardware. The MIPS architecture was also chosen because of its rather simple structure and its common use in education. Today, there are two separate timing analysis methods, with the following separate objectives. \begin{enumerate} \item \textbf{Exhaustive fine-grained timing analysis.} The objective of this fine-grain\-ed analysis is to enable both WCET analysis and best-case execution time (BCET) analysis between arbitrary program points within a function. The current version of the work is primarily used in the context of interactive timing analysis~\cite{FuhrmannEtAl:2016}, where a graphical modeling tool can be used to identify hotspots of the model that contribute significantly to the WCET path. The current version of the fine-grained analysis is based on exhaustively searching all paths between program points. As a consequence, this approach is not scalable, but it has been very useful for developing the fine-grained analysis methodology. We see it as future work to combine this fine-grained timing-point methodology with the next approach, which is based on abstract interpretation~\cite{CousotCousot:1977,Cousot:2001}. \item \textbf{Abstract search-based timing analysis.} The objective of the abstract search-based WCET analysis is to perform highly scalable WCET analysis that returns optimal WCET values. By optimal we mean WCET estimates that are sound and equal to the actual WCET.
Note that the estimated WCET value is only optimal with respect to the \emph{model} of the hardware platform, and not necessarily with respect to the hardware itself. That is, we must assume that the model of the hardware is sound, but this is typically hard to actually prove in practice. The key aspects of this analysis are i) the analysis is performed using a technique based on abstract interpretation, ii) it performs a combined-phase analysis, where program-flow analysis, microarchitecture analysis, and global-bound analysis are combined into one global phase, and iii) the optimal WCET value is computed using an abstract search-based method, which is based on a divide-and-conquer approach. \end{enumerate} \noindent Between the years 2013 and 2016, the work on the KTA tool was performed by David Broman. In the year 2017, the Master's student Rodothea-Myrsini Tsoupidi started to extend the KTA tool with support for cache analysis and pipeline analysis. The design and implementation described in this paper only reflect the work done by David prior to and during 2017. The microarchitecture extensions developed in Tsoupidi's Master's thesis are briefly mentioned as future work in Section~\ref{sec:futurework}. \begin{figure*}[!b] \center \includegraphics[width=1.0\textwidth]{architecture.pdf} \caption{An architectural overview of the KTA tool.} \label{fig:arch} \end{figure*} \section{Architecture Overview} \label{sec:architecture} Fig.~\ref{fig:arch} depicts the main components and flow of information within the KTA tool. The picture shows the two main analysis flows: i) \emph{exhaustive fine-grained timing analysis} (top part of the figure), and ii) \emph{abstract search-based timing analysis} (bottom part). The boxes represent processing components, and the rounded boxes represent \emph{data}. The main input to the tool is a C program (left part of the figure), together with a set of parameters (not shown in the figure).
The C code is first compiled using a standard \emph{cross compiler} for the C programming language. In our case, we used a \verb|gcc| variant that targets the MIPS instruction set architecture. The cross compiler generates an ELF (executable and linkable format) file. The different sections (\verb|.text|, \verb|.data|, etc.) of the ELF file are decoded. In particular, the MIPS machine code is decoded into an internal format. All code is written in OCaml and compiled using the OCaml compiler version 4.05.0. The rest of this section describes the main ideas of the two different analyses. \subsection{Exhaustive Fine-Grained Timing Analysis} The exhaustive fine-grained timing analysis (top part of Fig.~\ref{fig:arch}) takes as input the decoded machine code and returns a sound and optimal WCET result, if the search terminates within a specific time limit. The exhaustive search conceptually consists of a loop where a \emph{cycle-accurate simulation} is first performed with some selected \emph{concrete input}. The output from the simulation is a \emph{concrete execution time} value, which is then used by the \emph{exhaustive search} component to select the next concrete input. This procedure continues until all program inputs have been explored. The procedure stores timing information at predefined timing points, which is then later used for computing the WCET and BCET values between timing points. Please see the paper by Fuhrmann \emph{et al.}~\cite{FuhrmannEtAl:2016} for more information. \subsection{Abstract Search-Based Timing Analysis} The main flow of the abstract search-based timing analysis is shown in the bottom part of Fig.~\ref{fig:arch}. In the first step, the control flow graph (CFG) of the machine code is reconstructed. The CFG is then the input to a \emph{CPS OCaml code generator} that outputs OCaml code in continuation-passing style (CPS).
This staged CPS machine code is then compiled (using an \emph{OCaml compiler}) into a \emph{staged abstract interpreter}. This is one of the key ideas of KTA: the machine code of the program that is going to be analyzed is in fact translated into a program that performs abstract interpretation by executing the machine code abstractly. Note that the data item \emph{staged abstract interpreter} is depicted both as a normal box (a process component) and a rounded box (a data item). This means that the staged interpreter is compiled into a binary artifact that is then executed in the circular control-flow graph (shown at the bottom of the figure). The abstract-search phase is performed as follows. First, the component called \emph{abstract search} selects some abstract input. By abstract input we mean a set of values (typically an interval) that represents a subset of the input space. The \emph{staged abstract interpreter} performs an analysis phase based on this input, and generates a sound (but not necessarily optimal) WCET result\footnote{Note that the tool can potentially generate BCET values as well, but this is not completely implemented in the current version.}. The WCET result is then used as input again to the abstract search component, which selects another relevant \emph{abstract input}. The abstract search algorithm performs a divide-and-conquer analysis to enable a faster search for the optimal WCET value. Note that the abstract search is \emph{bounded}, which means that the abstract interpretation terminates if a simulated max-time value is reached. This is actually natural in a real-time scheduling setting because we can often assume that the maximal reasonable WCET value is the period of a task. Hence, we get a termination definition that is not directly dependent on the analysis time. \section{Future Research} \label{sec:futurework} As stated before, the KTA tool can so far be seen as work in progress.
However, we are currently extending the tool in a number of directions. More specifically, the following can be seen as prioritized ongoing and future work: \begin{itemize} \item The tool is currently being extended to include more complicated microarchitectures. In particular, Tsoupidi's ongoing Master's thesis focuses on extending the KTA tool with sound cache analysis and sound pipeline analysis. \item We would like to investigate whether the tool can be extended with the relational Polyhedra domain~\cite{SinghEtAl:2017}. \item We will investigate how the tool can be extended to also support multicore analysis. \item An interesting problem would be to combine the fine-grained timing analysis with the abstract search-based method presented above. \end{itemize} \section{Conclusions} \label{sec:conclusion} In this paper, we give a brief overview of the KTA tool. In particular, the paper describes two main approaches to timing analysis that are available in KTA: i) exhaustive fine-grained timing analysis, and ii) abstract search-based timing analysis. We contend that the latter approach---where all phases in traditional WCET analysis are combined into one pass---can be a serious alternative to traditional WCET analysis. \section*{Acknowledgments} This project is financially supported by the Swedish Foundation for Strategic Research (FFL15-0032). The research work has previously been funded by the Swedish Research Council (\#623-2011-955 and \#623-2013-8591). I would like to thank Rodothea-Myrsini Tsoupidi and Saranya Natarajan for comments on the final version of this paper. \newpage \bibliographystyle{plain}
\section{Introduction}\label{sec:1} A standard option (also called plain vanilla) is a financial contract which gives the owner of the contract the right, but not the obligation, to buy or sell a specified asset at a prespecified price (strike price) at a prespecified time (maturity). The specified asset (underlying asset) can be, for example, stocks, indexes, currencies, bonds or commodities. The option can be either a call option, which gives the owner the right to buy the underlying asset, or a put option, which gives the owner the right to sell the underlying asset. Moreover, the option can either be exercised only at maturity (a European option) or at any time before maturity (an American option). Path-dependent options are options whose payoffs depend on the price path of the underlying asset, that is, on how the price of the underlying asset at maturity was reached. One particular path-dependent option, called the Asian option, is the main focus of this research. The average price of the underlying asset can determine either the underlying settlement price (average price Asian options) or the option strike price (average strike Asian options). Furthermore, the average prices can be calculated using either the arithmetic mean or the geometric mean. The type of Asian option examined throughout this research is the geometric Asian option. Over the past three decades, academic researchers and market practitioners have developed and adopted different models and techniques for option valuation. The path-breaking work on option pricing was undertaken by Black and Scholes $(BS)$ \cite{black1973pricing} in 1973. In the $BS$ model it is assumed that the asset price dynamics are governed by a geometric Brownian motion.
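To fix ideas, the $BS$ benchmark price that the later models generalize can be sketched in a few lines. This is our own illustrative code (function and variable names are ours, not from any cited work):

```python
# Illustrative sketch: the classical Black-Scholes (1973) European call price,
# the baseline model generalized by the time-changed models discussed below.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S0, K, r, sigma, T):
    """European call price under geometric Brownian motion."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

print(round(bs_call(100.0, 100.0, 0.05, 0.2, 1.0), 2))  # 10.45
```

For instance, an at-the-money one-year call with $S_0=K=100$, $r=0.05$ and $\sigma=0.2$ is worth about $10.45$.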
However, in the last few years, some empirical studies have shown that the geometric Brownian motion model cannot capture many of the characteristic features of prices, such as heavy tails, long-range correlations, lack of scale invariance, and periods of constant values. Fractional Brownian motion has been suggested to display the long-range dependence and fluctuation observed in the empirical data \cite{wang2012pricing,zhang2009equity,necula2002option}. Since fractional Brownian motion is neither a Markov process nor a semi-martingale, we cannot use the usual stochastic calculus to analyze it. Further, fractional Brownian motion admits arbitrage in a complete and frictionless market. To get around this problem and to take into account the long memory property, it has been proposed that it is reasonable to use the mixed fractional Brownian motion $(mfBm)$ to capture the fluctuations of financial assets \cite{el2003fractional,mishura2008stochastic,cheridito2001mixed}. The $mfBm$ is a linear combination of Brownian motion and fractional Brownian motion with Hurst index $H\in(\frac{1}{2}, 1)$, defined on the filtered probability space $(\Omega, \F, \P)$ for any $t\in \R^+$ by: \begin{eqnarray} M_t^H(a, b)=aB(t)+bB^H(t), \label{eq:1} \end{eqnarray} where $B(t)$ is a Brownian motion and $B^H(t)$ is an independent fractional Brownian motion with Hurst index $H$. Cheridito \cite{cheridito2001mixed} proved that, for $H\in(\frac{3}{4}, 1)$, the mixed model is equivalent to a Brownian motion and hence is arbitrage free. For $H\in(\frac{1}{2}, 1)$, Mishura and Valkeila \cite{mishura2002absence} demonstrated that the mixed model is arbitrage free. Rao \cite{rao2016pricing} discussed geometric Asian power options under $mfBm$. For more about the mixed model, one can refer to Refs.~\cite{mishura2008stochastic,cheridito2001mixed,shokrollahi2016pricing}.
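For illustration, the $mfBm$ in Eq. (\ref{eq:1}) can be simulated exactly on a finite grid from its covariance function, $\mathrm{Cov}(M_t, M_s)=a^2\min(t,s)+\frac{b^2}{2}\left(t^{2H}+s^{2H}-|t-s|^{2H}\right)$, via a Cholesky factorization. The sketch below is ours (names and grid choices are assumptions), not code from the cited works:

```python
# Illustrative sketch (our own): exact simulation of the mixed fractional
# Brownian motion M_t^H(a,b) = a B(t) + b B^H(t) on a grid, via the Cholesky
# factor of its covariance matrix. Since B and B^H are independent, the
# covariances of the two parts simply add.
import numpy as np

def mfbm_cov(times, H, a, b):
    """Cov(M_t, M_s) = a^2 min(t,s) + (b^2/2)(t^2H + s^2H - |t-s|^2H)."""
    t = np.asarray(times, dtype=float)
    ts, ss = np.meshgrid(t, t, indexing="ij")
    return (a**2 * np.minimum(ts, ss)
            + 0.5 * b**2 * (ts**(2*H) + ss**(2*H) - np.abs(ts - ss)**(2*H)))

def mfbm_paths(times, H, a=1.0, b=1.0, n_paths=1000, seed=0):
    """Each row of the result is one path of M^H(a,b) sampled at `times`."""
    rng = np.random.default_rng(seed)
    cov = mfbm_cov(times, H, a, b)
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(len(cov)))  # tiny jitter
    return rng.standard_normal((n_paths, len(cov))) @ L.T

paths = mfbm_paths(np.linspace(0.01, 1.0, 50), H=0.8)
```

The Cholesky approach is exact but $O(N^3)$ in the number of grid points; for long grids, circulant-embedding methods are the usual faster alternative.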
In order to properly describe financial data exhibiting periods of constant values, Magdziarz \cite{magdziarz2012anomalous} introduced the subdiffusive geometric Brownian motion \begin{eqnarray} X_\alpha(t)=X(T_\alpha(t)), \label{eq:2} \end{eqnarray} where $X(t)$ is a geometric Brownian motion and $T_\alpha(t)$ is the inverse $\alpha$-stable subordinator with parameter $\alpha\in (0, 1)$. Magdziarz pointed out that this model is arbitrage free but incomplete and, based on the subdiffusive geometric Brownian motion, obtained the corresponding subdiffusive $BS$ formula for the fair price of European options. Within the framework of subdiffusive theory, numerous scholars continue to investigate financial problems first considered in Magdziarz's pioneering work on subdiffusion finance in 2009. These include pricing formulas for European options and European currency options under subdiffusive fractional $BS$ and subdiffusive mixed fractional $BS$ models \cite{guo2014pricing,shokrollahi2016pricing,gu2012time}. In this research, inspired by the works \cite{guo2014pricing} and \cite{shokrollahi2016pricing}, we introduce a pricing formula for geometric Asian options under the time-changed mixed fractional $BS$ model. We then apply the result to price geometric Asian power options that pay constant dividends when the payoff is a power function. We also provide some special cases and a lower bound for the Asian option price. The rest of the paper is organized as follows. In Section \ref{sec:2}, some useful concepts and theorems of time-changed mixed fractional processes are introduced. In Section \ref{sec:3}, a brief introduction to Asian options is given. An analytical valuation formula for geometric Asian options is derived in Section \ref{sec:4} and then applied to geometric Asian power options in Section \ref{sec:5}. The lower bound on the price of the Asian option is proposed in Section \ref{sec:6}.
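The subdiffusive mechanism can be illustrated numerically: sample the strictly increasing $\alpha$-stable subordinator $U_\alpha$ with the standard Chambers--Mallows--Stuck formula and record its first-passage times to obtain $T_\alpha(t)$. The sketch below is our own illustration (names and step sizes are assumptions), not code from the cited works:

```python
# Illustrative sketch (our own): the inverse alpha-stable subordinator
# T_alpha(t) = inf{tau > 0 : U_alpha(tau) > t}, obtained by simulating the
# increasing alpha-stable Levy process U_alpha (Chambers-Mallows-Stuck
# formula) on an operational-time grid and recording first-passage times.
import numpy as np

def stable_increment(alpha, dtau, rng):
    """One increment of U_alpha over a step dtau (Laplace tr. exp(-dtau u^alpha))."""
    V = rng.uniform(-np.pi / 2, np.pi / 2)
    W = rng.exponential(1.0)
    S = (np.sin(alpha * (V + np.pi / 2)) / np.cos(V) ** (1.0 / alpha)
         * (np.cos(V - alpha * (V + np.pi / 2)) / W) ** ((1.0 - alpha) / alpha))
    return dtau ** (1.0 / alpha) * S  # 1/alpha self-similarity of U_alpha

def inverse_subordinator_path(alpha, t_grid, dtau=1e-3, seed=0):
    """Approximate T_alpha at each level in t_grid."""
    rng = np.random.default_rng(seed)
    U, tau, path = 0.0, 0.0, []
    for level in t_grid:
        while U <= level:                  # advance U_alpha past the level t
            U += stable_increment(alpha, dtau, rng)
            tau += dtau
        path.append(tau)                   # first-passage time ~ T_alpha(level)
    return np.array(path)

T = inverse_subordinator_path(0.9, np.linspace(0.0, 1.0, 11))
```

The flat stretches of $T_\alpha$ (intervals where the sampled path is constant) are precisely what produce the periods of constant prices in $X_\alpha(t)=X(T_\alpha(t))$.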
\section{Auxiliary facts}\label{sec:2} In this section, we recall some definitions and results about the mixed fractional time-changed process. More information about mixed fractional processes can be found in \cite{guo2014pricing,shokrollahi2016pricing}. The time-changed process $T_\alpha(t)$ is the inverse $\alpha$-stable subordinator defined by \begin{eqnarray*} T_\alpha(t)=\inf\{\tau>0:\ U_\alpha(\tau)\geq t\}, \label{eq:5} \end{eqnarray*} where $(U_\alpha(\tau))_{\tau\geq 0}$ is a strictly increasing $\alpha$-stable Lévy process \cite{sato1999levy} with Laplace transform $\E(e^{-uU_\alpha(\tau)})=e^{-\tau u^\alpha}$, $\alpha\in(0, 1)$. $U_\alpha(t)$ is $\frac{1}{\alpha}$ self-similar and $T_\alpha(t)$ is $\alpha$ self-similar, that is, for every $h>0$, $U_\alpha(ht)\triangleq h^{\frac{1}{\alpha}}U_\alpha(t)$ and $T_\alpha(ht)\triangleq h^{\alpha}T_\alpha(t)$, where $\triangleq$ indicates that the random variables on both sides have the same distribution. In particular, when $\alpha\uparrow 1$, $T_\alpha(t)$ reduces to the physical time $t$. More details about subordinators and their inverse processes can be found in \cite{janicki1993simulation,piryatinska2005models}. Consider the subdiffusion process \begin{eqnarray*} M_{\alpha}^H(t)(a,b)=aW_{\alpha}(t)+bW_{\alpha}^H(t)=aB(T_\alpha(t))+bB^H(T_\alpha(t)), \label{eq:6} \end{eqnarray*} where $B(\tau)$ is a Brownian motion, $B^H(\tau)$ is a fractional Brownian motion with Hurst index $H$, and $T_\alpha(t)$ is the inverse $\alpha$-stable subordinator, all of which are assumed to be independent. When $a=0, b=1$, this is the model studied in \cite{gu2012time}, and if $b=0, a=1$, it is the process considered in \cite{magdziarz2009black}. In this research, we assume that $H\in(\frac{3}{4}, 1)$ and $(a, b)=(1, 1)$. \begin{rem} When $\alpha\uparrow 1$, the processes $W_{\alpha}(t)$ and $W_{\alpha}^H(t)$ degenerate to $B(t)$ and $B^H(t)$, respectively. Then, $M_{\alpha}^H(t)(a,b)$ reduces to the $mfBm$ in Eq. (\ref{eq:1}).
\end{rem} \begin{rem} From \cite{gu2012time,magdziarz2009black}, we know that $\E(T_\alpha(t))=\frac{t^\alpha}{\Gamma(\alpha+1)}$. Then, by the $\alpha$-self-similarity and the non-decreasing sample paths of $T_\alpha(t)$, we have \begin{eqnarray} \E[(B(T_\alpha(t)))^2]&=&\frac{t^\alpha}{\Gamma(\alpha+1)}\\ \E[(B^H(T_\alpha(t)))^2]&=&\left(\frac{t^\alpha}{\Gamma(\alpha+1)}\right)^{2H}. \label{eq:7} \end{eqnarray} \end{rem} \section{Asian options}\label{sec:3} The payoff of an Asian option is based on the difference between an asset's average price over a given time period and a fixed price called the strike price. Asian options are popular because they tend to have lower volatility than options whose payoffs are based purely on a single price point. It is also harder for big traders to manipulate an average price over an extended period than a single price, so Asian options offer further protection against risk. The Asian call and put options have a payoff that is calculated from an average value of the underlying asset over a specific period. The payoffs of an Asian call and put option with strike price $K$ and expiration time $T$ are $(\bar{S}(T)-K)_+$ and $(K-\bar{S}(T))_+$ respectively, where $\bar{S}(T)$ is the average price of the underlying asset over the prespecified interval. Since Asian options are less expensive than their European counterparts, they are attractive to many different investors. Apart from the regular Asian option there also exists the Asian strike option. An Asian strike call option guarantees the holder that the average price paid for the underlying asset is not higher than the final price. The option will not be exercised if the average price of the underlying asset is greater than the final price. The holder of an Asian strike put option makes sure that the average price received for the underlying asset is not less than what the final price would provide.
The payoffs of an Asian strike call and put option are $(S(T)-\bar{S}(T))_+$ and $(\bar{S}(T)-S(T))_+$ respectively, where $S(T)$ is the value of the underlying stock at the maturity date $T$. Depending on how the average is calculated, Asian options are divided into two types: the geometric Asian option, \begin{eqnarray*} G(T)=\exp\left\{\frac{1}{T}\int_0^T\ln S(t)dt\right\}, \label{eq:3} \end{eqnarray*} and the arithmetic Asian option, \begin{eqnarray*} A(T)=\frac{1}{T}\int_0^T S(t)dt. \label{eq:4} \end{eqnarray*} We assume that the prespecified interval $[0, T]$ is fixed, and we will price the geometric Asian option in the continuous-average case in the time-changed mixed fractional Brownian motion environment. \section{Pricing model of geometric Asian option}\label{sec:4} In order to derive an Asian option pricing formula in a time-changed mixed fractional market, we make the following assumptions: \begin{enumerate} \item[(i)] the price of the underlying stock at time $t$ is given by \begin{eqnarray} S_t&&=S_0\exp\Big\{(r-q)T_\alpha(t)+\sigma W_{\alpha}(t)+\sigma W_{\alpha}^H(t)\nonumber\\ &&-\frac{1}{2}\sigma^2\frac{t^\alpha}{\Gamma(\alpha+1)}-\frac{1}{2}\sigma^2\left(\frac{t^\alpha}{\Gamma(\alpha+1)}\right)^{2H}\Big\},\quad 0<t<T, \label{eq:8} \end{eqnarray} where $H\in(\frac{3}{4}, 1)$, $\alpha\in (\frac{1}{2}, 1)$ and $2\alpha H>1$; \item[(ii)] there are no transaction costs in buying or selling the stocks or the option; \item[(iii)] the risk-free interest rate $r$ and the dividend rate $q$ are known and constant through time; \item[(iv)] the option can be exercised only at the maturity time. \end{enumerate} From Eq. (\ref{eq:8}), we know that $\ln S_t\simeq N(u, v)$, where \begin{eqnarray} u&=&\ln S(0)+(r-q)T_\alpha(t)-\frac{1}{2}\sigma^2\frac{t^\alpha}{\Gamma(\alpha+1)}-\frac{1}{2}\sigma^2\left(\frac{t^\alpha}{\Gamma(\alpha+1)}\right)^{2H}\\ v&=&\sigma^2\frac{t^\alpha}{\Gamma(\alpha+1)}+\sigma^2\left(\frac{t^\alpha}{\Gamma(\alpha+1)}\right)^{2H}.
\label{eq:9} \end{eqnarray} Let $C(S(0), T)$ be the price of a European call option at time $0$ with strike price $K$ that matures at time $T$. Then, from \cite{guo2014pricing}, we have \begin{eqnarray*} C(S(0), T)=S(0)e^{-qT}\phi(d_1)-Ke^{-rT}\phi(d_2), \label{eq:10} \end{eqnarray*} where \begin{eqnarray*} d_1&=&\frac{\ln\frac{S_0}{K}+(r-q+\frac{\hat{\sigma}^2}{2})T}{\hat{\sigma}\sqrt{T}},\quad d_2=d_1-\hat{\sigma}\sqrt{T},\\ \hat{\sigma}^2&=&\sigma^2\frac{T^{\alpha-1}}{\Gamma(\alpha)}+\sigma^2\left(\frac{T^{\alpha-1}}{\Gamma(\alpha)}\right)^{2H}, \label{eq:11} \end{eqnarray*} and $\phi(\cdot)$ denotes the standard normal cumulative distribution function. Under the above assumptions (i)-(iv), we obtain the value of the geometric Asian call option by the following theorem. \begin{thm} Suppose the stock price $S_t$ satisfies Eq. (\ref{eq:8}). Then, under the risk-neutral probability measure, the value of the geometric Asian call option $C(S(0), T)$ with strike price $K$ and maturity time $T$ is given by \begin{eqnarray} C(S(0), T)&=&S(0)\exp\Bigg\{-rT+(r-q)\frac{T^{\alpha}}{\Gamma(\alpha+2)}-\frac{\sigma^2T^{\alpha}}{2\Gamma(\alpha+3)}\nonumber\\ &&-\frac{\sigma^2T^{2\alpha H}}{4(2\alpha H+1)(\alpha H+1)(\Gamma(\alpha+1))^{2H}}\Bigg\}\phi(d_1)-Ke^{-rT}\phi(d_2), \label{eq:12} \end{eqnarray} where \begin{eqnarray*} d_2&=&\frac{\mu_G-\ln K}{\sigma_G},\quad d_1=d_2+\sigma_G,\\ \mu_G&=&\ln S(0)+(r-q-\frac{\sigma^2}{2})\frac{T^{\alpha}}{\Gamma(\alpha+2)}-\frac{\sigma^2T^{2\alpha H}}{2(2\alpha H+1)(\Gamma(\alpha+1))^{2H}},\\ \sigma_G^2&=&\frac{\sigma^2T^{\alpha}}{\Gamma(\alpha+2)}-\frac{\sigma^2T^{\alpha}}{\Gamma(\alpha+3)}+\frac{\sigma^2T^{2\alpha H}}{(2\alpha H+2)(\Gamma(\alpha+1))^{2H}}, \label{eq:13} \end{eqnarray*} the interest rate $r$ and the dividend rate $q$ are constant over time, and $\phi(\cdot)$ denotes the standard normal cumulative distribution function. \label{thm:1} \end{thm} \begin{proof} Suppose \begin{eqnarray*} L(T)=\frac{1}{T}\int_0^T\ln S(t)dt.
\label{eq:14} \end{eqnarray*} Then \begin{eqnarray} G(T)=e^{L(T)}. \label{eq:15} \end{eqnarray} We know that $\ln S_t\simeq N(u, v)$, so it is clear that the random variable $L(T)$ has a Gaussian distribution under the risk-neutral probability measure. We will now compute its mean and variance under the risk-neutral probability measure. Let $\E$ denote the expectation, and let $\mu_G$ and $\sigma_G^2$ denote the mean and the variance of the random variable $L(T)$ under the risk-neutral probability measure. Note that \begin{eqnarray*} \mu_G&=&\E[L(T)]=\frac{1}{T}\int_0^T\E[\ln S(t)]dt\nonumber\\ &=&\ln S(0)+\frac{1}{T}\int_0^T(r-q)\frac{t^{\alpha}}{\Gamma(\alpha+1)}dt-\frac{\sigma^2}{2T}\int_0^T\left[\frac{t^{\alpha}}{\Gamma(\alpha+1)}+\frac{t^{2\alpha H}}{(\Gamma(\alpha+1))^{2H}}\right]dt\nonumber\\ &=&\ln S(0)+(r-q)\frac{T^{\alpha}}{\Gamma(\alpha+2)}-\frac{\sigma^2T^\alpha}{2\Gamma(\alpha+2)}-\frac{\sigma^2T^{2\alpha H}}{(4\alpha H+2)(\Gamma(\alpha+1))^{2H}}, \label{eq:16} \end{eqnarray*} and \begin{eqnarray*} \sigma_G^2&=&Var[L(T)]=\E[(L(T)-\mu_G)^2]\nonumber\\ &=&\frac{\sigma^2}{T^2}\int_0^T\int_0^T\left(\E[W_\alpha(t)W_\alpha(\tau)]+\E[W_\alpha^H(t)W_\alpha^H(\tau)]\right)dtd\tau. \label{eq:17} \end{eqnarray*} By the independence of the processes $B(t)$, $B^H(t)$ and $T_\alpha(t)$, we obtain \begin{eqnarray*} \sigma_G^2&=&\frac{\sigma^2}{2T^2}\int_0^T\int_0^T\left(\frac{t^{\alpha}}{\Gamma(\alpha+1)}+\frac{\tau^{\alpha}}{\Gamma(\alpha+1)}-\frac{|t-\tau|^{\alpha}}{\Gamma(\alpha+1)}\right)dtd\tau\\ &+&\frac{\sigma^2}{2T^2}\int_0^T\int_0^T\left(\left(\frac{t^{\alpha}}{\Gamma(\alpha+1)}\right)^{2H}+\left(\frac{\tau^{\alpha}}{\Gamma(\alpha+1)}\right)^{2H}-\left(\frac{|t-\tau|^{\alpha}}{\Gamma(\alpha+1)}\right)^{2H}\right)dtd\tau\\ &=&\frac{\sigma^2T^{\alpha}}{\Gamma(\alpha+2)}-\frac{\sigma^2T^{\alpha}}{\Gamma(\alpha+3)}+\frac{\sigma^2T^{2\alpha H}}{(2\alpha H+2)(\Gamma(\alpha+1))^{2H}}.
\label{eq:18} \end{eqnarray*} From (\ref{eq:15}), we know that the random variable $G(T)$ is log-normally distributed, so $\ln G(T)\simeq N(\mu_G, \sigma_G^2)$. Let $I=\{x:e^x>K\}$ and let $\varphi(\cdot)$ be the probability density function of a standard normal distribution; then the price of the geometric Asian call option is given by the following computations \begin{eqnarray*} C(S(0), T)&=& e^{-rT}\E[(G(T)-K)^+]\\ &=&e^{-rT}\int_I(e^x-K)\frac{1}{\sqrt{2\pi}\sigma_G}\exp\left\{-\frac{(x-\mu_G)^2}{2\sigma_G^2}\right\}dx\\ &=&e^{-rT}\int_I(e^{\mu_G+z\sigma_G}-K)\varphi(z)dz\\ &=&e^{-rT+\mu_G+\frac{1}{2}\sigma_G^2}\int_{-d_2}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}(z-\sigma_G)^2}dz-Ke^{-rT}\int_{-d_2}^{\infty}\varphi(z)dz\\ &=&e^{-rT+\mu_G+\frac{1}{2}\sigma_G^2}\int_{-d_2-\sigma_G}^{\infty}\varphi(z)dz-Ke^{-rT}\int_{-\infty}^{d_2}\varphi(z)dz\\ &=&e^{-rT+\mu_G+\frac{1}{2}\sigma_G^2}\int_{-\infty}^{d_2+\sigma_G}\varphi(z)dz-Ke^{-rT}\int_{-\infty}^{d_2}\varphi(z)dz\\ &=&e^{-rT+\mu_G+\frac{1}{2}\sigma_G^2}\phi(d_1)-Ke^{-rT}\phi(d_2)\\ &=&S(0)\exp\Bigg\{-rT+(r-q)\frac{T^{\alpha}}{\Gamma(\alpha+2)}-\sigma^2\frac{T^{\alpha}}{2\Gamma(\alpha+3)}\nonumber\\ &&-\sigma^2\frac{T^{2\alpha H}}{4(2\alpha H+1)(\alpha H+1)(\Gamma(\alpha+1))^{2H}}\Bigg\}\phi(d_1)-Ke^{-rT}\phi(d_2), \label{eq:19} \end{eqnarray*} where we have used the substitution $x=\mu_G+z\sigma_G$ and \begin{eqnarray*} I&=&\{x:e^x>K\}=\{z:e^{\mu_G+z\sigma_G}>K\}\\ &=&\{z:\mu_G+z\sigma_G>\ln K\}=\{z:z>-d_2\}, \label{eq:20} \end{eqnarray*} thus we obtain the pricing formula.
\end{proof} Moreover, using the put--call parity, the valuation model for a geometric Asian put option under the time-changed mixed fractional $BS$ model can be written as \begin{eqnarray} P(S(0), T)&=&Ke^{-rT}\phi(-d_2)-S(0)\exp\Bigg\{-rT+(r-q)\frac{T^{\alpha}}{\Gamma(\alpha+2)}-\frac{\sigma^2T^{\alpha}}{2\Gamma(\alpha+3)}\nonumber\\ &&-\frac{\sigma^2T^{2\alpha H}}{4(2\alpha H+1)(\alpha H+1)(\Gamma(\alpha+1))^{2H}}\Bigg\}\phi(-d_1), \label{eq:21} \end{eqnarray} where $d_1$ and $d_2$ are defined previously. Letting $\alpha\uparrow 1$, the stock price follows the $mfBm$ model \begin{eqnarray} S_t&&=S_0\exp\Big\{(r-q)t+\sigma B(t)+\sigma B^H(t)\nonumber\\ &&-\frac{1}{2}\sigma^2t-\frac{1}{2}\sigma^2t^{2H}\Big\},\quad 0<t<T, \label{eq:22} \end{eqnarray} and the result is presented below. \begin{cor} The value of the geometric Asian call option with maturity $T$ and strike $K$, whose stock price follows Eq. (\ref{eq:22}), is given by \begin{eqnarray} &&C(S(0), T)=\nonumber\\ &&S(0)\exp\Bigg\{-\frac{1}{2}(r+q)T-\frac{\sigma^2T}{12}-\frac{\sigma^2T^{2H}}{4(2H+1)(H+1)}\Bigg\}\phi(d_1)-Ke^{-rT}\phi(d_2), \label{eq:23} \end{eqnarray} where \begin{eqnarray*} d_2&=&\frac{\mu_G-\ln K}{\sigma_G},\quad d_1=d_2+\sigma_G,\\ \mu_G&=&\ln S(0)+\frac{1}{2}(r-q-\frac{\sigma^2}{2})T-\frac{\sigma^2T^{2H}}{2(2 H+1)},\\ \sigma_G^2&=&\frac{\sigma^2T}{3}+\frac{\sigma^2T^{2H}}{(2H+2)}, \label{eq:24} \end{eqnarray*} which is consistent with the result in \cite{rao2016pricing}. \end{cor} \section{Pricing model of Asian power option}\label{sec:5} In this section, we consider the pricing model of the Asian power call option with strike price $K$ and maturity time $T$ under the time-changed mixed fractional $BS$ model, where the payoff function is $(G^n(T)-K)^+$ for some constant integer $n\geq 1$. \begin{thm} Suppose the stock price $S_t$ satisfies Eq. (\ref{eq:8}).
Then, under the risk-neutral probability measure, the value of the geometric Asian power call option $C(S(0), T)$ with strike price $K$, maturity time $T$ and payoff function $(G^n(T)-K)^+$ is given by \begin{eqnarray} C(S(0), T)&=&S(0)^n\exp\Bigg\{-rT+(r-q)\frac{nT^{\alpha}}{\Gamma(\alpha+2)}-\frac{(n-n^2)\sigma^2T^\alpha}{2\Gamma(\alpha+2)}-\frac{n^2\sigma^2T^{\alpha}}{2\Gamma(\alpha+3)}\nonumber\\ &&-\frac{n\sigma^2T^{2\alpha H}}{(4\alpha H+2)(\Gamma(\alpha+1))^{2H}}+\frac{n^2\sigma^2T^{2\alpha H}}{(4\alpha H+4)(\Gamma(\alpha+1))^{2H}}\Bigg\}\phi(f_1)\nonumber\\ &-&Ke^{-rT}\phi(f_2), \label{eq:25} \end{eqnarray} where \begin{eqnarray*} f_2&=&\frac{\mu_G-\frac{1}{n}\ln K}{\sigma_G},\quad f_1=f_2+n\sigma_G,\\ \mu_G&=&\ln S(0)+(r-q-\frac{\sigma^2}{2})\frac{T^{\alpha}}{\Gamma(\alpha+2)}-\frac{\sigma^2T^{2\alpha H}}{2(2\alpha H+1)(\Gamma(\alpha+1))^{2H}},\\ \sigma_G^2&=&\frac{\sigma^2T^{\alpha}}{\Gamma(\alpha+2)}-\frac{\sigma^2T^{\alpha}}{\Gamma(\alpha+3)}+\frac{\sigma^2T^{2\alpha H}}{(2\alpha H+2)(\Gamma(\alpha+1))^{2H}}, \label{eq:26} \end{eqnarray*} the interest rate $r$ and the dividend rate $q$ are constant over time, and $\phi(\cdot)$ denotes the standard normal cumulative distribution function.
\label{thm:2} \end{thm} \begin{proof} The payoff function for the Asian power option is $(G^n(T)-K)^+=(e^{nL(T)}-K)^+$; applying computations similar to those in Theorem \ref{thm:1}, we obtain \begin{eqnarray*} C(S(0), T)&=& e^{-rT}\E[(G^n(T)-K)^+]\\ &=&e^{-rT}\int_I(e^{nx}-K)\frac{1}{\sqrt{2\pi}\sigma_G}\exp\left\{-\frac{(x-\mu_G)^2}{2\sigma_G^2}\right\}dx\\ &=&e^{-rT}\int_I(e^{n(\mu_G+z\sigma_G)}-K)\varphi(z)dz\\ &=&e^{-rT+n\mu_G+\frac{1}{2}n^2\sigma_G^2}\int_{-f_2}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}(z-n\sigma_G)^2}dz-Ke^{-rT}\int_{-f_2}^{\infty}\varphi(z)dz\\ &=&e^{-rT+n\mu_G+\frac{1}{2}n^2\sigma_G^2}\int_{-f_2-n\sigma_G}^{\infty}\varphi(z)dz-Ke^{-rT}\int_{-\infty}^{f_2}\varphi(z)dz\\ &=&e^{-rT+n\mu_G+\frac{1}{2}n^2\sigma_G^2}\int_{-\infty}^{f_2+n\sigma_G}\varphi(z)dz-Ke^{-rT}\int_{-\infty}^{f_2}\varphi(z)dz\\ &=&e^{-rT+n\mu_G+\frac{1}{2}n^2\sigma_G^2}\phi(f_1)-Ke^{-rT}\phi(f_2)\\ &=&S(0)^n\exp\Bigg\{-rT+(r-q)\frac{nT^{\alpha}}{\Gamma(\alpha+2)}-\frac{(n-n^2)\sigma^2T^\alpha}{2\Gamma(\alpha+2)}-\frac{n^2\sigma^2T^{\alpha}}{2\Gamma(\alpha+3)}\nonumber\\ &&-\frac{n\sigma^2T^{2\alpha H}}{(4\alpha H+2)(\Gamma(\alpha+1))^{2H}}+\frac{n^2\sigma^2T^{2\alpha H}}{(4\alpha H+4)(\Gamma(\alpha+1))^{2H}}\Bigg\}\phi(f_1)\nonumber\\ &-&Ke^{-rT}\phi(f_2), \label{eq:27} \end{eqnarray*} where \begin{eqnarray*} I&=&\{x:e^{nx}>K\}=\{z:e^{n(\mu_G+z\sigma_G)}>K\}\\ &=&\{z:\mu_G+z\sigma_G>\frac{1}{n}\ln K\}=\{z:z>-f_2\}, \label{eq:28} \end{eqnarray*} thus the proof is completed. \end{proof} \section{Lower bound of the Asian option price}\label{sec:6} The aim of this section is to obtain a lower bound on the price of the Asian option. The next theorem shows that the conditional distribution of one jointly normal random variable given another is again normal.
\begin{thm} (\cite{hoffman1994probability}) The conditional distribution of $\ln S_{t_i}$ given $\ln G(T)$ is a normal distribution \begin{eqnarray*} (\ln S_{t_i}|\ln G(T)=z)\simeq N(\mu_i+(z-\mu_G)\frac{\lambda_i}{\sigma_G^2}, \sigma_i^2-\frac{\lambda_i^2}{\sigma_G^2}),\quad i=1,...,n, \label{eq:29} \end{eqnarray*} where \begin{eqnarray*} \mu_i&=&\ln S(0)+(r-q)T_\alpha(t_i)-\frac{1}{2}\sigma^2\frac{t_i^\alpha}{\Gamma(\alpha+1)}-\frac{1}{2}\sigma^2\left(\frac{t_i^\alpha}{\Gamma(\alpha+1)}\right)^{2H}\\ \sigma_i^2&=&\sigma^2\frac{t_i^\alpha}{\Gamma(\alpha+1)}+\sigma^2\left(\frac{t_i^\alpha}{\Gamma(\alpha+1)}\right)^{2H}, \label{eq:30} \end{eqnarray*} $\lambda_i=Cov(\ln S_{t_i}, \ln G(T))$, $0\leq t_1<t_2<...<t_n\leq T$, $T_\alpha(t)$ is the inverse $\alpha$-stable subordinator, and $\mu_G$ and $\sigma_G^2$ are defined in Theorem \ref{thm:1}. Moreover, $(S_{t_i}|\ln G(T))$ has a lognormal distribution and \begin{eqnarray} &&\E\left[S_{t_i}|\ln G(T)=z\right]\nonumber\\ &&=\exp\left\{\mu_i+(z-\mu_G)\frac{\lambda_i}{\sigma_G^2}+\frac{1}{2} (\sigma_i^2-\frac{\lambda_i^2}{\sigma_G^2})\right\}\quad i=1,...,n. \label{eq:31} \end{eqnarray} \label{thm:3} \end{thm} Now, we condition on the geometric average $G(T)$ in the pricing expression of the Asian option \begin{eqnarray*} C(S(0), T)&=&e^{-rT}\E[(A(T)-K)^+]=e^{-rT}\E\left[\E[(A(T)-K)^+|G(T)]\right]\\ &=&e^{-rT}\int_0^\infty\E[(A(T)-K)^+|G(T)=z]g(z)dz, \label{eq:32} \end{eqnarray*} where $g$ is the lognormal density function of $G$. Let \begin{eqnarray*} C_1&=&\int_0^K\E[(A(T)-K)^+|G(T)=z]g(z)dz,\\ C_2&=&\int_K^\infty\E[(A(T)-K)^+|G(T)=z]g(z)dz, \label{eq:33} \end{eqnarray*} then $C(S(0), T)=e^{-rT}(C_1+C_2)$. Since the geometric average is less than the arithmetic average, $A(T)\geq G(T)$, we have \begin{eqnarray} C_2&=&\int_K^\infty\E[A(T)-K|G(T)=z]g(z)dz, \label{eq:34} \end{eqnarray} and from Theorem \ref{thm:3}, we can calculate $C_2$.
Applying Jensen's inequality, we obtain a lower bound on $C_1$: \begin{eqnarray} C_1&=&\int_0^K\E[(A(T)-K)^+|G(T)=z]g(z)dz\nonumber\\ &\geq &\int_0^K\left(\E[A(T)-K|G(T)=z]\right)^+g(z)dz\nonumber\\ &=&\int_{\tilde{K}}^K\E[A(T)-K|G(T)=z]g(z)dz=\tilde{C}_1, \label{eq:35} \end{eqnarray} where $\tilde{K}$ is the solution of $\E[A(T)|G(T)=\tilde{K}]=K$. Eq. (\ref{eq:31}) enables us to obtain $\tilde{K}$; then we calculate the following expectation \begin{eqnarray*} \E[A(T)|G(T)=z]&=&\E\left[\frac{1}{n}\sum_{i=1}^nS_{t_i}|G(T)=z\right]=\frac{1}{n}\sum_{i=1}^n\E\left[S_{t_i}|G(T)=z\right]\\ &&=\frac{1}{n}\sum_{i=1}^n\exp\left(\mu_i+(\ln z-\mu_G)\frac{\lambda_i}{\sigma_G^2}+\frac{1}{2} (\sigma_i^2-\frac{\lambda_i^2}{\sigma_G^2})\right). \label{eq:36} \end{eqnarray*} \begin{thm} A lower bound on the price of the Asian option with strike price $K$ and maturity time $T$ is given by \begin{eqnarray*} \tilde{C}(S(0), T)&=&e^{-rT}(\tilde{C}_1+C_2)\\ &=&e^{-rT}\Big\{\frac{1}{n}\sum_{i=1}^n\exp(\mu_i+\frac{1}{2}\sigma_i^2)\phi\left(\frac{\mu_G-\ln \tilde{K}+\lambda_i}{\sigma_G}\right)\\ &&-K\phi\left(\frac{\mu_G-\ln \tilde{K}}{\sigma_G}\right)\Big\}, \label{eq:37} \end{eqnarray*} where all parameters are defined previously. \label{thm:4} \end{thm} \begin{proof} Collecting Eqs. (\ref{eq:34}) and (\ref{eq:35}) gives \begin{eqnarray*} \tilde{C}_1+C_2&=&\int_{\tilde{K}}^\infty\E[A(T)-K|G(T)=z]g(z)dz\\ &=&\int_{\tilde{K}}^\infty\E[A(T)|G(T)=z]g(z)dz-K\int_{\tilde{K}}^\infty g(z)dz\\ &=&\int_{\tilde{K}}^\infty\E\left[\frac{1}{n}\sum_{i=1}^nS_{t_i}|G(T)=z\right]g(z)dz-K\int_{\tilde{K}}^\infty g(z)dz\\ &=&\int_{\tilde{K}}^\infty\frac{1}{n}\sum_{i=1}^n\E\left[S_{t_i}|G(T)=z\right]g(z)dz-K\int_{\tilde{K}}^\infty g(z)dz\\ &=&\frac{1}{n}\sum_{i=1}^n\int_{\tilde{K}}^\infty\E\left[S_{t_i}|\ln G(T)=\ln z\right]g(z)dz-K\int_{\tilde{K}}^\infty g(z)dz.
\label{eq:38} \end{eqnarray*} From the proof of Theorem \ref{thm:1}, we obtain \begin{eqnarray*} K\int_{\tilde{K}}^\infty g(z)dz=K\phi\left(\frac{\mu_G-\ln \tilde{K}}{\sigma_G}\right), \label{eq:39} \end{eqnarray*} and from Eq. (\ref{eq:31}) \begin{eqnarray*} &&\int_{\tilde{K}}^\infty\E\left[S_{t_i}|\ln G(T)=\ln z\right]g(z)dz\\ &&=\int_{\tilde{K}}^\infty\exp\left(\mu_i+(\ln z-\mu_G)\frac{\lambda_i}{\sigma_G^2}+\frac{1}{2} (\sigma_i^2-\frac{\lambda_i^2}{\sigma_G^2})\right)g(z)dz\\ &&=\exp\left(\mu_i+\frac{1}{2} \sigma_i^2\right)\int_{\tilde{K}}^\infty\exp\left((\ln z-\mu_G)\frac{\lambda_i}{\sigma_G^2}-\frac{1}{2}\frac{\lambda_i^2}{\sigma_G^2}\right)g(z)dz. \label{eq:40} \end{eqnarray*} Using the density of the lognormal distribution, the last integral equals \begin{eqnarray*} \int_{\tilde{K}}^\infty\frac{1}{\sqrt{2\pi}\sigma_G z}\exp\left((\ln z-\mu_G)\frac{\lambda_i}{\sigma_G^2}-\frac{1}{2}\frac{\lambda_i^2}{\sigma_G^2}-\frac{1}{2}\left(\frac{\mu_G-\ln z}{\sigma_G}\right)^2\right)dz. \label{eq:41} \end{eqnarray*} Making the change of variables $y=\frac{\mu_G-\ln z +\lambda_i}{\sigma_G}$, so that $\frac{dy}{dz}=-\frac{1}{\sigma_G z}$, we have \begin{eqnarray*} &&\int_{\frac{\mu_G-\ln \tilde{K} +\lambda_i}{\sigma_G}}^{-\infty}-\frac{1}{\sqrt{2\pi}}\exp\left(\left(\frac{\lambda_i}{\sigma_G}-y\right)\frac{\lambda_i}{\sigma_G}-\frac{1}{2}\frac{\lambda_i^2}{\sigma_G^2}-\frac{1}{2}\left(y-\frac{\lambda_i}{\sigma_G}\right)^2 \right)dy\\ &&=\int_{-\infty}^{\frac{\mu_G-\ln \tilde{K} +\lambda_i}{\sigma_G}}\frac{1}{\sqrt{2\pi}}\exp\left(-y\frac{\lambda_i}{\sigma_G} +\frac{1}{2}\frac{\lambda_i^2}{\sigma_G^2}-\frac{1}{2}y^2-\frac{1}{2}\frac{\lambda_i^2}{\sigma_G^2}+y\frac{\lambda_i}{\sigma_G} \right)dy\\ &&=\int_{-\infty}^{\frac{\mu_G-\ln \tilde{K} +\lambda_i}{\sigma_G}}\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{1}{2}y^2\right)dy=\phi\left(\frac{\mu_G-\ln \tilde{K}+\lambda_i}{\sigma_G}\right). \label{eq:42} \end{eqnarray*} By collecting $\tilde{C}_1$ and $C_2$, the proof is completed. \end{proof}
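The closed-form prices derived above lend themselves to a direct numerical sketch. The code below is our own illustration (function names and parameter values are assumptions, and we read the $(-T)^{\alpha}$ terms appearing in some displays as $-T^{\alpha}$); it covers the geometric Asian call of Theorem \ref{thm:1} (take $n=1$), the power payoff of Theorem \ref{thm:2}, and the put of Eq. (\ref{eq:21}):

```python
# Illustrative sketch (our own; the (-T)^alpha terms are read as -T^alpha):
# closed-form geometric Asian prices under the time-changed mixed fractional
# BS model. Phi is the standard normal CDF.
from math import log, sqrt, exp, erf, gamma

def Phi(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def lnG_moments(S0, r, q, sigma, T, alpha, H):
    """Mean and variance of ln G(T), G(T) = exp((1/T) int_0^T ln S(t) dt)."""
    g = gamma(alpha + 1.0)
    mu = (log(S0) + (r - q - 0.5 * sigma**2) * T**alpha / gamma(alpha + 2.0)
          - sigma**2 * T**(2*alpha*H) / (2.0 * (2*alpha*H + 1.0) * g**(2*H)))
    var = (sigma**2 * T**alpha / gamma(alpha + 2.0)
           - sigma**2 * T**alpha / gamma(alpha + 3.0)
           + sigma**2 * T**(2*alpha*H) / ((2*alpha*H + 2.0) * g**(2*H)))
    return mu, var

def geo_asian_power_call(S0, K, r, q, sigma, T, alpha, H, n=1):
    """Price of the payoff (G^n(T) - K)^+; n = 1 is the plain geometric call."""
    mu, var = lnG_moments(S0, r, q, sigma, T, alpha, H)
    sg = sqrt(var)
    f2 = (mu - log(K) / n) / sg
    f1 = f2 + n * sg
    return exp(-r*T) * (exp(n*mu + 0.5 * n**2 * var) * Phi(f1) - K * Phi(f2))

def geo_asian_put(S0, K, r, q, sigma, T, alpha, H):
    """Geometric Asian put from the same lognormal moments of G(T)."""
    mu, var = lnG_moments(S0, r, q, sigma, T, alpha, H)
    sg = sqrt(var)
    d2 = (mu - log(K)) / sg
    return exp(-r*T) * (K * Phi(-d2) - exp(mu + 0.5 * var) * Phi(-(d2 + sg)))
```

As a sanity check, the call and put satisfy the parity $C-P=e^{-rT}(\E[G(T)]-K)$ with $\E[G(T)]=e^{\mu_G+\sigma_G^2/2}$, which holds exactly for these functions.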
\section{Density-Functional Theory calculations} \label{dft} In order to compute binding energies of CO$_2$~gas molecules, we optimize mmen-M$_2$(dobpdc) MOFs without CO$_2$~molecules (E$_{\rm{mmen-MOF}}$), CO$_2$ in the gas phase (E$_{\rm{CO}_2}$) within a $15\AA \times 15 \AA \times 15 \AA$ cubic supercell, and mmen-M$_2$(dobpdc) MOFs with CO$_2$ molecules (E$_{\rm{CO_2-mmen-MOF}}$) using vdW-corrected DFT. The binding energies (E$_{\rm B}$) are obtained via the difference \begin{equation} \label{eq:1} E_{\rm B} = E_{\rm{CO_2-mmen-MOF}} - (E_{\rm{mmen-MOF}} + E_{\rm{CO_2}}). \end{equation} We also consider zero-point energy (ZPE) and thermal energy (TE) corrections to compare computed binding energies with experimentally determined CO$_2$ heats of adsorption, following a previous DFT study\c{kyuho2015}. We calculate vibrational frequencies of bound mmen and CO$_2$-mmen in the framework and of free mmen and CO$_2$-mmen molecules within a $15\AA \times 15 \AA \times 15 \AA$ cubic supercell. Here we assume that phonon mode changes of the framework are small relative to those of the molecular modes. All ZPE and TE corrections are obtained at 298 K. We have estimated the CO$_2$~binding energies for chains of different lengths: (i) two, (ii) three, and (iii) four in the $1\times 1\times 4$ supercell of mmen-$\mbox{Mg}_2$(dobpdc). In the case of length four, the chain is very close to a fully occupied (periodic) chain, since the calculations are performed in the $1\times 1\times 4$ supercell. In fact, the CO$_2$~binding energy ($-65$ kJ/mol) for a chain of length four is about $10$ kJ/mol smaller in magnitude than the unit-cell value ($-75$ kJ/mol; without ZPE and TE corrections). This is because we fully relaxed the volume of the unit-cell, while we did not relax the supercell when we computed the CO$_2$~binding energies within short chains. In addition, we consider only one channel of mmen ligands in the $1\times 1\times 4$ supercell of mmen-$\mbox{Mg}_2$(dobpdc).
If we consider other channels of mmen ligands and relax the volume, we get the same value as that of the unit-cell. As listed in Table~\ref{fig_table}, the average CO$_2$~binding affinity increases with the chain length. We also compute the binding energy of a single bound CO$_2$~within mmen-$\mbox{Mg}_2$(dobpdc), as shown in Table~\ref{fig_table}(a). If the CO$_2$~molecule is not directly bound to the metal, its binding energy should not depend on the metal type. To quantitatively understand the cooperative CO$_2$~capture mechanism, we perform ab initio density functional theory (DFT) calculations within the generalized-gradient approximation (GGA) of Perdew, Burke, and Ernzerhof (PBE)\c{perdew1996}. We use a plane-wave basis and projector augmented wave (PAW)\c{blochl1994,kresse1999} pseudopotentials with the Vienna ab initio Simulation Package (VASP)\c{kresse1993,kresse1996,kresse_1996,hafner1994}. To assess the effect of the van der Waals (vdW) interaction on binding energies, we perform structural relaxations with corrections (vdW-DF2) for the vdW dispersion interaction as implemented in VASP\c{lee2010}. For all unit-cell calculations, Brillouin zone integrations are approximated using the $\Gamma$-point only, and we truncate the plane-wave basis using a 600 eV kinetic-energy cutoff. We explicitly treat $2$ valence electrons for each Mg ($3s^2$), 7 for Mn ($3d^5 4s^2$), 8 for Fe ($3d^6 4s^2$), 9 for Co ($3d^7 4s^2$), 12 for Zn ($3d^{10} 4s^2$), $6$ for O ($2s^2 2p^4$), $5$ for N ($2s^2 2p^3$), $4$ for C ($2s^2 2p^2$), and $1$ for H ($1s^1$). To compute the CO$_2$~binding energies for chains of different lengths, we use a plane-wave cutoff of 500 eV and adopt a $1\times 1\times 4$ supercell of mmen-$\mbox{Mg}_2$(dobpdc). We consider only one channel of mmen ligands and relax the ions until the forces on them are less than $0.04~ \mbox{eV/\AA}$ while fixing the lattice parameters.
The lattice parameters of the supercell are obtained from the fully-relaxed mmen-$\mbox{Mg}_2$(dobpdc) unit-cell. \section{Binding energies and other model-parameterization data} \label{data} \begin{table}[h] \begin{minipage}[b]{\linewidth}\centering \begin{tabular}{c} (a) Single bound CO$_2$~(this work): $-22.6$ \end{tabular} \end{minipage} \\ \vspace{0.25 cm} (b) Pair\c{smit2015} \\ \vspace{0.10 cm} \begin{minipage}[b]{\linewidth}\centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline M& Mg & Mn & Fe & Co & Zn & Ni \\ \hline \hline DFT & $-45.8$ & $-42.5$ & $-43.1$ & $-46.5$ & $-42.6$ & $-47.2$ \\ \hline \end{tabular} \end{minipage} \\ \vspace{0.25 cm} (c) Interior of a chain \\ \vspace{0.10 cm} \begin{minipage}[b]{\linewidth}\centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline M& Mg & Mn & Fe & Co & Zn & Ni \\ \hline \hline Experiment\c{david2015}& $-71.0$ & $-67.0$ & $-58.0$ & $-52.0$ & $-57.0$ & $ - $ \\ \hline DFT\c{smit2015}& $-69.4$ & $-66.8$ & $-57.7$ & $-50.8$ & $-50.8$ & $-46.4$ \\ \hline DFT (this work)& $-73.2$ & $-67.5$ & $-55.5$ & $-52.4$ & $-60.1$ & $ - $ \\ \hline \end{tabular} \end{minipage} \\ \vspace{0.25 cm} (d) Chain with endpoints (this work) \\ \vspace{0.10 cm} \begin{minipage}[b]{\linewidth}\centering \begin{tabular}{|c|c|c|c|} \hline Chain length & $2$ & $3$ & $4$ \\ \hline \hline Binding energy& $-35.0$ & $-44.6$ & $-65.1$ \\ \hline \end{tabular} \end{minipage} \caption{Binding affinity (in kJ/mol) per CO$_2$~molecule (a) of a single molecule bound at the free end of an mmen ligand, (b) of a carbamic acid pair, (c) in the interior of a chain, and (d) for short chains in the case of mmen-Mg$_2$(dobpdc) (including chain end-points, e.g.\ a chain of length four has two end monomers and two internal monomers). By comparing (c) and (d), we set the binding enthalpy of CO$_2$~molecules at the end of a chain as $E_{\rm end} \approx 0.8 E_{\rm int}$.
The question of whether the chain conformation in mmen-Ni$_2$(dobpdc) is stable requires further investigation. Zero-point energy (ZPE) and thermal energy (TE) corrections of CO$_2$-mmen and mmen ligands are considered (for our estimates) in the case of chain-interior sites. All ZPE and TE values are obtained at 298 K.} \label{fig_table} \end{table} \begin{center} \begin{table}[h] \begin{tabular}{|c|c|c|c|c|c|c|} \hline M&Mg & Mn & Fe & Co & Zn & Ni\\ \hline \hline Best fit & 11.0& 5.1 & 6.8 &27.4 &9.8 & 11.0 \\ \hline \end{tabular} \caption{Best-fit value of $V_{\rm int}$ ($\AA^3$) for different metal types.} \label{table02} \end{table} \end{center} \begin{figure} \includegraphics[width=0.65\columnwidth]{fig_SI01} \caption{Occupancy of different species as a function of pressure for mmen-$\mbox{Co}_2$(dobpdc) at $313$ K derived from the 6-lane model. Here, we consider adsorption at secondary binding sites, and use the experimental binding enthalpy for the chain conformation along with the best-fit value of $V_{\rm int}$ (see Sec.~\ref{details}): $E_1=-40.5$ kJ/mol, $E_{\rm d}=-46.5$ kJ/mol, $E_{\rm int}=-52.0$ kJ/mol, and $V_{\rm int}=27.4~\AA^3$.} \label{comp} \end{figure} \begin{figure} \includegraphics[width=0.42\columnwidth]{fig_SI02} \caption{Part of the $6\times L$ lattice. The dashed lines cross the edges which define the states of the transfer matrix.} \label{6_L} \end{figure} \begin{figure} \includegraphics[width=0.65\columnwidth]{fig_SI03} \caption{Dependence of the step-position on $V_{\rm int}$: Isotherms obtained from the 1-lane model for mmen-$\mbox{Mn}_2$(dobpdc) at $313$ K for different values of $V_{\rm{int}}$.
Here, $E_1$ and $E_{\rm{int}}$ are fixed to our calculated values (Table~\ref{fig_table}).} \label{vint} \end{figure} \begin{figure} \includegraphics[width=0.65\columnwidth]{fig_SI04} \caption{The inflection point in the adsorption isotherm of mmen-Ni$_2$(dobpdc) disappears below a certain value of $E_1$: Isotherms for mmen-$\rm{Ni}_2$(dobpdc) when the binding energy of the single-bound CO$_2$~conformation is varied at $313$ K. There exists an inflection point in the isotherm when $|E_1| \lesssim 40.5$ kJ/mol. Here, $E_{\rm int}=-46.4$ kJ/mol ($V_{\rm int}=11 \AA^3$), $E_{\rm d}=-44.9$ kJ/mol, and $E_{\rm sec}=-42.0$ kJ/mol.} \label{inflection} \end{figure} \section{The 2-lane model and the 6-lane model} \label{2_6lane} We start with the construction of the transfer matrix for the 2-lane system. Denote the left and right lanes by $L$ and $R$. Define the restricted partition function $Z_N^{L,s_1;R,s_2}$ for two lanes of length $N$ with an external bond in each lane, specified to be $s_1,~s_2= u~\mbox{(unpolymerized) or}~p~\mbox{(polymerized)}$.
Then we can write \begin{equation} \label{t2} \left( \begin{array}{c} Z^{L,u;R,u}_{N+1} \\ \\ Z^{L,u;R,p}_{N+1} \\ \\ Z^{L,p;R,u}_{N+1} \\ \\ Z^{L,p;R,p}_{N+1}\end{array} \right) = \begin{pmatrix} 1+2 K_1+K_1^2+K_{\rm d}^2 && K_{\rm end}(1+K_1) && K_{\rm end}(1+K_1) && K_{\rm end}^2\\ \\ K_{\rm end}(1+K_1) && K_{\rm int}(1+K_1) && K_{\rm end}^2 && K_{\rm int} K_{\rm end} \\ \\ K_{\rm end}(1+ K_1) && K_{\rm end}^2 && K_{\rm int} (1+K_1) && K_{\rm int} K_{\rm end} \\ \\ K_{\rm end}^2 && K_{\rm int} K_{\rm end} &&K_{\rm int} K_{\rm end} && K_{\rm int}^2 \end{pmatrix} \left( \begin{array}{c} Z^{L,u;R,u}_{N} \\ \\ Z^{L,u;R,p}_{N} \\ \\ Z^{L,p;R,u}_{N} \\ \\ Z^{L,p;R,p}_{N} \end{array} \right), \end{equation} where the statistical weights of a CO$_2$~molecule in the single-bound conformation, in the pair (or dimer) conformation, as a part of the chain-interior, and at chain end-points are given by $K_1$, $K_{\rm d}$, $K_{\rm int}$, and $K_{\rm end}$, respectively. In the thermodynamic limit the free energy is given by the logarithm of the largest eigenvalue of the matrix shown in \eqq{t2}. In this case, we can solve for the largest eigenvalue, and hence the free energy, exactly. From the free energy, we calculate all the important thermodynamic quantities. For the 6-lane model with periodic boundary conditions along the transverse direction (which behaves in all important respects like the 2-lane model) the calculation proceeds in an analogous way, yielding a $2^6 \times 2^6$-dimensional transfer matrix. Here we explain the elements of the transfer matrix. Chains are placed on the edges along the horizontal direction, and pairs or dimers have transverse orientation, thus connecting two adjacent lanes (see \f{6_L}). The states which define the transfer matrix are given by the configurations of sets of horizontal edges, such as the ones crossed by the dashed lines in \f{6_L}, which may be empty or occupied by a bond of a chain.
In the figure, the state of the left set of horizontal edges crossed by the first dashed line is $\langle i|=(0,0,1,1,0,0)$, while the corresponding state of the next set of horizontal edges crossed by the second dashed line is $\langle i+1|=(0,1,0,1,0,0)$. Here, $0$ denotes an edge that is not part of a chain, and $1$ denotes an edge that is part of a chain. The contribution to the transfer matrix element ($T_{i,i+1}$) due to this particular pair or dimer configuration (one pair) is $(1 + K_1) K^2_{\rm d} K_{\rm int} K^2_{\rm end}$, since the remaining site may either be empty or occupied by a single CO$_2$. Thus, each element of the transfer matrix is a polynomial in the statistical weights. The particular element we are considering ($T_{i,i+1}$) has another contribution, with no dimer, which is $(1+K_1)^3 K_{\rm int} K_{\rm end}^2$. Thus, $\langle i|T|i+1\rangle=T_{i,i+1}=2(1 + K_1) K^2_{\rm d} K_{\rm int} K^2_{\rm end}+(1+K_1)^3 K_{\rm int} K_{\rm end}^2$ (there are six sites in total, three of which are occupied by chains; on the remaining three sites we can have either one pair together with one site that may hold a single CO$_2$, or no pair and three such sites; the factor of $2$ is the multiplicity of the configuration with a single pair). In compact form, the contribution from a configuration with $j$ pairs, $k$ chain-interior sites, and $\ell$ chain end-points is $m\,(1+K_1)^{6-2j-k-\ell} K_{\rm d}^{2j} K^k_{\rm int} K^{\ell}_{\rm end}$, where $m$ is the multiplicity and $(6-2j-k-\ell)$ is the number of remaining sites, each of which may either be empty or occupied by a single CO$_2$~molecule. Hence, there are in total $2^6$ possible states $(\eta_1,\eta_2,\dots,\eta_6)$ with $\eta_i=0~\rm{or}~1$, so that $\langle 1|=(0,0,0,0,0,0)$, $\langle 2|=(0,0,0,0,0,1)$, $\langle 3|=(0,0,0,0,1,0)$, \dots, $\langle 64|=(1,1,1,1,1,1)$. This leads to a transfer matrix of size $2^6 \times 2^6$. One can compute all the matrix elements exactly, and calculate the largest eigenvalue numerically.
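As a minimal numerical sketch of this procedure (with arbitrary illustrative weights, and using the symmetric two-lane matrix of \eqq{t2}), one can diagonalize the transfer matrix directly and read off the free energy per column from the largest eigenvalue:

```python
import numpy as np

def two_lane_T(K1, Kd, Kint, Kend):
    """4x4 two-lane transfer matrix of Eq. (t2), states ordered (uu, up, pu, pp)."""
    return np.array([
        [1 + 2*K1 + K1**2 + Kd**2, Kend*(1 + K1), Kend*(1 + K1), Kend**2],
        [Kend*(1 + K1),            Kint*(1 + K1), Kend**2,       Kint*Kend],
        [Kend*(1 + K1),            Kend**2,       Kint*(1 + K1), Kint*Kend],
        [Kend**2,                  Kint*Kend,     Kint*Kend,     Kint**2],
    ])

# Illustrative weights only (not fitted values)
T = two_lane_T(K1=0.5, Kd=0.3, Kint=2.0, Kend=0.4)
lam_max = np.linalg.eigvalsh(T).max()      # matrix is symmetric
free_energy_per_column = -np.log(lam_max)  # in units of kT
print(lam_max, free_energy_per_column)
```

With $K_{\rm end}=0$ the matrix is diagonal and the eigenvalues can be read off by hand, which gives a convenient sanity check of the construction.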
For example, $T_{1,1}=(1+K_1)^6+6(1+K_1)^4 K^2_{\rm d}+9(1+K_1)^2 K^4_{\rm d}+2 K^6_{\rm d}$. \section{Estimation of free volumes} \label{free_vol} As discussed in the main text, CO$_2$~can be adsorbed in three different conformations: as a single-bound molecule, as a part of a bound (carbamic acid) pair, or as a part of an (ammonium carbamate) chain. Here, we estimate the free volume accessible to a CO$_2$~molecule in each conformation using very crude geometric arguments. In the single-bound conformation, the ligand along with the physisorbed CO$_2$~molecule possesses orientational and conformational degrees of freedom. The ligand with a bound CO$_2$~can roughly access the free volume of a hemispherical shell with radius $\sim 2.3-6.3 \AA$ (typical Euclidean distance between the metal site and CO$_2$~considering various conformations). Thus we set $V_1=\frac{1}{2}\frac{4}{3} \pi (6.3^3-2.3^3)\approx 500 \AA^3$. In the pair conformation, each CO$_2$~molecule along with the ligand can access roughly the volume of half of a ring with radius $5.5-6.5 \AA$ (typical distance of the pair from the surface of the framework) and width $\sim 4 \AA$ (wiggle room of CO$_2$~molecules within the pair conformation). We set $V_{\rm d}=\frac{1}{2} \pi (6.5^2-5.5^2)\times 4 \approx 75 \AA^3$. The chain conformation is almost frozen and allows a very small free volume. A chemisorbed CO$_2$~molecule within a chain can access roughly the volume of half of a ring with radius $1.8-2.8 \AA$ and width $1.5 \AA$ (wiggle room of a CO$_2$~molecule within a chain). We set $V_{\rm{int}}=V_{\rm{end}}=\frac{1}{2} \pi (2.8^2-1.8^2)\times 1.5 \approx 11 \AA^3$. It is worth noting that, for a step-like isotherm, $V_{\rm{int}}$ plays the most significant role in determining the step-position (see \f{vint}). The values of $V_1$ and $V_{\rm d}$ only affect the rise of the isotherm at low pressure before the step.
Their effect is negligible if the chain conformation is energetically much more favorable. In the case of physisorbed CO$_2$~molecules at the secondary binding sites, each molecule can access roughly the volume of a hemispherical shell with radius $\sim 1.9-3.8 \AA$ (distance between the binding site and CO$_2$). Thus we choose $V_{\rm{sec}}=\frac{1}{2}\frac{4}{3} \pi (3.8^3-1.9^3)\approx 100 \AA^3$. These estimates do not take into account all the possible conformations of the ligands, and likely underestimate the conformational entropy but overestimate the free volume, due to the neglect of strong steric effects present in the real system. Deviations from these estimated values do not change the qualitative features of the isotherms. \section{Capturing fine details of isotherms} \label{details} In the main text we focus on describing the basic physics of cooperative binding, which is captured by a simple one-lane statistical mechanical model. Here we show, in addition, that fine features of isotherms, such as the rise before and after the sharp step, can be captured by considering the 2- or 6-lane models. For some metals, we also show that reproducing these features requires the inclusion of secondary binding sites and a different mode of monomer binding within the model. For mmen-M$_2$(dobpdc) built with the metals M = Fe, Co, or Zn, the bound-pair conformation is energetically significant, and thus we solve the 6-lane (and the 2-lane) system by generalizing the transfer matrix formalism presented in the main text. Although the basic physics of cooperative adsorption in these frameworks is captured by the 1-lane model, which ignores the pair conformation, the experimental system is more closely represented by the 2- or 6-lane models. Here, we introduce the statistical weight of a CO$_2$~in the pair (dimer) conformation, $K_{\rm d}=g_{\rm d} W_{\rm d}=g_{\rm d} e^{-\beta (E_{\rm d}-\mu)}$.
$E_{\rm d}$ represents the binding energy of CO$_2$~in the pair conformation, and $g_{\rm d}=V_{\rm d}/\Lambda^3$, where $V_{\rm d} (\approx 75 \AA^3)$ is the free volume accessible to a CO$_2$~molecule in that conformation. For an $n$-lane system, the transfer matrix is of dimension $2^n\times 2^n$, and the largest eigenvalue can be computed exactly or numerically (see Sec.~\ref{2_6lane}). The 6-lane model behaves in all important respects like the 2-lane model. \begin{figure} \includegraphics[width=0.65\columnwidth]{fig_SI05} \caption{Variation of the slope of an isotherm as a function of $E_{\rm end}$: isotherms obtained from the 1-lane model for mmen-$\mbox{Mn}_2$(dobpdc) at $313$ K for different chosen values of the binding enthalpy of chain end-points, where $E_1$ and $E_{\rm{int}}$ are fixed to our calculated values (Table~\ref{fig_table}) and $V_{\rm{int}}=11\AA^3$.} \label{endpoints} \end{figure} Isotherms derived from the 6-lane model are shown in \f{isotherm} (c--f) in the main text, and match key features (shape, scaling of the step-position with temperature) of the experimental data. The step-shaped isotherms are similar to those of the 1-lane model, with additional corrections at low pressure (before the step) that result from pair binding (noticeable in the case of Co: \f{isotherm} d, left panel, in the main text). The 1-lane model fails to capture this feature when the single-CO$_2$~molecules have a very low binding affinity. Within the current DFT-based energetics, we do not see a prominent slow rise of the isotherm before the step for Fe and Zn, in contrast to the experimental data. We find that the step-position is sensitive to the statistical weights of chain-interior sites (see \f{vint}), and we cannot get quantitative agreement with the experiments using a single value of $V_{\rm{int}}$ given the energetics. We shall come back to this issue in the next paragraphs. The case of Ni is slightly more subtle.
Using the DFT energetics shown in Table~\ref{fig_table} we find that the Ni-based framework shows no sharp step in its isotherm, reproducing the experimental observation\c{david2015}. Here the chain conformation is no longer statistically the most favorable one. However, we see an inflection point at low pressure in the isotherm for Ni that seems to be absent in experiment (see~\f{isotherm} f in the main text). The microscopic origin of this inflection point is the very low probability of the single-bound molecules compared to the pairs. The experimental heat of adsorption curve for Ni exhibits a very different feature from the other metals: it shows a single plateau at $\approx -40$ kJ/mol at high loading\c{david2015}. Interestingly, we find that only with a larger binding affinity for the single-CO$_2$~conformation ($\approx -40.5$ kJ/mol) does the inflection point disappear (see \f{inflection}). Here, we propose that there might exist some other single-bound species (e.g.\ chains of length one) that are energetically comparable with pairs. Such a species, if it exists, would have no noticeable effect on the position or slope of the step. Alternatively, the step-like feature may also disappear if the statistical weights of the adsorbate~at the chain-interior and at the chain end-points are similar (see \f{1d_isotherm} d). Our model can be used to determine how microscopic parameters control gas-uptake isotherms. For instance, one can enhance cooperativity by making the chain end-points less favorable, as this leads to a sharper step (see \f{endpoints}). The isotherm for mmen-Ni$_2$(dobpdc) shown in \f{isotherm} f (in the main text) is non-cooperative. It can be made cooperative by enhancing the effective binding affinity of CO$_2$~within a chain, as shown in \f{multi}.
We check this by scaling the statistical weights for chain binding, $K_{\rm end}$ and $K_{\rm int}$, by a factor $1+K_{\rm sec}$, leading to energetic stabilization of chains through an additional species that doesn't compete with CO$_2$~for primary binding sites. We can write $K_{\rm sec}= g_{\rm sec} e^{-\beta (\Delta E-\mu)}$, where $g_{\rm sec}=V_{\rm sec}/\Lambda^3$ ($V_{\rm sec}$ is the free volume accessible to that additional species in the bound state), and predict the required binding-affinity enhancement $\Delta E$ to induce cooperativity (i.e., a step) at a suitable pressure. There may be several possible ways of engineering such an enhancement. For instance, mmen-M$_2$(dobpdc) is known to exhibit {\em enhanced} CO$_2$~uptake in the presence of H$_2$O\c{long_jacs2015}, which is striking, as water competes with CO$_2$~for binding sites in most MOFs\c{long_jacs2015,piero2013,kundu2016}. This factor of $1+K_{\rm sec}$ can also be regarded as a way of modeling the existence of adsorption at secondary binding sites. In experiments (see \f{isotherm} in the main text), a slow rise of the isotherms after the step is sometimes observed with increasing pressure, which indicates the existence of secondary binding sites. Finally, considering a different mode of monomer binding, as discussed above, together with adsorption at the secondary binding sites, we can reproduce all the features of the experimental isotherms: the rise of the isotherms at low pressures (even in the cases of Fe and Zn); the step; followed by the rise of the isotherm at high pressures due to physisorption of CO$_2$~at secondary binding sites (we set the CO$_2$-binding energy at the secondary binding sites as $E_{{\rm sec}} \approx -42.0$ kJ/mol, and the corresponding free volume $V_{\rm{sec}}\approx 100 \AA^3$); see \f{isotherm} (in the main text), right panels, for all metals.
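The effect of such an enhancement factor can be sketched with the 1-lane transfer matrix (illustrative weights only): multiplying $K_{\rm int}$ and $K_{\rm end}$ by $1+K_{\rm sec}$ necessarily increases the largest eigenvalue, i.e.\ it stabilizes the chain phase:

```python
import numpy as np

def one_lane_lambda_max(K1, Kint, Kend):
    """Largest eigenvalue of the 1-lane transfer matrix [[1+K1, Kend], [Kend, Kint]]."""
    T = np.array([[1 + K1, Kend], [Kend, Kint]])
    return np.linalg.eigvalsh(T).max()

K1, Kint, Kend = 0.5, 0.8, 0.2   # illustrative weights, not fitted values
K_sec = 0.5                       # weight of the stabilizing agent species
base = one_lane_lambda_max(K1, Kint, Kend)
enhanced = one_lane_lambda_max(K1, (1 + K_sec) * Kint, (1 + K_sec) * Kend)
print(base, enhanced)  # enhanced > base: chain binding is stabilized
```

The inequality follows from Perron--Frobenius: increasing the entries of a non-negative matrix can only increase its largest eigenvalue.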
Here, we use the experimental $E_{\rm{int}}$ values and set the values of $V_{\rm{int}}$ such that the step-position (if it exists) matches that of the experimental isotherms (see Tables~\ref{fig_table} and~\ref{table02}). The discrepancy in the maximum uptake (after the step) may result from defects in the experimental system, in which not all of the primary binding sites are accessible to CO$_2$\c{queen2014}. For Ni, we use the DFT-based binding affinity (as the experimental data is not available) for the chain conformation; we tune the pair binding energy, setting $E_{{\rm d}} \approx -44.9$ kJ/mol as the best-fit value. In conclusion, we attribute the rise of the isotherm at low pressures to the pairs or the predicted single-CO$_2$~conformation; the existence of the step to the formation of long chains (the step position is controlled by the binding energy of chain-interior sites); and the slow rise of the isotherm at very high pressures to adsorption at secondary binding sites. There exists no direct proof of the existence of the pair conformation. Even without the pairs, however, the existence of a possible different mode of single-CO$_2$~binding is sufficient to reproduce all the fine features of the experimental isotherms with the $1$-lane system (see \f{1d_isotherm}). The existence of this predicted new mode of monomer binding can be tested with extensive DFT calculations. \begin{figure} \includegraphics[width=0.85\columnwidth]{fig_SI06} \caption{Induced cooperativity by an agent species: (a) Isotherms for our (6-lane) model of mmen-Ni$_2$(dobpdc) in the presence of an agent that doesn't compete with CO$_2$~but enhances the binding affinity of a polymerized CO$_2$~molecule. This is achieved by multiplying $K_{\rm int}$ and $K_{\rm {end}}$ by a factor $(1+K_{\rm sec})$. For a binding enhancement $\Delta E$ exceeding $\sim 6$ kJ/mol, we predict that the non-cooperative Ni isotherm can be made cooperative around $1$ bar.
For clarity, curves have been shifted along the $y$-axis by increments of 0.5, 1.0, 1.5 and 2.0 mmol/g, respectively. (b) Isotherms in the presence of an agent that doesn't compete with CO$_2$~but enhances the binding affinity of CO$_2$~in all three conformations: single, pair, and chain. This is achieved by multiplying $K_{\rm int}$, $K_{\rm {end}}$, $K_{\rm d}$ and $K_1$ uniformly by a factor $(1+K_{\rm sec})$. In this case, the step-like feature doesn't appear.} \label{multi} \end{figure} \begin{figure} \includegraphics[width=0.7\columnwidth]{fig_SI07} \caption{Isotherms (lines) derived from the 1-lane model of mmen-$\mbox{M}_2$(dobpdc), where M = Fe, Co, Zn, or Ni, agree well with the experiments. The binding enthalpies of the chain conformation for different metals are set by the experimental values (see Table~\ref{fig_table}). For Fe, Co, and Zn the values of $V_{\rm{int}}$ are taken from Table~\ref{table02}. For Ni, we set $E_{\rm{int}}=E_{\rm end}=-46.4$ kJ/mol (see Table~\ref{fig_table}) and $V_{\rm{int}}=23 \AA^3$ to get the best fit. Here, we take into account the predicted new mode of single-CO$_2$~binding and adsorption at the secondary binding sites. Data points are from experiments\c{david2015}.} \label{1d_isotherm} \end{figure} \section{Chain-length distribution for the 1-lane model} \label{cld} \begin{figure}[b] \includegraphics[width=0.7\linewidth]{fig_SI08} \caption{Construction used to calculate the chain-length distribution for the 1-lane model. The figure shows a chain of length $8$ with two end-points and $6$ internal monomers.
The numbers over the bonds are the variables $\eta$ mentioned in the text.} \label{chain_config} \end{figure} The grand partition function of the system is given by \begin{eqnarray} \mathcal{Z}=\sum_{\{n_1,n_{\rm int},n_{\rm end}\}} K_1^{n_1} K_{\rm end}^{n_{\rm end}} K_{\rm int}^{n_{\rm int}} \Gamma (n_1,n_{\rm int},n_{\rm end}), \end{eqnarray} where \begin{eqnarray} \Gamma(n_1,n_{\rm int},n_{\rm end}) &=& \frac{(N-n_{\rm int}-n_{\rm end}/2)!}{(N-n_1-n_{\rm int}-n_{\rm end})! (n_{\rm end}/2)!n_1! } \frac{(n_{\rm end}/2+n_{\rm int}-1)!}{(n_{\rm end}/2-1)! n_{\rm int}!} \label{eq_gamma} \end{eqnarray} is the number of ways of arranging $n_1$ single CO$_2$~molecules, $n_{\rm int}$ internal chain molecules and $n_{\rm end}$ chain end-points on a $1d$ lattice with $N$ sites. Here, we do not distinguish between different chain-interior sites. To determine the chain-length distribution for the 1-lane model, we introduce position-dependent statistical weights for internal chain monomers, as shown in \f{chain_config}. The weights of the internal chain monomers starting from the left-hand side are denoted $K_{{\rm int};1}$, $K_{{\rm int};2}$, etc. The terminal chain monomers have weight $K_{\rm end}$. Bonds internal to the chain are denoted $\eta=1,2,\dots$, counting from the left. For bonds not occupied by chains we set $\eta=0$. To write down the transfer matrix for this case, we again define the restricted partition functions $Z^{\eta}_N$. This is the partition function for a system of $N$ sites, with an edge added to the outside of the $N^{\rm th}$ site with bond variable $\eta$.
The restricted partition functions satisfy \begin{equation} \left( \begin{array}{c} Z^0_{N+1} \\ \\ Z^1_{N+1} \\ \\ Z^2_{N+1} \\ \\ Z^3_{N+1} \\ \\ \vdots \end{array} \right) = \begin{pmatrix} 1+K_1 && K_{\rm end} && 0 && 0 && \dots\\ \\ K_{\rm end} && 0 && K_{{\rm int};1} && 0 && \dots \\ \\ K_{\rm end} && 0 && 0 && K_{{\rm int};2} && \dots \\ \\K_{\rm end} && 0 && 0 && 0 && \dots \\ \\ \vdots && \vdots && \vdots && \vdots && \vdots \end{pmatrix} \left( \begin{array}{c} Z^0_N \\ \\ Z^1_N \\ \\ Z^2_N \\ \\ Z^3_N \\ \\ \vdots \end{array} \right), \end{equation} where $T$ is the matrix shown. The secular equation can be obtained by calculating the determinant $|T-\lambda I|$, and is \begin{equation} -1+\frac{1}{\lambda}+\frac{K_1}{\lambda}+\left(\frac{K_{\rm end}}{\lambda}\right)^2 \left( 1+\sum_{i=1}^{\infty}\frac{1}{\lambda^i}\displaystyle\prod_{j=1}^{i} K_{{\rm int};j}\right)=0. \label{secular} \end{equation} The density of internal monomers in the $m^{\rm th}$ position is \begin{equation} \rho^{{\rm ch}}_m=\frac{K_{{\rm int};m}}{\lambda} \frac{\partial \lambda}{\partial K_{{\rm int};m}}. \end{equation} Differentiating both sides of \eqq{secular} and setting $K_{{\rm int};1} =K_{{\rm int};2}=\dots = K_{\rm int}$ gives \begin{equation} \rho^{{\rm ch}}_m=\frac{K_{\rm end}^2 \omega (1-\omega K_{\rm int})}{(1+K_1)(1-\omega K_{\rm int})^2+K_{\rm end}^2 \omega (2-\omega K_{\rm int})} (\omega K_{\rm int})^m, \label{rho_m} \end{equation} where $\omega\equiv 1/\lambda_+$ (see \eqq{ev}). When all internal monomers are equivalent, the densities of end-point chain monomers ($\rho_{\rm end}$), internal chain monomers ($\rho_{\rm int}$) and single bound CO$_2$~molecules ($\rho_1$) may be obtained by taking the derivative of \eqq{secular} with respect to $K_{\rm end}$, $K_{\rm int}$ and $K_1$, respectively.
Doing so gives \begin{eqnarray} \rho_{\rm end}=\frac{2 K_{\rm end}^2 \omega (1-\omega K_{\rm int})}{(1+K_1)(1-\omega K_{\rm int})^2+K_{\rm end}^2 \omega (2-\omega K_{\rm int})}, \label{rho_e} \\ \rho_{\rm int}=\frac{K_{\rm end}^2 \omega (\omega K_{\rm int})}{(1+K_1)(1-\omega K_{\rm int})^2+K_{\rm end}^2 \omega (2-\omega K_{\rm int})}, \label{rho_in} \\ \rho_1=\frac{K_1 (1-\omega K_{\rm int})^2}{(1+K_1)(1-\omega K_{\rm int})^2+K_{\rm end}^2 \omega (2-\omega K_{\rm int})}. \label{rho_1} \end{eqnarray} Using \eq{rho_e} and~\eq{rho_in}, one may rewrite \eq{rho_m} in terms of the densities of terminal and internal chain monomers as \begin{equation} \rho^{{\rm ch}}_m=\frac{\rho_{\rm end}}{2} \left(\frac{2 \rho_{\rm int}}{\rho_{\rm end}+2\rho_{\rm int}}\right)^m. \end{equation} Now we can compute the fraction of chains of length $\ell$ ($\ell-2$ internal monomers): \begin{equation} \begin{split} r_\ell&=\frac{2}{\rho_{\rm end}} \left(\rho^{{\rm ch}}_{\ell-2}-\rho^{{\rm ch}}_{\ell-1}\right) \\ &=(\omega K_{\rm int})^{\ell-2} (1-\omega K_{\rm int}) \\ &=\left(\frac{K_{\rm int}}{\lambda_+}\right)^{\ell-2} \left(1-\frac{K_{\rm int}}{\lambda_+}\right) \\ &=\frac{(2 K_{\rm int})^{\ell-2}\left[1+K_1-K_{\rm int}+\sqrt{(1+K_1-K_{\rm int})^2+4 K_{\rm end}^2}\right]}{\left[1+K_1+K_{\rm int}+\sqrt{(1+K_1-K_{\rm int})^2+4 K_{\rm end}^2}\right]^{\ell-1}}. \end{split} \label{dist_m} \end{equation} We may also rewrite \eqq{dist_m} in terms of $\rho_{\rm end}$ and $\rho_{\rm int}$ as \begin{equation} r_\ell=\frac{\rho_{\rm end}}{\rho_{\rm end}+2 \rho_{\rm int}}\left(\frac{2 \rho_{\rm int}}{\rho_{\rm end}+2 \rho_{\rm int}}\right)^{\ell-2}. 
\end{equation} In the limit $P \to \infty$ the mean chain length approaches the asymptotic value \begin{eqnarray} \label{li} \av{\ell}_{\infty}=\frac{2 V_{\rm int} e^{-\beta E_{\rm int}}} {V_1 e^{-\beta E_1} -V_{\rm int} e^{-\beta E_{\rm int}}+\sqrt{(V_1 e^{-\beta E_1} -V_{\rm int} e^{-\beta E_{\rm int}})^2+4V_{\rm int}^2 e^{-2\beta E_{\rm end}}}}+2; \end{eqnarray} note that $K_\alpha=\beta P V_\alpha e^{-\beta E_{\alpha}}$. \section{Correlations in $1 \times L$ model} \label{s1} The transfer matrix for a single lane is \begin{equation} T=\begin{pmatrix}1+K_1&K_{\rm end}\\K_{\rm end}&K_{\rm int}\end{pmatrix}. \end{equation} When the weight of endpoint monomers $K_{\rm end}$ vanishes, the matrix is diagonal and the eigenvalues are $\lambda_1=1+K_1$ and $\lambda_2=K_{\rm int}$. The eigenvectors are: \begin{eqnarray} \phi_1&=&\begin{pmatrix}1\\0\end{pmatrix},\\ \phi_2&=&\begin{pmatrix}0\\1\end{pmatrix}, \end{eqnarray} respectively. If the lattice edge $i$ is occupied by a bond, the state variable $\eta_i=1$; otherwise we have $\eta_i=0$. We will consider periodic boundary conditions here. We may then define the expectation values $\langle \eta_i \rangle$ and $\langle \eta_i \eta_j \rangle$, where we assume $j>i$. The bond-bond correlation function will then be $c_{i,j}=\langle \eta_i\eta_j\rangle-\langle \eta_i \rangle \langle \eta_j \rangle$. Since we have translational symmetry, $\langle \eta_i \rangle=\langle \eta \rangle$, independent of $i$, and $c_{i,j}$ is a function of the distance $j-i$. For vanishing $K_{\rm end}$, we have $\langle \eta \rangle=0$ if $1+K_1>K_{\rm int}$ and $\langle \eta \rangle=1$ if $1+K_1<K_{\rm int}$, while $c_{i,j}$ vanishes identically. When the weight of endpoint monomers does not vanish, the eigenvalues are $\lambda_1=(K+2K_{\rm int}+\sqrt{K^2+4K_{\rm end}^2})/2$ and $\lambda_2=(K+2K_{\rm int}-\sqrt{K^2+4K_{\rm end}^2})/2$, where $K=1+K_1-K_{\rm int}$.
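As a quick numerical consistency check (illustrative weights only): $\lambda_1$ from the closed form above coincides with direct diagonalization of $T$, and the chain-length fractions $r_\ell$ of \eqq{dist_m} indeed sum to one:

```python
import numpy as np

K1, Kint, Kend = 0.5, 2.0, 0.3   # illustrative weights, not fitted values
K = 1 + K1 - Kint
lam1 = (K + 2*Kint + np.sqrt(K**2 + 4*Kend**2)) / 2   # closed form above

# Direct diagonalization of the 1-lane transfer matrix
T = np.array([[1 + K1, Kend], [Kend, Kint]])
assert np.isclose(lam1, np.linalg.eigvalsh(T).max())

# Chain-length fractions r_l = (K_int/lam1)^(l-2) * (1 - K_int/lam1), l >= 2
x = Kint / lam1                  # must be < 1 for normalizability
r = [(x**(l - 2)) * (1 - x) for l in range(2, 2000)]
print(sum(r))  # ~ 1.0: the geometric distribution is normalized
```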
The eigenvectors are \begin{equation} \phi_{1,2}=\frac{1}{\sqrt{1+a_{1,2}^2}}\begin{pmatrix}1\\a_{1,2}\end{pmatrix}, \end{equation} where $a_1=2K_{\rm end}/\left(K+\sqrt{K^2+4K_{\rm end}^2}\right)$ and $a_2=2K_{\rm end}/\left(K-\sqrt{K^2+4K_{\rm end}^2}\right)$. The partition function for a lattice with $N$ sites is: \begin{equation} Z_N=\sum_{\{\eta\}}\prod_{i=1}^NT(\eta_i,\eta_{i+1})=\sum_{\eta_1=0,1}T^N(\eta_1,\eta_1)=\lambda_1^N + \lambda_2^N, \end{equation} where $\eta_{N+1}=\eta_1$ because of the boundary conditions. The density of bonds per site is: \begin{equation} \langle \eta \rangle=\frac{1}{Z_N}\sum_{\eta_1=0,1}\eta_1T^N(\eta_1,\eta_1)=\frac{1}{Z_N} T^N(1,1). \end{equation} Now, we have: \begin{equation} T^s(\eta,\eta^\prime)=\sum_{i=1,2}\lambda_i^s\phi_i(\eta)\phi_i(\eta^\prime), \end{equation} so that: \begin{equation} \langle \eta \rangle=\frac{1}{\lambda_1^N+\lambda_2^N}\sum_{i=1,2}\lambda_i^N\frac{a_i^2}{1+a_i^2}, \end{equation} and \begin{eqnarray} \langle\eta_1 \eta_k \rangle&=&\frac{1}{Z_N}\sum_{\{\eta\}}\eta_1T(\eta_1,\eta_2)\ldots T(\eta_{k-1},\eta_k)\eta_kT(\eta_k,\eta_{k+1})\ldots T(\eta_N,\eta_1) = \nonumber \\ &&\frac{1}{Z_N}T^{k-1}(1,1)T^{N-k+1}(1,1)= \nonumber \\ &&\frac{1}{\lambda_1^N+\lambda_2^N} \left( \sum_{i=1,2} \lambda_i^{k-1}\frac{a_i^2}{1+a_i^2}\right) \left( \sum_{i=1,2} \lambda_i^{N-k+1}\frac{a_i^2}{1+a_i^2}\right). \end{eqnarray} In the thermodynamic limit $N \to \infty$, since $\lambda_1>\lambda_2$, we have \begin{equation} \langle \eta \rangle=\frac{a_1^2}{1+a_1^2}, \end{equation} and \begin{equation} \langle \eta_1\eta_k\rangle=\left(\frac{a_1^2}{1+a_1^2}\right)^2+\frac{a_1^2a_2^2}{(1+a_1^2) (1+a_2^2)}\left(\frac{\lambda_2}{\lambda_1}\right)^{k-1}. \end{equation} Therefore, the correlation function is: \begin{equation} c_{i,j}=\frac{a_1^2a_2^2}{(1+a_1^2) (1+a_2^2)}\exp \left(-\frac{j-i}{\xi}\right), \end{equation} where the correlation length is: \begin{equation} \xi=\frac{1}{\ln(\lambda_1/\lambda_2)}.
\end{equation} \begin{center} \begin{table} \begin{tabular}{ |c|c|} \hline M & CO$_2$~capacity (mmol/g) \\ \hline Mg & $4.040 $ \\ \hline Mn & $3.595 $ \\ \hline Fe & $3.583 $ \\ \hline Co & $3.544 $ \\ \hline Zn & $3.465 $ \\ \hline Ni & $3.547 $ \\ \hline \end{tabular} \caption {Maximum CO$_2$~uptake capacity of mmen-$\mbox{M}_2$(dobpdc), assuming one CO$_2$~per metal-diamine\c{david2015}.} \label{table01} \end{table} \end{center} \end{document}
\section{Introduction and preliminaries} We will use standard set-theoretic notation following e.g. \cite{Jech}. For a set $X$, $P(X)$ denotes the power set of $X$ and $|X|$ denotes the cardinality of $X$. If $\kappa$ is a cardinal number then we denote: \begin{itemize} \item $[X]^\kappa \hspace{0.23cm}=\{A\subseteq X:\ |A|=\kappa\}$, \item $[X]^{<\kappa}=\{A\subseteq X:\ |A|<\kappa\}$, \item $[X]^{\le\kappa}=\{A\subseteq X:\ |A|\le\kappa\}$. \end{itemize} Let $X$ be an uncountable Polish space and let $\mathcal{I}\subseteq P(X)$ be a $\sigma$-ideal. Let us recall some cardinal coefficients from Cichoń's diagram: \begin{itemize} \item $\add(\mathcal{I})=\min \{ |\mathcal{A}|:\; \mathcal{A}\subseteq \mathcal{I}\land \bigcup\mathcal{A}\notin \mathcal{I} \}$, \item $\non(\mathcal{I})=\min \{ |A|:\; A\subseteq X\land A\notin \mathcal{I}\}$, \item $\cov(\mathcal{I})=\min \{ |\mathcal{A}|:\; \mathcal{A}\subseteq \mathcal{I}\land \bigcup\mathcal{A} = X\}$, \item $\cof(\mathcal{I})=\min \{ |\mathcal{A}|:\; \mathcal{A}\subseteq \mathcal{I}\land (\forall A\in \mathcal{I})(\exists B\in\mathcal{A}) (A\subseteq B)\}$, \item $\mathfrak{b}=\min\{|\mathcal{F}|: \mathcal{F}\subseteq\omega^\omega \land (\forall x\in\omega^\omega)(\exists f\in\mathcal{F})(\exists^\infty n)(x(n)<f(n))\}$, \item $\mathfrak{d}=\min\{|\mathcal{F}|: \mathcal{F}\subseteq\omega^\omega \land (\forall x\in\omega^\omega)(\exists f\in\mathcal{F})(\forall^\infty n)(x(n)<f(n))\}$. \end{itemize} We call $\mathfrak{b}$ the bounding number and $\mathfrak{d}$ the dominating number. A family $\mathcal{F}\subseteq\omega^\omega$ is dominating if it has the property from the definition of the dominating number (it does not have to be of minimal cardinality). \\ We say that $T$ is a tree on a set $A$ if $T\subseteq A^{<\omega}$ and whenever $\tau\in T$ then $\tau\upharpoonright n\in T$ for every $n\in\omega$. \begin{definition} Let $T$ be a tree on a set $A$.
Then \begin{itemize} \item for each $t\in T$, $succ(t)=\{a\in A: t^\frown a\in T\}$; \item $split(T)=\{t\in T: |succ(t)|\geq 2\}$; \item $\omega$-$split(T)=\{t\in T: |succ(t)|=\aleph_0\}$; \item for $s\in T$, $Succ_T(s)=\{t\in split(T): s\subsetneq t, (\forall t'\in T)(s\subsetneq t' \subsetneq t \longrightarrow t'\notin split(T) )\}$; \item for $s\in T$, $\omega$-$Succ_T(s)=\{t\in \omega$-$split(T): s\subsetneq t, (\forall t'\in T)(s\subsetneq t' \subsetneq t \longrightarrow t'\notin \omega$-$split(T) )\}$; \item $stem(T)\in T$ is the node $\tau$ such that for each $s\subsetneq\tau$ $|succ(s)|=1$ and $|succ(\tau)|>1$. \end{itemize} \end{definition} Let us now recall the definitions of several families of trees. \begin{definition} A tree $T$ on $\omega$ is called \begin{itemize} \item a Sacks tree or perfect tree, denoted by $T\in\mbb{S}$, if for each node $s\in T$ there is $t\in T$ such that $s\subseteq t$ and $|succ(t)|\geq 2$; \item a Miller tree or superperfect tree, denoted by $T\in\mbb{M}$, if for each node $s\in T$ there exists $t\in T$ such that $s\subseteq t$ and $|succ(t)|=\aleph_0$; \item a Laver tree, denoted by $T\in\mbb{L}$, if for each node $t\supseteq stem(T)$ we have $|succ(t)|=\aleph_0$; \item a complete Laver tree, denoted by $T\in\mbb{CL}$, if $T$ is Laver and $stem(T)=\emptyset$; \item a Hechler tree, denoted by $T\in\mbb{H}$, if for each node $t\supseteq stem(T)$ the set $\{n\in\omega: t^\frown n\notin T\}$ is finite; \item a complete Hechler tree, denoted by $T\in\mbb{CH}$, if $T$ is Hechler and $stem(T)=\emptyset$. \end{itemize} \end{definition} The notion of complete Laver trees was defined and investigated in \cite{R1}, although Miller in \cite{Miller1} defines Laver trees \textit{de facto} as complete Laver trees and Hechler trees as complete Hechler trees. For every tree $T\subseteq \omega^{<\omega}$ let $[T]$ be the set of all infinite branches of $T$, i.e. $$ [T]=\{ x\in \omega^\omega:\; (\forall n\in\omega)\; x\upharpoonright n\in T\}.
$$ \begin{definition}[Tree ideal] Let $\mathbb{T}$ be a family of trees. We say that $A\in P(\omega^\omega)$ is in $t_0$ iff $$ (\forall P\in \mathbb{T})(\exists Q\in \mathbb{T})\; Q\subseteq P\land [Q]\cap A=\emptyset. $$ \end{definition} \begin{definition}[$t$-measurability] Let $\mathbb{T}$ be a family of trees. We say that $A\in P(\omega^\omega)$ is $t$-measurable iff $$ (\forall P\in \mathbb{T})(\exists Q\in \mathbb{T})\; Q\subseteq P\land ([Q]\subseteq A\lor [Q]\cap A=\emptyset). $$ \end{definition} The tree ideal $s_0$ is simply the classical Marczewski ideal (see \cite{Marczewski}). It is well known due to Judah, Miller and Shelah (see \cite{JMS}) and {Repick{\'y} } (see \cite{Rep}) that $\add(s_0)\le \cov(s_0)\le \cof(\mathfrak{c})\le \non(s_0)=\mathfrak{c}<\cof(s_0)\le 2^\mathfrak{c}$. Moreover, in \cite{BKW} Brendle, Khomskii and Wohofsky have shown that also $\mathfrak{c}<\cof(m_0)$ and $\mathfrak{c}<\cof(l_0)$. Clearly $\omega_1\le \add(l_0)\le \cov(l_0)\le \mathfrak{c}$ holds. In \cite{Goldstern}, Goldstern, {Repick{\'y} }\!\!, Shelah and Spinas showed that it is relatively consistent with $\ZFC$ that $\add(l_0)<\cov(l_0)$. Let us notice that the families $s_0, l_0, m_0$ form $\sigma$-ideals. On the other hand, $cl_0$ is not a $\sigma$-ideal. To see this, it is enough to consider sets of the form $C_n=\{x\in\omega^\omega:\ x(0)=n \}$. Then $C_n\in cl_0$ for each $n$, but $\bigcup_n C_n=\omega^\omega$. Using the fact that $s_0$ is a $\sigma$-ideal, we may give another proof of the following well-known result. \begin{proposition}[Essentially a joke] $cf(\mathfrak{c})>\aleph_0$. \end{proposition} \begin{proof} Suppose that $cf(\mathfrak{c})=\aleph_0$ and let $\mbb{R}=\bigcup_{n\in\omega}A_n$, $|A_n|<\mathfrak{c}$ for each $n\in\omega$. Sets of cardinality less than $\mathfrak{c}$ belong to $s_0$, so $\mbb{R}=\bigcup_{n\in\omega}A_n\in s_0$, a contradiction. \end{proof} \section{Tree ideals and measurability} In \cite{Brendle} the following result was obtained.
\begin{theorem}[Brendle] If $i_0, j_0\in \{s_0, l_0, m_0\}$ and $i_0\neq j_0$ then $i_0\not\subseteq j_0$. \end{theorem} First we will compare the ideal $cl_0$ with the ideals $s_0, m_0, l_0$. \begin{fact} $cl_0\not\subseteq (l_0\cup m_0\cup s_0)$. \end{fact} \begin{proof} To show the assertion let us take $C_0=\{x\in\omega^\omega:\ x(0)=0\}$. By $\mathbb{CL}\subseteq\mathbb{L}\subseteq\mathbb{M}\subseteq\mathbb{S}$, $C_0\notin l_0\cup m_0\cup s_0$. On the other hand $C_0\in cl_0$, which finishes the proof. \end{proof} \begin{theorem}\label{cl0 a inne} The following statements are true: \begin{enumerate}[(i)] \item $m_0\not\subseteq cl_0$. \item $s_0\not\subseteq cl_0$. \end{enumerate} \end{theorem} \begin{proof} To prove that $m_0\setminus cl_0\neq\emptyset$ we will slightly modify the proof of Theorem 2.1 from \cite{Brendle}. We will use the notions of apple trees and pear trees. \\ First, let us recall that each Miller tree contains an apple tree and each apple tree is a special kind of Miller tree (apple trees form a dense subfamily of the Miller trees). \\ Second, each complete Laver tree $C$ contains a pear tree $P_C$. A pear tree is not a complete Laver tree; it is only a special kind of Sacks tree. Pear trees $P_C$ have the following property: for every apple tree $A$ we have $|[A]\cap [P_C]|\le 1$. \\ Let us now enumerate all apple trees $\{A_\alpha:\ \alpha<{\mathfrak c}\}$ and all complete Laver trees $\{C_\alpha:\ \alpha<\mathfrak c\}$. Using the above two facts, we can proceed by induction and construct a sequence $(x_\alpha)_{\alpha<\mathfrak{c}}$ such that for every $\alpha<\mathfrak c$: \[ x_\alpha\in [P_{C_\alpha}]\setminus\bigcup_{\beta<\alpha}[A_\beta]. \] Finally, we set $X=\{x_\alpha:\ \alpha<\mathfrak c\}.$ Let us notice that $X\in m_0\setminus cl_0$, which finishes the first part of the proof.
To prove that $s_0\setminus cl_0\neq\emptyset$ we use a slight modification of the proof of Theorem 2.2 from \cite{Brendle}, which follows a pattern similar to that of the first case. \end{proof} \begin{problem}\label{Question cl0 a l0} Is it true that $l_0\not\subseteq cl_0$? \end{problem} As a consequence we obtain the following result. \begin{cor} The following statements are true: \begin{enumerate}[(i)] \item There exists a $cl$-nonmeasurable set which is $m$-measurable. \item There exists a $cl$-nonmeasurable set which is $s$-measurable. \end{enumerate} \end{cor} Let us introduce the notion of $\mbb{T}$-Bernstein sets. \begin{definition} Let $\mbb{T}$ be a family of trees. We say that a set $B$ is a $\mbb{T}$-Bernstein set if for every $T\in\mbb{T}$ both $B\cap [T]\neq\emptyset$ and $B\setminus [T]\neq \emptyset$. \end{definition} Observe that a classic Bernstein set is an $\mbb{S}$-Bernstein set. If $\mbb{T}\subseteq \mbb{T}'$ are families of trees, then $\mbb{T}'$-Bernstein sets are $\mbb{T}$-Bernstein sets. No $\mbb{T}$-Bernstein set is in $t_0$ (or $t$-measurable), and if $\mbb{T}\subseteq \mbb{T}'$ then $\mbb{T}'$-Bernstein sets do not belong to $t_0$. Also note that if $\mbb{T}\subsetneq \mbb{T}'$ then a $\mbb{T}$-Bernstein set may fail to be a $\mbb{T}'$-Bernstein set (e.g. one may fix a tree from $\mbb{T}'\setminus\mbb{T}$ whose body will always be omitted). The following theorem slightly generalizes Theorems 2.1 and 2.2 from \cite{Brendle}. \begin{theorem} The following statements are true: \begin{enumerate}[(i)] \item There exists an $\mbb{L}$-Bernstein set which belongs to $m_0$. \item There exists an $\mbb{M}$-Bernstein set which belongs to $s_0$. \end{enumerate} \end{theorem} \begin{proof} As in the proof of Theorem \ref{cl0 a inne} we will use notions established in \cite{Brendle}. To prove (i) let us enumerate all Laver trees $\{L_\alpha : \alpha<\mathfrak{c}\}$ and all apple trees $\{A_\alpha: \alpha<\mathfrak{c}\}$.
Let us construct two sequences $(b_\alpha)_{\alpha<\mathfrak{c}}$ and $(x_\alpha)_{\alpha<\mathfrak{c}}$ such that for each $\alpha<\mathfrak{c}$: \begin{align*} b_\alpha\in & [L_\alpha]\setminus(\bigcup_{\beta<\alpha}[A_\beta]\cup \{x_\xi: \xi<\alpha\}), \\ x_\alpha\in &[L_\alpha]\setminus(\{b_\beta : \beta\leq \alpha\}\cup\{x_\beta : \beta<\alpha\}). \end{align*} This can be done, since for each Laver tree $L_\alpha$ there is a pear tree $P_{L_\alpha}$ for which $|[P_{L_\alpha}]\cap [A]|\leq 1$ for every apple tree $A$, so the set $[L_\alpha]\setminus(\bigcup_{\beta<\alpha}[A_\beta]\cup \{x_\xi: \xi<\alpha\})$ is nonempty at each step $\alpha$. Then $B=\{b_\alpha: \alpha<\mathfrak{c}\}$ is the desired set. \\ To prove (ii) we use a similar modification of Theorem 2.2 from \cite{Brendle}. \end{proof} Analogously to Question \ref{Question cl0 a l0} we may ask the following question. \begin{problem} Is there a $\mbb{CL}$-Bernstein set which belongs to $l_0$? \end{problem} Let us invoke a theorem by Miller from \cite{Miller1}. \begin{theorem}[Miller]\label{Laver lub Hechler} Let $A\in\Sigma^1_1$. Either $A$ contains the body of some complete Laver tree or $A^c$ contains the body of some complete Hechler tree. \end{theorem} \begin{theorem}\label{intersection of Borel and tree ideal} The following is true: \begin{enumerate}[(i)] \item $\mathcal{B}\cap s_0$ is the ideal of Borel sets that do not contain a perfect subset (so it is the ideal of countable Borel sets). \item $\mathcal{B}\cap m_0$ is the ideal of Borel sets which do not contain the body of any Miller tree. \item $\mathcal{B}\cap l_0$ is the ideal of Borel sets that do not contain the body of any Laver tree. \end{enumerate} \end{theorem} \begin{proof} (i) is evident. \\ (ii) follows from the fact that any analytic set is either $\sigma$-bounded or contains a superperfect set. If a Borel set contains a superperfect set then clearly it is not in $m_0$.
On the other hand, if for some Miller tree $T$ and a $\sigma$-bounded Borel set $B$ the set $[T]\setminus B$ contained no superperfect set, then $[T]$ would be $\sigma$-bounded too, a contradiction. \\ (iii): If a Borel set $B$ contains the body of some Laver tree, then clearly $B\notin l_0$. If it does not contain a Laver tree, but there is a Laver tree $L$ such that the body of each Laver subtree of $L$ has a nonempty intersection with $B$, then let us trim $B$ and $L$ in the following way: \begin{eqnarray*} B'&=&\{x\in\omega^\omega: stem(L)^\frown x\in B\}, \\ L'&=&\{x\in\omega^{<\omega}: stem(L)^\frown x\in L\}. \end{eqnarray*} The function $f: \omega^\omega\rightarrow \omega^\omega$ given by the formula $f(x)=stem(L)^\frown x$ is continuous. Clearly, $B'=f^{-1}[B]$, so $B'$ is Borel, and $[L']=f^{-1}[[L]]$ is the body of the complete Laver tree $L'$. $B'$ still does not contain the body of any Laver tree, so by Theorem \ref{Laver lub Hechler} there is a Hechler tree $H$ whose body is contained in $B'^c$. $H\cap L'$ contains (in fact, is) a Laver tree, whose body $B'$ should intersect, a contradiction. \end{proof} \begin{definition} We say that a set $A$ is $\mathcal{I}$-nonmeasurable if $A\notin \sigma(\mathcal{B}\cup \mathcal{I})$. $A$ is completely $\mathcal{I}$-nonmeasurable if $A\cap B$ is $\mathcal{I}$-nonmeasurable for each Borel set $B\notin \mathcal{I}$; equivalently, $A$ intersects each, but does not contain any, $\mathcal{I}$-positive Borel set. \end{definition} \begin{cor} Let $(\mbb{T}, t_0)\in\{(\mbb{S}, s_0), (\mbb{M}, m_0), (\mbb{L}, l_0)\}$. Then a set $B$ is $\mbb{T}$-Bernstein iff it is completely $t_0\cap\mathcal{B}$-nonmeasurable.
\end{cor} \begin{proof} By Theorem \ref{intersection of Borel and tree ideal}, a Borel set $A$ is $t_0\cap\mathcal{B}$-positive if and only if it contains the body of some tree from $\mbb{T}$, so a set $B$ is $\mbb{T}$-Bernstein if and only if it intersects each, but does not contain any, Borel set containing the body of a tree from $\mbb{T}$. \end{proof} \section{$\mathcal{I}$-Luzin sets and algebraic properties} Let us recall the notion of $\mathcal I$-Luzin sets. Let $X$ be a Polish space and $\mathcal{I}$ be an ideal. \begin{definition} We say that a set $L$ is an $\mathcal I$-Luzin set if $(\forall A\in\mathcal I)(|A\cap L|<|L|)$. \end{definition} For the classical ideals $\mathcal{N}$ of Lebesgue null sets and $\mathcal{M}$ of meager sets we will call $\mathcal{M}$-Luzin sets generalized Luzin sets and $\mathcal{N}$-Luzin sets generalized Sierpiński sets. In \cite{Wohofsky} the following result was proven. \begin{theorem}[Wohofsky] There is no $s_0$-Luzin set. \end{theorem} We will show that similar results can be obtained for other tree ideals. \begin{theorem} The following statements are true. \begin{enumerate}[(i)] \item There is no $l_0$-Luzin set. \item There is no $cl_0$-Luzin set. \item There is no $m_0$-Luzin set. \end{enumerate} \end{theorem} \begin{proof} Let us consider the $l_0$ case. We will prove that for every set $X$ of cardinality $\mathfrak c$ there exists a set $A\subseteq X$ such that $A\in l_0$ and $|A|=\mathfrak c$. Indeed, let us assume that $X\notin l_0$. Then there exists $L\in\mathbb{L}$ such that for every $L'\subseteq L$, $L'\in\mathbb{L}$, we have $|[L']\cap X|=\mathfrak c$. Let us now fix a maximal antichain $\{L_\alpha:\alpha<\mathfrak c\}$ of Laver trees contained in $L$ such that $|[L_\alpha]\cap X|=\mathfrak c$. Let us construct a sequence $(a_\alpha)_{\alpha<\mathfrak{c}}$ such that for each $\alpha<\mathfrak{c}$: \[ a_\alpha \in X\setminus \bigcup_{\xi<\alpha}[L_\xi].
\] Then $A=\{a_\alpha:\ \alpha<\mathfrak c\}$ is the desired set. The proofs of the other cases are almost identical. \end{proof} Now we will consider $\mathcal{I}$-Luzin sets in the context of algebraic properties and tree ideals. We will work on the real line $\mbb{R}$ with addition. Since $\mbb{R}$ is $\sigma$-compact, it does not even contain superperfect sets. We will tweak the definition a bit by saying that $A\subseteq \mbb{R}$ belongs to $t_0$ if $h^{-1}[A]$ belongs to $t_0$ in $\omega^\omega$, where $h$ is a homeomorphism between $\omega^\omega$ and the subspace of irrational numbers (see \cite{KyWe} for a similar modification in the case of $2^\omega$). Having this in mind, by $[\tau]$, $\tau\in\omega^{<\omega}$, we will usually mean an open interval with rational endpoints in $\mbb{R}$. Before we proceed, let us define a non-standard kind of fusion of Miller and Laver trees, which we will use later. Let $T$ be a Miller tree. Let $\tau_\emptyset\in \omega$-$split(T)$ and let $T_0$ be any Miller subtree of $T$ such that $\tau_\emptyset$ remains an infinitely splitting node in $T_0$. Suppose we have a Miller subtree $T_n$ and a set of nodes $B_n=\{\tau_\sigma: \sigma\in n^{\leq n}\}$ such that \begin{enumerate}[(i)] \item $\tau_\sigma\in\omega$-$split(T_n)$ for every $\sigma\in n^{\leq n}$; \item $\tau_{\sigma^\frown k}\supseteq\tau_\sigma$ for every $k<n$ and $\sigma\in n^{<n}$; \item $\tau_{\sigma^\frown k}\cap\tau_{\sigma^\frown j}=\tau_\sigma$ for every $\sigma\in n^{<n}$ and distinct $k, j<n$. \end{enumerate} We extend the set of nodes $B_n$ to $B_{n+1}=\{\tau_\sigma: \sigma\in (n+1)^{\leq n+1}\}$ in a way that preserves the above conditions, so that we obtain $n+1$ levels of infinitely splitting nodes, each with $n+1$ fixed splits. The only $\sigma\in (n+1)^0$ is $\emptyset$, and $\tau_\emptyset$ is an old node.
It is $\omega$-$splitting$ in $T_n$ and $T_n$ is a Miller tree, so we may find $\tau_n\supseteq \tau_\emptyset$ which is $\omega$-$splitting$ and $\tau_n\cap \tau_j=\tau_\emptyset$ for $j<n$. If we already have $\tau_\sigma$'s with the desired properties for $\sigma\in (n+1)^{\leq k}$, $k<n+1$, then for $\tau_\sigma$, $\sigma\in n^k$ (an old node), we add $\tau_{\sigma^\frown n}$ such that conditions (i)--(iii) are still met. For a new node $\tau_\sigma$, $\sigma\in (n+1)^k\setminus n^k$, we find $\tau_{\sigma^\frown j}$ for each $j<n+1$ such that conditions (i)--(iii) are satisfied too. Then let $T_{n+1}$ be any Miller subtree of $T_n$ for which the nodes from $B_{n+1}$ are still infinitely splitting. \\ We will call a sequence of trees $(T_n)_{n\in\omega}$ (or, interchangeably, their bodies $[T_n]$) derived in this way a \textit{Miller fusion sequence}. \\ Similarly we define a \textit{Laver fusion sequence}. The only difference is that if $\tau_\sigma\subseteq\tau_{\sigma^\frown k}$, then actually $\tau_{\sigma^\frown k}={\tau_{\sigma}}^\frown j$ for some $j\in\omega$. \begin{proposition} For every Miller (resp. Laver) fusion sequence $(T_n)_{n\in\omega}$ the set $\bigcap_{n\in\omega}T_n$ is a Miller (resp. Laver) tree. \end{proposition} \begin{lemma}\label{Fusion lemma for intervals} For every sequence of intervals $(I_n)_{n\in\omega}$ and every Miller (resp. Laver) tree $T$ there is a Miller (resp. Laver) fusion sequence $(T_n)_{n\in\omega}$ such that for all $n>0$: \[ \lambda([T_n]+I_n)< (1+\Sigma_{k=0}^{n-1}(n-1)^k)\lambda(I_n). \] \end{lemma} \begin{proof} Let us focus on the slightly more complicated Miller case. Let $I_0$ be an interval, $\lambda(I_0)=\epsilon_0$, and let $T$ be a Miller tree. We proceed by induction on $n$. Let $\tau_\emptyset\in\omega$-$split(T)$ be such that $\lambda([\tau_\emptyset])<\epsilon_0$. Then $\lambda([\tau_\emptyset]+I_0)=\lambda([\tau_\emptyset])+\lambda(I_0)<2\epsilon_0$.
Let $T_0$ be a Miller subtree of $T$ such that $\tau_\emptyset=stem(T_0)$ and $\tau_\emptyset\in\omega$-$split(T_0)$. Clearly, we have $\lambda([T_0]+I_0)<2\epsilon_0$. \\ Now assume that we have a tree $T_n$ that is an element of the emerging Miller fusion sequence, together with the associated set $B_n$ of fixed nodes satisfying conditions (i)--(iii). Let $\lambda(I_{n+1})=\epsilon_{n+1}$. For each $\sigma\in\omega^{<\omega}$ and an interval $I_\sigma$ let us denote \[ N(I_\sigma)=\{{\tau_\sigma}^\frown k\in T_n: [{\tau_\sigma}^\frown k]\subseteq I_\sigma\land(\forall j<n)(\tau_{\sigma^\frown j}\not\supseteq {\tau_\sigma}^\frown k)\}. \] At each level $k<n$, for every $\sigma\in n^k$, let $I_\sigma$ be an interval with $\lambda(I_\sigma)<\frac{\epsilon_{n+1}}{(n+1)^{n}}$ such that the set $N(I_\sigma)$ is infinite, and choose $\tau_{\sigma^\frown n}\in\omega$-$split(T_n)$ such that $\tau_{\sigma^\frown n}\supseteq {\tau_\sigma}^\frown l$ for some ${\tau_\sigma}^\frown l\in N(I_\sigma)$. At the level $n$ let us fix intervals $I_\sigma$, $\lambda(I_\sigma)<\frac{\epsilon_{n+1}}{(n+1)^{n}}$, for $\sigma\in n^n$, such that the sets $N(I_\sigma)$ are infinite, and pick $\tau_{\sigma^\frown 0}, \tau_{\sigma^\frown 1}, \ldots, \tau_{\sigma^\frown n}$ which are extensions of some nodes ${\tau_\sigma}^\frown k_0, {\tau_\sigma}^\frown k_1, \ldots, {\tau_\sigma}^\frown k_n\in N(I_\sigma)$, respectively. Finally we pick the remaining nodes to complete a set $B_{n+1}$ in the spirit of our definition of a Miller fusion sequence, however we like. We take as $T_{n+1}$ any Miller subtree of $T_n$ for which the nodes from $B_{n+1}$ are infinitely splitting and whose body is covered by the intervals $I_\sigma,\, \sigma\in n^{\leq n}$ (which is possible by the infiniteness of each $N(I_\sigma)$).
\\ Let us approximate $\lambda([T_{n+1}]+I_{n+1})$: \begin{align*} \lambda([T_{n+1}]+I_{n+1})&\leq \lambda\Bigl(\bigcup\{I_\sigma+I_{n+1}: \sigma\in n^{\leq n}\}\Bigr)\leq\Sigma_{\sigma\in n^{\leq n}}(\lambda(I_\sigma)+\lambda(I_{n+1}))< \\ &< \Sigma_{\sigma\in n^{\leq n}}(\frac{\epsilon_{n+1}}{(n+1)^{n}}+\epsilon_{n+1}), \end{align*} and since the number of intervals $I_\sigma$ is $|n^{\leq n}|=\Sigma_{k=0}^{n}n^k\leq (n+1)^{n}$, we have: \begin{align*} \lambda([T_{n+1}]+I_{n+1})&\leq \Sigma_{k=0}^{n}n^k(\frac{\epsilon_{n+1}}{(n+1)^{n}}+\epsilon_{n+1})\leq (n+1)^{n}\frac{\epsilon_{n+1}}{(n+1)^{n}}+\Sigma_{k=0}^{n}n^k\epsilon_{n+1}= \\ &=\epsilon_{n+1}+\Sigma_{k=0}^{n}n^k\epsilon_{n+1}=(1+\Sigma_{k=0}^{n}n^k)\epsilon_{n+1}. \end{align*} \end{proof} \begin{remark}\label{Remark on freezing stem} In the above lemma, in the case of a Laver tree, we may additionally demand that $stem(T)=stem(\bigcap_{n\in\omega}T_n)$, provided $stem(T)$ is nonempty. \end{remark} \begin{proof} The major difference is at the first step of the induction. Instead of picking a suitable ``far enough'' node $\tau_\emptyset\in T$ such that $\lambda([\tau_\emptyset]+I_0)<2\lambda(I_0)$, we already restrict the choice of nodes at the stem level by picking an interval $I_\emptyset$ of measure $\lambda(I_\emptyset)<\lambda(I_0)$ such that the set \[ N(I_\emptyset)=\{stem(T)^\frown k\in T: [stem(T)^\frown k]\subseteq I_\emptyset\} \] is infinite. This can be done since $stem(T)\neq\emptyset$, so all the clopen sets $[stem(T)^\frown k]$, $k\in\omega$, are contained in a single interval. We take a Laver subtree $T_0$ of $T$ for which $[T_0]\subseteq I_\emptyset$ and $stem(T)=stem(T_0)$ (so all nodes extending $stem(T_0)$ come from $I_\emptyset$). Then we continue analogously to the proof of Lemma \ref{Fusion lemma for intervals}. \end{proof} \begin{lemma}\label{Miller fusion lemma for G delta} There exists a dense $G_\delta$ set $G$ such that for each Miller (resp.
Laver or complete Laver) subtree $T'\subseteq T$ such that $G+[T']\in\mathcal{N}$. \end{lemma} \begin{proof} Let $D=\{d_n: n\in\omega\}$ be a countable dense set and $G=\bigcap_{n\in\omega}\bigcup_{k>n}I_k$, where $I_k$ is an interval with center $d_k$ and $\lambda(I_k)<\frac{1}{k^{k-1}2^k}$. The proofs in the Miller and Laver cases are almost identical, so let $T$ be a Miller tree. By Lemma \ref{Fusion lemma for intervals} there is a Miller fusion sequence $(T_n)_{n\in\omega}$ such that \[ \lambda([T_n]+I_n)<(1+\Sigma_{k=0}^{n-1}(n-1)^k)\lambda(I_n)\leq n^{n-1}\frac{1}{n^{n-1}2^n}=\frac{1}{2^n}. \] $T'=\bigcap_{n\in\omega}T_n$ is a Miller tree contained in all the $T_n$'s, so we may replace $[T_n]$ with $[T']$ in the above formula and it still holds. Then for fixed $n\in\omega$: \[ \lambda\Bigl(\bigcup_{k>n}(I_k+[T'])\Bigr)= \lambda\Bigl(\bigcup_{k>n}([T']+I_k)\Bigr)\leq\Sigma_{k>n}\lambda([T']+I_k)\leq\Sigma_{k>n}\frac{1}{2^k}=\frac{1}{2^n}, \] so, given that $[T']+\bigcap_{n\in\omega}\bigcup_{k>n}I_k\subseteq \bigcap_{n\in\omega}\bigcup_{k>n}([T']+I_k)$, we have: \[ \lambda(G+[T'])\leq \lambda\Bigl(\bigcap_{n\in\omega}\bigcup_{k>n}([T']+I_k)\Bigr)\leq \lim_{n\rightarrow\infty}\frac{1}{2^n}=0. \] In the case of a complete Laver tree $T$ let us observe that $T=\bigcup_{n\in\omega}T_n$, where $T_n=\{\sigma\in T: (n)\subseteq\sigma\lor\sigma\subseteq (n)\}$ is a Laver tree with a nonempty stem. Let us notice that $[T]=\bigcup_{n\in\omega}[T_n]$. By Lemma \ref{Fusion lemma for intervals}, Remark \ref{Remark on freezing stem}, and the first part of the proof, we find for each (nonempty) $T_n$ a Laver subtree $T_n'$ which shares its stem with $T_n$ and for which we have: \[ [T_n']+G\in\mathcal{N}. \] Then $T'=\bigcup_{n\in\omega}T_n'$ is a complete Laver subtree of $T$ and: \[ [T']+G=[\bigcup_{n\in\omega}T_n']+G=\bigcup_{n\in\omega}[T_n']+G=\bigcup_{n\in\omega}([T_n']+G)\in\mathcal{N} \] as a countable union of null sets.
\end{proof} Before we proceed to the main theorem of this section, let us recall a generalized version of Rothberger's theorem (see \cite{Roth}). \begin{theorem}[Essentially Rothberger]\label{Rothberger theorem} Assume that a generalized Luzin set $L$ and a generalized Sierpiński set $S$ exist. Then, if $\kappa=\max\{|L|, |S|\}$ is a regular cardinal, $|L|=|S|=\kappa$. \end{theorem} \begin{proof} Assume that $\kappa=|L|>|S|$ and $\kappa$ is a regular cardinal. Let $M$ be a meager set of full measure (the Marczewski decomposition). Since $M^c$ is null and $S$ is a generalized Sierpiński set, we have $M+S=\mbb{R}$. Then \[ \kappa=|L|=|L\cap (M+S)|=|\bigcup_{s\in S}(L\cap(M+s))|<\kappa, \] by regularity of $\kappa$. In the case $\kappa=|S|>|L|$ the proof is almost the same. \end{proof} The following theorem extends the result obtained in \cite{MZ}. \begin{theorem} Let $\mathfrak{c}$ be a regular cardinal and $t_0\in\{s_0, m_0, l_0, cl_0\}$. Then for every generalized Luzin set $L$ and generalized Sierpiński set $S$ we have $L+S\in t_0$. \end{theorem} \begin{proof} Let $L$ and $S$ be a generalized Luzin set and a generalized Sierpiński set, respectively. If $|L|<\mathfrak{c}$ and $|S|<\mathfrak{c}$, then $L+S\in t_0$, since every set of cardinality less than $\mathfrak{c}$ belongs to $t_0$. So, without loss of generality (Theorem \ref{Rothberger theorem}), let us assume that $|L|=|S|=\mathfrak{c}$. \\ We will proceed with the proof in the case $t_0=m_0$; the other cases are almost identical. Let $T$ be a Miller tree. By Lemma \ref{Miller fusion lemma for G delta} let $G$ be a dense $G_\delta$ set and $T'\subseteq T$ a Miller tree such that $[T']+G\in \mathcal{N}$. Let $A=-G$ and $B=([T']+G)^c$. Then $[T']\subseteq(A+B)^c$. We will show that there is a Miller tree $T''\subseteq T'$ whose body is contained in $(L+S)^c$. We have: \begin{align*} L+S&=((L\cap A)\cup(L\cap A^{c}))+((S\cap B)\cup(S\cap B^{c})) \\ & = ((L\cap A)+(S\cap B))\cup((L\cap A)+(S\cap B^{c}))\cup \\ &\cup((L\cap A^{c})+(S\cap B))\cup((L\cap A^{c})+(S\cap B^{c})).
\end{align*} Now, $(L\cap A)+(S\cap B)\subseteq A+B$, and the sets $(L\cap A)+(S\cap B^{c})$, $(L\cap A^{c})+(S\cap B)$ and $(L\cap A^{c})+(S\cap B^{c})$ are generalized Luzin, generalized Sierpi\'nski and of size less than $\mathfrak{c}$, respectively, so their intersections with $[T']$ have cardinality less than $\mathfrak{c}$. It follows that indeed there exists a Miller tree $T''\subseteq T'$ such that $(L+S)\cap [T'']=\emptyset$, and therefore $L+S$ belongs to $m_0$. \end{proof} Let us remark that the assumption that $\mathfrak{c}$ is regular cannot be omitted, due to the following result (\cite{MZ}). \begin{theorem} It is consistent that there exist a generalized Luzin set $L$ and a generalized Sierpi{\'n}ski set $S$ such that $L+S=\mbb{R}^n$, and $\mathfrak{c}=\aleph_{\omega_1}$. \end{theorem} \section{Eventually different families and $t$-measurability} Two members $f,g\in\omega^\omega$ of the Baire space are \textit{eventually different} (briefly: e.d.) iff $f\cap g$ is a finite subset of $\omega\times\omega$. Eventually different families maximal with respect to inclusion are called \textit{m.e.d. families}. Every e.d. family is a meager subset of the Baire space. It is natural to ask whether the existence of m.e.d. families that are either $s$-measurable or $s$-nonmeasurable can be proven in $\ZFC$. It is relatively consistent with ZFC that there is a m.e.d. family $\mathcal{A}$ of cardinality smaller than $\mathfrak{c}$ (see \cite{Kunen}). In such a case $\mathcal{A}\in s_0$. On the other hand, there exists a perfect e.d. family, and therefore not all m.e.d. families are in $s_0$. The following two theorems answer this question positively. \begin{theorem}\label{nonmeasurable_mad} There exists an $s$-nonmeasurable m.e.d. family in the Baire space. \end{theorem} \begin{proof} Let us fix a perfect tree $T\subseteq \omega^{<\omega}$ such that $[T]$ is e.d. in $\omega^\omega$.
Let $\{ T_\alpha: \alpha<\mathfrak{c}\}$ be an enumeration of $\Sacks(T)$, the family of all perfect subtrees of $T$. By transfinite recursion we define $$ \{ (a_\alpha,d_\alpha,x_\alpha)\in [T]\times [T]\times \omega^\omega:\alpha<\mathfrak{c}\} $$ such that for any $\alpha<\mathfrak{c}$ we have: \begin{enumerate}[\hspace{0.5cm}(1)] \item $a_\alpha,d_\alpha\in [T_\alpha]$, \item $\{ a_\xi:\xi<\alpha\}\cap \{ d_\xi:\xi<\alpha\}=\emptyset$, \item $\{ a_\xi:\xi<\alpha\}\cup \{ x_\xi:\xi<\alpha\}$ is e.d., \item $\forall^\infty n\; x_\alpha(n)=d_\alpha(n)$ but $x_\alpha\ne d_\alpha$. \end{enumerate} Assume that we are at the step $\alpha<\mathfrak{c}$ of the construction and we have already defined the sequence: $$ \{ (a_\xi,d_\xi,x_\xi)\in [T]^2\times \omega^\omega:\xi<\alpha\}. $$ We can choose $a_\alpha,d_\alpha\in [T_\alpha]$ ($[T_\alpha]$ has cardinality $\mathfrak{c}$) which fulfill conditions $(1)$ and $(2)$. Then choose any $x_\alpha\in \omega^\omega$ distinct from $d_\alpha$ with $(\forall^\infty n)\; d_\alpha(n)=x_\alpha(n)$. Then $x_\alpha\in\omega^\omega\setminus [T]$ and \[ \{ a_\xi:\xi<\alpha\}\cup \{ x_\xi:\xi<\alpha\} \] forms an e.d. family in $\omega^\omega$. This completes the construction. \\ Now let us set $A_0 = \{ a_\alpha:\alpha < \mathfrak{c}\}\cup \{ x_\alpha:\alpha<\mathfrak{c}\}$ and let us extend it to a m.e.d. family $A$. It is easy to check that $A$ is the desired $s$-nonmeasurable m.e.d. family. \end{proof} In \cite{R1} it was shown that if $\mathfrak{d} = \omega_1$ then there exists an $s$-nonmeasurable m.e.d. family $\mathcal{A}$ and $\mathcal{A}'\in [\mathcal{A}]^{\omega_1}$ which is dominating in $\omega^\omega$. Here $s$-nonmeasurability can be replaced by $l$-, $m$- or $cl$-nonmeasurability. In the same paper it was proved that the following statement is relatively consistent with ZFC: ``$\omega_1 < \mathfrak{d}$ and there exists a $cl$-nonmeasurable m.e.d.
family $\mathcal{A}$ and a dominating family $\mathcal{A}'\subseteq \mathcal{A}$ of cardinality $\mathfrak{d}$''. The next theorem generalizes the result obtained in \cite{R1}. \begin{theorem}\label{d_l_mad} There exists a m.e.d. family $\mathcal{A}\subseteq \omega^\omega$ which is not $s$-, $l$- or $m$-measurable, with a dominating subfamily $\mathcal{D}\in [\mathcal{A}]^{\le \mathfrak{d}}$. \end{theorem} \begin{proof} By definition there is a dominating family $\mathcal{D}_0\subseteq \omega^\omega$ of size $\mathfrak{d}$. We will show that there is an a.d. dominating family $\mathcal{D}$ of the same size. Let $\mathcal{P} = \{ A_m\in [\omega]^\omega:\; m\in\omega \}$ be a partition of $\omega$ into infinite subsets. Let us construct a tree as follows: $T_{-1}=\{ \emptyset \}$ and $T_0=\{ \{(0,n)\}: n\in\omega\}$. Now assume that we have defined $T_n$ for a fixed $n\in\omega$; let us enumerate $T_n=\{ s_k: k\in\omega\}$, and for every $m\in\omega$ write $A_m=\{ k_{m, i}: i\in\omega\}$, where $(k_{m,i})_{i\in\omega}$ is an increasing sequence. Define $T_{n+1,m} = \{ s_m\cup \{(n+1, k_{m, i})\}:i\in\omega\}$, then let $T_{n+1}=\bigcup_{m\in\omega} T_{n+1,m}$ and finally $T=\bigcup_{n\in\omega\cup\{ -1\}} T_n$. It is easy to observe that $[T]$ forms an a.d. family in $\omega^\omega$. Now let us define an embedding $f:\mathcal{D}_0\to [T]$ as follows: pick an arbitrary element $d\in\mathcal{D}_0$, which is the union $\bigcup\{ d\upharpoonright n: n\in\omega\}$; assign to $d\upharpoonright 0$ the node $\emptyset\in T_{-1}$ and to $d\upharpoonright 1$ the node $t_0=\{ (0,d(0))\}\in T_0$. Now let us assume that to $d\upharpoonright n$ we have assigned a node $t_n\in T_n$.
Then there is a unique $m\in\omega$ such that $t_n=s_m$ in the enumeration of $T_n$; since $A_m=\{ k_{m, i}: i\in\omega\}$ is enumerated by the increasing sequence $(k_{m, i})_{i\in\omega}\in\omega^\omega$, we assign to $d\upharpoonright (n+2)$ the node $t_{n+1} = t_n\cup \{ (n+1,w)\}$, where $w=k_{m, d(n+1)}$ is at least $d(n+1)$. From the construction we see that $t_{n+1}\in T_{n+1}$ and $t_n\subseteq t_{n+1}$ for any $n\in\omega$. Now let $f(d)=\bigcup\{ t_n: n\in\omega\}\in [T]$. It is easy to see that $f$ is a one-to-one mapping and that $d\le f(d)$ for any $d\in \mathcal{D}_0$. Now let $\mathcal{D}=\{ 4f(d): d\in\mathcal{D}_0\}\subseteq (4\mathbb{N})^\omega$, which forms a dominating family in $\omega^\omega$ of size equal to $\mathfrak{d}=|\mathcal{D}_0|$. Now let us choose a.d. trees $S\subseteq (4\mathbb{N}+1)^{<\omega}$, $M\subseteq (4\mathbb{N}+2)^{<\omega}$ and $L\subseteq (4\mathbb{N}+3)^{<\omega}$, where $S$ is a perfect tree, $M$ is a Miller tree and $L$ is a Laver tree. Let us enumerate $\mbb{S}(S)=\{ S_\alpha: \alpha<\mathfrak{c}\}$, the family of all perfect subtrees of $S$, and analogously $\mbb{M}(M)=\{ M_\alpha:\alpha<\mathfrak{c}\}$ and $\mbb{L}(L)=\{ L_\alpha:\alpha<\mathfrak{c}\}$. By transfinite recursion let us define \[ \{ w_\alpha \in [S]^2\times \omega^\omega\times [M]^2\times \omega^\omega\times [L]^2\times \omega^\omega:\alpha<\mathfrak{c}\} \] where $w_\alpha=(a^s_\alpha,d^s_\alpha,x^s_\alpha,a^m_\alpha,d^m_\alpha,x^m_\alpha,a^l_\alpha,d^l_\alpha,x^l_\alpha)$ for any $\alpha<\mathfrak{c}$, and such that for any $\alpha<\mathfrak{c}$ we have: \begin{enumerate} \item $a^s_\alpha,d^s_\alpha\in [S_\alpha]$, \item $\{ a^s_\xi:\xi<\alpha\}\cap \{ d^s_\xi:\xi<\alpha\}=\emptyset$, \item $\{ a^s_\xi:\xi<\alpha\}\cup \{ x^s_\xi:\xi<\alpha\}$ is e.d., \item $\forall^\infty n\; x^s_\alpha(n)=d^s_\alpha(n)$ but $x^s_\alpha\ne d^s_\alpha$.
\item $a^m_\alpha,d^m_\alpha\in [M_\alpha]$, \item $\{ a^m_\xi:\xi<\alpha\}\cap \{ d^m_\xi:\xi<\alpha\}=\emptyset$, \item $\{ a^m_\xi:\xi<\alpha\}\cup \{ x^m_\xi:\xi<\alpha\}$ is e.d., \item $\forall^\infty n\; x^m_\alpha(n)=d^m_\alpha(n)$ but $x^m_\alpha\ne d^m_\alpha$. \item $a^l_\alpha,d^l_\alpha\in [L_\alpha]$, \item $\{ a^l_\xi:\xi<\alpha\}\cap \{ d^l_\xi:\xi<\alpha\}=\emptyset$, \item $\{ a^l_\xi:\xi<\alpha\}\cup \{ x^l_\xi:\xi<\alpha\}$ is e.d., \item $\forall^\infty n\; x^l_\alpha(n)=d^l_\alpha(n)$ but $x^l_\alpha\ne d^l_\alpha$. \end{enumerate} Now assume that we are at step $\alpha<\mathfrak{c}$ of the construction and we have a partial sequence \[ \{ w_\xi :\; \xi<\alpha\} \] of length at most $|\alpha|<\mathfrak{c}$. In the case of the perfect part we can choose $a^s_\alpha,d^s_\alpha\in [S_\alpha]$ ($[S_\alpha]$ has size $\mathfrak{c}$) which fulfill conditions (1) and (2). Then choose any $x^s_\alpha\in \omega^\omega$ different from $d^s_\alpha$ with $(\forall^\infty n)\; d^s_\alpha(n)=x^s_\alpha(n)$; then $x^s_\alpha\in\omega^\omega\setminus [S]$ and \[ \{ a^s_\xi:\xi \le \alpha\}\cup \{ x^s_\xi:\xi \le \alpha\} \] forms an e.d. family in $\omega^\omega$. In the same way we can choose the remaining coordinates of the tuple for the Miller and Laver trees. The construction is complete. Now let us set: \[ \mathcal{A}_s = \mathcal{D}\cup \{ a^s_\alpha:\alpha < \mathfrak{c}\}\cup \{ x^s_\alpha:\alpha<\mathfrak{c}\}, \] \[ \mathcal{A}_m = \mathcal{D}\cup \{ a^m_\alpha:\alpha < \mathfrak{c}\}\cup \{ x^m_\alpha:\alpha<\mathfrak{c}\} \] and \[ \mathcal{A}_l = \mathcal{D}\cup \{ a^l_\alpha:\alpha < \mathfrak{c}\}\cup \{ x^l_\alpha:\alpha<\mathfrak{c}\}. \] Let us extend the family $\mathcal{D}\cup\mathcal{A}_s\cup\mathcal{A}_m\cup\mathcal{A}_l$ to any m.e.d. family $\mathcal{A}$. It is easy to check that $\mathcal{A}$ is the required $s$-, $m$- and $l$-nonmeasurable m.e.d.
family in $\omega^\omega$ with a dominating subfamily of size $\mathfrak{d}$, which completes the proof. \end{proof}
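The embedding $f$ above becomes effective once the enumerations are fixed. The following Python sketch (an illustration only, not part of the proof) computes finite prefixes of $f(d)$ under one concrete choice of the data: the partition is realized as $A_m=\{\pi(m,i):i\in\omega\}$ for the Cantor pairing function $\pi$, and each level $T_{n+1}$ is enumerated so that the index of a node equals its last value. Both choices are ours; any choices satisfying the requirements of the proof work equally well.

```python
def pair(m, i):
    # Cantor pairing pi(m, i): a bijection omega^2 -> omega, strictly
    # increasing in i for fixed m, and pair(m, i) >= i always.
    return (m + i) * (m + i + 1) // 2 + i

def f_prefix(d):
    """Finite prefix of the branch f(d) of T.

    With A_m = {pair(m, i) : i in omega} and T_{n+1} enumerated by last
    value, the index of t_n in T_n equals its last value, so the next
    value of the branch is w = k_{m, d(n+1)} = pair(last value, d(n+1)).
    """
    t = [d[0]]
    for n in range(1, len(d)):
        t.append(pair(t[-1], d[n]))
    return t

d = [3, 1, 4, 1, 5]
e = [3, 1, 2, 7, 1]            # agrees with d up to level 1, then splits
fd, fe = f_prefix(d), f_prefix(e)
assert all(fd[n] >= d[n] for n in range(len(d)))      # d <= f(d)
assert all(fd[n] != fe[n] for n in range(2, len(d)))  # differ past the split
```

The two assertions illustrate exactly the two properties used in the proof: $f(d)$ dominates $d$ everywhere (since $k_{m,i}\ge i$), and distinct branches of $T$ are eventually different (since the sets $A_m$ are pairwise disjoint).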
\section{INTRODUCTION} \IEEEPARstart{T}{o} handle soft materials such as clothes while performing a complex task, a robot is required not only to conduct multiple subtasks properly but also to adapt to the deformation and displacement of the materials. These subjects are very important from both the scientific and the practical viewpoints. Soft materials have been used as objects in manipulation tasks, and a few researchers have studied such manipulation \cite{towel1}\cite{towel2}. Further, in factories and workshops, workers often perform complicated tasks by combining appropriate operations according to instructions from other workers and to the state of the object. To accomplish an extensive range of tasks, robots are also expected to perform new tasks by combining subtasks as parts of a task sequence. \par However, model-based methods have limitations in handling such objects. Given that humans design the motions corresponding to each task, it becomes increasingly difficult to design task motions suited to each situation as the number of situations and types of tasks increases. It is especially difficult to model flexible objects based on conventional control theory. A significant cost is incurred in modeling objects in simulators and in designing image processing for feature extraction \cite{towel1}\cite{towel2}\cite{SURF}. \par In addition, to switch and combine various subtasks, the method must be capable of motion branching. In some situations, the executable subtask cannot be determined uniquely from sensory signals alone, such as the camera image. In such situations, (i) other signals that instruct the subtasks and (ii) a switching system that accepts those signals are required. The robot receives an external instruction and switches between subtasks. However, as the task complexity increases, it becomes increasingly difficult to design all required patterns.
Furthermore, creating signals in strict accordance with the circumstances degrades model versatility. Thus, the method requires a switching system that incorporates both sensor- and instruction-driven switching to handle combinations of multiple types of signals. \par Learning-based methods are promising candidates for performing tasks that are difficult to design using model-based methods. They allow the automated acquisition of robotic motion skills. Recently, deep neural networks (DNNs) have been attracting considerable attention from the viewpoint of application to robot manipulation systems. A DNN can self-organize and extract useful low-dimensional features from a large amount of high-dimensional data for diverse applications \cite{DL1}\cite{DL2}\cite{DL3}. These features compensate for the limitations of model-based methods by autonomously extracting features of diverse environments and generalizing over them. Thus, various tasks can be learned with the same model because the need to strictly design task motions and object models is eliminated. Another advantage of DNNs is the ability to handle high-dimensional sensory signals without preprocessing, which enables the robot to generate and adjust tasks based on feedback derived from images captured in real time. \par The effectiveness of learning-based methods for designing dynamical systems for the manipulation of soft materials has been confirmed as well \cite{koma}. The method can generalize over the object position and shape without any design effort by training DNNs on the task experience of a robot. A task sequence comprising image and motion information can be embedded into a time-series DNN to serve as the dynamics of a sensory-motor sequence. In addition, the task operation can be repeated by acquiring the dynamics in a cyclic form. Further, this process of designing dynamical systems is applied herein to design a switching system.
A dynamical system represents a space of time-series changes in the dynamics of a robot motion based on the environmental information obtained from the camera image. In this study, we propose a method to change the transition in the dynamics to deal with motion branching. \par To design a dynamical system with a switching system, it is important to acquire the network dynamics in a switchable form for each subtask. This indicates that the dynamics must comprise a common section in which the internal state of the network is identical among multiple dynamics, since the network determines the subsequent output value depending on the internal state and the given input value. This section is called a ``point attractor.'' Based on the aforementioned framework, the authors of \cite{koma} and \cite{kase} created a point attractor that allows switching between various subtasks by adding constraints to ensure that the internal state remains similar during the initial and final states. In these studies, the robot was made to complete a long task sequence by switching the dynamics of the trained subtasks at the point attractor based on the sensory signal of the image from the mounted camera. We use this point attractor to switch between the subtask dynamics. \par We extend the work in \cite{kase} by proposing a method to design dynamical systems with point attractors that accept (i) instruction signals for instruction-driven switching. To form such point attractors, we incorporate (ii) an instruction phase into the task sequences, which divides the task-sequence dynamics into subtasks. In this study, only instruction signals that correspond to motion branching are used. These signals are provided as a condition to determine the next subtask in an ambiguous situation. The instruction phase is a part of the task sequence. We attempt to switch between subtask dynamics using a combination of the sensory and instruction signals at the aforementioned point attractor.
The proposed method uses two DNNs to handle these signals. One DNN autonomously extracts image features from raw images for sensory-driven switching, whereas the other designs a dynamical system that self-organizes the relations among the signals. To evaluate the switching and generalization abilities of the proposed model, we apply it to a cloth-folding task as an example of a soft material manipulation task that includes motion branching. \section{Related Works} To ensure that robots can perform more advanced tasks, it is important to design a framework and task sequences suited to acquiring the required dynamics. This is because, when multiple tasks are learned, the dynamics may combine in unintended ways even if the dynamics of each target task are optimized. \par In particular, deep reinforcement learning has been used for object manipulation owing to its precise motion generation for a specific task. Robots can learn a trajectory policy and the robot arm actuator torque signals based on the camera image \cite{DRL1}\cite{DRL2}. These approaches are promising from the viewpoint of performing a specific task with sufficient accuracy. Further, they can be extended to multi-task learning if the tasks involved are very similar \cite{DRL3}. However, because the optimized dynamics are limited to the set of learning objectives, a switching system for subtasks is not trained while searching for motion, and switching systems for separate subtasks have rarely been considered. \par In some cases, a dynamical system based on a recurrent neural network (RNN) is used to perform object manipulation tasks, with a model built for each task. RNNs use an internal memory to calculate the subsequent output from prior inputs. For robot manipulation, this characteristic of RNNs is effective for processing sequential information and maintaining robustness against noise.
Moreover, a framework combining multiple DNNs can be used to integrate multiple types of sensory information \cite{Noda}. Although such a framework can be used to capture signals for switching motions, the conventional method cannot deal with complicated combinations that involve motion branching. Motion branching occurs at a point where the internal state and sensory signals are identical across multiple subtask dynamics. At that point, the network cannot determine which dynamics to shift to by sensory-driven switching alone. For example, in \cite{kase}, a method for automatically switching subtasks only from visual feedback using the aforementioned framework was proposed; however, it cannot handle more complicated motion switching such as motion branching. \par RNNs have also been used to acquire specific languages of interactive systems in robotics through sensory-motor learning \cite{Heinrich}\cite{Yamada}; however, it is difficult to apply these methods to actual tasks. In \cite{Yamada}, an RNN that could self-organize cyclic attractors reflecting the semantic structure and represent interaction flows using its internal dynamics was used. However, this RNN employed object color centroids as visual information. Thus, the method cannot handle complicated objects in a real environment, such as soft materials, based on such information. In addition, the aforementioned method focused only on expressing the semantic relation acquired by a model, and it cannot be used to generate long sequential tasks for a robot with high-dimensional DoF. \par The key contribution of our method is that it designs sensory- and instruction-driven switching systems for motion branching in dynamical systems formed by DNNs. Further, it enables the execution of flexible object manipulation tasks by training robots with sensory-motor experience comprising camera images and motions of high-dimensional DoF. \par We design point attractors for each subtask dynamics.
To manipulate the point attractors with feedback from sensory and instruction signals, the model must be provided with these signals at that point. We divide the task sequences into multiple subtasks and incorporate a section (the instruction phase) in which simple vectors (the instruction signals) are provided from external sources. Further, we design the instruction phase such that the model receives almost the same input values at the beginning of this phase. By designing the instruction phase as a point at which the internal state of the network is identical, the dynamics of each subtask are designed as a trajectory attractor with a switchable common section. \par To combine the two different types of signals, we use a hierarchical RNN that can acquire long- and short-term dynamics. Because the instruction signals represent abstract motion instructions, the same signal can represent different subtask motions depending on the situation. Thus, the model must learn the task transition from the time series of sensory signals. If the model can learn the relations of these signals in different internal dynamics, the robot can switch to the appropriate subtasks. In our experiment, we use the ``direction of motion'' as the instruction signals. We show that the model acquires appropriate relations between these signals by visualizing the internal dynamics of the network during a task sequence. \section{METHOD} We propose a learning-based method consisting of two DNNs to achieve flexible object manipulation with switching of multiple subtasks at motion branching. The proposed model is depicted in Fig. 1.
It is constructed using the following two DNNs: (a) a convolutional autoencoder (CAE) for extracting, from high-dimensional raw images, low-dimensional image features that represent the relationship between the object and the robot arm during task manipulation, and (b) a multiple time-scale RNN (MTRNN) to design dynamical systems for sensor- and instruction-driven switching and to generate the next motion based on the previous image features and motions. Our main ideas are as follows: \begin{itemize} \item Design an ``instruction phase'' in the task sequence to form a dynamical system for sensor- and instruction-driven switching using DNNs. \item Switch subtask dynamics based on ``instruction signals'' that represent abstract motion instructions. \end{itemize} \par To handle instruction signals, we design the task sequence to contain an instruction phase such that the robot waits for instruction signals at the beginning of each subtask. In this phase, signals other than the instruction signals do not change; their values remain almost constant across all instruction phases. Because the signals provided at the beginning and at the end of each subtask are almost identical, the internal state of the network converges to a nearly fixed state in the instruction phase. At this point attractor, we switch subtasks with two kinds of signals: sensory signals from camera images representing the transition of the task sequence, and instruction signals explicitly indicating the direction of each subtask motion. We trained the MTRNN with a task sequence composed of multiple subtasks for switching based on sensory and instruction signals. The MTRNN embeds the instruction signals in the layer representing the fast-changing dynamics and the sensory signals in the layer representing the slow-changing dynamics.
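As a concrete illustration of this input design, the per-step MTRNN input can be assembled as below. This is a minimal sketch: the dimensions are those used in the experiment (12 joint angles, 2 gripper signals, 10 image features, 3 instruction dimensions), while the function and variable names are ours.

```python
import numpy as np

# One-hot instruction signals used in the experiment.
RIGHT, LEFT, UP = [1, 0, 0], [0, 1, 0], [0, 0, 1]

def build_input(joints, grippers, image_feat, instruction, in_instruction_phase):
    """Assemble one MTRNN input step: 12 joint angles, 2 gripper signals,
    10 CAE image features, and a 3-dim instruction signal that is set to
    zero outside the instruction phase."""
    sig = np.asarray(instruction, float) if in_instruction_phase else np.zeros(3)
    return np.concatenate([joints, grippers, image_feat, sig])

x = build_input(np.zeros(12), np.zeros(2), np.zeros(10), RIGHT, True)
assert x.shape == (27,)  # matches the 27 input-output neurons
```

Outside the instruction phase the last three components are zero, so only the sensory-motor part of the vector varies; during the instruction phase the one-hot vector selects the next subtask.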
\setlength\textfloatsep{5pt} \setlength\abovecaptionskip{0pt} \setlength\floatsep{0pt} \begin{figure}[htpb] \centering \includegraphics[width=7.9cm]{fig/fig1_3.eps} \caption{Overview of proposed method with two DNNs.} \end{figure} \begin{figure*}[thpb] \centering \includegraphics[width=15.5cm]{fig/fig2_2.eps} \caption{Overview of dynamics acquired by $C_f$ and $C_s$ neurons by designing instruction phase. }\end{figure*} \subsection{Convolutional Autoencoder} To properly learn the sensory-motor information, appropriate feature extraction from high-dimensional images is important. The relation between the robot arm and the object being manipulated must be reflected in the extracted image features, because it affects the generalization performance in generating tasks for unknown object positions or states. Therefore, it is desirable that the feature extractor be able to handle images of as high a resolution as possible. \par Our model extracts low-dimensional image features from the camera images by using a CAE. A CAE comprises a deep autoencoder with convolutional and deconvolutional layers \cite{CNN2}. The deep autoencoder, proposed by Hinton et al. \cite{AE}, is a sandglass-type multilayered fully connected neural network. By training the autoencoder to produce output values equal to the input values, feature vectors can be extracted at the central hidden layer. These encoded feature vectors represent the state of the input data and express high-dimensional input information using fewer dimensions. In our model, we applied convolutional and deconvolutional layers near the input and output layers, respectively. A convolutional layer can handle considerably more input dimensions than a fully connected DNN can, using fewer parameters \cite{DL1}.
This enhances the image processing performance by extracting feature maps at different levels, ranging from edges to parts of the image. Therefore, the CAE can compress high-dimensional inputs into low-dimensional image features. \subsection{Multiple Time-scale Recurrent Neural Network} In the proposed model, we implemented an MTRNN \cite{MTRNN} to learn the relation between the sensory-motor signals (joint angles, gripper signals, and image features) and the instruction signals. The MTRNN is a neuro-dynamical model used in cognitive robotics as a generation mechanism to predict the subsequent state from the current state. It is composed of three types of neurons: input-output neurons ($IO$), fast context neurons ($C_f$), and slow context neurons ($C_s$). Each type of neuron has a different time constant. Owing to the difference between these values, the dynamics of trained sequences are effectively memorized as combinations of fast-changing dynamics in the $C_f$ neurons, which have smaller time constants, and slow-changing dynamics in the $C_s$ neurons, which have larger time constants (Fig. 2). \par The propagation of the output of each neuron is governed by the time constants. In the forward dynamics, the internal value $u_i(t)$ of the $i$-th neuron at step $t$ is calculated as follows: \setlength{\abovedisplayskip}{4pt} \setlength{\belowdisplayskip}{4pt} \begin{eqnarray} u_i(t)=\left(1-\frac{1}{\tau_i}\right)u_i(t-1)+\frac{1}{\tau_i}\left[\sum_{j \in N}w_{ij}x_j(t-1)\right] \end{eqnarray} where $\tau_i$ is the time constant of the $i$-th neuron, $x_j(t)$ is the input value from the $j$-th neuron, $w_{ij}$ is the weight of the connection to the $i$-th neuron from the $j$-th neuron, and $N$ is the set of neurons connected to the $i$-th neuron. The respective activation values of the context neurons, $c_{i}(t)$, and of the output neurons, $y_{i}(t)$, are calculated using sigmoid functions.
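A minimal NumPy sketch of the forward update in Eq. (1) is given below. The toy layer sizes and the fully connected weight matrix are our assumptions for illustration; the experiment uses $IO$@27 ($\tau$:1), $C_f$@80 ($\tau$:5), and $C_s$@20 ($\tau$:70), as listed in Table II.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def mtrnn_step(u, x, W, tau):
    """Eq. (1): u_i(t) = (1 - 1/tau_i) u_i(t-1) + (1/tau_i) sum_j w_ij x_j(t-1)."""
    return (1.0 - 1.0 / tau) * u + (W @ x) / tau

# Toy sizes standing in for IO=27 (tau=1), Cf=80 (tau=5), Cs=20 (tau=70).
n_io, n_cf, n_cs = 4, 6, 3
n = n_io + n_cf + n_cs
tau = np.concatenate([np.full(n_io, 1.0), np.full(n_cf, 5.0), np.full(n_cs, 70.0)])
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(n, n))  # fully connected weights (our assumption)

u = rng.normal(size=n)   # internal values
x = sigmoid(u)           # activations fed back as the next-step input
for _ in range(20):
    u = mtrnn_step(u, x, W, tau)
    x = sigmoid(u)
```

With $\tau_i=1$ the leak term vanishes and the neuron is driven entirely by the current weighted input, whereas a $C_s$ neuron with $\tau_i=70$ changes by only a small fraction per step; this difference is what separates the fast- and slow-changing dynamics described above.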
During MTRNN training, the weights $w$ and the initial values of the slow context neurons, $C_s(0)$, are updated using the backpropagation through time algorithm \cite{BPTT}. \par From the viewpoint of designing the internal dynamics of a time-series DNN, it is important that each group of neurons learn different dynamics. In this paper, the $C_f$ and $C_s$ neurons are assigned to learn different signal information from the time-series sequence. In the MTRNN, the $C_f$ neurons obtain more information from the current context, while the $C_s$ neurons obtain more from the previous context. Using this characteristic, in the proposed method, the $C_f$ neurons represent dynamics that respond to temporary signals such as the instruction signals. Moreover, the $C_s$ neurons contain information about the transition of task sequences at the points where dynamics branching is involved. \subsection{Instruction Phase and Online Motion Generation} As mentioned in Section I, to switch the dynamics of the network, each sequence of subtask dynamics must contain a point attractor at which the internal states of the neurons are identical. In addition, the model must be provided with instruction signals at that point. In this study, we create that point by designing a task sequence that contains sections, at regular intervals, in which data input is limited. A visualization of our idea is shown in Fig. 2. \begin{figure*}[thpb] \centering \includegraphics[width=15.0cm]{fig/fig3_4.eps} \caption{ Before starting each subtask, the robot receives instruction signals, ``Right,'' ``Left,'' or ``Up.'' The robot manipulates six object positions. We designate the folding motions divided from the task sequence as subtasks A--E. }\end{figure*} \par In the proposed method, we design the instruction phase such that it divides the original task sequence into subtasks and waits for instruction signal input at the beginning of each subtask.
Our model represents the dynamics of each subtask as a trajectory attractor returning to a certain state and switches between subtasks to perform the task. We use trajectory attractors for smooth motion transitions. It is difficult to perform smooth and continuous manipulations if the internal state of the MTRNN differs at the beginning of each subtask, because the model predicts the next state while being strongly affected by past context information. \par In the instruction phase, the instruction signals, which are simple vectors, are provided to the MTRNN in the same way as the other signals. The input values of the instruction signals are zero in phases other than the instruction phase. The instruction signals are not designed individually for each subtask because they represent the motion ``instruction.'' For example, in Fig. 2, the same instruction signal represents different motions depending on the transition of the task sequence. Therefore, the model must memorize the combination of sensory and instruction signals. \par During the instruction phase, the robot maintains a certain position. This is important for restricting the input of motion information and smoothing the transitions between subtasks. After the instruction signals are provided, the task sequence transitions to the behavior phase, in which the robot performs the subtask. Then, the robot returns to a certain position to transition to the instruction phase of the next subtask. \par To form a point attractor, in addition to the constraints on the internal state of the MTRNN \cite{kase}, we consider the input values at the beginning and the end of each subtask. As mentioned above, the motor and instruction signals have fixed values at these times. Because the state of the robot arm in the camera image is constant, the camera image in each instruction phase differs only in the shape of the manipulated object.
Moreover, most of the change in the camera image is caused by the manipulation performed by the robot arm. Thus, the model is provided with almost the same input values at the beginning and the end of a subtask. The instruction signals are distinguished from the other signals, since they are explicitly provided during the instruction phase. Because the $C_f$ neurons memorize fast-changing dynamics from the time series of input data, the internal states of the $C_f$ neurons converge to a certain state in the instruction phase and form a point attractor that accepts instruction signals. Moreover, they are strongly affected by the instruction signals. \par By contrast, the $C_s$ neurons, which express the long-term dynamics, can learn the transition process of the task operation through the sensory and motor signals. These neurons allow the model to learn the relationship between the two different types of signals used for switching: sensory signals representing the transitions between multiple subtasks, and instruction signals explicitly indicating the instructions for the subtask operation. By using these signals for dynamics switching, our model can create a switching system without strictly designing instruction signals for every situation. \par At the time of task execution, the robot generates motion while acquiring sensory and instruction signals online. When generating a motion, an image from the robot-mounted camera is provided to the CAE, which encodes it into an image feature vector. After combining this vector with the joint angles, gripper signals, and instruction signals, the result is provided to the MTRNN. Owing to the acquired dynamics representing the relationships of the sensory-motor information, the MTRNN predicts an appropriate output based on the real environment as observed through the visual information. The joint angles and gripper signals predicted by the MTRNN are provided to the robot as commands for the next position. The MTRNN receives signals as feedback at every time step. Each time step is about 0.15 s.
The model can adjust the robot motion in real time by repeating this process. In the instruction phase, the robot is commanded by the instruction signals. The $C_f$ neurons can switch their dynamics by accepting immediate input from the instruction signals, and the $C_s$ neurons can switch their dynamics based on combinations of sensory and instruction signals. \section{Experiment} To evaluate whether our model can complete flexible object manipulation tasks by switching subtasks based on sensory and instruction signals, we attempted to complete a garment-folding task with an industrial humanoid robot, Nextage \cite{Nextage}. In our experiment, Nextage was commanded to fold a short-sleeved shirt placed in front of it four times. \par Our model was evaluated from the perspectives of generalization and interaction ability by using it to make the robot execute a task sequence consisting of untrained subtask combinations and untrained object positions. During the trial task, the robot acquired sensory and instruction signals for switching subtask motions. By visualizing the internal dynamics of the network, we examined the influence of the instruction phase on the dynamical system. \subsection{Design of Task Motion} The target task consists of the following five subtasks, and the robot executed the task by switching among them. This task sequence is represented visually in Fig. 3. The robot folded the garment four times. The garment-folding task involves motion branching because the place where the robot folds the garment changes according to the instruction signals. The instruction phase was designed at the beginning of each subtask, and the model was provided with instruction signals. Each instruction signal corresponded to a folding direction: ``Right,'' ``Left,'' or ``Up.'' The task sequence has several patterns because each instruction phase allows several possible directions (Table I).
In the designed task, the second and third subtasks are uniquely determined even if there is no instruction signal. Because the model is provided with instruction signals in every instruction phase in our experiment, it is expected that the dynamics execute only instruction-driven switching without sensor-driven switching. \par Because the subtask represented by an instruction signal changes depending on the situation, to complete the target task, the robot must appropriately switch among subtasks based on both sensory and instruction signals. The instruction signals are not designed individually for each subtask; they contain an abstract instruction about the folding direction. Therefore, in some cases, it is not possible to specify the subtask using the instruction signals only. For example, in our experiment, the subtask behavior indicated by the instruction signal ``Right'' differs between subtask A and subtask D. Therefore, depending on the task progress, the robot is required to perform sensory- and instruction-driven switching. \par The purpose of this experiment is to verify whether the model can switch the subtask at the branching point. We prepared the minimum required combinations of training task sequences. The only difference between test pattern 4 and training pattern 3 is the last subtask; however, these patterns are treated as different tasks, since the training sequences for the MTRNN are series of subtasks. If the test pattern can be executed, this shows that subtasks can be extracted and combined without training all possible combinations of subtasks.
\begin{table}[hbtp] \centering \begin{tabular}{c||c|c|c|c} \multicolumn{5}{c}{TABLE I: Task Sequence Patterns} \\ \multicolumn{5}{c}{} \\ \hline Pattern 1 & subtask A & subtask B & subtask C & subtask D \\ (train) & Right & Left & Up & Right \\ \hline Pattern 2 & subtask A & subtask B & subtask C & subtask E \\ (train) & Right & Left & Up & Left \\ \hline Pattern 3 & subtask B & subtask A & subtask C & subtask D \\ (train) & Left & Right & Up & Right \\ \hline Pattern 4 & subtask B & subtask A & subtask C & subtask E \\ (test) & Left & Right & Up & Left \\ \hline \end{tabular} \begin{flushleft} \ \ \ \ subtask A: fold the left sleeve of the garment toward the right. \\ \ \ \ \ subtask B: fold the right sleeve of the garment toward the left. \\ \ \ \ \ subtask C: fold the garment in half from bottom to top. \\ \ \ \ \ subtask D: fold the garment in half toward the right. \\ \ \ \ \ subtask E: fold the garment in half toward the left. \\ \end{flushleft} \end{table} \subsection{Experimental Setup} We performed the experiments to evaluate the results from two viewpoints, namely, ``interaction ability'' and ``generalization ability,'' as described below. Interaction ability indicates whether the sensory and instruction signals can switch the dynamics of the model appropriately. Generalization ability indicates whether our learning-based method can be generalized to flexible object manipulation tasks. {\bf 1. Interaction Ability:} If, after being provided with instruction signals, the model can generate untrained task sequences by switching the subtasks of the trained task sequences, it can be said that the model can switch among subtasks according to the instruction signals. We trained the model with patterns 1--3, as listed in Table I, as the training task sequences. The robot executed pattern 4 as the test task sequence.
\par In addition, because three types of instruction signals are used for the five types of subtasks in the test sequence (pattern 4), the model must integrate sensory and instruction signals to combine the subtasks appropriately. In our experiment, we examine whether the hierarchical structure of the MTRNN can acquire dynamics that represent sensory and instruction signals for the intended motion switching. \par {\bf 2. Generalization Ability:} If the model can perform a given task for an untrained object position, we can say that the model generalizes well. During training, the model learned the task at four object positions for each training task sequence. The test task sequences were generated by placing the object at six positions, including two positions between the trained object positions, as shown in Fig. 3. Hence, we trained our model with 12 patterns (three training patterns $\times$ four object positions). \subsection{Training and Model Setup} We trained the proposed model on the training task sequences acquired by operating the robot via motion capture. After completing a subtask, the robot automatically returns to the position it was in at the beginning of the subtask and then starts the next subtask. Therefore, the motion of each subtask differs slightly, but all subtasks share the same initial and final position. We smoothed the human-operated motions as a pre-processing step to train the model effectively. Each subtask in a task sequence comprises 152 steps, consisting of an instruction phase (20 steps) and a behavior phase (132 steps). The time required to complete the task is approximately 87 s. A training pattern, as described in this paper, is simply a sequence of subtasks.
While training the model, we increased the size of the training set by data augmentation, adding Gaussian noise and color augmentation to improve the robustness of the CAE and to provide a sufficient number of training samples to prevent overfitting. In addition, batch normalization was used in the CAE to improve learning performance. \par We set the parameters of the proposed model to learn the acquired training task sequences. The robot has two non-backdrivable six-DoF arms with grippers. It captures 112$\times$112 px RGB images with the mounted camera (37,632 dimensions in total). The CAE extracts 10-dimensional image features from the raw images. The instruction signal values are [1,0,0], [0,1,0], and [0,0,1] for Right, Left, and Up, respectively. Therefore, the MTRNN has 27 input and output neurons. Both the CAE and the MTRNN were trained with the mean squared error (MSE) loss and the Adam optimizer \cite{adam}. The detailed parameters of the CAE and MTRNN are listed in Table II. We searched for these parameters by trial and error and chose the set that yielded the best results.
\begin{table}[thbp]
\centering
\begin{tabular}{cc}
\multicolumn{2}{c}{TABLE II: Structure of Networks} \\
\multicolumn{2}{c}{} \\
\hline
Network & Dims \\
\hline \hline
CAE* & input@3chs - conv@64chs - conv@32chs - \\
& conv@16chs - full@1000 - full@10 - \\
& full@1000 - deconv@16chs - deconv@32chs - \\
& deconv@64chs - output@3chs \\
\hline
MTRNN & $IO$@27($\tau$:1) - $Cf$@80($\tau$:5) - $Cs$@20($\tau$:70) \\
& $IO$(Joint angles:12, Grippers:2, \\
& Image features:10, Instruction signals:3) \\
\hline
\end{tabular}
\begin{center}
* all conv and deconv filters have stride 2, padding 1 \\
\end{center}
\end{table}
\section{Results and Discussion}
\subsection{Generation of the Garment-Folding Task}
\begin{figure}[htpb] \centering \includegraphics[width=6.35cm]{fig/fig4_5.eps} \caption{ Generated test sequences (pattern 4, object position 2).
The top part of the figure shows the motor signals of the right arm (a) and left arm (b). The broken lines indicate the generated output values, and the solid lines indicate the correct values. The bottom part shows the instruction signals provided to the model (c). }\end{figure} First, we verified the performance of the trained model through online generation. Our model shows some extent of generalization to untrained object positions. We generated untrained sequences for each object position. As an example, the generated untrained pattern sequence (pattern 4) for an untrained object position (position 2) is shown in Fig. 4. In our method, the task execution speed was the same as that of the training sequences, because the model needs only forward calculation during online generation. This is promising compared to model-based methods \cite{towel1}\cite{towel2}, which tend to be slow when processing high-dimensional data. \par In this experiment, twenty-four trials were conducted over a range of untrained object positions, and the robot never failed. This 100\% success rate shows that the robot was able to switch subtasks properly according to the instruction signals. The MSE per joint angle per step, averaged over all untrained pattern sequences for untrained object positions, was 0.00331. This value corresponds to a 1.63 cm error in the arm-tip position when grasping the object. Although this is the worst result among all generated sequences, it is almost identical to the correct behavior. Hence, the model can successfully extract subtasks from the training sequences and combine them. \par In our experiment, even if the robot fails to perform a given task, it tries to continue the task, because the training dataset does not include motions that recover from task failures. However, there are some ways to address this problem within our framework.
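The online generation above requires only forward computation. As an illustration (not the authors' implementation), a leaky-integrator update consistent with the layer sizes and time constants listed in Table II ($IO$@27 with $\tau{=}1$, $C_f$@80 with $\tau{=}5$, $C_s$@20 with $\tau{=}70$) can be sketched as follows; the random weight matrix and fully connected layout are assumptions made for the sketch.

```python
import numpy as np

# Layer sizes and time constants from Table II: IO@27 (tau=1),
# Cf@80 (tau=5), Cs@20 (tau=70).
SIZES = [27, 80, 20]
TAUS = np.concatenate([np.full(27, 1.0), np.full(80, 5.0), np.full(20, 70.0)])
N = sum(SIZES)  # 127 neurons in total

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(N, N))  # illustrative random weights

def mtrnn_step(u, y, x_io):
    """One leaky-integrator update: fast neurons (small tau) track their
    input quickly, slow Cs neurons (tau=70) change only gradually."""
    y = y.copy()
    y[:27] = x_io  # clamp the IO neurons to the current input
    u = (1.0 - 1.0 / TAUS) * u + (1.0 / TAUS) * (W @ y)
    return u, np.tanh(u)

u, y = np.zeros(N), np.zeros(N)
for t in range(132):  # one behavior phase of 132 steps
    u, y = mtrnn_step(u, y, np.sin(0.1 * t) * np.ones(27))
```

The large $\tau$ of the $C_s$ units makes them integrate slowly, which is what allows the upper layer to hold sequence-level context across subtasks.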
One way is to train motions that recover from failures, such as returning the garment to its original position and resuming the folding motion. One advantage of our method is that the model can learn multiple motions without specifically designing new behaviors. However, it is difficult to learn all possible failure cases, so there is a limit to error recovery with this method. Another solution is to repeat a subtask when the robot fails to grip the object, because in this scenario the state of the object is almost the same as it was before the attempt: the internal state of the $C_f$ neurons returns to the attractor point, and the sensory signals are almost unchanged. Thus, the robot can repeat the subtask when the instruction signals are provided again. \par As mentioned above, the learning-based approach is effective for object manipulation tasks that are difficult to design by hand. However, a larger training dataset is needed for generalization over a wider range.
\begin{figure}[thpb]
\centering
\includegraphics[width=5.9cm]{fig/fig5_5.eps}
\caption{ Average trajectory of the internal state of $C_f$ neurons. The gray space indicates the instruction phase. The crosses indicate the time at which provision of the instruction signals is initiated. }\end{figure}
\subsection{Switching Subtask Dynamics by Instruction Signals}
To confirm the interaction ability of our model, we performed principal component analysis (PCA) on the internal state of the $C_f$ neurons of the MTRNN (Fig. 5). We visualized the average trajectory of the untrained sequences (pattern 4) by projecting them onto the space spanned by the first and second principal components (PCs), whose contribution ratios were 40.3\% and 24.5\%, respectively. \par The dynamics of the subtasks were formed as trajectory attractors with branch points in the lower layer. After the instruction signals were provided, the dynamics of each subtask transitioned to the behavior phase.
Finally, all dynamics converged to the state representing the instruction phase. This indicates that point attractors consistent with the internal states of the trajectory attractor of each subtask were formed as intended. Therefore, the model can handle motion branching based on instruction-driven switching. \par Manipulating trajectory attractors embedded in the dynamical system through sensory-motor experience with a point attractor is potentially applicable to more complex tasks. In this experiment, only one switching phase was designed, but multiple point attractors could be designed by adding further switching phases. Moreover, more complex instruction signals, such as word vectors, could possibly be accepted. The results show that a complicated task can be expressed explicitly as a combination of simple motion primitives by designing the robot motion as multiple trajectory attractors in a dynamical system. \subsection{Integration of Sensory and Instruction Signals} To confirm whether the proposed model learns the transitions of the task sequences, and to check whether it can combine sensory and instruction signals, we visualized the average values of the internal states of the context layers ($C_f$ and $C_s$ neurons) for each subtask.
\begin{figure}[thpb]
\centering
\includegraphics[width=7.5cm]{fig/fig6_5.eps}
\caption{ Average value of internal states of context layers (PC1-PC2). Each point indicates a subtask B, A, C, or E. The left part of the figure shows the $C_f$ space, and the right part shows the $C_s$ space. }\end{figure}
\par Although the model acquired interaction ability, the $C_f$ neurons did not learn the relationship between sensory and instruction signals. The left part of Fig. 6 shows the average values of the internal states of the $C_f$ neurons. Each point represents a subtask of an untrained sequence for all object positions. These points were projected onto the same space as that shown in Fig. 5.
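The PCA projection used for Fig. 5 (and again for Fig. 6) can be reproduced with a plain SVD; the internal-state matrix below is synthetic random data, and only the procedure (center, project onto PC1-PC2, report contribution ratios) mirrors the analysis described here.

```python
import numpy as np

def pca_project(states, n_components=2):
    """Project rows of `states` (time x neurons) onto the leading PCs
    and return the projections plus the per-PC contribution ratios."""
    centered = states - states.mean(axis=0)
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt[:n_components].T
    ratios = (s ** 2) / np.sum(s ** 2)  # variance explained per component
    return proj, ratios[:n_components]

rng = np.random.default_rng(1)
states = rng.normal(size=(152, 80))  # stand-in for one sequence of Cf states
proj, ratios = pca_project(states)
```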
Regardless of the object position or shape, three clusters corresponding to the instruction signals can be seen in the space. Here, subtasks B and E are clustered in the same region despite corresponding to different motions. This means that the information from the instruction signals is embedded in the internal states of the $C_f$ neurons, so the dynamics respond immediately to the instruction signals; however, the $C_f$ neurons could not learn the subtle differences between the images of subtasks B and E. \par The $C_s$ neurons played the role of learning the task sequences from the sensory and motor signals, and our model then switched the dynamics using a combination of sensory and instruction signals. The right part of Fig. 6 shows the average values of the internal states of the $C_s$ neurons. The projected space is spanned by the first and second PCs, with contribution ratios of 49.0\% and 21.1\%, respectively. Four clusters corresponding to the sensory and motor signals appear in the space. The internal states of the $C_s$ neurons represent the entire transition process of the task sequence. This suggests that they learned the subtle differences between the images of different subtasks for sensory-driven switching. Therefore, the model can adapt to different situations based on visual feedback and on the different motions indicated by the instruction signals. \par In addition, subtasks A and C, which do not require instruction signals, project onto different clusters. Even though the model executed instruction-driven switching, it still recognized the camera image. Thus, the model can arbitrarily combine sensory- and instruction-driven switching. \par Our method hierarchically self-organized the relationship between sensory and instruction signals within the dynamics of each subtask and exhibited interaction ability. It can thereby acquire dynamical systems that perform both sensory- and instruction-driven switching.
In this study, we assumed that the instructor knows how to execute the task and always gives correct instruction signals (i.e., unilateral commands from the instructor to the robot, without considering interaction). To handle more complicated instruction signals, it may be necessary to assume a probabilistic model that allows mutual feedback to change the instruction content. \section{Conclusion} We applied an RNN that can accept instruction signals to a garment-folding task consisting of five subtasks, switching the dynamics of each subtask at the points of motion branching. In the proposed method, we designed trajectory attractors whose instruction phase is a point attractor, so that the dynamics of the subtasks are acquired in a switchable form. We verified the generalization ability of the method, as well as its interaction ability at the motion branching points, by performing tasks with untrained object positions and by visualizing the internal states of the network, respectively. The results showed that, by applying the proposed method to a robot, we could successfully acquire the relationship between sensory and instruction signals in the hierarchical structure of an RNN and complete a target task by switching the associated subtasks interactively. \par In future work, we would like to increase the variation and complexity of the task sequences, and subsequently increase the variety of instruction signals.
\section{Background} There has been extensive research on hand gesture recognition systems. \cite{Pavlovic_survey}, \cite{Mitra_survey}, and \cite{Rautaray_survey} provide excellent surveys of vision-based gesture recognition systems. Convolutional neural networks (CNNs), such as in \cite{Molchanov_2015_CVPR_Workshops}, and recurrent neural networks (RNNs), such as in \cite{Cui_2017_CVPR}\cite{Cao_2017_ICCV}, have further pushed the boundary of gesture recognition results. Unfortunately, none of them has shown real-time performance on mobile devices. Building large and diverse datasets for hand gestures remains challenging. Existing gesture datasets, such as those from \cite{car-app}, have fewer than 10,000 annotated frames. The datasets closest to our work are EgoFingers \cite{egofinger}, where 93,729 frames are labeled, and EgoGesture \cite{Cao_2017_ICCV}, where 3 million frames have gesture labels but no bounding boxes. Compared to these datasets, ours contains 406,581 frames with both gesture labels and bounding boxes. \section{Conclusion} In this work, we present a mobile egocentric gesture recognition pipeline. We built a mobile mixed-reality data capture tool with which we can automatically annotate gestures and bounding box locations. We created the largest-to-date egocentric gesture and bounding box dataset. We trained a neural network based on the TensorFlow Object Detection API \cite{tf-object} and achieved 76.41\% precision and real-time performance on mobile devices. As future work, our \textit{label as you go} approach can be adapted to other data collection tasks, such as keypoint and segmentation mask annotation. It can also be deployed on smartphones, where users could be asked to move their phone so that the object in the viewfinder fits into the rendered target. \section{Dataset} \label{dataset} \subsection{\textit{Label As You Go}} In order to scale data collection, we utilize mobile mixed reality headsets to collect automatically labeled data.
Images in our dataset are labeled by the subjects as the images are recorded, instead of by annotators after the fact. We used a Daydream View headset, a Google Pixel XL smartphone, and a monochrome, world-facing USB camera connected to the phone. In the headset, users see a digital video passthrough of the outward-facing camera, and hence see the real world while in VR. On top of the video passthrough, we overlay a bounding box target on each frame in camera image space. For each gesture class, subjects are instructed to pose the requested gesture and fit their hands tightly into the rendered bounding box target. In the mixed reality setting, this task is accomplished through natural hand-eye coordination. We vary the location of the bounding box to increase the coverage of the dataset. To further reduce collection time, we animate the bounding box target along a pre-defined zigzag trajectory that sweeps across the whole frame. As the trajectory is predictable and easy to remember, the subjects are able to follow the box even as it moves to a new location. Each subject participating in data collection was asked to pose 4 gestures on each hand: $\texttt{Thumbs\_Press}$, $\texttt{Thumbs\_Up}$, $\texttt{Thumbs\_Down}$, and $\texttt{Peace}$. For each gesture, we run 3 sequences of trajectories. The bounding box size stays the same within a single sequence but varies from sequence to sequence. We provide a clicker to the subjects to signal the start of each data collection sequence.
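The pre-defined zigzag sweep of the target box can be sketched as follows; the frame size, box size, and grid density here are hypothetical, and only the predictable left-right sweep that steps down the frame comes from the text.

```python
def zigzag_targets(frame_w=640, frame_h=480, box=100, cols=5, rows=4):
    """Return top-left corners of the target box: left-to-right on even
    rows, right-to-left on odd rows, stepping down the frame (a zigzag)."""
    xs = [int(i * (frame_w - box) / (cols - 1)) for i in range(cols)]
    ys = [int(j * (frame_h - box) / (rows - 1)) for j in range(rows)]
    path = []
    for j, y in enumerate(ys):
        row = xs if j % 2 == 0 else xs[::-1]  # reverse direction each row
        path.extend((x, y) for x in row)
    return path

path = zigzag_targets()  # predictable target positions for one sweep
```

Because consecutive targets share either a row or a column, subjects can anticipate the next box position, which is what keeps the labeling task fast.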
\begin{table}[t]
\centering
\begin{tabular}{lllll}
\toprule
& \makecell[c]{$\texttt{Thumbs\_Press}$} & \makecell[c]{$\texttt{Thumbs\_Up}$} & \makecell[c]{$\texttt{Thumbs\_Down}$} & \makecell[c]{$\texttt{Peace}$} \\
\midrule
& \includegraphics[width=2.4cm,height=1.8cm]{press} & \includegraphics[width=2.4cm,height=1.8cm]{thumbs_up} & \includegraphics[width=2.4cm,height=1.8cm]{thumbs_down} & \includegraphics[width=2.4cm,height=1.8cm]{peace} \\
\midrule
\makecell{\# of Frames \\ (\% of total)} & \makecell{113206 \\ (27.8\%)} & \makecell{120716 \\ (29.7\%)} & \makecell{55844 \\ (13.7\%)} & \makecell{116815 \\ (28.7\%)} \\
\bottomrule
\end{tabular}
\vspace{-5pt}
\caption{Sample images and gesture distribution in our dataset}
\vspace{-18pt}
\label{gesture-table}
\end{table}
\begin{figure}[t]
\vspace{-10pt}
\centering
\subfloat[]{\includegraphics[height=3cm]{bbox_heatmap_no_outliers} \label{fig:a}}\hspace*{-2em}
\subfloat[]{\includegraphics[height=3cm]{bbox_width_hist_px} \label{fig:b}}\hspace*{-2em}
\subfloat[]{\includegraphics[height=3cm]{pixel_intensity} \label{fig:c}}\hspace*{-2em}
\subfloat[]{\includegraphics[height=3cm]{pixel_intensity_bbox} \label{fig:d}}\hspace*{-2em}
\vspace{-5pt}
\caption{Image statistics of the dataset. \protect\subref{fig:a} Distribution of bounding boxes; \protect\subref{fig:b} Histogram of bounding box sizes; \protect\subref{fig:c} Histogram of pixel intensity in all images; \protect\subref{fig:d} Histogram of pixel intensity inside all bounding boxes.}
\label{image-stats}
\vspace{-15pt}
\end{figure}
\subsection{Dataset Details}
With the \textit{label as you go} approach, we built a dataset of 406,581 frames of egocentric hand gesture data. The full dataset creation process took only two days. Each frame is labeled with a gesture class and a hand bounding box. Our dataset contains data from 33 subjects and 4 gesture classes on each hand. Since the mixed reality setup is mobile, we were able to collect data in different locations.
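The per-gesture shares in Table \ref{gesture-table} follow directly from the frame counts; a quick check, using only numbers stated in this section:

```python
# Per-gesture frame counts from the gesture distribution table.
counts = {
    "Thumbs_Press": 113206,
    "Thumbs_Up": 120716,
    "Thumbs_Down": 55844,
    "Peace": 116815,
}
total = sum(counts.values())  # matches the 406,581 total frames reported
shares = {g: round(100.0 * n / total, 1) for g, n in counts.items()}
```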
As a result, we have 30 scenes under varying lighting conditions in the dataset. Table \ref{gesture-table} gives the full breakdown of the dataset, and Figure \ref{image-stats} shows its image statistics. \section{Applications to Other Data Collection Tasks} Our \textit{label as you go} approach can be adapted to other data collection tasks. Below are a few potential extensions of our approach: \begin{itemize}[nosep] \item \textbf{Keypoint Annotation}: We could ask users to follow point trajectories with specific keypoints on tracked objects. For example, we could ask users to follow a trajectory with their fingers to create a fingertip dataset. \item \textbf{Segmentation Mask Annotation}: We could utilize synthetically rendered masks of objects in different poses and ask users to align them to real-life objects with a smartphone app. This could potentially help generate much larger segmentation mask datasets. \end{itemize} Note that our approach is not limited to the mobile mixed reality setup. Mixed reality provides natural \textit{hand}-eye coordination and is particularly effective when gathering hand datasets. The same approach can also be adapted and deployed on regular smartphones, where users could be asked to move their phone so that the object in the viewfinder fits into the rendered target. \section{Introduction} Mobile virtual reality (VR) head-mounted displays (HMDs), such as Daydream and GearVR, have made VR more accessible. Making users believe that they can interact with the virtual environment is critical to immersion. Since people interact with the real environment mostly with their hands, we study how to bring hand presence to VR. In this work, we focus on hand gesture detection and localization. Our goal is to reliably recognize and localize hand gestures in real time on mobile HMD systems.
There are two main challenges to this problem: \begin{enumerate}[nosep] \item There are limited datasets available on egocentric hand gestures and bounding boxes. \item It is challenging for high-capacity machine learning models to run at interactive framerates on mobile devices. \end{enumerate} For the first challenge, we propose to utilize a mobile mixed reality headset as a tool to collect data and automatically label bounding boxes. With this method, we collected a large dataset of 33 people in 30 different scenes, with a total of 406,581 annotated frames. To download the dataset, please visit \url{https://sites.google.com/view/hmd-gesture-dataset}. For the second challenge, we trained a neural network based on the TensorFlow Object Detection API \cite{tf-object}. The network uses MobileNet \cite{mobilenet} as the feature extractor and an SSD head \cite{SSD} to generate multibox predictions. When running on mobile, one forward pass of our model takes 31.85 milliseconds on one mobile CPU core, achieving real-time performance. \section{Mobile Object Detection Model for Gesture Recognition} \subsection{MobileNet SSD Architecture} We trained a gesture recognition CNN based on the TensorFlow Object Detection API \cite{tf-object}. It is conceptually composed of two parts: a MobileNet \cite{mobilenet} feature extractor to produce feature maps, and an SSD \cite{SSD} multibox detector to predict bounding box locations and gesture labels (Figure \ref{ssd-figure}). For each anchor box in the SSD head, the model predicts 4 offset values of the bounding box and 9 class labels (4 distinct gestures multiplexed with either the \verb+left+ or \verb+right+ hand, plus one $\texttt{None}$ class). We use a cross-entropy loss for classification and the smooth $\textit{L}_1$ loss from \cite{fast-rcnn} for bounding box localization. We add the two losses together as the final loss function.
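A minimal sketch of the combined objective just described (softmax cross-entropy for the 9-way label plus smooth $L_1$ for the 4 box offsets); the toy tensors are illustrative, and this is not the authors' training code:

```python
import numpy as np

def smooth_l1(pred, target):
    """Smooth L1 from Fast R-CNN: quadratic below 1, linear above."""
    d = np.abs(pred - target)
    return float(np.where(d < 1.0, 0.5 * d ** 2, d - 0.5).sum())

def cross_entropy(logits, label):
    """Softmax cross-entropy for one anchor's class prediction."""
    z = logits - logits.max()              # stabilize the softmax
    log_probs = z - np.log(np.exp(z).sum())
    return float(-log_probs[label])

logits = np.zeros(9)                        # 4 gestures x {left, right} + None
box_pred = np.array([0.1, -0.2, 0.0, 0.3])  # 4 predicted box offsets
box_true = np.zeros(4)
loss = cross_entropy(logits, 0) + smooth_l1(box_pred, box_true)
```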
\begin{figure}[h]
\centerline{\includegraphics[width=14cm]{mobilenet-ssd}}
\vspace{-5pt}
\caption{Illustration of the MobileNet SSD model.}
\label{ssd-figure}
\vspace{-5pt}
\end{figure}
After model inference, we pick the bounding box proposal with the highest label-prediction confidence, as we expect only one label per image. The SSD model natively supports multi-class and multi-instance prediction as well. \subsection{Experiments and Results} \label{section:augmentation} Our models are trained with TensorFlow. Our training set contains 342,227 frames, and our evaluation set contains 64,354 frames. The same person only appears in one split. For training, we use a batch size of 32. Images are 320$\times$240, with a single color channel. We apply random data augmentation to the training dataset, including brightness and contrast perturbation and random crop and padding. On our evaluation dataset, we test the top-confidence bounding box prediction against the ground-truth label. Detailed results of the model performance can be found in Table \ref{results-table}. \subsection{Mobile Inference} The trained TensorFlow models can be exported to run on mobile devices. We benchmarked them on the SnapDragon 821 chipset, which has been common among Android devices since 2016. All results in Table \ref{results-table} reflect model inference time on \textbf{one} big CPU core on device.
\begin{table}[t]
\centering
\begin{tabular}{lllll}
\toprule
Model & Depth multiplier & Precision & \makecell[l]{Inference latency\\(ms)} & \makecell[l]{Total latency\\(ms)} \\
\midrule
MobileNetSSD-25\% & 0.25 & 76.15\% & 31.8504 & 36.1658 \\
MobileNetSSD-50\% & 0.5 & 77.43\% & 77.4913 & 81.6922 \\
MobileNetSSD-100\% & 1 & 80.94\% & 265.2109 & 269.4694 \\
\bottomrule
\end{tabular}
\vspace{-5pt}
\caption{Results of model performance. Total latency includes inference, pre- and post-processing.}
\vspace{-15pt}
\label{results-table}
\end{table}
In our application, we chose the MobileNet-0.25 model.
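The sustainable framerate follows from the total latency in Table \ref{results-table} by a simple $1000/\text{latency}$ conversion (ignoring any pipelining); this back-of-the-envelope check is ours, not the authors':

```python
def fps(latency_ms):
    """Frames per second sustainable at a given per-frame latency."""
    return 1000.0 / latency_ms

inference_fps = fps(31.8504)  # MobileNetSSD-25%, inference only
total_fps = fps(36.1658)      # including pre- and post-processing
```

These come out near the roughly 30 fps inference and 27 fps end-to-end figures reported for the deployed model.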
Our inference framerate is at 30 frames per second on device. Accounting for pre- and post-processing steps, the whole gesture detection pipeline can run at 27 fps sustainably.
\section{Introduction}\label{sec:intro} Statistical process control (SPC) applies statistical methods to the monitoring and control of a process in order to detect abnormal variations of the process. One of the most popular SPC tools is the control chart, which plots a statistic that measures a feature of the process over time. When the charting statistic is well within the predetermined control limits, it indicates that the process is in a state of statistical control (hereafter, in-control). When the charting statistic goes beyond the control limits, it triggers an alarm to indicate that the process is likely experiencing abnormal variations (hereafter, out-of-control). Control charts are easy to visualize and interpret; therefore, they have been successfully applied across many different industries, including fraud detection, disease outbreak surveillance, network traffic monitoring, and others (see, for example, Tsung et al. (2007), Woodall (2006), and Jeske et al. (2009)). In the SPC literature, there exist both parametric and nonparametric control charts. Parametric control charts assume a particular parametric distribution for the process. In practice, it is often not easy to identify a parametric distribution appropriate for a specific application, and if the distribution is not specified correctly, parametric control charts may not perform as expected. In contrast, nonparametric control charts do not require specifying a particular parametric distribution for the process and remain valid regardless of the true underlying distribution. Therefore, nonparametric control charts are more desirable in many real-world applications. There are many nonparametric control charts in the literature; we refer to Chakraborti, van der Laan and Bakir (2001) and Chapter 8 of Qiu (2014) for an overview of this topic. Most of the existing nonparametric control charts were developed to detect location changes only.
However, in practical situations it is usually unknown in advance what kind of changes the process will experience. Therefore, it is more desirable to develop a nonparametric control chart that can detect arbitrary distributional changes. For this purpose, Zou and Tsung (2010) proposed an EWMA chart based on a powerful goodness-of-fit test. However, according to the simulation studies conducted in Ross and Adams (2012), this EWMA chart is only sensitive in detecting scale increases and is not as powerful as its competitors in detecting other types of distributional changes, including location shifts. In addition, their proposed EWMA chart involves a weight parameter $\lambda$, which practitioners need to pre-specify, and different choices of $\lambda$ affect the detection power of the resulting control chart. In general, an EWMA chart with smaller $\lambda$ is more powerful for detecting smaller changes, and one with larger $\lambda$ is more powerful for detecting larger changes; however, in practice it is rarely known in advance what kind of changes will occur. To overcome the above limitations, Ross and Adams (2012) proposed two control charts based on the change-point detection (CPD) framework. Their proposed CPD charts are free of tuning parameters and are shown to have better overall performance than Zou and Tsung's EWMA chart for detecting different distributional changes. However, like most CPD charts, their charts are computationally very intensive, since at each time point all possible change-point scenarios must be considered. To detect arbitrary distributional changes, Qiu and Li (2011) also proposed two nonparametric control charts by first converting the nonparametric problem into a categorical data analysis problem through data categorization and then developing CUSUM charts for monitoring the resulting categorical data.
The idea of developing nonparametric control charts through data categorization is very innovative, since it allows many existing categorical data analysis methods to be adopted for developing new nonparametric tools in the SPC field. However, similar to Zou and Tsung's EWMA chart above, the two CUSUM charts proposed by Qiu and Li (2011) involve a tuning parameter $k$, which needs to be pre-specified. In the parametric setting, the optimal choice of $k$ in the CUSUM statistic is usually linked to the out-of-control distribution, so practitioners have some general guidance on how to choose $k$. Unfortunately, in the nonparametric CUSUM statistics proposed by Qiu and Li (2011), it is not clear how $k$ is linked to the out-of-control distribution. Because of this, it is not even clear what the right range for the value of $k$ is. In their paper, they considered $k=0.001$, 0.005, 0.01, or 0.05, values that seem much smaller than those commonly used in other CUSUM statistics. According to a simulation study we conducted, the in-control run lengths of their CUSUM statistics with those small values of $k$ have much larger variability than we usually expect from regular CUSUM statistics; it seems that larger values of $k$ should be used instead. But again, it is not clear which value of $k$ practitioners should use in practice. Furthermore, based on our simulation studies, control charts based directly on the categorical data after categorization are usually less efficient than other rank-based nonparametric control charts, due to the loss of the ordering information in the original data. To address all the above limitations, in this paper we propose a new nonparametric control chart for detecting arbitrary distributional changes. More specifically, we first follow the above data categorization idea to develop a new CUSUM chart for monitoring the resulting categorical data.
The CUSUM chart we propose is more efficient than the ones used in Qiu and Li (2011) for detecting different distributional changes, since it is capable of incorporating the ordering information of the original data. To implement the new CUSUM chart, we need to specify the out-of-control distribution, which is rarely known in advance in practice. To overcome this difficulty, we borrow the idea proposed in Lorden and Pollak (2008) and develop an adaptive version of the proposed CUSUM chart. Our adaptive CUSUM chart does not require specification of the out-of-control distribution; instead, it uses the most recent data to estimate it. The resulting adaptive CUSUM chart has simple recursive formulas, so it is computationally efficient and its implementation is simple and straightforward. To address situations where no sufficiently large reference dataset is available, we also develop a self-starting monitoring scheme for the proposed adaptive CUSUM chart. Our simulation studies show that the proposed self-starting adaptive CUSUM chart has better overall performance than its competitors for detecting different distributional changes. The rest of the paper is organized as follows. In Section 2, we describe our proposed nonparametric adaptive CUSUM chart and its properties. A simulation study is reported in Section 3 to evaluate the performance of the proposed control chart. In Section 4, we demonstrate the application of the proposed control chart using a real data set from a manufacturing process. Finally, we provide some concluding remarks in Section 5. All proofs are deferred to the Appendix. \section{Methodology} \subsection{The proposed CUSUM statistic}\label{sec:CUSUM} The typical setup we consider in this paper is the following. There are $m$ independent and identically distributed reference (historical) observations, denoted by $X_{-m+1}$, ..., $X_{0}$, from some in-control distribution $f_{0,X}$.
Let $X_1, X_2, \cdots$ be the future observations collected over time from the process. At any time $t$, we observe $X_1, X_2, \cdots, X_t$, and the task of control charts at this time $t$ is to decide whether the process has changed based on $X_1, X_2, \cdots, X_t$. This can be formulated as the following hypothesis testing problem, \[ H_0: X_{1}, \cdots, X_t \text{ follow } f_{0,X}, \] versus \begin{equation} \label{test0} H_1: \exists \, \tau \in [1,t] \text{ such that } X_1, \cdots, X_{\tau-1} \text{ follow } f_{0,X} \text{ and } X_{\tau}, \cdots, X_t \text{ follow } f_{1,X}, \end{equation} where $\tau$ is the change point, $f_{1,X} \neq f_{0,X}$, and $f_{1,X}$ is usually referred to as the out-of-control distribution. If we further assume that $f_{0,X}$ and $f_{1,X}$ are both completely known, to test the hypothesis in (\ref{test0}), the test statistic based on the likelihood ratio method is \[ S_{t}=\max\Big(0,\max_{1 \leq \tau \leq t} \sum_{i=\tau}^t\log\left\{\frac{f_{1,X}(X_i)}{f_{0,X}(X_i)}\right\}\Big), \] and it has the following convenient recursive representation \begin{equation} \label{eqn:CUSUM} S_{t}=\max\Big(0,S_{t-1}+\log\left\{\frac{f_{1,X}(X_t)}{f_{0,X}(X_t)}\right\}\Big). \end{equation} The popular CUSUM chart discussed in Page (1954) is then constructed by monitoring the above $S_{t}$ over time, and it raises an alarm if $S_{t}$ exceeds some threshold. The above CUSUM chart is easy to construct and enjoys some optimality properties (Moustakides (1986)), so it has been widely used in many applications. To implement the above CUSUM chart, both the in-control and out-of-control distributions, $f_{0,X}$ and $f_{1,X}$, need to be completely specified. However, in our nonparametric setting, both $f_{0,X}$ and $f_{1,X}$ of $X_1,...,X_t$ are unknown.
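The recursion in (\ref{eqn:CUSUM}) is straightforward to implement. A minimal sketch follows, assuming a standard normal in-control density and an illustrative unit mean shift as the out-of-control density (both purely hypothetical choices, and the function name is ours):

```python
import math
from statistics import NormalDist

def cusum_path(xs, f0=NormalDist(0.0, 1.0), f1=NormalDist(1.0, 1.0)):
    """CUSUM recursion: S_t = max(0, S_{t-1} + log{f1(x_t)/f0(x_t)})."""
    s, path = 0.0, []
    for x in xs:
        llr = math.log(f1.pdf(x)) - math.log(f0.pdf(x))
        s = max(0.0, s + llr)  # the statistic resets at 0 and never goes negative
        path.append(s)
    return path
```

For this normal pair the increment reduces to $x_t - 1/2$, so in-control data drift the statistic back toward zero while a shifted mean accumulates evidence until the threshold is crossed.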
To overcome this difficulty, we first use the data categorization idea introduced in Qiu and Li (2011) to categorize the data so that the in-control and out-of-control distributions of the resulting categorical data can be easily established. More specifically, let $-\infty<q^{(1)}_1<q^{(1)}_2<\cdots<q^{(1)}_{d-1}<\infty$ be the $d-1$ boundary points, and the real line is then partitioned into the following $d$ intervals, \[ A^{(1)}_1=(-\infty, q^{(1)}_1],\, A^{(1)}_2=(q^{(1)}_1,q^{(1)}_2],\,...,\, A^{(1)}_d=(q^{(1)}_{d-1},\infty). \] Define \[ Y^{(1)}_{t,j}= I(X_t \in A^{(1)}_j), \quad \text{for } j=1,...,d, \] where $I(u)$ is the indicator function that equals 1 when $u$ is true and 0 otherwise. Then $Y^{(1)}_{t,j}$ indicates whether $X_t$ falls in the $j$-th interval $A^{(1)}_j$. Define $\mbs{Y}^{(1)}_t=(Y^{(1)}_{t,1},...,Y^{(1)}_{t,d})'$. It is easy to see that $\mbs{Y}^{(1)}_t$ follows a multinomial distribution with $n=1$ and $p^{(1)}_j=P(X_t \in A^{(1)}_j)$, $j=1,...,d$, denoted by Multi$(1;p^{(1)}_1,...,p^{(1)}_d)$. Therefore, based on the above data categorization, the original data $X_t$ with any arbitrary distribution is converted into the multinomial random variable $\mbs{Y}^{(1)}_t$. To completely characterize the distribution of $\mbs{Y}^{(1)}_t$, we need to know $\{q^{(1)}_1,q^{(1)}_2,...,q^{(1)}_{d-1}\}$. Following Qiu and Li (2011), we choose $q^{(1)}_j$ to be the $(j/d)$-th quantile of the in-control distribution of $X_t$. Then the in-control distribution $f_{0,Y^{(1)}}$ of the $\mbs{Y}^{(1)}_t$ is simply Multi$(1;1/d,...,1/d)$. Based on those $q^{(1)}_j$'s, we first assume that the out-of-control distribution $f_{1,Y^{(1)}}$ of $\mbs{Y}^{(1)}_t$ is given by another multinomial distribution Multi$(1; p^{(1)}_1,...,p^{(1)}_d)$, where $\sum_{j=1}^dp^{(1)}_j=1$ and $(p^{(1)}_1,...,p^{(1)}_d)\neq (1/d,...,1/d)$. 
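The categorization step can be sketched as follows, using the standard normal as an assumed in-control distribution for illustration (the helper name and $d=4$ are ours, not the paper's):

```python
from bisect import bisect_left
from statistics import NormalDist

def categorize(x, boundaries):
    """One-hot vector Y_t = (I(x in A_1), ..., I(x in A_d)) for the
    left-to-right intervals with right-closed boundaries."""
    j = bisect_left(boundaries, x)  # ties go left, matching intervals (q_{j-1}, q_j]
    return [1 if i == j else 0 for i in range(len(boundaries) + 1)]

# Boundary points q_j = the (j/d)-quantiles of the in-control distribution.
d = 4
q = [NormalDist().inv_cdf(j / d) for j in range(1, d)]
```

With the boundaries set at in-control quantiles, each interval receives probability $1/d$ under the in-control law, so the one-hot vector is Multi$(1;1/d,...,1/d)$ in control.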
Using the in-control and out-of-control distributions of $\mbs{Y}^{(1)}_t$ instead of those of $X_t$, the CUSUM statistic in (\ref{eqn:CUSUM}) becomes \begin{align} \label{eqn:CUSUM0} S^{(0)}_{t}&=\max\Big(0,S^{(0)}_{t-1}+\log\left\{\frac{f_{1,Y^{(1)}}(\mbs{Y}^{(1)}_t)}{f_{0,Y^{(1)}}(\mbs{Y}^{(1)}_t)}\right\}\Big) \nonumber\\ &=\max\Big(0,S^{(0)}_{t-1}+ \sum_{j=1}^d Y^{(1)}_{t,j}\log(dp^{(1)}_j)\Big). \end{align} Similar to the charting statistics proposed in Qiu and Li (2011), the above CUSUM statistic is usually less powerful than other rank-based charting statistics. The reason is that the ordering information of the original data $X_t$ is lost in (\ref{eqn:CUSUM0}), since it does not make use of the ordering information of the $d$ intervals, $A^{(1)}_1$,..., $A^{(1)}_d$. To overcome this drawback, we need to find a new way to construct the CUSUM statistic so that the ordering information of $A^{(1)}_1$,..., $A^{(1)}_d$ can be used. For this purpose, we first define the cumulative unions of $A^{(1)}_1$,..., $A^{(1)}_d$, i.e., \[ A^{(1)}_1, \quad A^{(1)}_1 \cup A^{(1)}_2, \quad A^{(1)}_1 \cup A^{(1)}_2 \cup A^{(1)}_3, \quad ..., \quad A^{(1)}_1 \cup \cdots \cup A^{(1)}_d. \] Similarly we define the cumulative sums of $Y^{(1)}_{t,1},...,Y^{(1)}_{t,d}$, i.e., \[ Z^{(1)}_{t,j}=\sum_{l=1}^j Y^{(1)}_{t,l}, \quad j=1,...,d. \] Then $Z^{(1)}_{t,j}$ indicates whether $X_t$ falls in the interval $A^{(1)}_1 \cup \cdots \cup A^{(1)}_j$. Write $\mbs{Z}^{(1)}_t=(Z^{(1)}_{t,1},...,Z^{(1)}_{t,d})'$. The new vector $\mbs{Z}^{(1)}_t$ contains the same amount of information as $\mbs{Y}^{(1)}_t$. However, if we use the log-likelihood ratio based on $\mbs{Z}^{(1)}_t$ in our CUSUM statistic, the ordering information of $A^{(1)}_1$,..., $A^{(1)}_d$ can be incorporated, so the ordering information of $X_t$ can be preserved. 
To develop the log-likelihood ratio based on $\mbs{Z}^{(1)}_t$, we first notice that $Z^{(1)}_{t,j}$, $j=1,...,d-1$, is a Bernoulli random variable and the log-likelihood ratio based on $Z^{(1)}_{t,j}$ is \[ Z^{(1)}_{t,j}\log\left(\frac{\sum_{l=1}^j p^{(1)}_l}{j/d}\right)+(1-Z^{(1)}_{t,j})\log\left(\frac{1-\sum_{l=1}^j p^{(1)}_l}{1-j/d}\right). \] Then our proposed log-likelihood ratio based on $\mbs{Z}^{(1)}_t$ is simply the weighted sum of the above log-likelihood ratios, i.e., \[ \log\left\{\frac{f_{1,Z^{(1)}}(\mbs{Z}^{(1)}_t)}{f_{0,Z^{(1)}}(\mbs{Z}^{(1)}_t)}\right\}=\sum_{j=1}^{d-1} \omega(j)\Big\{ Z^{(1)}_{t,j}\log\left(\frac{\sum_{l=1}^j p^{(1)}_l}{j/d}\right)+(1-Z^{(1)}_{t,j})\log\left(\frac{1-\sum_{l=1}^j p^{(1)}_l}{1-j/d}\right) \Big\}, \] where $\omega(j)$ is the weight function, and we choose $\omega(j)=(j/d)^{-1}(1-j/d)^{-1}$ to give more weight to the tail areas. Therefore, our proposed CUSUM statistic is \begin{align} \label{eqn:CUSUM_loc} S^{(1)}_{t}&=\max\Big(0,S^{(1)}_{t-1}+\log\left\{\frac{f_{1,Z^{(1)}}(\mbs{Z}^{(1)}_t)}{f_{0,Z^{(1)}}(\mbs{Z}^{(1)}_t)}\right\}\Big)\nonumber\\ &=\max\Big(0,S^{(1)}_{t-1}+ \sum_{j=1}^{d-1} \frac{d^2}{j(d-j)}\Big\{ Z^{(1)}_{t,j}\log\left(\frac{\sum_{l=1}^j p^{(1)}_l}{j/d}\right)+(1-Z^{(1)}_{t,j})\log\left(\frac{1-\sum_{l=1}^j p^{(1)}_l}{1-j/d}\right) \Big\}\Big). \end{align} As described above, using the log-likelihood ratio of $\mbs{Z}^{(1)}_t$ in our CUSUM statistic helps preserve the ordering information of the data. Based on how both the $d$ intervals, $A^{(1)}_1$,..., $A^{(1)}_d$, and their cumulative unions are constructed, the ordering information of the data used in the above CUSUM statistic is from the smallest to the largest. In the nonparametric literature, the Wilcoxon-Mann-Whitney test is a powerful test for testing location differences, and the Ansari-Bradley test is a powerful test for testing scale differences. Both tests can be considered rank-sum tests.
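One update of the statistic in (\ref{eqn:CUSUM_loc}) can be sketched as follows; here `p` is whatever multinomial out-of-control vector is assumed for $\mbs{Y}^{(1)}_t$, and the function name is ours:

```python
import math

def update_S1(S_prev, Y, p):
    """One CUSUM update with the weighted log-likelihood ratio of Z_t:
    S_t = max(0, S_{t-1} + sum_j omega(j) * Bernoulli LLR of Z_{t,j})."""
    d = len(Y)
    Z, cum_p, llr = 0, 0.0, 0.0
    for j in range(1, d):               # j = 1, ..., d-1
        Z += Y[j - 1]                   # Z_{t,j} = Y_{t,1} + ... + Y_{t,j}
        cum_p += p[j - 1]               # p_1 + ... + p_j
        w = d * d / (j * (d - j))       # omega(j) = (j/d)^{-1} (1 - j/d)^{-1}
        llr += w * (Z * math.log(cum_p / (j / d))
                    + (1 - Z) * math.log((1 - cum_p) / (1 - j / d)))
    return max(0.0, S_prev + llr)
```

Note that if `p` is the uniform in-control vector $(1/d,...,1/d)$, every log term vanishes and the statistic does not move, as expected.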
In the Wilcoxon-Mann-Whitney test, the data are ranked from the smallest to the largest, while in the Ansari-Bradley test, the data can be considered as being ranked from the center outward. This observation makes us believe that, although our CUSUM statistic in (\ref{eqn:CUSUM_loc}) can detect arbitrary distributional changes, it might not be very powerful for detecting scale changes. To develop a CUSUM statistic that is efficient for scale changes, we need to make use of the center-outward ordering of the data. To do so, instead of categorizing the data from left to right as before, we categorize the data in a center-outward fashion. More specifically, let $q^{(2)}_j$, $j=1,...,2d-1$, be the $(j/(2d))$-th quantile of the in-control distribution of $X_t$. We partition the real line into the following $d$ regions, \begin{align*} A^{(2)}_1&=(q^{(2)}_{d-1}, q^{(2)}_{d+1}],\\ A^{(2)}_2&=(q^{(2)}_{d-2},q^{(2)}_{d-1}] \bigcup (q^{(2)}_{d+1},q^{(2)}_{d+2}],\\ A^{(2)}_3&=(q^{(2)}_{d-3},q^{(2)}_{d-2}] \bigcup (q^{(2)}_{d+2},q^{(2)}_{d+3}],\\ & \cdots \, \cdots\\ A^{(2)}_d&=(-\infty, q^{(2)}_{1}] \bigcup (q^{(2)}_{2d-1},\infty). \end{align*} It is clear that $A^{(2)}_1$,...,$A^{(2)}_d$ are ordered from the center outward. Define $Y^{(2)}_{t,j}= I(X_t \in A^{(2)}_j)$. It is easy to see that $\mbs{Y}^{(2)}_t=(Y^{(2)}_{t,1},...,Y^{(2)}_{t,d})'$ follows a multinomial distribution and its in-control distribution is Multi$(1;1/d,...,1/d)$. Again we assume that the out-of-control distribution of $\mbs{Y}^{(2)}_t$ is given by another multinomial distribution Multi$(1; p^{(2)}_1,...,p^{(2)}_d)$, where $\sum_{j=1}^dp^{(2)}_j=1$ and $(p^{(2)}_1,...,p^{(2)}_d)\neq (1/d,...,1/d)$. Although $A^{(2)}_1$,...,$A^{(2)}_d$ are ordered from the center outward, if we use $\mbs{Y}^{(2)}_t$ directly to construct the CUSUM statistic, the center-outward ordering of $A^{(2)}_1$,...,$A^{(2)}_d$ will not be utilized.
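The center-outward categorization can be sketched as follows, again with standard normal in-control quantiles purely for illustration (the folding trick and function name are ours):

```python
from bisect import bisect_left
from statistics import NormalDist

def center_outward_category(x, q2):
    """Index j of the center-outward region A^(2)_j containing x; q2 holds
    the 2d-1 boundary quantiles q^(2)_1 < ... < q^(2)_{2d-1}."""
    d = (len(q2) + 1) // 2
    i = bisect_left(q2, x)            # which of the 2d elementary cells x falls in
    return max(d - i, i - (d - 1))    # fold the cells outward from the center

# Illustration with a N(0,1) in-control distribution and d = 4 regions.
d = 4
q2 = [NormalDist().inv_cdf(j / (2 * d)) for j in range(1, 2 * d)]
```

Observations near the median land in region 1, while both tails land in region $d$, which is exactly the center-outward ordering used by the Ansari-Bradley ranking.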
Similar to how we construct $S^{(1)}_{t}$ in (\ref{eqn:CUSUM_loc}) to incorporate the left-to-right ordering information of the data, we consider the cumulative unions of $A^{(2)}_1$,...,$A^{(2)}_d$, \[ A^{(2)}_1, \quad A^{(2)}_1 \cup A^{(2)}_2, \quad A^{(2)}_1 \cup A^{(2)}_2 \cup A^{(2)}_3, \quad ..., \quad A^{(2)}_1 \cup \cdots \cup A^{(2)}_d, \] and the cumulative sums of $Y^{(2)}_{t,1},...,Y^{(2)}_{t,d}$, \[ Z^{(2)}_{t,j}=\sum_{l=1}^j Y^{(2)}_{t,l}, \quad j=1,...,d. \] Using the same method for obtaining $S^{(1)}_{t}$ in (\ref{eqn:CUSUM_loc}), we can obtain the following CUSUM statistic that makes use of the center-outward ordering information of the data, \begin{align} \label{eqn:CUSUM_scal} S^{(2)}_{t}&=\max\Big(0,S^{(2)}_{t-1}+\log\left\{\frac{f_{1,Z^{(2)}}(\mbs{Z}^{(2)}_t)}{f_{0,Z^{(2)}}(\mbs{Z}^{(2)}_t)}\right\}\Big)\nonumber\\ &=\max\Big(0,S^{(2)}_{t-1}+ \sum_{j=1}^{d-1} \frac{d^2}{j(d-j)}\Big\{ Z^{(2)}_{t,j}\log\left(\frac{\sum_{l=1}^j p^{(2)}_l}{j/d}\right)+(1-Z^{(2)}_{t,j})\log\left(\frac{1-\sum_{l=1}^j p^{(2)}_l}{1-j/d}\right) \Big\}\Big). \end{align} Both $S^{(1)}_t$ and $S^{(2)}_t$ can be used to detect any arbitrary distributional changes. As shown in our simulation study in Section 3.2, $S^{(1)}_t$ is more powerful than $S^{(2)}_t$ for detecting location changes, since it uses the left-to-right ordering information of the data. In contrast, $S^{(2)}_t$ uses the center-outward ordering information of the data, therefore it is more powerful than $S^{(1)}_t$ for detecting scale changes. If no prior information is available on what type of changes the process might experience, we propose to use the following CUSUM statistic, \begin{equation} \label{eqn:CUSUM_loc_scal} S_t=\max(S^{(1)}_t,S^{(2)}_t). 
\end{equation} \subsection{The adaptive CUSUM statistic} \label{sec:adaptiveCUSUM} To implement the above CUSUM statistic $S_t$, $\{p^{(1)}_1,...,p^{(1)}_d\}$ and $\{p^{(2)}_1,...,p^{(2)}_d\}$ in the out-of-control distributions of $\mbs{Y}^{(1)}_t$ and $\mbs{Y}^{(2)}_t$ need to be specified in advance. This can be a difficult task in many real-world applications, where prior knowledge of the out-of-control distribution may not be available. This is the case even for the standard CUSUM statistic when both the in-control and out-of-control distributions are normal distributions with different means. To circumvent this difficulty, a few adaptive CUSUM statistics have been proposed in the literature. For example, in Sparks (2000), instead of using the specified out-of-control mean in the standard CUSUM statistic, an estimate of the out-of-control mean using an exponentially weighted moving average of all the past observations is plugged in. In Han and Tsung (2006), the absolute value of the current observation is used as the estimate of the out-of-control mean in the standard CUSUM statistic. Following the same idea, Lorden and Pollak (2008) proposed another way to estimate the out-of-control mean to be used in the CUSUM statistic, and proved the asymptotic optimality of the resulting CUSUM statistic under a single-parameter exponential family. Recently, Wu (2016) generalized Lorden and Pollak's result to the multi-parameter exponential family. In both Lorden and Pollak (2008) and Wu (2016), the key observation is that, at any given time $t$, the most recent time $\hat{\tau}$ when the CUSUM statistic goes back to 0 provides a candidate estimate for the possible change point $\tau$, and therefore the observations collected after $\hat{\tau}$ can be used to estimate the parameters in the out-of-control distribution.
In the following, we adopt the approach from Lorden and Pollak (2008) and Wu (2016) and substitute $\{p^{(i)}_1,...,p^{(i)}_d\}$ ($i=1,2$) in our proposed CUSUM statistic $S_t$ by their estimates based on the observations collected after their change point estimates $\hat{\tau}^{(i)}$, where $\hat{\tau}^{(i)}$ is the most recent time when the CUSUM statistic $S^{(i)}_t$ equals 0. More specifically, define, for $i=1,2$, $t \geq 1$, \begin{equation} \label{eqn:adaptiveCUSUM} \hat{S}^{(i)}_{t}=\max\Big(0,\hat{S}^{(i)}_{t-1}+ \sum_{j=1}^{d-1} \frac{d^2}{j(d-j)}\Big\{ Z^{(i)}_{t,j}\log\left(\frac{\sum_{l=1}^j \hat{p}^{(i)}_{t,l}}{j/d}\right)+(1-Z^{(i)}_{t,j})\log\left(\frac{1-\sum_{l=1}^j \hat{p}^{(i)}_{t,l}}{1-j/d}\right) \Big\}\Big), \end{equation} where the $\hat{p}^{(i)}_{t,l}$ are the estimates of the $p^{(i)}_l$ at time $t$ and are defined by \begin{equation} \label{eqn:phat} \hat{p}^{(i)}_{t,l}=\frac{\alpha_l+N^{(i)}_{t,l}}{\sum_{j=1}^d \alpha_j+ N^{(i)}_t}. \end{equation} In the above estimates, $N^{(i)}_t$ is the number of observations collected before the current time $t$ but after the candidate change point estimate $\hat{\tau}^{(i)}$. Similarly, $N^{(i)}_{t,l}$ is the number of observations falling in the $l$th interval $A^{(i)}_l$ before time $t$ but after time $\hat{\tau}^{(i)}$. Both $N^{(i)}_t$ and $N^{(i)}_{t,l}$ can be calculated recursively by \begin{align*} N^{(i)}_{t}&=\begin{cases} N^{(i)}_{t-1}+1, & \text{if } \hat{S}^{(i)}_{t-1}>0, \\ 0, & \text{if } \hat{S}^{(i)}_{t-1}=0, \end{cases}\\ N^{(i)}_{t,l}&= \begin{cases} N^{(i)}_{t-1,l}+ Y^{(i)}_{t-1,l}, & \text{if } \hat{S}^{(i)}_{t-1}>0, \\ 0, & \text{if } \hat{S}^{(i)}_{t-1}=0. \end{cases} \end{align*} The constants $\{\alpha_1,...,\alpha_d\}$ in (\ref{eqn:phat}) can be considered as the parameters of the Dirichlet distribution, the conjugate prior for $\{p^{(i)}_1,...,p^{(i)}_d\}$. Therefore, the above estimate $\hat{p}^{(i)}_{t,l}$ can be considered as a Bayesian estimate. 
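The count recursions and the Dirichlet-smoothed estimate in (\ref{eqn:phat}) can be sketched as follows (function names are ours; `Y_prev` is the category vector of the previous observation, consistent with the recursion above):

```python
def update_counts(S_prev, N, counts, Y_prev):
    """N_t and N_{t,l}: accumulate since the last time the CUSUM hit 0,
    reset to zero otherwise."""
    if S_prev > 0:
        return N + 1, [c + y for c, y in zip(counts, Y_prev)]
    return 0, [0] * len(counts)

def p_hat(counts, N, alpha):
    """Bayesian estimate p_hat_l = (alpha_l + N_l) / (sum_j alpha_j + N)."""
    total = sum(alpha) + N
    return [(a + c) / total for a, c in zip(alpha, counts)]
```

When the statistic has just returned to zero, the estimate falls back entirely on the Dirichlet prior weights $\alpha_l/\sum_j \alpha_j$, which is why the choice of $\{\alpha_1,...,\alpha_d\}$ discussed next matters.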
In Bayesian statistics, it is common to choose $\alpha_1=\cdots=\alpha_d=1$ as the noninformative prior for $\{p^{(i)}_1,...,p^{(i)}_d\}$. However, in our case a closer examination of $\hat{p}^{(i)}_{t,l}$ reveals that, whenever $\hat{S}^{(i)}_{t}$ returns to 0, $\alpha_l/\sum_{j=1}^d\alpha_j$ will be used to estimate $p^{(i)}_{l}$. Therefore, the choice $\alpha_1=\cdots=\alpha_d=1$ does not work, since it sets this initial estimate to the in-control probability $1/d$ and thus makes the log-likelihood ratio increment zero. Instead, we can choose $\{\alpha_1,...,\alpha_d\}$ proportional to $\{p^{(i)}_1,...,p^{(i)}_d\}$ when the process experiences the smallest distributional change that is meaningful. In this paper, we choose $\{\alpha_1,...,\alpha_d\}$ as follows. We first assume that the in-control distribution of $X_t$ is $N(0,1)$ and its smallest meaningful out-of-control distribution is either $N(0.25,1)$ or $N(-0.25,1)$. Under this in-control and out-of-control distributional assumption for $X_t$, we can obtain the corresponding out-of-control distribution of $\mbs{Y}^{(1)}_t$, denoted by Multi$(1; p^+_{1},...,p^+_{d})$ for $N(0.25,1)$ and Multi$(1; p^-_{1},...,p^-_{d})$ for $N(-0.25,1)$. Then we choose $\alpha_j=dp^+_{j}$ or $dp^-_{j}$, $j=1,...,d$. When using $\alpha_{j}=dp^+_{j}$ in $\hat{S}^{(1)}_t$, denoted by $\hat{S}^{(1+)}_t$, the prior indicates a positive location shift, so $\hat{S}^{(1+)}_t$ is more powerful for detecting positive location shifts. When using $\alpha_{j}=dp^-_{j}$ in $\hat{S}^{(1)}_t$, denoted by $\hat{S}^{(1-)}_t$, the prior indicates a negative location shift, so $\hat{S}^{(1-)}_t$ is more powerful for detecting negative location shifts. Similarly, when using $\alpha_{j}=dp^+_{j}$ in $\hat{S}^{(2)}_t$, denoted by $\hat{S}^{(2+)}_t$, the prior indicates a scale increase, so $\hat{S}^{(2+)}_t$ is more powerful for detecting scale increases. When using $\alpha_{j}=dp^-_{j}$ in $\hat{S}^{(2)}_t$, denoted by $\hat{S}^{(2-)}_t$, the prior indicates a scale decrease, so $\hat{S}^{(2-)}_t$ is more powerful for detecting scale decreases.
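The prior weights can be computed directly: under the assumed $N(0,1)$ in-control and $N(\pm 0.25,1)$ out-of-control pair, $p^{\pm}_j$ is the shifted normal probability mass of the $j$-th interval built from in-control quantiles. A sketch (function name ours):

```python
from statistics import NormalDist

def shift_prior(d, delta):
    """alpha_j = d * p_j, where p_j = P(X in A_j) for X ~ N(delta, 1) and the
    intervals A_j are built from N(0,1) quantiles (the assumed in-control law)."""
    std, shifted = NormalDist(), NormalDist(delta, 1.0)
    q = [std.inv_cdf(j / d) for j in range(1, d)]
    cdf = [0.0] + [shifted.cdf(x) for x in q] + [1.0]
    return [d * (cdf[j + 1] - cdf[j]) for j in range(d)]
```

For a positive shift the resulting weights load the upper categories, and symmetrically for a negative shift, matching the intended interpretation of $\hat{S}^{(1+)}_t$ and $\hat{S}^{(1-)}_t$.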
If we do not have any prior information about what type of changes the process might encounter, the charting statistic we use is \begin{equation} \label{eqn:adaptiveCUSUM_loc_scal} \hat{S}_t=\max(\hat{S}^{(1+)}_t,\hat{S}^{(1-)}_t,\hat{S}^{(2+)}_t,\hat{S}^{(2-)}_t), \end{equation} which is efficient for detecting any type of distributional changes. \subsection{Determining the control limit}\label{sec:controllimit} As described in the previous section, our proposed adaptive CUSUM statistic is simply $\hat{S}_t=\max(\hat{S}^{(1+)}_t,\hat{S}^{(1-)}_t,\hat{S}^{(2+)}_t,\hat{S}^{(2-)}_t)$, and the resulting control chart monitors $\hat{S}_t$ over time $t$ and raises an alarm if $\hat{S}_t$ exceeds the control limit $h$. As we can see from (\ref{eqn:adaptiveCUSUM}), $\hat{S}_t$ is a function of $\mbs{Y}^{(1)}_t$ and $\mbs{Y}^{(2)}_t$ only. Define \begin{align*} &B^{(1)}_1=\left(0,\frac{1}{d}\right],\, B^{(1)}_2=\left(\frac{1}{d},\frac{2}{d}\right],\,...,\, B^{(1)}_d=\left(\frac{d-1}{d},1\right),\\ &B^{(2)}_1=\left(\frac{d-1}{2d},\frac{d+1}{2d}\right], \, B^{(2)}_2=\left(\frac{d-2}{2d},\frac{d-1}{2d}\right] \bigcup \left(\frac{d+1}{2d},\frac{d+2}{2d}\right], \, ...,\, B^{(2)}_d=\left(0,\frac{1}{2d}\right] \bigcup \left(\frac{2d-1}{2d},1\right), \end{align*} and for $i=1, 2$, \[ U^{(i)}_{j}= I(U \in B^{(i)}_j), \quad \text{for } j=1,...,d, \] where $U$ is a uniform random variable on (0,1). Let $\mbs{U}^{(1)}=(U^{(1)}_1,...,U^{(1)}_d)'$ and $\mbs{U}^{(2)}=(U^{(2)}_1,...,U^{(2)}_d)'$. Then based on the probability integral transformation, it is easy to see that the in-control joint distribution of $\mbs{Y}^{(1)}_t$ and $\mbs{Y}^{(2)}_t$ is the same as the joint distribution of $\mbs{U}^{(1)}$ and $\mbs{U}^{(2)}$. Therefore, our proposed adaptive CUSUM control chart based on $\hat{S}_t$ is distribution-free.
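Since the chart is distribution-free, its control limit can be calibrated once by Monte Carlo simulation combined with a bisection search on $h$. The sketch below illustrates the idea on a simple parametric normal CUSUM standing in for $\hat{S}_t$, purely to keep the example short; function names, the stand-in statistic, and all parameters are ours:

```python
import random

def run_length(h, rng, max_t=1000):
    """In-control run length of an illustrative N(0,1)-vs-N(1,1) CUSUM with limit h."""
    s, t = 0.0, 0
    while t < max_t:
        t += 1
        s = max(0.0, s + rng.gauss(0.0, 1.0) - 0.5)  # log LR of a unit mean shift
        if s > h:
            break
    return t

def calibrate_h(target_arl, lo=0.5, hi=10.0, reps=300, seed=1):
    """Bisection on the control limit h until the Monte Carlo ARL_0 hits target_arl."""
    for _ in range(25):
        mid = (lo + hi) / 2.0
        rng = random.Random(seed)  # same random stream for every candidate h
        arl = sum(run_length(mid, rng) for _ in range(reps)) / reps
        if arl < target_arl:
            lo = mid               # run lengths too short: raise the limit
        else:
            hi = mid
    return (lo + hi) / 2.0
```

Reusing the same random stream for every candidate $h$ keeps the estimated ARL curve monotone in $h$, which is what makes the bisection reliable; the paper's Table 1 is produced by the same kind of search with 10,000 replications on the actual statistic $\hat{S}_t$.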
Determining the control limit $h$ for this CUSUM chart can be achieved by simulating data from any standard continuous distribution, say the standard normal distribution, as $X_t$ and finding $h$ to obtain the desired in-control average run length (denoted by $ARL_0$) through a bisection search. Table \ref{tab:h} shows the computed control limit $h$ using the bisection search algorithm based on 10,000 replications for different choices of $d$ when $ARL_0=200, 370, 500, 1000$. \begin{table}[ht] \centering \caption{The computed control limit $h$ for our proposed adaptive CUSUM chart based on 10,000 replications for different choices of $d$ when $ARL_0=200, 370, 500, 1000$.\label{tab:h}} \begin{tabular}{r|cccc} \hline $ARL_0$ & $d=10$ & $d=20$ & $d=30$ & $d=40$ \\ \hline 200 & 90.275 & 185.466 & 281.644 & 379.191 \\ 370 & 105.941 & 218.886 & 333.933 & 449.201 \\ 500 & 113.308 & 235.241 & 358.960 & 483.987 \\ 1000& 131.299 & 273.411 & 418.364 & 564.137 \\ \hline \end{tabular} \end{table} \subsection{Self-starting monitoring scheme}\label{sec:selfstart} To categorize the original data $X_t$ and implement our proposed control chart based on $\hat{S}_t$, we need to know $\{q^{(1)}_j\}_{j=1}^{d-1}$ and $\{q^{(2)}_j\}_{j=1}^{2d-1}$, which are the $(j/d)$-th and $(j/(2d))$-th quantiles of the in-control distribution of $X_t$, respectively. Since those quantiles are rarely known in practice, we can approximate them by their sample estimates from the in-control reference data. However, for the effect of using those quantile estimates in place of the true values on the $ARL_0$ to be negligible, a substantial amount of in-control reference data is usually required. In many real-world applications, it can be very challenging to have such data.
To solve this problem, we develop a self-starting monitoring scheme where the estimates of the quantiles $\{q^{(1)}_j\}_{j=1}^{d-1}$ and $\{q^{(2)}_j\}_{j=1}^{2d-1}$ are updated sequentially each time a new observation is collected. More specifically, at time $t$ we have $m+t-1$ observations collected in the past, i.e., \[ X_{-m+1},...,X_0, X_1,...,X_{t-1}. \] Let $X_{t,(1)}\leq X_{t,(2)} \leq \cdots \leq X_{t,(m+t-1)}$ denote their order statistics. For a given $j$, $j=1,...,2d-1$, find the integer $l$ such that $1\leq l \leq m+t-2$ and \[ \frac{l}{m+t} \leq \frac{j}{2d} \leq \frac{l+1}{m+t}. \] Then based on $X_{-m+1},...,X_0, X_1,...,X_{t-1}$, the $(j/(2d))$-th quantile of the in-control distribution of $X_t$, $q^{(2)}_j$, can be estimated by \begin{equation} \label{eqn:quantiles} \hat{q}^{(2)}_{t,j}= \left(1-\frac{j(m+t)}{2d}+l\right) X_{t,(l)}+\left(\frac{j(m+t)}{2d}-l\right) X_{t,(l+1)}. \end{equation} Since $q^{(1)}_j=q^{(2)}_{2j}$ for $j=1,...,d-1$, the estimates of $q^{(1)}_j$ can be obtained accordingly. Using those estimates, at time $t$ we partition the real line into the following $d$ left-to-right regions, \[ \hat{A}^{(1)}_{t,1}=(-\infty, \hat{q}^{(1)}_{t,1}],\, \hat{A}^{(1)}_{t,2}=(\hat{q}^{(1)}_{t,1},\hat{q}^{(1)}_{t,2}],\,...,\, \hat{A}^{(1)}_{t,d}=(\hat{q}^{(1)}_{t,d-1},\infty), \] or the following $d$ center-outward regions, \begin{align*} \hat{A}^{(2)}_{t,1}&=(\hat{q}^{(2)}_{t,d-1}, \hat{q}^{(2)}_{t,d+1}],\\ \hat{A}^{(2)}_{t,2}&=(\hat{q}^{(2)}_{t,d-2},\hat{q}^{(2)}_{t,d-1}] \bigcup (\hat{q}^{(2)}_{t,d+1},\hat{q}^{(2)}_{t,d+2}],\\ & \cdots \, \cdots\\ \hat{A}^{(2)}_{t,d}&=(-\infty, \hat{q}^{(2)}_{t,1}] \bigcup (\hat{q}^{(2)}_{t,2d-1},\infty).
\end{align*} Define $\hat{\mbs{Y}}^{(1)}_{t}=(\hat{Y}^{(1)}_{t,1},...,\hat{Y}^{(1)}_{t,d})'$ and $\hat{\mbs{Y}}^{(2)}_{t}=(\hat{Y}^{(2)}_{t,1},...,\hat{Y}^{(2)}_{t,d})'$, where \[ \hat{Y}^{(1)}_{t,j}= I(X_t \in \hat{A}^{(1)}_{t,j}) \quad \text{ and } \quad \hat{Y}^{(2)}_{t,j}= I(X_t \in \hat{A}^{(2)}_{t,j}), \quad \text{ for } j=1,...,d. \] The following result shows the in-control distributions of $\hat{\mbs{Y}}^{(1)}_{t}$ and $\hat{\mbs{Y}}^{(2)}_{t}$. \begin{theorem} \label{thm1} For $i=1$, $2$, $\hat{\mbs{Y}}^{(i)}_{t}$ are independent and identically distributed as Multi$(1;1/d,...,1/d)$ when the process is in-control. \end{theorem} Based on the above result, $\hat{\mbs{Y}}^{(i)}_{t}$ has the same in-control distribution as $\mbs{Y}^{(i)}_{t}$, $i=1,2$. Therefore, in our self-starting monitoring scheme, we replace $\mbs{Y}^{(i)}_{t}$ in our proposed adaptive CUSUM statistic described in Section \ref{sec:adaptiveCUSUM} by $\hat{\mbs{Y}}^{(i)}_{t}$, and the resulting self-starting control chart can still use the control limit we obtain from Section \ref{sec:controllimit}. In the above self-starting monitoring scheme, it is assumed that the calculation of our sequential quantile estimates (\ref{eqn:quantiles}) starts from $t=1$. In order for Theorem~\ref{thm1} to hold, the size of the reference data $m$ must be at least $2d-1$, since this ensures that, for any $t\geq 1$ and any $j$, $j=1,...,2d-1$, we can find an integer $l$ such that $1\leq l \leq m+t-2$ and \[ \frac{l}{m+t} \leq \frac{j}{2d} \leq \frac{l+1}{m+t}. \] If the number of observations we have is smaller than $2d-1$, we cannot find such an integer $l$ for some $j$. If this is the case, we simply define \[ \hat{q}^{(2)}_{t,j}=\begin{cases} X_{t,(1)}, \quad & \text{ if } j/(2d) < 1/(m+t),\\ X_{t,(m+t-1)}, \quad & \text{ if } j/(2d) > (m+t-1)/(m+t). \end{cases} \] When using the above $\hat{q}^{(2)}_{t,j}$, the in-control distribution of $\hat{\mbs{Y}}^{(2)}_{t}$ is not exactly Multi$(1;1/d,...,1/d)$.
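The interpolation in (\ref{eqn:quantiles}) is straightforward to compute from the sorted past observations; a sketch for the regular case $1\leq l\leq m+t-2$ (function name ours):

```python
def quantile_estimate(past, j, two_d):
    """Linearly interpolated (j/(2d))-quantile from the order statistics
    X_(1) <= ... <= X_(m+t-1) of the past observations (regular case
    1 <= l <= m+t-2; the boundary rule in the text handles the rest)."""
    xs = sorted(past)
    g = j * (len(xs) + 1) / two_d      # the position j(m+t)/(2d)
    l = int(g)                         # integer l with l <= g <= l+1
    theta = g - l                      # interpolation weight
    return xs[l - 1] if theta == 0 else (1 - theta) * xs[l - 1] + theta * xs[l]
```

When the position $j(m+t)/(2d)$ lands exactly on an integer $l$, the estimate is the order statistic $X_{t,(l)}$ itself; otherwise it interpolates between two adjacent order statistics.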
Therefore, if $m<2d-1$, the in-control distribution of $\hat{\mbs{Y}}^{(2)}_{t}$ deviates slightly from its expected form for $t<2d-m$. Since this is the case only for $t<2d-m$, we expect that its effect on the $ARL_0$ is negligible if $2d-m$ is not large. In the following, we report a simulation study to evaluate such effects. In the simulation study, we choose the size of the reference data to be $m=10$ or $20$, and the number of categories to be $d=10$, 20, 30, or 40. Three different in-control distributions, $f_{0,X}$, are considered: the standard normal, denoted by $N(0,1)$; the $t$ distribution with 2.5 degrees of freedom, denoted by $t(2.5)$; and the lognormal distribution with parameters $\mu=1$ and $\sigma=0.5$, denoted by $LN(1,0.5)$. Using the control limits reported in Table \ref{tab:h}, we apply our proposed self-starting monitoring scheme to the data simulated from the above three in-control distributions, and record the time it takes to trigger an alarm, which is the in-control run length. This is repeated 10,000 times and the average of the 10,000 in-control run lengths is the simulated $ARL_0$ of our proposed self-starting monitoring scheme. Table \ref{tab:arl0} shows the simulated $ARL_0$ along with their corresponding standard errors (in parentheses) under different settings.
\begin{table}[!htbp] \centering \caption{The simulated $ARL_0$ for our proposed self-starting adaptive CUSUM chart based on 10,000 replications for different choices of $m$ and $d$ when $ARL_0=200, 370, 500, 1000$.\label{tab:arl0}} \begin{tabular}{|l|l||cccc|} \hline & &\multicolumn{4}{|c|}{$ARL_0=200$} \\ \cline{3-6} $m$ & $f_{0,X}$ & $d=10$ & $d=20$ & $d=30$ & $d=40$\\ \hline & $N(0,1)$ & 201.13(1.86) & 196.13(1.80) & 187.86(1.77) & 181.96(1.78) \\ 10 & $t(2.5)$ & 200.11(1.87) & 194.62(1.80) & 189.36(1.78) & 180.87(1.77) \\ & $LN(1,0.5)$ & 201.40(1.86) & 196.27(1.80) & 187.77(1.77) & 181.93(1.78) \\ \hline & $N(0,1)$ & 200.99(1.87) & 199.01(1.80) & 197.10(1.77) & 195.10(1.80) \\ 20 & $t(2.5)$ & 201.56(1.89) & 199.98(1.82) & 198.27(1.79) & 194.52(1.79) \\ & $LN(1,0.5)$ & 200.68(1.86) & 198.85(1.80) & 197.14(1.78) & 195.02(1.80) \\ \hline \hline & &\multicolumn{4}{|c|}{$ARL_0=370$} \\ \cline{3-6} $m$ & $f_{0,X}$ & $d=10$ & $d=20$ & $d=30$ & $d=40$\\ \hline & $N(0,1)$ & 372.60(3.50) & 366.54(3.41) & 361.38(3.43) & 349.79(3.41) \\ 10 & $t(2.5)$ & 368.37(3.53) & 367.35(3.43) & 360.51(3.38) & 348.35(3.39) \\ & $LN(1,0.5)$ & 372.00(3.50) & 365.84(3.41) & 361.63(3.43) & 348.83(3.41) \\ \hline & $N(0,1)$ & 372.14(3.51) & 369.11(3.38) & 373.16(3.44) & 364.74(3.40) \\ 20 & $t(2.5)$ & 368.60(3.55) & 371.98(3.46) & 370.59(3.37) & 365.27(3.40) \\ & $LN(1,0.5)$ & 371.42(3.52) & 368.73(3.38) & 372.99(3.43) & 364.44(3.40) \\ \hline \hline & &\multicolumn{4}{|c|}{$ARL_0=500$} \\ \cline{3-6} $m$ & $f_{0,X}$ & $d=10$ & $d=20$ & $d=30$ & $d=40$\\ \hline & $N(0,1)$ & 499.75(4.74) & 491.82(4.63) & 482.85(4.63) & 478.76(4.65) \\ 10 & $t(2.5)$ & 497.66(4.78) & 501.00(4.75) & 494.74(4.72) & 478.32(4.65) \\ & $LN(1,0.5)$ & 499.81(4.74) & 491.05(4.63) & 482.64(4.64) & 478.17(4.65) \\ \hline & $N(0,1)$ & 499.29(4.75) & 496.14(4.64) & 496.29(4.64) & 498.95(4.65) \\ 20 & $t(2.5)$ & 497.02(4.77) & 504.20(4.72) & 507.01(4.72) & 497.49(4.65) \\ & $LN(1,0.5)$ & 499.48(4.75) & 495.77(4.64) &
495.71(4.65) & 499.33(4.65) \\ \hline \hline & &\multicolumn{4}{|c|}{$ARL_0=1000$} \\ \cline{3-6} $m$ & $f_{0,X}$ & $d=10$ & $d=20$ & $d=30$ & $d=40$\\ \hline & $N(0,1)$ & 990.69(9.47) & 990.39(9.69) & 988.51(9.76) & 965.14(9.64) \\ 10 & $t(2.5)$ & 989.89(9.65) & 992.05(9.66) & 991.03(9.71) & 966.82(9.59) \\ & $LN(1,0.5)$ & 991.64(9.48) & 990.65(9.68) & 988.75(9.76) & 965.21(9.64) \\ \hline & $N(0,1)$ & 989.05(9.47) & 995.48(9.68) & 999.60(9.75) & 982.38(9.58) \\ 20 & $t(2.5)$ & 988.25(9.64) & 998.71(9.63) & 1005.20(9.72) & 994.89(9.65) \\ & $LN(1,0.5)$ & 989.45(9.47) & 995.54(9.68) & 999.78(9.75) & 982.91(9.58) \\ \hline \end{tabular} \end{table} As mentioned above, only the first $2d-m$ observations can potentially cause the $ARL_0$ to deviate from the nominal level. To make such effects negligible, $2d-m$ should not be very large. This implies that the minimal size of the reference data we need to maintain the desired $ARL_0$ should increase as $d$ increases. As we can see from Table \ref{tab:arl0}, for $d=10$ or 20, the simulated $ARL_0$ are close to the nominal level even when $m=10$. However, for $d=30$ or 40, when $m=10$, the simulated $ARL_0$ can deviate from the nominal level, indicating that the size of the reference data $m$ needs to increase in those cases. Based on our simulations, $m=20$ seems to work well for all the cases considered here. \subsection{Post-signal diagnostics}\label{sec:postsignal} When using the control chart to monitor the process in practice, in addition to detecting a change as quickly as possible, it is also important to identify what kind of distributional changes have triggered the alarm. In the literature, most of the existing nonparametric control charts have to implement extra tests to identify what kind of distributional changes have occurred after an alarm. Different from those methods, our proposed adaptive CUSUM chart can identify the distributional change automatically when the alarm is triggered.
To see this, recall that our adaptive CUSUM chart simply monitors $\hat{S}_t=\max(\hat{S}^{(1+)}_t,\hat{S}^{(1-)}_t,\hat{S}^{(2+)}_t,\hat{S}^{(2-)}_t)$, and it raises an alarm whenever $\hat{S}_t$ exceeds some control limit $h$. Because $\hat{S}^{(1+)}_t$, $\hat{S}^{(1-)}_t$, $\hat{S}^{(2+)}_t$, and $\hat{S}^{(2-)}_t$ all have the same in-control run-length distribution, our proposed monitoring scheme is equivalent to monitoring $\hat{S}^{(1+)}_t$, $\hat{S}^{(1-)}_t$, $\hat{S}^{(2+)}_t$, and $\hat{S}^{(2-)}_t$ separately, and raising an alarm whenever at least one of them exceeds $h$. Recall that $\hat{S}^{(1+)}_t$ is more powerful for detecting positive location shifts, $\hat{S}^{(1-)}_t$ is more powerful for detecting negative location shifts, $\hat{S}^{(2+)}_t$ is more powerful for detecting scale increases, and $\hat{S}^{(2-)}_t$ is more powerful for detecting scale decreases. Therefore, checking which charting statistics among $\hat{S}^{(1+)}_t$, $\hat{S}^{(1-)}_t$, $\hat{S}^{(2+)}_t$, and $\hat{S}^{(2-)}_t$ have exceeded the control limit $h$ when the alarm is triggered can identify what kind of distributional changes have caused the alarm. This acts as a built-in post-signal diagnostic function, which is another appealing feature of our method. \section{Simulation Studies} \subsection{The proposed adaptive CUSUM chart versus the CPD charts} In this section, we report several simulation studies to evaluate the performance of our proposed self-starting adaptive CUSUM chart for detecting different distributional changes. In particular, we compare our proposed control chart with some CPD charts, since they also do not involve any tuning parameter or require a significant amount of reference data. In Ross and Adams (2012), two CPD charts for detecting arbitrary distributional changes were developed, one based on the Kolmogorov-Smirnov (KS) test statistic and the other on the Cram\'{e}r-von Mises (CvM) test statistic.
In their conclusions, they recommended using the CvM CPD chart, since it is usually better than the one based on the KS test statistic. In Ross, Tasoulis and Adams (2011), another CPD chart based on the Lepage test statistic was proposed. Although technically the Lepage CPD chart is only for location and scale changes, it seems to be very powerful in other situations as well. Therefore, we include the CvM CPD chart and the Lepage CPD chart in our comparison. To study how $d$ (the number of categories) affects the performance of our proposed control chart, we consider four choices of $d$, $d=10$, 20, 30, and 40. Based on the simulation study conducted in Section \ref{sec:selfstart}, a warm-up period of $20$ observations can ensure good $ARL_0$ performance of our proposed self-starting control chart for those choices of $d$. For the CvM CPD chart and the Lepage CPD chart, a warm-up period of 20 observations is also recommended in Ross and Adams (2012) and Ross, Tasoulis and Adams (2011). Therefore, for all three charts, we start monitoring only after the first 20 observations have been received. Following the simulation settings considered in Ross and Adams (2012), we compare the performance of our proposed control chart with that of the CvM CPD chart and the Lepage CPD chart for detecting location changes, scale changes and more general distributional changes. \subsection*{Location changes} For location changes, three different in-control distributions are considered: the standard normal, $N(0,1)$; the $t$ distribution with 2.5 degrees of freedom, $t(2.5)$; and the lognormal distribution with parameters $\mu=1$ and $\sigma=0.5$, $LN(1,0.5)$. For $t(2.5)$ and $LN(1,0.5)$, we also standardize the data so that the in-control distribution has mean 0 and standard deviation 1. We denote the resulting distributions by $t(2.5)/\sqrt{5}$ and $(LN(1,0.5)-3)/1.6$, respectively.
To simulate location changes, we add a constant $\delta \in \{0.25,0.5,0.75,1,1.5,2\}$ to the observations collected after the change-point $\tau$. Two choices of $\tau$ are considered: $\tau=50$ or $300$. The average time taken to detect the change (denoted by $ARL_1$) from 10,000 simulations is then recorded for each chart. Table \ref{tab:loc} shows the $ARL_1$ of all three control charts along with their standard errors (in parentheses) under different settings. \begin{table}[!htbp] \centering \caption{The simulated $ARL_1$ for our proposed self-starting adaptive CUSUM chart with different choices of $d$, the Lepage CPD chart and the CvM CPD chart for detecting location shifts.\label{tab:loc}} \begin{tabular}{|r|r||rrrr||r|r|} \hline \multicolumn{8}{|c|}{$N(0,1)+\delta$}\\ \hline & &\multicolumn{4}{|c||}{Proposed} & & \\ \cline{3-6} $\tau$ & $\delta$ & $d=10$ & $d=20$ & $d=30$ & $d=40$ & Lepage & CvM \\ \hline \hline & 0.25 & 395.71(4.54) & 381.98(4.52) & 373.24(4.35) & 369.53(4.36) & 436.73(4.77) & 382.95(4.63) \\ & 0.50 & 179.36(3.19) & 158.55(2.95) & 151.34(2.91) & 143.85(2.76) & 232.36(3.62) & 157.97(2.91) \\ & 0.75 & 46.34(1.12) & 40.12(0.91) & 38.13(0.84) & 36.06(0.67) & 62.96(1.32) & 37.44(0.97) \\ 50 & 1.00 & 17.02(0.18) & 16.78(0.14) & 17.43(0.13) & 17.82(0.12) & 20.04(0.23) & 14.85(0.14) \\ & 1.50 & 8.47(0.04) & 8.89(0.04) & 9.22(0.04) & 9.45(0.04) & 6.89(0.05) & 6.64(0.04) \\ & 2.00 & 6.08(0.02) & 6.19(0.02) & 6.41(0.02) & 6.54(0.03) & 3.67(0.02) & 4.32(0.02) \\ \hline & 0.25 & 205.69(2.88) & 172.74(2.43) & 167.83(2.30) & 159.45(2.20) & 227.90(2.81) & 164.03(2.10) \\ & 0.50 & 40.71(0.36) & 37.77(0.29) & 37.35(0.28) & 36.57(0.26) & 49.63(0.43) & 38.29(0.32) \\ & 0.75 & 19.25(0.11) & 18.91(0.11) & 19.20(0.10) & 19.37(0.10) & 20.75(0.15) & 17.90(0.12) \\ 300 & 1.00 & 12.37(0.06) & 12.43(0.06) & 12.81(0.06) & 12.91(0.06) & 11.61(0.08) & 10.78(0.06) \\ & 1.50 & 7.29(0.03) & 7.26(0.03) & 7.43(0.03) & 7.48(0.03) & 5.18(0.03) &
5.71(0.03) \\ & 2.00 & 5.41(0.02) & 5.21(0.02) & 5.22(0.02) & 5.24(0.02) & 3.06(0.02) & 3.90(0.02) \\ \hline \hline \multicolumn{8}{|c|}{$t(2.5)/\sqrt{5}+\delta$}\\ \hline & &\multicolumn{4}{|c||}{Proposed} & & \\ \cline{3-6} $\tau$ & $\delta$ & $d=10$ & $d=20$ & $d=30$ & $d=40$ & Lepage & CvM \\ \hline \hline & 0.25 & 262.22(3.94) & 256.42(3.92) & 244.59(3.80) & 239.02(3.63) & 304.36(4.11) & 194.13(3.26) \\ & 0.50 & 32.60(0.79) & 34.67(0.78) & 33.42(0.62) & 34.60(0.76) & 38.23(0.65) & 20.81(0.42) \\ & 0.75 & 11.85(0.08) & 13.20(0.08) & 14.07(0.08) & 14.83(0.09) & 11.94(0.10) & 8.58(0.06) \\ 50 & 1.00 & 7.96(0.04) & 8.90(0.04) & 9.66(0.04) & 10.12(0.05) & 6.48(0.05) & 5.63(0.03) \\ & 1.50 & 5.45(0.02) & 5.95(0.02) & 6.40(0.03) & 6.69(0.03) & 3.23(0.02) & 3.77(0.01) \\ & 2.00 & 4.70(0.01) & 4.90(0.02) & 5.31(0.02) & 5.44(0.02) & 2.41(0.01) & 3.19(0.01) \\ \hline & 0.25 & 63.18(0.77) & 62.30(0.73) & 61.65(0.69) & 60.74(0.62) & 73.87(0.67) & 46.57(0.41) \\ & 0.50 & 16.01(0.09) & 17.28(0.09) & 18.05(0.10) & 18.56(0.10) & 16.86(0.11) & 13.10(0.08) \\ & 0.75 & 8.95(0.04) & 9.93(0.04) & 10.62(0.05) & 11.03(0.05) & 7.51(0.04) & 7.07(0.04) \\ 300 & 1.00 & 6.42(0.03) & 7.00(0.03) & 7.47(0.03) & 7.82(0.03) & 4.53(0.02) & 4.95(0.02) \\ & 1.50 & 4.70(0.01) & 4.62(0.02) & 4.87(0.02) & 5.09(0.02) & 2.58(0.01) & 3.40(0.01) \\ & 2.00 & 4.23(0.01) & 3.82(0.01) & 3.86(0.01) & 3.96(0.01) & 2.05(0.01) & 3.00(0.01) \\ \hline \hline \multicolumn{8}{|c|}{$(LN(1,0.5)-3)/1.6+\delta$}\\ \hline & &\multicolumn{4}{|c||}{Proposed} & & \\ \cline{3-6} $\tau$ & $\delta$ & $d=10$ & $d=20$ & $d=30$ & $d=40$ & Lepage & CvM \\ \hline \hline & 0.25 & 330.66(4.36) & 295.84(4.21) & 282.71(3.99) & 278.45(3.96) & 412.16(4.82) & 376.09(4.70) \\ & 0.50 & 71.74(1.91) & 54.20(1.19) & 50.64(0.99) & 51.14(0.97) & 94.96(1.74) & 109.15(2.43) \\ & 0.75 & 19.37(0.19) & 20.56(0.12) & 21.44(0.11) & 22.49(0.11) & 27.33(0.21) & 23.07(0.54) \\ 50 & 1.00 & 12.72(0.06) & 14.13(0.06) & 15.09(0.06) & 15.90(0.07) & 
15.69(0.09) & 10.66(0.08) \\ & 1.50 & 7.98(0.03) & 9.01(0.03) & 9.77(0.04) & 10.26(0.04) & 7.71(0.04) & 5.47(0.02) \\ & 2.00 & 6.07(0.02) & 6.85(0.02) & 7.39(0.03) & 7.73(0.03) & 4.35(0.02) & 4.00(0.01) \\ \hline & 0.25 & 101.39(1.37) & 84.21(0.99) & 78.17(0.80) & 75.66(0.72) & 144.45(1.45) & 122.44(1.46) \\ & 0.50 & 24.91(0.13) & 25.52(0.12) & 26.33(0.12) & 26.89(0.12) & 37.61(0.19) & 26.90(0.17) \\ & 0.75 & 14.53(0.06) & 15.73(0.06) & 16.56(0.06) & 17.22(0.06) & 19.41(0.09) & 13.17(0.07) \\ 300 & 1.00 & 10.49(0.04) & 11.60(0.04) & 12.29(0.04) & 12.83(0.04) & 11.99(0.06) & 8.28(0.03) \\ & 1.50 & 6.89(0.02) & 7.62(0.02) & 8.12(0.03) & 8.48(0.03) & 5.57(0.03) & 4.80(0.01) \\ & 2.00 & 5.22(0.02) & 5.67(0.02) & 6.00(0.02) & 6.29(0.02) & 3.31(0.01) & 3.57(0.01) \\ \hline \end{tabular} \end{table} As we can see from Table \ref{tab:loc}, the choice of $d$ affects the $ARL_1$ of the proposed CUSUM chart. In general, our CUSUM charts with larger $d$ have better $ARL_1$ than those with smaller $d$ for detecting small location shifts, and vice versa for detecting large location shifts. This can be explained as follows. On the one hand, our charting statistic with larger $d$ is usually more sensitive to location changes, since it monitors them across $d$ categories; therefore, for small location shifts, our CUSUM charts with larger $d$ are more powerful. On the other hand, our charting statistic with larger $d$ requires more observations in total to build up evidence for a location change; therefore, for large location shifts, it takes our CUSUM charts with larger $d$ a longer time to detect those changes. Considering the performance for detecting both small and large location shifts, we recommend using $d=20$ in our proposed CUSUM chart. Now we compare our proposed CUSUM chart with the two CPD charts. Between the two CPD charts, the CvM CPD chart is generally better than the Lepage CPD chart.
For small location shifts, our proposed CUSUM chart is always better than the Lepage CPD chart. Compared with the CvM CPD chart, the performance of our CUSUM chart is similar for the normal distribution, worse for the $t$ distribution, and better for the lognormal distribution. For large location shifts, the two CPD charts are generally better than our CUSUM chart. This is because the two CPD charts are based on the ranks of the observations, while our CUSUM chart is constructed through the categorization of the observations. When the process experiences large shifts, most of the observations will have large ranks, which can quickly drive the charting statistics of the two CPD charts to exceed their respective control limits. However, this ranking information is not completely preserved through data categorization; therefore, our CUSUM chart will not react as quickly as those two CPD charts to large location shifts. \subsection*{Scale changes} For scale changes, we also consider the three in-control distributions: $N(0,1)$, $t(2.5)/\sqrt{5}$ and $(LN(1,0.5)-3)/1.6$. To simulate scale changes, we multiply the observations collected after the change-point $\tau$ by a constant $\delta \in \{1.5, 2, 3, 0.5, 0.33, 0.2\}$. Again $\tau=50$ or $300$. The first three choices of $\delta$ indicate an increase in scale, while the last three choices indicate a decrease in scale. Table \ref{tab:scal} shows the $ARL_1$ of all three control charts along with their standard errors (in parentheses) from 10,000 simulations under different settings.
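The data-generating step described above is straightforward to reproduce. The following sketch (with a hypothetical helper name, shown for the $N(0,1)$ case) adds $\delta$ to the post-change observations for location changes and multiplies them by $\delta$ for scale changes:

```python
import random

def simulate_stream(n, tau, delta, change="location", seed=None):
    """Simulate n N(0,1) observations; after time tau, either add delta
    (location shift) or multiply by delta (scale change)."""
    rng = random.Random(seed)
    stream = []
    for t in range(1, n + 1):
        x = rng.gauss(0.0, 1.0)
        if t > tau:
            x = x + delta if change == "location" else x * delta
        stream.append(x)
    return stream

# e.g., a location shift of delta = 1 occurring after tau = 50
xs = simulate_stream(n=400, tau=50, delta=1.0, change="location", seed=1)
```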
\begin{table}[!htbp] \centering \caption{The simulated $ARL_1$ for our proposed self-starting adaptive CUSUM chart with different choices of $d$, the Lepage CPD chart and the CvM CPD chart for detecting scale changes.\label{tab:scal}} \begin{tabular}{|r|r||rrrr||r|r|} \hline \multicolumn{8}{|c|}{$N(0,1) \times \delta$}\\ \hline & &\multicolumn{4}{|c||}{Proposed} & & \\ \cline{3-6} $\tau$ & $\delta$ & $d=10$ & $d=20$ & $d=30$ & $d=40$ & Lepage & CvM \\ \hline \hline & 1.50 & 175.01(3.05) & 145.11(2.68) & 136.53(2.53) & 125.93(2.42) & 149.53(2.62) & 314.07(4.05) \\ & 2.00 & 31.32(0.62) & 27.83(0.53) & 25.79(0.45) & 23.07(0.41) & 26.89(0.58) & 202.42(3.21) \\ & 3.00 & 11.34(0.07) & 10.97(0.07) & 10.46(0.06) & 9.90(0.06) & 8.48(0.07) & 61.37(1.07) \\ 50 & 0.50 & 36.99(0.87) & 33.39(0.60) & 33.07(0.44) & 33.73(0.49) & 62.46(1.28) & 562.99(5.68) \\ & 0.33 & 13.74(0.06) & 15.25(0.07) & 16.25(0.07) & 16.97(0.07) & 19.93(0.10) & 192.20(3.00) \\ & 0.20 & 9.47(0.04) & 10.59(0.04) & 11.41(0.04) & 11.98(0.05) & 13.72(0.03) & 44.90(0.36) \\ \hline & 1.50 & 41.04(0.37) & 34.17(0.28) & 32.25(0.25) & 31.17(0.24) & 34.57(0.33) & 131.91(1.66) \\ & 2.00 & 16.75(0.11) & 14.62(0.09) & 13.97(0.08) & 13.57(0.08) & 13.16(0.10) & 46.85(0.40) \\ & 3.00 & 9.28(0.05) & 8.23(0.04) & 7.86(0.04) & 7.57(0.04) & 6.56(0.04) & 22.24(0.16) \\ 300 & 0.50 & 18.91(0.08) & 19.93(0.08) & 20.84(0.08) & 21.40(0.09) & 30.90(0.11) & 102.58(0.43) \\ & 0.33 & 10.71(0.04) & 11.83(0.04) & 12.55(0.04) & 13.09(0.04) & 17.63(0.04) & 42.19(0.10) \\ & 0.20 & 7.42(0.02) & 8.18(0.03) & 8.76(0.03) & 9.11(0.03) & 13.49(0.02) & 25.31(0.04) \\ \hline \hline \multicolumn{8}{|c|}{$t(2.5)/\sqrt{5} \times \delta$}\\ \hline & &\multicolumn{4}{|c||}{Proposed} & & \\ \cline{3-6} $\tau$ & $\delta$ & $d=10$ & $d=20$ & $d=30$ & $d=40$ & Lepage & CvM \\ \hline \hline & 1.50 & 266.73(3.71) & 257.87(3.66) & 250.77(3.66) & 243.65(3.60) & 252.97(3.66) & 359.59(4.34) \\ & 2.00 & 88.86(2.01) & 79.27(1.85) & 74.22(1.69) & 67.69(1.52) & 
84.17(1.80) & 260.92(3.79) \\ & 3.00 & 17.65(0.26) & 17.58(0.16) & 17.33(0.16) & 16.83(0.19) & 15.47(0.19) & 120.56(2.25) \\ 50 & 0.50 & 101.00(2.33) & 87.01(1.97) & 81.13(1.86) & 79.67(1.81) & 141.32(2.58) & 613.67(5.68) \\ & 0.33 & 19.14(0.16) & 20.19(0.14) & 21.29(0.17) & 22.03(0.16) & 28.61(0.20) & 364.21(4.59) \\ & 0.20 & 11.17(0.05) & 12.24(0.05) & 13.00(0.06) & 13.66(0.06) & 16.36(0.05) & 73.66(1.04) \\ \hline & 1.50 & 71.40(0.89) & 65.83(0.84) & 62.91(0.72) & 61.60(0.67) & 64.71(0.74) & 179.23(2.26) \\ & 2.00 & 23.52(0.17) & 23.03(0.16) & 22.96(0.15) & 22.98(0.15) & 21.63(0.18) & 68.29(0.63) \\ & 3.00 & 11.66(0.06) & 11.20(0.06) & 11.35(0.06) & 11.57(0.06) & 9.28(0.07) & 29.33(0.22) \\ 300 & 0.50 & 27.14(0.17) & 27.45(0.16) & 27.97(0.16) & 28.47(0.16) & 45.51(0.21) & 162.65(0.93) \\ & 0.33 & 13.27(0.06) & 14.17(0.06) & 14.75(0.06) & 15.34(0.06) & 22.94(0.07) & 56.25(0.18) \\ & 0.20 & 8.44(0.03) & 9.23(0.03) & 9.86(0.04) & 10.20(0.04) & 15.63(0.03) & 30.98(0.07) \\ \hline \hline \multicolumn{8}{|c|}{$(LN(1,0.5)-3)/1.6 \times \delta$}\\ \hline & &\multicolumn{4}{|c||}{Proposed} & & \\ \cline{3-6} $\tau$ & $\delta$ & $d=10$ & $d=20$ & $d=30$ & $d=40$ & Lepage & CvM \\ \hline \hline & 1.50 & 118.98(2.34) & 95.71(2.03) & 88.54(1.93) & 78.25(1.71) & 98.82(1.95) & 264.31(3.74) \\ & 2.00 & 22.43(0.35) & 20.01(0.22) & 18.81(0.24) & 17.43(0.19) & 17.45(0.23) & 127.10(2.42) \\ & 3.00 & 10.30(0.06) & 9.85(0.06) & 9.60(0.06) & 9.20(0.05) & 7.45(0.06) & 33.83(0.39) \\ 50 & 0.50 & 28.75(0.57) & 27.65(0.39) & 28.31(0.41) & 29.93(0.44) & 43.64(0.74) & 434.44(5.24) \\ & 0.33 & 13.51(0.06) & 14.92(0.07) & 15.88(0.07) & 16.59(0.07) & 18.72(0.06) & 109.94(2.07) \\ & 0.20 & 10.24(0.04) & 11.44(0.05) & 12.26(0.05) & 12.84(0.05) & 13.89(0.03) & 34.77(0.24) \\ \hline & 1.50 & 31.32(0.25) & 25.76(0.19) & 23.94(0.17) & 22.85(0.16) & 24.76(0.22) & 85.38(0.92) \\ & 2.00 & 14.30(0.09) & 12.34(0.07) & 11.70(0.07) & 11.18(0.06) & 10.67(0.08) & 34.16(0.27) \\ & 3.00 & 8.67(0.04) & 
7.51(0.04) & 7.13(0.03) & 6.88(0.03) & 5.88(0.04) & 18.08(0.13) \\ 300 & 0.50 & 17.42(0.08) & 18.54(0.08) & 19.46(0.08) & 19.99(0.08) & 27.37(0.08) & 65.02(0.25) \\ & 0.33 & 10.79(0.04) & 11.69(0.04) & 12.43(0.05) & 12.85(0.05) & 17.56(0.04) & 31.45(0.08) \\ & 0.20 & 8.68(0.03) & 9.58(0.04) & 10.11(0.04) & 10.49(0.04) & 14.35(0.02) & 20.80(0.04) \\ \hline \end{tabular} \end{table} As seen from Table \ref{tab:scal}, the performance of our proposed CUSUM chart also depends on the choice of $d$. In general, our CUSUM charts with larger $d$ have better $ARL_1$ than those with smaller $d$ for detecting scale increases, and vice versa for detecting scale decreases. Based on the performance for detecting both scale increases and decreases, we again recommend using $d=20$ in our proposed CUSUM chart. Between the two CPD charts, the Lepage CPD chart is much better than the CvM CPD chart for detecting scale changes. Compared with the Lepage CPD chart, the performance of our CUSUM chart is similar for detecting scale increases, and much better for detecting scale decreases. \subsection*{More general changes} For more general distributional changes, we follow the settings considered in Ross and Adams (2012); the eight types of distributional changes considered in their paper are listed in Table \ref{tab:change}. Again the change occurs after the change-point $\tau=50$ or $300$. Table \ref{tab:gen} shows the $ARL_1$ of all three control charts along with their standard errors (in parentheses) from 10,000 simulations under the eight different distributional changes.
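The eight change types that follow can be sampled with Python's standard library. The sketch below makes parameterization assumptions on our part: rate for the exponential, shape and scale for the gamma, and unit scale with the stated shape for the Weibull.

```python
import random

rng = random.Random(42)

# Pre-change and post-change samplers for the eight change types; the
# parameterizations (Exp rate, Gamma shape/scale, Weibull shape) are assumptions.
CHANGES = {
    1: (lambda: rng.expovariate(1.0), lambda: rng.expovariate(3.0)),
    2: (lambda: rng.expovariate(3.0), lambda: rng.expovariate(1.0)),
    3: (lambda: rng.gammavariate(2.0, 2.0), lambda: rng.gammavariate(3.0, 2.0)),
    4: (lambda: rng.gammavariate(3.0, 2.0), lambda: rng.gammavariate(2.0, 2.0)),
    5: (lambda: rng.weibullvariate(1.0, 1.0), lambda: rng.weibullvariate(1.0, 3.0)),
    6: (lambda: rng.weibullvariate(1.0, 3.0), lambda: rng.weibullvariate(1.0, 1.0)),
    7: (lambda: rng.uniform(0.0, 1.0), lambda: rng.betavariate(5.0, 5.0)),
    8: (lambda: rng.betavariate(5.0, 5.0), lambda: rng.uniform(0.0, 1.0)),
}

def stream(change_type, n, tau):
    """Sample a stream of length n whose distribution switches after time tau."""
    pre, post = CHANGES[change_type]
    return [pre() if t <= tau else post() for t in range(1, n + 1)]
```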
\begin{table}[!htbp] \centering \caption{The types of more general distributional changes considered in the simulations.\label{tab:change}} \begin{tabular}{|c|c|} \hline Change type & Distributional change \\ \hline \hline 1 & Exp(1) $\rightarrow$ Exp(3)\\ 2 & Exp(3) $\rightarrow$ Exp(1)\\ 3 & Gamma(2,2) $\rightarrow$ Gamma(3,2)\\ 4 & Gamma(3,2) $\rightarrow$ Gamma(2,2) \\ 5 & Weibull(1) $\rightarrow$ Weibull(3)\\ 6 & Weibull(3) $\rightarrow$ Weibull(1)\\ 7 & Uniform(0,1) $\rightarrow$ Beta(5,5) \\ 8 & Beta(5,5) $\rightarrow$ Uniform(0,1) \\ \hline \end{tabular} \end{table} \begin{table}[!htbp] \centering \caption{The simulated $ARL_1$ for our proposed self-starting adaptive CUSUM chart with different choices of $d$, the Lepage CPD chart and the CvM CPD chart for detecting general distributional changes.\label{tab:gen}} \begin{tabular}{|r|c||rrrr||r|r|} \hline & Change &\multicolumn{4}{|c||}{Proposed} & & \\ \cline{3-6} $\tau$ & type & $d=10$ & $d=20$ & $d=30$ & $d=40$ & Lepage & CvM \\ \hline \hline & 1 & 18.88(0.37) & 19.30(0.18) & 20.00(0.12) & 20.77(0.19) & 25.57(0.28) & 18.16(0.26) \\ & 2 & 16.19(0.19) & 15.34(0.14) & 15.20(0.11) & 15.35(0.11) & 14.67(0.21) & 14.44(0.13) \\ & 3 & 65.01(1.68) & 53.07(1.27) & 55.03(1.37) & 50.15(1.16) & 89.08(1.80) & 54.22(1.23) \\ & 4 & 61.06(1.56) & 51.51(1.25) & 46.40(1.06) & 46.65(1.18) & 82.00(1.86) & 49.80(1.16) \\ 50 & 5 & 17.38(0.09) & 18.98(0.09) & 20.14(0.09) & 21.09(0.10) & 23.26(0.11) & 182.70(3.10) \\ & 6 & 12.74(0.09) & 12.13(0.08) & 11.83(0.08) & 11.10(0.07) & 9.79(0.09) & 54.05(0.93) \\ & 7 & 18.76(0.20) & 19.87(0.11) & 21.08(0.11) & 21.82(0.11) & 29.12(0.33) & 392.65(4.79) \\ & 8 & 15.97(0.14) & 14.62(0.11) & 14.07(0.10) & 13.12(0.09) & 12.49(0.13) & 119.02(2.19) \\ \hline & 1 & 13.55(0.06) & 14.56(0.06) & 15.40(0.06) & 15.89(0.06) & 16.82(0.09) & 11.81(0.06) \\ & 2 & 11.72(0.06) & 10.88(0.06) & 10.76(0.06) & 10.61(0.05) & 8.48(0.06) & 10.82(0.07) \\ & 3 & 22.43(0.13) & 21.92(0.12) & 22.23(0.12) & 22.67(0.12) & 27.33(0.19) & 20.90(0.14) \\ & 4 &
21.58(0.14) & 20.30(0.12) & 20.64(0.12) & 20.48(0.12) & 21.72(0.17) & 20.31(0.15) \\ 300 & 5 & 14.38(0.05) & 15.71(0.06) & 16.61(0.06) & 17.17(0.06) & 20.69(0.05) & 37.74(0.13) \\ & 6 & 9.98(0.05) & 8.93(0.05) & 8.52(0.04) & 8.32(0.04) & 7.16(0.05) & 21.75(0.16) \\ & 7 & 13.61(0.05) & 14.80(0.05) & 15.60(0.06) & 16.19(0.06) & 22.08(0.06) & 60.33(0.18) \\ & 8 & 11.69(0.06) & 10.25(0.05) & 9.96(0.05) & 9.63(0.05) & 8.52(0.06) & 30.55(0.23) \\ \hline \end{tabular} \end{table} As we can see from Table \ref{tab:gen}, different choices of $d$ make only slight differences in $ARL_1$ for our proposed CUSUM chart. Our recommendation $d=20$ from the previous simulation studies also seems to work well in all the settings considered here. Between the two CPD charts, there is no clear winner: the CvM CPD chart works better for change types 1, 3, and 4, while the Lepage CPD chart works better for change types 5, 6, 7, and 8. Among all eight change types, we can see that, if our proposed CUSUM chart is not the best, it is very close to the best. In summary, based on the three simulation studies presented above for detecting different types of distributional changes, our proposed CUSUM chart has the best overall performance compared with the other two CPD charts. Coupled with its computational advantage over the two CPD charts, our proposed CUSUM chart proves to be a flexible and efficient monitoring tool. \subsection{The proposed adaptive CUSUM chart versus other possible nonparametric adaptive CUSUM charts} In Section \ref{sec:CUSUM}, before arriving at the CUSUM statistic $S_t$ in (\ref{eqn:CUSUM_loc_scal}), we also describe several other possible CUSUM statistics based on the categorized data. For example, $S^{(0)}_t$ defined in (\ref{eqn:CUSUM0}) directly uses the categorized data $\mbs{Y}^{(1)}_t$, $S^{(1)}_t$ in (\ref{eqn:CUSUM_loc}) makes use of the left-to-right ordering of the data, and $S^{(2)}_t$ in (\ref{eqn:CUSUM_scal}) incorporates the center-outward ordering of the data.
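The distinction between the two orderings can be made concrete with a toy categorization for a known in-control $U(0,1)$ distribution (a simplification of the data-driven scheme in the paper; the function names are ours). The left-to-right scheme bins observations from the left tail to the right tail, while the center-outward scheme bins them by their distance from the median; both yield $d$ equal-probability categories in control.

```python
def left_to_right_category(x, d):
    """Left-to-right category of x in (0,1): d equal-probability bins
    running from the left tail (category 1) to the right tail (category d)."""
    return min(int(x * d), d - 1) + 1

def center_outward_category(x, d):
    """Center-outward category of x in (0,1): d equal-probability bins
    running from the median (category 1) out to the two tails (category d)."""
    return min(int(abs(2 * x - 1) * d), d - 1) + 1

# A large location shift pushes left-to-right categories toward d on one side,
# while a scale increase pushes center-outward categories toward d on both sides.
```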
Similar to the approaches presented in Sections 2.2-2.4, based on the CUSUM statistics $S^{(0)}_t$, $S^{(1)}_t$ and $S^{(2)}_t$, we can also develop their self-starting adaptive CUSUM charts, whose charting statistics are denoted by $\hat{S}^{(0)}_t$, $\hat{S}^{(1)}_t$ and $\hat{S}^{(2)}_t$, respectively. In this section, we compare those control charts with our proposed self-starting adaptive CUSUM chart based on the charting statistic $\hat{S}_t$ in (\ref{eqn:adaptiveCUSUM_loc_scal}). This demonstrates the reasoning, described in Section \ref{sec:CUSUM}, behind our choice of $S_t$ as the CUSUM statistic. The simulation settings we consider in this section are the same as those in the previous section. Tables \ref{tab:loc1}-\ref{tab:gen1} summarize the $ARL_1$ of the four control charts along with their standard errors (in parentheses) from 10,000 simulations under those settings. In all four control charts, we set $d=20$. \begin{table}[!htbp] \centering \caption{The simulated $ARL_1$ for the self-starting adaptive CUSUM chart based on $\hat{S}_t^{(0)}$, $\hat{S}_t^{(1)}$, $\hat{S}_t^{(2)}$ and $\hat{S}_t$ for detecting location shifts.\label{tab:loc1}} \begin{tabular}{|r|r||r|rr|r|} \hline \multicolumn{6}{|c|}{$N(0,1) + \delta$}\\ \hline $\tau$ & $\delta$ & $\hat{S}_t^{(0)}$ & $\hat{S}_t^{(1)}$ & $\hat{S}_t^{(2)}$ & $\hat{S}_t$ \\ \hline \hline & 0.25 & 447.58(4.79) & 359.45(4.40) & 491.15(4.88) & 381.98(4.52) \\ & 0.50 & 309.99(4.30) & 126.72(2.67) & 479.39(4.84) & 158.55(2.95) \\ & 0.75 & 147.60(3.03) & 30.48(0.67) & 458.07(4.83) & 40.12(0.91) \\ 50 & 1.00 & 49.77(1.51) & 13.93(0.10) & 384.61(4.64) & 16.78(0.14) \\ & 1.50 & 10.20(0.07) & 7.61(0.03) & 149.65(3.01) & 8.89(0.04) \\ & 2.00 & 6.52(0.03) & 5.40(0.02) & 25.81(0.98) & 6.19(0.02) \\ \hline & 0.25 & 308.95(3.93) & 144.36(2.11) & 473.90(4.85) & 172.74(2.43) \\ & 0.50 & 84.56(1.36) & 32.66(0.26) & 413.94(4.66) & 37.77(0.29) \\ & 0.75 & 26.93(0.26) & 16.43(0.10) & 246.76(3.89)
& 18.91(0.11) \\ 300 & 1.00 & 14.77(0.09) & 10.85(0.05) & 69.28(1.82) & 12.43(0.06) \\ & 1.50 & 7.69(0.03) & 6.46(0.03) & 11.32(0.07) & 7.26(0.03) \\ & 2.00 & 5.11(0.02) & 4.71(0.02) & 6.36(0.03) & 5.21(0.02) \\ \hline \hline \multicolumn{6}{|c|}{$t(2.5)/\sqrt{5}+\delta$}\\ \hline $\tau$ & $\delta$ & $\hat{S}_t^{(0)}$ & $\hat{S}_t^{(1)}$ & $\hat{S}_t^{(2)}$ & $\hat{S}_t$ \\ \hline \hline & 0.25 & 344.73(4.52) & 216.33(3.61) & 483.24(4.87) & 256.42(3.92) \\ & 0.50 & 98.96(2.41) & 27.00(0.63) & 379.85(4.36) & 34.67(0.78) \\ & 0.75 & 17.16(0.45) & 11.23(0.07) & 215.96(3.51) & 13.20(0.08) \\ 50 & 1.00 & 9.16(0.11) & 7.68(0.04) & 83.47(2.09) & 8.90(0.04) \\ & 1.50 & 5.82(0.02) & 5.19(0.02) & 12.20(0.34) & 5.95(0.02) \\ & 2.00 & 4.71(0.02) & 4.29(0.02) & 6.82(0.06) & 4.90(0.02) \\ \hline & 0.25 & 128.71(2.10) & 51.20(0.57) & 448.46(4.73) & 62.30(0.73) \\ & 0.50 & 20.10(0.15) & 15.14(0.09) & 127.25(2.59) & 17.28(0.09) \\ & 0.75 & 10.30(0.05) & 8.84(0.04) & 18.35(0.22) & 9.93(0.04) \\ 300 & 1.00 & 7.04(0.03) & 6.25(0.03) & 10.05(0.05) & 7.00(0.03) \\ & 1.50 & 4.41(0.02) & 4.25(0.02) & 5.65(0.02) & 4.62(0.02) \\ & 2.00 & 3.46(0.01) & 3.64(0.01) & 4.25(0.02) & 3.82(0.01) \\ \hline \hline \multicolumn{6}{|c|}{$(LN(1,0.5)-3)/1.6+\delta$}\\ \hline $\tau$ & $\delta$ & $\hat{S}_t^{(0)}$ & $\hat{S}_t^{(1)}$ & $\hat{S}_t^{(2)}$ & $\hat{S}_t$ \\ \hline \hline & 0.25 & 442.91(4.87) & 299.41(4.25) & 448.83(4.81) & 295.84(4.21) \\ & 0.50 & 253.46(4.11) & 49.25(1.24) & 335.58(4.25) & 54.20(1.19) \\ & 0.75 & 79.66(2.13) & 17.53(0.11) & 241.27(3.34) & 20.56(0.12) \\ 50 & 1.00 & 23.01(0.72) & 12.00(0.05) & 188.16(2.48) & 14.13(0.06) \\ & 1.50 & 9.38(0.04) & 7.69(0.03) & 96.86(1.54) & 9.01(0.03) \\ & 2.00 & 6.80(0.03) & 5.88(0.02) & 27.65(0.59) & 6.85(0.02) \\ \hline & 0.25 & 268.36(3.59) & 78.77(1.04) & 336.51(3.98) & 84.21(0.99) \\ & 0.50 & 48.67(0.61) & 22.41(0.11) & 214.29(2.08) & 25.52(0.12) \\ & 0.75 & 19.36(0.11) & 13.61(0.05) & 204.82(1.49) & 15.73(0.06) \\ 300 & 1.00 & 12.49(0.05) 
& 10.08(0.04) & 165.66(1.49) & 11.60(0.04) \\ & 1.50 & 7.64(0.03) & 6.67(0.02) & 13.61(0.09) & 7.62(0.02) \\ & 2.00 & 5.56(0.02) & 5.01(0.02) & 7.70(0.03) & 5.67(0.02) \\ \hline \end{tabular} \end{table} \begin{table}[!htbp] \centering \caption{The simulated $ARL_1$ for the self-starting adaptive CUSUM chart based on $\hat{S}_t^{(0)}$, $\hat{S}_t^{(1)}$, $\hat{S}_t^{(2)}$ and $\hat{S}_t$ for detecting scale changes.\label{tab:scal1}} \begin{tabular}{|r|r||r|rr|r|} \hline \multicolumn{6}{|c|}{$N(0,1) \times \delta$}\\ \hline $\tau$ & $\delta$ & $\hat{S}_t^{(0)}$ & $\hat{S}_t^{(1)}$ & $\hat{S}_t^{(2)}$ & $\hat{S}_t$ \\ \hline \hline & 1.50 & 362.25(4.40) & 316.70(3.96) & 123.90(2.44) & 145.11(2.68) \\ & 2.00 & 214.70(3.54) & 188.35(3.11) & 23.02(0.46) & 27.83(0.53) \\ & 3.00 & 55.36(1.56) & 52.00(1.23) & 9.52(0.06) & 10.97(0.07) \\ 50 & 0.50 & 425.12(4.97) & 142.66(2.71) & 28.53(0.57) & 33.39(0.60) \\ & 0.33 & 224.94(4.03) & 39.88(0.27) & 13.04(0.06) & 15.25(0.07) \\ & 0.20 & 74.07(2.07) & 29.62(0.07) & 9.07(0.04) & 10.59(0.04) \\ \hline & 1.50 & 127.00(1.94) & 94.94(1.27) & 30.99(0.26) & 34.17(0.28) \\ & 2.00 & 30.82(0.35) & 31.25(0.21) & 13.07(0.08) & 14.62(0.09) \\ & 3.00 & 13.46(0.08) & 16.50(0.09) & 7.35(0.04) & 8.23(0.04) \\ 300 & 0.50 & 177.71(2.82) & 48.16(0.15) & 17.12(0.07) & 19.93(0.08) \\ & 0.33 & 40.81(0.56) & 31.82(0.07) & 10.17(0.04) & 11.83(0.04) \\ & 0.20 & 20.81(0.15) & 25.62(0.05) & 7.07(0.02) & 8.18(0.03) \\ \hline \hline \multicolumn{6}{|c|}{$t(2.5)/\sqrt{5} \times \delta$}\\ \hline $\tau$ & $\delta$ & $\hat{S}_t^{(0)}$ & $\hat{S}_t^{(1)}$ & $\hat{S}_t^{(2)}$ & $\hat{S}_t$ \\ \hline \hline & 1.50 & 411.34(4.48) & 377.21(4.38) & 246.63(3.72) & 257.87(3.66) \\ & 2.00 & 299.55(4.07) & 275.83(3.91) & 63.94(1.57) & 79.27(1.85) \\ & 3.00 & 145.71(2.85) & 149.81(2.91) & 15.06(0.13) & 17.58(0.16) \\ 50 & 0.50 & 473.84(5.11) & 378.01(4.87) & 69.85(1.78) & 87.01(1.97) \\ & 0.33 & 333.34(4.70) & 83.60(1.59) & 17.04(0.14) & 20.19(0.14) \\ & 0.20 &
136.31(3.05) & 35.46(0.13) & 10.51(0.05) & 12.24(0.05) \\ \hline & 1.50 & 229.07(3.04) & 180.72(2.39) & 58.62(0.72) & 65.83(0.84) \\ & 2.00 & 73.63(1.09) & 57.14(0.63) & 20.56(0.15) & 23.03(0.16) \\ & 3.00 & 20.62(0.15) & 22.85(0.14) & 10.12(0.05) & 11.20(0.06) \\ 300 & 0.50 & 302.79(4.30) & 80.13(0.57) & 23.40(0.14) & 27.45(0.16) \\ & 0.33 & 73.34(1.47) & 40.65(0.13) & 12.21(0.05) & 14.17(0.06) \\ & 0.20 & 26.23(0.22) & 29.24(0.06) & 7.97(0.03) & 9.23(0.03) \\ \hline \hline \multicolumn{6}{|c|}{$(LN(1,0.5)-3)/1.6 \times \delta$}\\ \hline $\tau$ & $\delta$ & $\hat{S}_t^{(0)}$ & $\hat{S}_t^{(1)}$ & $\hat{S}_t^{(2)}$ & $\hat{S}_t$ \\ \hline \hline & 1.50 & 303.53(4.14) & 242.17(3.54) & 92.41(2.16) & 95.71(2.03) \\ & 2.00 & 136.61(2.90) & 98.81(2.11) & 17.79(0.23) & 20.01(0.22) \\ & 3.00 & 30.82(0.97) & 30.38(0.72) & 8.70(0.05) & 9.85(0.06) \\ 50 & 0.50 & 349.78(4.53) & 76.41(1.51) & 25.62(0.48) & 27.65(0.39) \\ & 0.33 & 154.09(3.36) & 33.95(0.13) & 13.27(0.07) & 14.92(0.07) \\ & 0.20 & 52.25(1.73) & 26.63(0.08) & 10.29(0.05) & 11.44(0.05) \\ \hline & 1.50 & 63.78(0.98) & 51.25(0.48) & 23.95(0.19) & 25.76(0.19) \\ & 2.00 & 20.56(0.15) & 22.64(0.15) & 11.16(0.07) & 12.34(0.07) \\ & 3.00 & 11.31(0.06) & 14.07(0.08) & 6.80(0.03) & 7.51(0.04) \\ 300 & 0.50 & 91.68(1.68) & 37.50(0.11) & 16.19(0.08) & 18.54(0.08) \\ & 0.33 & 26.63(0.23) & 26.92(0.06) & 10.17(0.04) & 11.69(0.04) \\ & 0.20 & 15.09(0.10) & 22.35(0.05) & 8.33(0.03) & 9.58(0.04) \\ \hline \end{tabular} \end{table} \begin{table}[!htbp] \centering \caption{The simulated $ARL_1$ for the self-starting adaptive CUSUM chart based on $\hat{S}_t^{(0)}$, $\hat{S}_t^{(1)}$, $\hat{S}_t^{(2)}$ and $\hat{S}_t$ for detecting general distributional changes.\label{tab:gen1}} \begin{tabular}{|r|r||r|rr|r|} \hline $\tau$ & Change type & $\hat{S}_t^{(0)}$ & $\hat{S}_t^{(1)}$ & $\hat{S}_t^{(2)}$ & $\hat{S}_t$ \\ \hline \hline & 1 & 67.25(0.36) & 16.26(0.17) & 307.53(3.87) & 19.30(0.18) \\ & 2 & 193.37(1.14) & 13.29(0.11) & 
187.03(3.57) & 15.34(0.14) \\ & 3 & 67.25(0.36) & 40.29(1.01) & 470.85(4.89) & 53.07(1.27) \\ & 4 & 193.37(1.14) & 37.44(0.89) & 441.57(4.86) & 51.51(1.25) \\ 50 & 5 & 67.25(0.36) & 40.43(0.38) & 17.22(0.10) & 18.98(0.09) \\ & 6 & 193.37(1.14) & 54.56(1.40) & 10.66(0.07) & 12.13(0.08) \\ & 7 & 67.25(0.36) & 57.22(0.83) & 16.95(0.10) & 19.87(0.11) \\ & 8 & 193.37(1.14) & 105.21(2.25) & 12.56(0.09) & 14.62(0.11) \\ \hline & 1 & 17.81(0.10) & 12.61(0.05) & 256.40(2.22) & 14.56(0.06) \\ & 2 & 12.09(0.07) & 10.05(0.05) & 15.63(0.11) & 10.88(0.06) \\ & 3 & 34.29(0.36) & 19.04(0.11) & 385.35(4.43) & 21.92(0.12) \\ & 4 & 29.47(0.28) & 17.88(0.11) & 159.17(3.05) & 20.30(0.12) \\ 300 & 5 & 31.77(0.37) & 29.87(0.08) & 13.69(0.05) & 15.71(0.06) \\ & 6 & 14.26(0.08) & 17.24(0.10) & 8.13(0.04) & 8.93(0.05) \\ & 7 & 76.41(1.36) & 37.38(0.09) & 12.75(0.05) & 14.80(0.05) \\ & 8 & 17.94(0.12) & 21.23(0.13) & 9.24(0.05) & 10.25(0.05) \\ \hline \end{tabular} \end{table} From Tables \ref{tab:loc1}-\ref{tab:gen1}, we can see that the adaptive CUSUM chart based on $\hat{S}^{(1)}_t$ is the most efficient among the four control charts for detecting location shifts. This is due to the fact that $\hat{S}^{(1)}_t$ makes use of the left-to-right ordering of the data. Similarly, because $\hat{S}^{(2)}_t$ makes use of the center-outward ordering of the data, the adaptive CUSUM chart based on $\hat{S}^{(2)}_t$ is the most efficient for detecting scale changes. Our proposed CUSUM charting statistic $\hat{S}_t$ is simply the maximum of $\hat{S}^{(1)}_t$ and $\hat{S}^{(2)}_t$, therefore it takes advantage of the benefits of both $\hat{S}^{(1)}_t$ and $\hat{S}^{(2)}_t$ and is capable of detecting both location and scale changes in an efficient manner. In contrast, the adaptive CUSUM chart based on $\hat{S}^{(0)}_t$ performs the worst among the four control charts in most of the settings considered here. 
This can be explained by the fact that $\hat{S}^{(0)}_t$ is based on the categorized data $\mbs{Y}^{(1)}_t$ directly and fails to make use of the ordering information of the data. This simulation study shows the importance of preserving the ordering information of the data when designing nonparametric control charts through data categorization. \section{Real data application} In this section, we use a data set from Zou and Tsung (2010) to demonstrate the application of our proposed control chart. The data set consists of 200 observations collected from an aluminium electrolytic capacitor (AEC) manufacturing process, and each observation is the capacitance level of the AEC. Figure \ref{fig:data}(a) shows the time series plot of those 200 observations. As shown in Zou and Tsung (2010), the normality assumption does not hold for this data set; therefore, a nonparametric control chart is more suitable in this application. We apply our proposed self-starting adaptive CUSUM chart to this data set. As in our simulation studies, we set the $\text{ARL}_0$ to 500, choose $d=20$, and start monitoring after the first 20 observations. Figure \ref{fig:data}(b) shows the trajectory of our proposed charting statistic over time. \begin{figure}[!htpb] \begin{center} \begin{tabular}{c} \includegraphics[width=3.3in,height=3.3in]{data.eps}\\(a) \end{tabular} \begin{tabular}{c} \includegraphics[width=3.3in,height=3.3in]{St.eps}\\(b) \end{tabular} \caption{(a) The time series plot of the AEC data. (b) Our proposed control chart based on $\hat{S}_t$ for monitoring the AEC data.}\label{fig:data} \end{center} \end{figure} As seen from Figure \ref{fig:data}(b), our proposed control chart triggers an alarm at the 188th observation. In addition to detecting the change, we are also interested in identifying what kind of distributional change has triggered the alarm.
As mentioned in Section \ref{sec:postsignal}, our proposed monitoring scheme is equivalent to monitoring $\hat{S}^{(1+)}_t$, $\hat{S}^{(1-)}_t$, $\hat{S}^{(2+)}_t$, and $\hat{S}^{(2-)}_t$ separately, and raising an alarm whenever at least one of them exceeds the control limit. Figures \ref{fig:CUSUMs}(a)-(d) show the trajectories of $\hat{S}^{(1+)}_t$, $\hat{S}^{(1-)}_t$, $\hat{S}^{(2+)}_t$, and $\hat{S}^{(2-)}_t$ over time. From Figure \ref{fig:CUSUMs}, we can see that the alarm is mainly caused by $\hat{S}^{(1-)}_t$. Recall that $\hat{S}^{(1+)}_t$ is more powerful for detecting positive location shifts, $\hat{S}^{(1-)}_t$ is more powerful for detecting negative location shifts, $\hat{S}^{(2+)}_t$ is more powerful for detecting scale increases, and $\hat{S}^{(2-)}_t$ is more powerful for detecting scale decreases. We can therefore conclude that the process is experiencing a negative location shift. This is consistent with what can be observed from the time series plot of the data in Figure \ref{fig:data}(a). \begin{figure}[!htpb] \begin{center} \begin{tabular}{c} \includegraphics[width=3.3in,height=3.3in]{St1+.eps} \\(a) \end{tabular} \begin{tabular}{c} \includegraphics[width=3.3in,height=3.3in]{St1-.eps} \\(b) \end{tabular} \begin{tabular}{c} \includegraphics[width=3.3in,height=3.3in]{St2+.eps} \\(c) \end{tabular} \begin{tabular}{c} \includegraphics[width=3.3in,height=3.3in]{St2-.eps} \\(d) \end{tabular} \caption{The control chart based on (a) $\hat{S}^{(1+)}$; (b) $\hat{S}^{(1-)}$; (c) $\hat{S}^{(2+)}$; (d) $\hat{S}^{(2-)}$ for monitoring the AEC data.}\label{fig:CUSUMs} \end{center} \end{figure} \section{Concluding remarks} In this paper, we propose a nonparametric adaptive CUSUM chart for detecting arbitrary distributional changes. It is free of tuning parameters, easy to implement, and fast in computation. It does not require a large reference data set to start with, owing to its self-starting nature.
It can also automatically identify the type of distributional change once an alarm is triggered. Our simulation studies show that the overall performance of the proposed control chart is the best compared with other existing nonparametric control charts for detecting a variety of distributional changes. All these features make our proposed control chart very attractive in practice. Although our proposed control chart is designed to detect arbitrary distributional changes, based on its construction we can easily develop other efficient nonparametric control charts if only certain types of distributional changes are of interest. For example, if we are only concerned about positive location shifts, we can build our control chart based on $\hat{S}^{(1+)}_t$. Similarly, for negative location shifts, we can use $\hat{S}^{(1-)}_t$; for scale increases, we can use $\hat{S}^{(2+)}_t$; and for scale decreases, we can use $\hat{S}^{(2-)}_t$. If we are only interested in detecting location shifts (both positive and negative), we can use $\max(\hat{S}^{(1+)}_t,\hat{S}^{(1-)}_t)$. If we are only interested in detecting scale changes, we can use $\max(\hat{S}^{(2+)}_t,\hat{S}^{(2-)}_t)$. If scale decreases are not of particular interest, we can use $\max(\hat{S}^{(1+)}_t,\hat{S}^{(1-)}_t, \hat{S}^{(2+)}_t)$. Thus our proposed charting statistic also offers many possibilities for constructing other efficient nonparametric control charts that target specific types of distributional changes. We plan to further evaluate the performance of those control charts in our future studies. \section*{Appendix: Proof} \begin{proof}[\textbf{Proof of Theorem \ref{thm1}}] Based on the probability integral transformation, without loss of generality we may assume that the in-control distribution of $X_t$ is the uniform distribution on $(0,1)$. It is clear that $\hat{\mbs{Y}}^{(i)}_{t}$ follows a multinomial distribution.
Note that \begin{align*} &P\left(X_t \in (0, \hat{q}^{(1)}_{t,j}]\right)=E\left(\hat{q}^{(1)}_{t,j}\right)\\ =&\left(1-\frac{j(m+t)}{d}+l\right) E\left(X_{t,(l)}\right)+\left(\frac{j(m+t)}{d}-l\right)E\left(X_{t,(l+1)}\right) \end{align*} where $ l/(m+t) \leq j/d < (l+1)/(m+t)$. Since the in-control distribution of $X_t$ is the uniform distribution on (0,1), the order statistics $X_{t,(l)}$ and $X_{t,(l+1)}$ follow the beta distributions beta$(l,m+t-l)$ and beta$(l+1,m+t-l-1)$, respectively. Therefore, \[ P\left(X_t \in (0, \hat{q}^{(1)}_{t,j}]\right) =\left(1-\frac{j(m+t)}{d}+l\right) E\left(X_{t,(l)}\right)+\left(\frac{j(m+t)}{d}-l\right)E\left(X_{t,(l+1)}\right)=j/d. \] As a result, \[ P(\hat{Y}^{(1)}_{t,j} =1)=P\left(X_t \in (\hat{q}^{(1)}_{t,j-1}, \hat{q}^{(1)}_{t,j}]\right)=P\left(X_t \in (0, \hat{q}^{(1)}_{t,j}]\right)-P\left(X_t \in (0, \hat{q}^{(1)}_{t,j-1}]\right)=1/d. \] Similarly, we can obtain \[ P\left(X_t \in (0, \hat{q}^{(2)}_{t,j}]\right) =\left(1-\frac{j(m+t)}{2d}+l\right) E\left(X_{t,(l)}\right)+\left(\frac{j(m+t)}{2d}-l\right)E\left(X_{t,(l+1)}\right)=j/(2d), \] where $ l/(m+t) \leq j/(2d) < (l+1)/(m+t)$, and \[ P(\hat{Y}^{(2)}_{t,j} =1)=P\left(X_t \in (\hat{q}^{(2)}_{t,k-j},\hat{q}^{(2)}_{t,k-j+1}]\right)+P\left(X_t \in ( \hat{q}^{(2)}_{t,k+j-1}, \hat{q}^{(2)}_{t,k+j}]\right)=1/d. \] Therefore, both $\hat{\mbs{Y}}^{(1)}_{t}$ and $\hat{\mbs{Y}}^{(2)}_{t}$ follow Multi$(1;1/d,...,1/d)$, the same as $\mbs{Y}^{(1)}_{t}$ and $\mbs{Y}^{(2)}_{t}$. To prove that the $\hat{\mbs{Y}}^{(i)}_{t}$, $i=1,2$, are independently distributed among different $t$, we notice that the sequential rank of $X_t$, i.e., the rank of $X_t$ in the set $X_{-m+1},...,X_0, X_1,...,X_{t-1},X_t$, independently follows a uniform distribution on the integers 1,2,..., $m+t$. Define $\hat{C}_{t,1}=(0,X_{t,(1)}], \, \hat{C}_{t,2}=(X_{t,(1)}, X_{t,(2)}], ..., \hat{C}_{t,m+t}=(X_{t,(m+t-1)},1)$.
The above independence of the sequential ranks implies that the probabilities of $X_t$ falling in the intervals $\hat{C}_{t,1},..., \hat{C}_{t,m+t}$ are independent among different $t$. Since $\hat{A}^{(i)}_{t,j}$, $i=1,2$ and $j=1,...,d$, can all be constructed from $\hat{C}_{t,1},..., \hat{C}_{t,m+t}$, the probabilities of $X_t$ falling in the regions $\hat{A}^{(i)}_{t,1},..., \hat{A}^{(i)}_{t,d}$ are also independent among different $t$. This proves that $\hat{\mbs{Y}}^{(i)}_{t}$, $i=1,2$, are independently distributed among different $t$. \end{proof} \nocite{*}
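To make the combined alarm rule of the concluding remarks concrete, here is a minimal Python sketch. It is schematic only: the increments fed to the four one-sided statistics are placeholders for the actual charting-statistic updates (not specified in this paper excerpt), and the generic one-sided CUSUM recursion $S_t=\max(0, S_{t-1}+w_t)$ is an assumption of the sketch.

```python
def cusum_update(s_prev, increment):
    """One-sided CUSUM recursion S_t = max(0, S_{t-1} + w_t).
    The increment w_t stands in for the per-observation contribution
    of the corresponding charting statistic."""
    return max(0.0, s_prev + increment)


def monitor(increments, limit):
    """Run the four one-sided statistics jointly and alarm at the first
    time their maximum exceeds the control limit; equivalent to running
    the four charts separately and alarming when any of them signals.
    `increments` yields 4-tuples (w1p, w1m, w2p, w2m)."""
    s = [0.0, 0.0, 0.0, 0.0]
    for t, w in enumerate(increments, start=1):
        s = [cusum_update(si, wi) for si, wi in zip(s, w)]
        if max(s) > limit:
            return t, s  # alarm time and the statistics at the alarm
    return None, s
```

Inspecting which component of `s` exceeds the limit at the alarm time mirrors the post-signal diagnosis used for the AEC data, where the alarm was attributed to $\hat{S}^{(1-)}_t$.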
\section{Introduction} In the financial literature, models based on L\'evy (or $\alpha$-stable) distributions~\cite{Zolotarev86} play a prominent role, because such distributions possess heavy tails and thus allow extreme but realistic events, such as sudden jumps of market prices, that Gaussian models fail to describe; their relevance in financial modeling has been known since the works of Mandelbrot and Fama in the 1960s~\cite{Mandelbrot63,Fama65}. They are also closely related to fractional analysis: when the price log-returns are driven by an $\alpha$-stable distribution with maximal negative asymmetry (or skewness)~\cite{Carr03}, then, after some suitable transformations, the price of an option on this asset is the solution of a space-fractional PDE with boundary conditions \cite{KK16}. Such models recently gained in popularity because, as noticed by Walter in his epistemological work on financial models \cite{C13}, technology has changed our perception of risk. At the time when traders could only see a closing price on their screens (that is, the final price on a given trading day), the Gaussian hypothesis had a dominating influence on their minds. But when data providers became able to collect and redistribute intraday market data, it became clear that the intraday prices exhibited continuous jumps, and therefore traders and market makers started taking heavy-tail models into account. With high-frequency trading, a new revolution has begun: nowadays, financial engineers need to consider the aggregation of a large number of trades in short periods of time, alternating with non-trading periods. The clock time is no longer adapted to the real market dynamics, and the old hypothesis of ``market time'' seems to be much more suitable. Montroll and Weiss \cite{Montroll65} introduced a very simple idea: instead of considering fixed time steps, they allowed the steps to vary randomly with some statistical distribution.
This model, called the Continuous Time Random Walk (CTRW) model, is a good framework for modeling the tick-by-tick dynamics of financial assets: Gorenflo et al. \cite{Gorenflo00} explained why the CTRW is a statistically relevant candidate for modeling German and Italian bond prices (Bund and BTP) and derived the corresponding time-fractional PDE. To mix the advantages of both space and time fractionality, space-time double-fractional diffusion has been introduced and extensively studied from the theoretical point of view \cite{Gorenflo99,Zatloukal14,Luchko16,Luchko16a,Mainardi07,Mainardi10,Stynes16}; however, it has only recently been considered in financial modeling \cite{zhu14,Gong,KK16,koleva,Korbel16}. It features a more complete structure than the simple composition of the time- and space-fractional models, as it exhibits nontrivial phenomena including large jumps and memory effects, which cannot be understood as a simple market-time re-parametrization of an $\alpha$-stable process. Let us also mention that it has found many applications in real systems -- financial processes representing one of the most promising fields where fractional diffusion, and fractional calculus in general, has been successfully applied \cite{Akrami,funahashi,Jizba18,kleinert,Kerrs,Pagnini04,Tarasova17,Tarasova18}. Nevertheless, when it comes to option pricing, the old Gaussian model first described by Black and Scholes \cite{Black} is still the most widely used by market practitioners. The main reason is that this model is analytically solvable, that is, the price of an option can be easily expressed in terms of elementary functions of the market parameters. Realistic generalizations of the Black-Scholes model such as switching multifractal models~\cite{Calvet08}, stochastic volatility models~\cite{Heston93,jizba09} or jump processes~\cite{Tankov03} possess, at best, semi-closed pricing formulas or must be solved with the help of numerical simulations.
And, as for the space-time fractional diffusion model we mentioned earlier, pricing formulas take the form of Mellin-Barnes integral representations~\cite{KK16}, which are intractable for practitioners, and whose numerical estimation can be erroneous and time consuming. The purpose of this paper is to show that it is possible to transform this integral representation into a rapidly converging double series, which does not involve any advanced mathematical operators. Moreover, one can easily control the numerical precision of the resulting price. The calculation of the series is based on the multidimensional Mellin transform and residue summation in $\mathbb{C}^2$. The paper is organized as follows: Section 2 introduces basic concepts on multiple Mellin-Barnes integrals and discusses the properties of fractional diffusion and its applications to option pricing. In Section 3, we derive a closed formula for the so-called risk-neutral parameter. The main result of the paper, i.e., the series representation for a European call option driven by the space-time fractional diffusion, is presented in Section 4, with a discussion of several special cases. The final section is dedicated to conclusions. \section{Preliminary results} In this section, we briefly summarize the main results about option pricing based on fractional diffusion. The first space-fractional option pricing model was introduced by Carr and Wu \cite{Carr03}, and it has been generalized to the case of space-time fractional diffusion in Ref. \cite{KK16}. These models are strongly related to Mellin-Barnes integrals; in order to evaluate these integrals, we start by introducing some concepts in multidimensional complex analysis and residue theory. \subsection{Mellin transform and residue summation} We enumerate, without proof, concepts and properties of the Mellin transform in one and two dimensions that will be useful for deriving the main results of this paper.
Proofs and full details on the theory of the one-dimensional Mellin transform are provided in \cite{Flajolet95}. An introduction to multidimensional complex analysis can be found, e.g., in the classic textbook \cite{Griffiths78}, and applications to the specific cases of Mellin-Barnes integrals were developed in \cite{Tsikh94,Tsikh97}. \subsubsection{One-dimensional Mellin transform} Let us briefly summarize the main properties of the Mellin transform: \begin{enumerate}[label=\textbf{\arabic*}.,wide, labelwidth=!, labelindent=0pt] \item The Mellin transform of a locally continuous function $f$ defined on $\mathbb{R}^+$ is the function $f^*$ defined by \begin{equation}\label{Mellin_def} f^*(s) \, := \, \int\limits_0^\infty \, f(x) \, x^{s-1} \, \mathrm{d} x \end{equation} The region of the complex plane $\{ \alpha < Re (s) < \beta \}$ in which the integral \eqref{Mellin_def} converges is often called the fundamental strip of the transform, and is sometimes denoted $ < \alpha , \beta > $. \item The Mellin transform of the exponential function is, by definition, the Euler Gamma function: \begin{equation} \Gamma(s) \, = \, \int\limits_0^\infty \, e^{-x} \, x^{s-1} \, \mathrm{d} x \end{equation} with strip of convergence $\{ Re(s) > 0 \}$.
Outside of this strip, it can be analytically continued, except at the points $s=-n$, $n\in\mathbb{N}$, where it admits the singular behavior \begin{equation}\label{sing_Gamma} \Gamma(s) \, \underset{s\rightarrow -n}{\sim} \, \frac{(-1)^n}{n!}\frac{1}{s+n} \hspace*{1cm} n\in\mathbb{N} \end{equation} \item The inversion of the Mellin transform is performed via an integral along any vertical line in the strip of convergence: \begin{equation}\label{inversion} f(x) \, = \, \int\limits_{c-i\infty}^{c+i\infty} \, f^*(s) \, x^{-s} \, \frac{\mathrm{d} s}{2i\pi} \hspace*{1cm} c\in ( \alpha, \beta ) \end{equation} and notably for the exponential function one gets the so-called \textit{Cahen-Mellin integral}: \begin{equation}\label{Cahen} e^{-x} \, = \, \int\limits_{c-i\infty}^{c+i\infty} \, \Gamma(s) \, x^{-s} \, \frac{\mathrm{d} s}{2i\pi} \hspace*{1cm} c>0 \end{equation} \item When $f^*(s)$ is a ratio of products of Gamma functions of linear arguments: \begin{equation} f^*(s) \, = \, \frac{\Gamma(a_1 s + b_1) \dots \Gamma(a_n s + b_n)}{\Gamma(c_1 s + d_1) \dots \Gamma(c_m s + d_m)} \end{equation} then one speaks of a \textit{Mellin-Barnes integral}, whose \textit{characteristic quantity} is defined to be \begin{equation}\label{Delta_1D} \Delta \, = \, \sum\limits_{k=1}^n \, a_k \, - \, \sum\limits_{j=1}^m \, c_j \end{equation} $\Delta$ governs the behavior of $f^*(s)$ when $|s|\rightarrow \infty$ and thus the possibility of computing \eqref{inversion} by summing the residues of the analytic continuation of $f^*(s)$ to the right or to the left of the convergence strip: \begin{equation} \left\{ \begin{aligned} & \Delta < 0 \hspace*{1cm} f(x) \, = \, -\sum\limits_{Re(s_N) > \beta} \, \mathrm{Res}_{s_N} f^*(s)x^{-s} \\ & \Delta > 0 \hspace*{1cm} f(x) \, = \, \sum\limits_{Re(s_N) < \alpha} \, \mathrm{Res}_{s_N} f^*(s)x^{-s} \end{aligned} \right.
\end{equation} For instance, in the case of the Cahen-Mellin integral one has $\Delta = 1$ and therefore: \begin{equation} e^{-x} \, = \, \sum\limits_{Re(s_n)<0} \mathrm{Res}_{s_n} \Gamma(s) \, x^{-s} \, = \, \sum\limits_{n=0}^{\infty} \, \frac{(-1)^n}{n!}x^n \end{equation} as expected from the usual Taylor series of the exponential function. \end{enumerate} \subsubsection{Multidimensional Mellin transform} The Mellin transform can also be extended to the multidimensional setting. Below are the main properties of the multidimensional Mellin transform: \begin{enumerate}[label=\textbf{\arabic*}.,wide, labelwidth=!, labelindent=0pt] \item Let $\underline{a}_k$, $\underline{c}_j$ be vectors in $\mathbb{C}^2$, and $b_k$, $d_j$ some complex numbers. Let $\underline{t}:=\begin{bmatrix} t_1 \\ t_2 \end{bmatrix}$ and $\underline{c}:=\begin{bmatrix} c_1 \\ c_2 \end{bmatrix}$ in $\mathbb{C}^2$. The symbol ``.'' denotes the Euclidean scalar product. We speak of a Mellin-Barnes integral in $\mathbb{C}^2$ when one deals with an integral of the type \begin{equation} \int\limits_{\underline{c}+i\mathbb{R}^2} \, \omega \end{equation} where $\omega$ is a complex differential 2-form which reads \begin{equation} \omega \, = \, \frac{\Gamma(\underline{a}_1.\underline{t} + b_1) \dots \Gamma(\underline{a}_n.\underline{t} + b_n)}{\Gamma(\underline{c}_1.\underline{t} + d_1) \dots \Gamma(\underline{c}_m.\underline{t} + d_m)} \, x^{-t_1} \, y^{-t_2} \, \frac{\mathrm{d} t_1}{2i\pi} \wedge \frac{\mathrm{d} t_2}{2i\pi} \quad \, x,y \in\mathbb{R} \end{equation} The singular sets induced by the singularities of the Gamma functions \begin{equation} D_k \, := \, \{ \underline{t}\in\mathbb{C}^2 \, , \, \underline{a}_k.\underline{t} + b_k = -n_k \, , \, n_k \in\mathbb{N} \} \,\,\,\, \, k=1 \dots n \end{equation} are called the \textit{divisors} of $\omega$.
The \textit{characteristic vector} of $\omega$ is defined to be \begin{equation} \Delta \, = \, \sum\limits_{k=1}^n \underline{a}_k \, - \, \sum\limits_{j=1}^m \underline{c}_j \end{equation} and the \textit{admissible half-plane} is \begin{equation} \Pi_\Delta \, := \, \{ \underline{t}\in\mathbb{C}^2 \, , \, Re( \Delta . \underline{t} ) \, < \, Re( \Delta . \underline{c} )\, \} \end{equation} \item Let $\rho_k\in\mathbb{R}$, let $h_k:\mathbb{C}\rightarrow\mathbb{C}$ be linear maps, and let $\Pi_k$ be a subset of $\mathbb{C}^2$ of the type \begin{equation}\label{Pik} \Pi_k \, := \, \{ \underline{t}\in\mathbb{C}^2, \, Re(h_k(t_k)) \, < \, \rho_k \}\, . \end{equation} A \textit{cone} in $\mathbb{C}^2$ is a Cartesian product \begin{equation} \Pi \, = \, \Pi_1 \times \Pi_2 \end{equation} where $\Pi_1$ and $\Pi_2$ are of the type \eqref{Pik}. Its \textit{faces} $\varphi_k$ are \begin{equation} \varphi_k \, := \, \partial \Pi_k \hspace*{1cm} k=1,2 \end{equation} and its \textit{distinguished boundary}, or \textit{vertex}, is \begin{equation} \partial_0 \, \Pi \, := \, \varphi_1 \, \cap \, \varphi_2\, . \end{equation} \item Let $1<n_0<n$. We group the divisors $D=\cup_{k=1}^n \, D_k$ of the complex differential form $\omega$ into two sub-families \begin{equation} D_1 \, := \, \cup_{k=1}^{n_0} \, D_k, \,\,\, \,\,\, D_2 \, := \, \cup_{k=n_0+1}^{n} \, D_k, \hspace*{1cm} D \, = \, D_1\cup D_2. \end{equation} We say that a cone $\Pi\subset\mathbb{C}^2$ is \textit{compatible} with the divisor family $D$ if: \begin{enumerate} \item[-] \, its distinguished boundary is $\underline{c}$; \item[-] \, each divisor family $D_1$ and $D_2$ intersects at most one of its faces: \begin{equation} D_k \, \cap \, \varphi_k \, = \, \emptyset \hspace*{1cm} \mathrm{for} \ k=1,2.
\end{equation} \end{enumerate} \item Residue theorem for multidimensional Mellin-Barnes integrals \cite{Tsikh94,Tsikh97}: If $\Delta \neq 0$ and if $\Pi\subset\Pi_\Delta$ is a compatible cone located in the admissible half-plane, then \begin{equation}\label{res_thm_C2} \int\limits_{\underline{c}+i\mathbb{R}^2} \, \omega \, = \, \sum\limits_{\underline{t}\in\Pi\cap(D_1 \cap D_2)} \mathrm{Res}_{\underline{t}} \, \omega \end{equation} and the series converges absolutely. The residues are to be understood as the ``natural'' generalization of the Cauchy residue, that is: \begin{multline} \mathrm{Res}_0 \, \left[ f(t_1,t_2) \, \frac{\mathrm{d} t_1}{2i\pi t_1^{\alpha_1}} \wedge \frac{\mathrm{d} t_2}{2i\pi t_2^{\alpha_2}} \right] \, = \, \frac{1}{(\alpha_1-1)!(\alpha_2-1)!} \times \\ \frac{\partial ^{\alpha_1+\alpha_2-2}}{\partial t_1^{\alpha_1-1} \partial t_2^{\alpha_2-1} } f(t_1,t_2) |_{t_1=t_2=0} \end{multline} where $\alpha_1$ and $\alpha_2$ are strictly positive integers. \end{enumerate} \subsection{Space-time fractional diffusion} The space-time (double-)fractional diffusion equation is a generalization of the ordinary diffusion equation to derivatives of non-integer order. One of its most popular forms is based on the Caputo time-fractional derivative and the Riesz-Feller space-fractional derivative. It can be expressed as \begin{equation}\label{double_fractional} \left({}^\ast_0 \mathcal{D}^\gamma_t - \mu [{}^\theta \mathcal{D}^\alpha_x] \right) g(x,t) = 0\, \end{equation} where $x \in \mathds{R}$ and $t \in [0,\infty)$. The parameters take values $\alpha \in (0,2]$ and $\gamma \in (0,\alpha]$. The asymmetry parameter $\theta$ takes values in the so-called \emph{Feller-Takayasu diamond} $|\theta| \leq \min \left\{\alpha, 2-\alpha \right\}$.
${}^\ast_0 \mathcal{D}^\gamma_t$ denotes the \emph{Caputo fractional derivative}, which is defined as \begin{equation} {}^\ast_{t_0} \mathcal{D}^\nu_t f(t) = \frac{1}{\Gamma(\lceil \nu \rceil - \nu)} \int_{t_0}^t \frac{f^{\lceil \nu \rceil}(\tau)}{(t - \tau)^{ \nu +1-\lceil \nu \rceil}} \mathrm{d} \tau \end{equation} and ${}^\theta \mathcal{D}^\alpha_x$ denotes the \emph{Riesz-Feller fractional derivative}, which is usually defined via its Fourier image as \begin{equation} \mathcal{F}[{}^\theta \mathcal{D}^\nu_x f(x)](k) = - {}^\theta \psi^\nu (k)\,\mathcal{F}[f(x)](k) = - \mu |k|^\nu e^{i(\mathrm{sign}\, k) \theta \pi /2} \mathcal{F}[f(x)](k)\, . \end{equation} Naturally, both derivatives become ordinary derivative operators when the order of the derivative is a natural number. Depending on the order $\gamma$ of the temporal derivative, the equation requires one or two initial conditions. Apart from the standard initial condition \begin{equation} g(x,t=0) = f_0(x)\, , \end{equation} it is necessary for $\gamma >1$ to impose the condition \begin{equation} \frac{\partial g(x,t)}{\partial t} |_{t=0} = f_1(x)\, . \end{equation} Typically, we choose $f_1(x) \equiv 0$ (this is also used in the rest of this paper). The space-time fractional diffusion has been studied by many authors; perhaps the most famous is the paper by Gorenflo et al.~\cite{Gorenflo99}, where it is also possible to find all technical details. Here, we briefly review several important aspects of the space-time fractional diffusion. First, the scaling of the fundamental solution (also called the \emph{Green function}) $g(x,t)$ is given by the scaling exponent $\Omega$, so we obtain the scaling \begin{equation} g(x,t) = \frac{1}{t^\Omega} \, G \left(\frac{x}{t^\Omega}\right) \end{equation} where $\Omega = \gamma/\alpha$. Second, for particular values of the parameters, it is possible to recover well-known distributions. For $\gamma=1$, i.e., space-fractional diffusion, we recover L\'{e}vy diffusion driven by an $\alpha$-stable distribution.
For $\gamma=1$ and $\alpha=2$ we recover normal diffusion driven by the Gaussian distribution. In order to express the solution $g(x,t)$, it is usual to transform Eq.~\eqref{double_fractional} into its Fourier-Laplace image ($x \stackrel{\mathcal{F}}{\rightarrow} k$, $t \stackrel{\mathcal{L}}{\rightarrow} s$). Recall the initial conditions $f_1(x) = 0$ and $f_0(x) = \delta(x)$. This leads to the algebraic equation \begin{equation}\label{lapfur} \hat{\bar{g}}^\theta_{\alpha,\gamma}(k,s) s^\gamma -s^{\gamma-1} + {}^\theta \psi^\alpha(k) \hat{\bar{g}}^\theta_{\alpha,\gamma}(k,s) = 0\, . \end{equation} The original solution $g_{\alpha,\gamma}^\theta(x,t)$ can be expressed by the inverse Fourier-Laplace transform. We show two important representations of the fundamental solution of Eq.~\eqref{double_fractional}. The first representation, discussed in detail in Ref.~\cite{Zatloukal14}, is important mainly for the case $\gamma <1$ and is based on the \emph{Schwinger trick} $1/A = \int_0^\infty e^{- l A} \mathrm{d} l$. This enables us to rewrite the expression $1/(s^\gamma + {}^\theta \psi^\alpha(k))$ as $\int_0^\infty e^{-l s^\gamma} e^{- l\, {}^\theta \psi^\alpha(k)} \mathrm{d} l$, so it is possible to separate the functions depending on $s$ and on $k$. Consequently, it is possible to rewrite the distribution as an integral composition of two kernels \begin{equation}\label{smearing} g_{\alpha,\gamma}^\theta(x,t) = \int_0^\infty g_\gamma(t,l) g_\alpha^\theta(l,x)\, \mathrm{d} l \end{equation} where $g_\gamma$ and $g_{\alpha}^\theta$ are solutions of the single-fractional diffusion equations \begin{eqnarray} \frac{\partial g_\gamma(t,l)}{\partial l} &=& {}^\ast_0 \mathcal{D}^\gamma_t \, g_\gamma(t,l)\, ,\\ \frac{\partial g_\alpha^\theta(l,x)}{\partial l} &=& {}^\theta \mathcal{D}^\alpha_x \, g_\alpha^\theta(l,x)\, . \end{eqnarray} Each equation represents one class of single-fractional diffusion processes.
The first equation describes time-fractional diffusion, while the second represents space-fractional diffusion leading to $\alpha$-stable distributions. It is formally possible to use this representation also for $\gamma >1$, but in this case $g_\gamma(t,l)$ is no longer positive and therefore cannot be interpreted as a \emph{smearing kernel} (details can be found in \cite{Zatloukal14,KK16}). On the other hand, the Schwinger representation can be useful for the calculation of some derived quantities, e.g., the moments of the distribution. We demonstrate this approach in Section \ref{sec:RN}, where we use this representation in order to calculate the risk-neutral factor corresponding to the space-time fractional diffusion. Alternatively, Eq. \eqref{lapfur} can be solved by the Mellin transform technique, resulting in a Mellin-Barnes integral. The inverse Laplace transform of Eq.~\eqref{lapfur} reads~\cite{Gorenflo99}: \begin{equation} \hat{g}(t,k) = E_{\gamma}({}^\theta \psi^\alpha(k) t^\gamma)\, , \end{equation} where $E_\gamma(x) = \sum_{n=0}^\infty \frac{x^n}{\Gamma(\gamma n + 1)}$ is the Mittag-Leffler function. It is possible to represent it via the Mellin-Barnes integral as \begin{equation} E_a(z) = \frac{1}{2 \pi i } \int\limits_{c - i \infty}^{c+i \infty} \frac{\Gamma(t_1)\Gamma(1-t_1)}{\Gamma(1-a t_1)} (-z)^{-t_1} \mathrm{d} t_1\, , \end{equation} where $c \in (0,1)$, which is given by the Mellin transform theorem \cite{Flajolet95}.
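For a fixed argument, the defining series of the Mittag-Leffler function converges faster than geometrically, so it can be evaluated by direct truncation. The following Python sketch (the function name and truncation level $N$ are our own choices, not from the paper) illustrates this; the classical special cases $E_1(z)=e^z$ and $E_2(z)=\cosh\sqrt{z}$ serve as sanity checks.

```python
import math


def mittag_leffler(a, z, N=80):
    """Truncated defining series E_a(z) = sum_{n>=0} z^n / Gamma(a*n + 1).

    The Gamma(a*n + 1) denominators grow factorially in n, so for
    moderate |z| the neglected tail is far below machine precision."""
    return sum(z ** n / math.gamma(a * n + 1) for n in range(N))
```

For instance, `mittag_leffler(1, 1)` reproduces $e$ and `mittag_leffler(2, 1)` reproduces $\cosh 1$ to machine precision, consistent with the closed forms above.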
After plugging this back and a straightforward calculation, one ends up with the Mellin-Barnes representation of the space-time fractional Green function \begin{eqnarray}\label{Green_function_DF} g_{\alpha,\gamma}^\theta(x,t) = \frac{ 1 }{\alpha x}\frac{1}{2 \pi i} \int\limits_{c_1 - i \infty}^{c_1 + i \infty} \Gamma \left[ \begin{array}{ccc} \frac{t_1}{\alpha} & 1-\frac{t_1}{\alpha} & 1-t_1 \\ 1-\frac{ \gamma t_1}{\alpha} & \frac{\alpha-\theta}{2\alpha}t_1 & 1 - \frac{\alpha-\theta}{2\alpha}t_1 \\ \end{array} \right]\nonumber \\ \times \left(\frac{x}{(-\mu t^{\gamma})^{1/\alpha}}\right)^{t_1} \, \mathrm{d} t_1 \end{eqnarray} where the \emph{Gamma fraction} is defined as $\Gamma\left[\begin{array}{ccc} x_1 & \dots & x_n \\ y_1 & \dots & y_m \end{array}\right] = \frac{\Gamma(x_1)\dots\Gamma(x_n)}{\Gamma(y_1)\dots\Gamma(y_m)}$. \section{Price of European call option in the space-time fractional model} We recall the principles of option pricing and their application to double-fractional diffusion models. At the end of this section, we notably focus on the series representation of the risk-neutral factor, which generalizes the well-known Black-Scholes risk-neutral factor $\frac{\sigma^2}{2}$ to our wider class of models. \subsection{Option pricing} Option pricing involves two important aspects: first, a realistic model of the underlying price, and second, an appropriate hedging policy which maximally eliminates the risk. Optimally, the risk should be completely eliminated. Based on the assumption of an \emph{efficient market}, the most popular hedging policy is \emph{risk-neutral} pricing. In this scenario, the price of a European-type option, i.e., an option with a given maturity time, is calculated as \begin{equation}\label{expect} C(S_t,K,r,\sigma,t) = e^{-r \tau}\langle C(S_T,K,r,\sigma,T) | \mathcal{F}_t \rangle_\mathbb{Q} \end{equation} where $\tau=T-t$.
$\mathbb{Q}$ is the so-called \emph{risk-neutral measure}, equivalent to the original probability measure $\mathbb{P}$ describing the price evolution. For exponential processes described by their log-returns, the risk-neutral measure is given by the \emph{Esscher transform} \cite{Gerber93}. The terminal condition at $t=T$ (or equivalently, the initial condition for $\tau=0$) is given by the option's payoff, which, for a call option, is equal to \begin{equation} C(S_T,K,r,\sigma,T) \, = \, \max \{ S_T - K , 0 \} \, =: \, [S_T-K]^+\, . \end{equation} For the space-time fractional model, the call option price \eqref{expect} can be expressed as the convolution of the Green function \eqref{Green_function_DF} and the payoff (after some suitable change of variables) \cite{KK16}: \begin{equation}\label{propagator} C_{\alpha,\gamma}^\theta (S,K,r,\mu,\tau) = e^{-r \tau} \int\limits_{-\infty}^\infty \ \left[S e^{\tau(r-q+\mu)+y} - K\right]^+ g_{\alpha,\gamma}^\theta(y,\tau) \mathrm{d} y\, . \end{equation} Note that in our future calculations we will take, without loss of generality, a dividend $q=0$ in order to simplify the notations. The factor $\mu$ appearing in the ``modified payoff'' is a result of the risk-neutral measure $\mathbb{Q}$, which is obtained by the Esscher transform of the original measure $\mathbb{P}$. Details can be found in~\cite{KK16}. It is possible to calculate this so-called \emph{risk-neutral factor} $\mu$ as \begin{equation} \mu = - \log \int e^y g_{\alpha,\gamma}^\theta(y,\tau=1) \mathrm{d} y\, \end{equation} when the integral exists. Obviously, a necessary condition for the convergence of the integral is exponential decay in the positive tail of the probability distribution. This can be ensured as soon as the \textit{maximal (negative) asymmetry condition} holds, that is, when $\theta = \alpha-2$, $\alpha>1$. This fully asymmetric case was discussed for space-fractional diffusion in Ref.~\cite{Carr03}, and for space-time fractional diffusion in Refs.~\cite{KK16,Korbel16}.
Let us briefly discuss the interpretation of the main model parameters in option pricing, i.e., the derivative orders $\alpha$ and $\gamma$ and the scale parameter $\sigma$. First, $\sigma$ plays the role of a scale parameter and can be interpreted as the market risk. This means that if $\sigma$ increases, uncertainty in the market increases and \emph{all} option prices also increase, and vice versa. On the other hand, this is not the case for the parameters $\alpha$ and $\gamma$. As discussed in \cite{KK16,Korbel16}, they play the role of \emph{risk redistribution} parameters. The parameter $\alpha$ characterizes spatial risk redistribution, because with decreasing $\alpha$ the negative tail of the distribution becomes ``heavier'' (the decay is slower) and the probability of large drops increases dramatically. This has been extensively discussed in \cite{Carr03}. Similarly, the parameter $\gamma$ characterizes ``temporal'' risk redistribution, caused, e.g., by some memory effects \cite{Tarasova18,Teyssiere}. Thus, it increases the risk for short/long-term options, while decreasing the risk for the other type. The presence of space-time risk redistribution has an impact on various observable phenomena, e.g., on the shape of the volatility smile \cite{AK18}. \subsection{Risk-neutral factor for the space-time fractional diffusion}\label{sec:RN} It is unfortunately not always possible to express the risk-neutral factor $\mu$ analytically. Nevertheless, in the space-fractional case (that is, for $\gamma=1$), it is known~\cite{Carr03} that \begin{equation}\label{mu_Carr_Wu} \mu_1 = \left( \frac{\sigma}{\sqrt{2}}\right) ^\alpha \sec{\frac{\pi \alpha}{2}} \end{equation} when the maximal negative asymmetry assumption is fulfilled.
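Formula \eqref{mu_Carr_Wu} is elementary to evaluate numerically; the short Python sketch below (the function name is ours) also makes the signs explicit: for $1<\alpha<2$ the secant factor is negative, so $\mu_1<0$, and in the limit $\alpha=2$ one has $\sec(\pi)=-1$ and hence $\mu_1=-\sigma^2/2$.

```python
import math


def mu_1(sigma, alpha):
    """Risk-neutral factor of the space-fractional (gamma = 1) model under
    maximal negative asymmetry: mu_1 = (sigma/sqrt(2))**alpha * sec(pi*alpha/2),
    written with sec(x) = 1/cos(x)."""
    return (sigma / math.sqrt(2)) ** alpha / math.cos(math.pi * alpha / 2)
```

For example, with $\sigma=0.25$ and $\alpha=2$ this returns $-\sigma^2/2 = -0.03125$, the Gaussian value, while any $\alpha$ strictly between 1 and 2 yields a negative value of larger relative magnitude in the tail-sensitive regime.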
For the space-time fractional case, it is possible to derive an integral representation based on Eq.~\eqref{smearing} and to rewrite $\mu$ as \begin{eqnarray}\label{muDF1} \mu &=& - \log \int\limits_{-\infty}^\infty \, \mathrm{d} y \, e^y \int\limits_0^\infty \, \mathrm{d} l \, g_\gamma(\tau=1,l) g_\alpha^\theta(l,y) \nonumber \\ &=& - \log \int\limits_0^\infty \, \mathrm{d} l \, g_\gamma(\tau = 1,l) e^{-\left(\frac{\sigma}{\sqrt{2}} l \right)^\alpha \sec \left(\frac{\pi \alpha}{2}\right) } \, . \end{eqnarray} The last expression was obtained by exchanging the order of integration. For the Caputo time-fractional derivative, it can be shown \cite{KK16,Gorenflo99} that for $\gamma < 1$ \begin{equation} g_\gamma(\tau,l) \, = \, \frac{1}{\tau^\gamma} \, M_\gamma\left( \frac{l}{\tau^\gamma} \right) \end{equation} where $M_\nu(z)$ is a function of Wright type, which admits the following Mellin-Barnes representation \cite{Mainardi10}: \begin{equation} M_\nu(z) \, = \, \int\limits_{c-i\infty}^{c+i\infty} \, \frac{\Gamma(s)}{\Gamma(\nu s + 1 - \nu) } \, z^{-s} \, \frac{\mathrm{d} s}{2i\pi} \hspace*{1cm} c>0\, . \end{equation} Interestingly, the Mellin-Barnes representation is also valid for the case $\gamma >1$, but it does not lead to a positive smearing kernel. Plugging this into \eqref{muDF1} and recalling \eqref{mu_Carr_Wu}, we obtain: \begin{equation}\label{muDF2} \mu \, = \, - \log \int\limits_{c-i\infty}^{c+i\infty} \, \frac{\Gamma(s)}{\Gamma(\gamma s + 1 - \gamma) } \, \int\limits_0^\infty l^{-s} e^{-l^\alpha \, \mu_1} \, \mathrm{d} l \, \frac{\mathrm{d} s}{2i\pi}\, . \end{equation} The integral over $l$ is straightforward to perform, because it is the integral representation of the Gamma function \cite{Abramowitz72}: \begin{equation}\label{mugamma} \int\limits_0^\infty l^{-s} \, e^{-l^\alpha \mu_1} \, \mathrm{d} l = \, \frac{1}{\alpha}\Gamma\left(\frac{1-s}{\alpha}\right) \, \mu_1^{\frac{s-1}{\alpha}}\, . \end{equation} The integral converges in the strip $Re(s)<1$.
As a result, we can formulate a proposition, which is a simple consequence of \eqref{muDF2} and \eqref{mugamma}: \begin{proposition} Let $\sigma>0$, $1<\alpha \leq 2$, and $\mu_1=\left( \frac{\sigma}{\sqrt{2}}\right) ^\alpha \sec{\frac{\pi \alpha}{2}}$, then the risk-neutral factor $\mu$ admits the representation: \begin{equation}\label{mu_MB} \mu \, = \, - \log \left[ \frac{1}{\alpha} \, \int\limits_{c-i\infty}^{c+i\infty} \, \frac{\Gamma(s)\Gamma(\frac{1-s}{\alpha})}{\Gamma(\gamma s + 1 - \gamma) } \, \mu_1^{\frac{s-1}{\alpha}} \, \frac{\mathrm{d} s}{2i\pi} \right] \hspace*{1cm} 0 < c < 1\, . \end{equation} \end{proposition} \noindent Let us now express \eqref{mu_MB} as a series representation, which can be more convenient for numerical applications: \begin{proposition} Let $\sigma>0$ and $1<\alpha \leq 2$, and $\mu_1=\left( \frac{\sigma}{\sqrt{2}}\right) ^\alpha \sec{\frac{\pi \alpha}{2}}$, then for any $\gamma > 1 - \frac{1}{\alpha}$ the risk-neutral factor $\mu$ can be expressed in the form of the absolutely convergent series: \begin{equation}\label{mu_series} \mu \, = \, - \log \, \sum\limits_{n=0}^{\infty} \, \frac{(-1)^n \Gamma(1+\alpha n)}{n!\Gamma(1+\gamma\alpha n)} \mu_1^n\, . \end{equation} \end{proposition} \begin{proof} The characteristic quantity $\Delta$ (see \eqref{Delta_1D}) associated with the Mellin-Barnes integral \eqref{mu_MB}, which governs its decay at infinity, is equal to \begin{equation} \Delta \, = \, 1 - \frac{1}{\alpha} - \gamma \, < \, 0 \end{equation} and therefore the line-integral in \eqref{mu_MB} is equal to minus the sum of the residues located in the right half plane $\{Re ( s) > 1 \}$. \begin{figure}[t] \centering \includegraphics[scale=0.5]{Fig1.pdf} \caption{Singularities for the Mellin-Barnes integral \eqref{mu_MB}.
The poles located left of the convergence strip at $s=0,-1,-2,\dots$ are induced by the $\Gamma(s)$ term; those located right of the convergence strip at $s=1,1+\alpha,1+2\alpha,\dots$ are induced by the $\Gamma(\frac{1-s}{\alpha})$ term.} \label{fig1} \end{figure} They are induced by the poles of the $\Gamma(\frac{1-s}{\alpha})$ function, which arise at every negative argument, that is, when $s=1+\alpha n, \, n \in\mathbb{N}$. Around these points, the $\Gamma(\frac{1-s}{\alpha})$ function admits the singular behavior (see \eqref{sing_Gamma}): \begin{equation} \Gamma\left(\frac{1-s}{\alpha}\right) \, \underset{s \rightarrow 1+\alpha n}{\sim} \, \frac{(-1)^n}{n!}\frac{\alpha}{-s+1+\alpha n} \end{equation} and therefore the residues associated with the Mellin-Barnes integral in \eqref{mu_MB} are: \begin{equation} \mathrm{Res} (s = 1+\alpha n) \, = \, -\frac{(-1)^n\Gamma(1+\alpha n)}{n!\Gamma(\gamma(1+\alpha n) + 1 - \gamma)} \, \mu_1^n\, . \end{equation} Simplifying, taking minus the sum of these residues, and applying the overall logarithm yields formula \eqref{mu_series}. \end{proof} \noindent We may note that, when taking $\gamma = 1$ in formula \eqref{mu_series}, we are left with \begin{equation} \mu \, = \, - \log \, \sum\limits_{n=0}^{\infty} \frac{(-1)^n\mu_1^n}{n!} \, = \, - \log \, e^{-\mu_1} \, = \, \mu_1 \end{equation} as expected. Moreover, it follows from the classical Taylor expansion of $\log(1+x)$ that: \begin{equation} \mu \, = \, \frac{\Gamma(1+\alpha)}{\Gamma(1+\gamma\alpha)}\mu_1 \, + \, O\left(\mu_1^2\right) \end{equation} and, in the case $\alpha=2$, we have: \begin{equation}\label{mu_series_BS} \mu \, = \, -\frac{\sigma^2}{\Gamma(1+2\gamma)} \, + \, O\left(\sigma^4\right) \end{equation} which reduces to the Gaussian parameter $-\frac{\sigma^2}{2}$ when $\gamma = 1$. \section{Series representation of the European call option} Let us now turn our attention to the series representation of Eq. \eqref{propagator}. This will be done in two steps.
First, we use the Mellin-Barnes representation of the Green function of the space-time fractional diffusion equation. Second, we use the residue summation formula to obtain the double-series representation. Let us assume that $S,K,r,\tau \, > \, 0$. Let $1< \alpha \leq 2$ and $0<\gamma \leq \alpha$. We will denote the vectors $\underline{Z}\in\mathbb{C}^2$ by: \begin{equation} \underline{Z} \, := \begin{bmatrix} Z_1 \\ Z_2 \end{bmatrix} \hspace*{1cm} Z_1,\,Z_2 \in \mathbb{C} \end{equation} We will assume that the Carr-Wu maximal negative asymmetry hypothesis $\theta = \alpha - 2$ holds, and we will denote the call price by $C_{\alpha,\gamma}^{\alpha - 2}(S,K,r,\mu,\tau):=C_{\alpha,\gamma}(S,K,r,\mu,\tau)$. \subsection{Mellin-Barnes representation for the call price} We first derive a representation for the call price \eqref{propagator} in the form of a complex integral in $\mathbb{C}^2$. \begin{proposition} Let $[\log]:=\log\frac{S}{K} + r\tau$ and let $P\subset\mathbb{C}^2$ be the polyhedron \begin{equation} P \, := \, \{ \underline{t}\in\mathbb{C}^2 \, , \, 0 < Re(t_2) < 1, \, Re(t_2-t_1)>1 \} \end{equation} Then, for any $\underline{c}\in P$, the following holds: \begin{eqnarray}\label{Call_C2} C_{\alpha,\gamma} (S,K,r,\mu,\tau) =&& \nonumber\\ \frac{K e^{-r \tau}}{\alpha} \int\limits_{c_1-i\infty}^{c_1+i\infty} \int\limits_{c_2-i\infty}^{c_2+i\infty} \, (-1)^{-t_2} \frac{\Gamma(t_2)\Gamma(1-t_2)\Gamma(-1-t_1+t_2)}{\Gamma(1-\frac{\gamma}{\alpha}t_1)} \,&& \nonumber\\ \times (-[\log]-\mu\tau)^{1+t_1-t_2}(-\mu\tau^{\gamma})^{-\frac{t_1}{\alpha}}\frac{\mathrm{d} t_1}{2i\pi}\wedge\frac{\mathrm{d} t_2}{2i\pi}\, .&& \end{eqnarray} \end{proposition} \begin{proof} With the maximal asymmetry hypothesis, the Green function $g_{\alpha,\gamma}^{\alpha - 2} (y,\tau) \, := \, g_{\alpha,\gamma}(y,\tau)$ simplifies into (see eq.
\eqref{Green_function_DF}): \begin{equation} g_{\alpha,\gamma}(y,\tau) \, = \, \frac{1}{\alpha y} \int\limits_{c_1-i\infty}^{c_1+i\infty} \, \frac{\Gamma(1-t_1)}{\Gamma(1-\frac{\gamma}{\alpha}t_1)} \, \left( \frac{y}{(-\mu\tau^{\gamma})^{\frac{1}{\alpha}} } \right)^{t_1} \, \frac{\mathrm{d} t_1}{2i\pi} \, \end{equation} for $0 < c_1 < 1$. Inserting this into formula \eqref{propagator} yields: \begin{multline}\label{propagator_2} C_{\alpha,\gamma} (S,K,r,\mu,\tau) = \frac{K e^{-r \tau}}{\alpha} \, \int\limits_{c_1-i\infty}^{c_1+i\infty} \, \frac{\Gamma(1-t_1)}{\Gamma(1-\frac{\gamma}{\alpha}t_1)} \, \\ \times \int\limits_{-[\log]-\mu\tau}^{\infty} (e^{[\log]+\mu\tau + y}-1)y^{t_1-1} \, \mathrm{d} y (-\mu\tau^{\gamma})^{-\frac{t_1}{\alpha}}\frac{\mathrm{d} t_1}{2i\pi}\, . \end{multline} Here we have used the fact that $[Se^{(r+\mu)\tau +y}-K]^+ = K[e^{[\log]+\mu\tau+y}-1]^+ $. It is possible to integrate by parts in \eqref{propagator_2}, with the result: \begin{multline}\label{propagator_3} C_{\alpha,\gamma} (S,K,r,\mu,\tau) = -\frac{K e^{-r \tau}}{\alpha} \, \int\limits_{c_1-i\infty}^{c_1+i\infty} \, \frac{\Gamma(1-t_1)}{\Gamma(1-\frac{\gamma}{\alpha}t_1)} \, \frac{1}{t_1} \, \\ \times \int\limits_{-[\log]-\mu\tau}^{\infty} e^{[\log]+\mu\tau + y} \, y^{t_1} \, \mathrm{d} y (-\mu\tau^{\gamma})^{-\frac{t_1}{\alpha}}\frac{\mathrm{d} t_1}{2i\pi}\, . \end{multline} Let us introduce the following Mellin-Barnes representation of the exponential term (see Eq. \eqref{Cahen}): \begin{equation} e^{[\log]+\mu\tau + y} \, = \, \int\limits_{c_2-i\infty}^{c_2+i\infty} \, (-1)^{-t_2} \Gamma(t_2) ([\log] + \mu\tau + y )^{-t_2} \, \frac{dt_2}{2i\pi}\, , \end{equation} for $c_2 > 0$. 
Plugging into \eqref{propagator_3} transforms the integral over the Green variable $y$ into a \textit{Beta integral} \cite{Abramowitz72}, with the result: \begin{eqnarray} \int\limits_{-[\log]-\mu\tau}^{\infty} \, ([\log] + \mu\tau + y )^{-t_2} y^{t_1} \, \mathrm{d} y \, \nonumber\\ = \, (-[\log]-\mu\tau)^{1+t_1-t_2} \frac{\Gamma(1-t_2)\Gamma(-1-t_1+t_2)}{\Gamma(-t_1)}\, . \end{eqnarray} Replacing in \eqref{propagator_3}, using the functional relation $-t_1\Gamma(-t_1) = \Gamma(1-t_1)$ and simplifying the fraction, we are left with the double integral in $\mathbb{C}^2$ shown in Eq. \eqref{Call_C2}. The integral converges when the arguments of the Gamma functions in the numerator have positive real parts, that is, whenever $Re(t_2)>0$, $Re(t_2)<1$ and $Re(t_2-t_1)>1$, in agreement with the definition of $P$. \end{proof} \noindent The integral formula \eqref{Call_C2} can be expressed as a sum of residues in some region of $\mathbb{C}^2$, as shown in the next section. \subsection{Residue summation} \begin{theorem} Let $1 < \alpha \leq 2$ and $1-\frac{1}{\alpha} < \gamma \leq \alpha$. Under the maximal negative asymmetry hypothesis (i.e., $\theta=\alpha-2$), the European call price driven by space-time fractional diffusion is: \begin{equation}\label{Formula} C_{\alpha,\gamma}(S,K,r,\mu,\tau) \, = \, \frac{Ke^{-r\tau}}{\alpha} \, \sum\limits_{\substack{n = 0 \\ m = 1}}^{\infty} \, \frac{(-1)^n}{n!\Gamma(1-\gamma\frac{n-m}{\alpha})} (-[\log]-\mu\tau)^{n}(-\mu\tau^{\gamma})^{\frac{m-n}{\alpha}}\, .
\end{equation} \end{theorem} \begin{proof} Let $\omega$ be the complex differential 2-form \begin{multline}\label{Call_C2_2} \omega \, : = \, (-1)^{-t_2} \frac{\Gamma(t_2)\Gamma(1-t_2)\Gamma(-1-t_1+t_2)}{\Gamma(1-\frac{\gamma}{\alpha}t_1)}\, \\ \times (-[\log]-\mu\tau)^{1+t_1-t_2}(-\mu\tau^{\gamma})^{-\frac{t_1}{\alpha}}\frac{\mathrm{d} t_1}{2i\pi}\wedge\frac{\mathrm{d} t_2}{2i\pi} \end{multline} so that the call price \eqref{Call_C2} can be written in the compact form \begin{equation} C_{\alpha,\gamma}(S,K,r,\mu,\tau) \, = \, \frac{Ke^{-r\tau}}{\alpha} \int\limits_{\underline{c}+i\mathbb{R}^2} \, \omega\, . \end{equation} The characteristic vector associated with $\omega$ is (see \cite{Tsikh94,Tsikh97}): \begin{equation} \Delta \, = \, \begin{bmatrix} -1 + \frac{\gamma}{\alpha} \\ 1 \end{bmatrix}\, . \end{equation} Therefore, the half-plane of convergence one must consider, \begin{equation} \Pi_\Delta \, := \, \{ \underline{t} \in \mathbb{C}^2, \, Re( \Delta \, . \, \underline{t}) \, < \, Re( \Delta \, . \, \underline{c} \, ) \}\, , \end{equation} is the one located under the line (see Fig. \ref{fig2}): \begin{equation} Re(t_2) \, = \, (1-\frac{\gamma}{\alpha}) (Re(t_1) \, - \, c_1) + c_2\, . \end{equation} \begin{figure}[t] \centering \includegraphics[scale=0.4]{Fig2.pdf} \caption{The admissible region $\Pi_{\Delta}$, for the complex 2-form $\omega$, is the one located under the dotted oblique line. There is only one compatible cone in this region: the green cone, which is compatible with the two families of divisors $D_1$ (oblique lines) and $D_2$ (horizontal lines).
The sum of the residues at every $\mathbb{C}^2$-singularity (points) in this cone is therefore equal to the integral of $\omega$.} \label{fig2} \end{figure} \noindent Because $\gamma\leq\alpha$, we have $0 < 1-\frac{\gamma}{\alpha} \leq 1$ and therefore the cone $\Pi$ defined by \begin{equation} \Pi \, := \, \{\underline{t} \in \mathbb{C}^2 \, , \, Re(t_2) < 0 \, , \, Re(-t_1+t_2) < 1 \} \end{equation} is included in $\Pi_\Delta$. Moreover, it is compatible with the two families of divisors \begin{equation} \left\{ \begin{aligned} & D_1 \, = \, \left\{\underline{t}\in\mathbb{C}^2, -1 - t_1 + t_2 = -n_1 \,\, , \,\,\, n_1 \in\mathbb{N} \right\} \\ & D_2 \, = \, \left\{\underline{t}\in\mathbb{C}^2, t_2 = -n_2 \,\, , \,\,\, n_2 \in\mathbb{N} \right\} \end{aligned} \right. \end{equation} induced by the $\Gamma(-1-t_1+t_2)$ and $\Gamma(t_2)$ functions, respectively. To compute the residues associated with every element of the singular set $D := D_1 \cap D_2$, we change variables: \begin{equation} \left\{ \begin{aligned} & u_1 \, := \, -1 -t_1 + t_2 \\ & u_2 \, := \, t_2 \end{aligned} \right. \longrightarrow \left\{ \begin{aligned} & t_1 \, = \, -1+u_2-u_1 \\ & t_2 \, = \, u_2 \end{aligned} \right. \end{equation} so that in this new configuration $\omega$ reads \begin{multline} \omega \, = \, (-1)^{-u_2} \, \\ \frac{\Gamma(u_2)\Gamma(1-u_2)\Gamma(u_1)}{\Gamma(1-\gamma \frac{-1+u_2-u_1}{\alpha})} \left(-[\log]-\mu\tau \right)^{-u_1} (-\mu\tau^\gamma)^{\frac{1+u_1-u_2}{\alpha}} \, \frac{\mathrm{d} u_1}{2i\pi} \wedge \frac{\mathrm{d} u_2}{2i\pi}\, . \end{multline} In these new variables, the divisors $D_1$ and $D_2$ are induced by the $\Gamma(u_1)$ and $\Gamma(u_2)$ functions, and intersect at every point of the type $(u_1,u_2)=(-n,-m)$, $n,m\in\mathbb{N}$.
From the singular behavior of the Gamma function \eqref{sing_Gamma} around a singularity, we can write: \begin{multline} \omega \, \underset{(u_1,u_2)\rightarrow (-n,-m)} {\sim} \, \frac{(-1)^{n+m}}{n!m!} (-1)^{-u_2} \, \frac{\Gamma(1-u_2)}{\Gamma(1-\gamma \frac{-1+ u_2-u_1}{\alpha})} \\ \times \left(-[\log]-\mu\tau \right)^{-u_1} (-\mu\tau^\gamma)^{\frac{1+u_1-u_2}{\alpha}} \, \frac{\mathrm{d} u_1}{2i\pi (u_1+n)} \wedge \frac{\mathrm{d} u_2}{2i\pi (u_2 + m)}\, . \end{multline} Taking the residues and simplifying yields: \begin{equation} \mathrm{Res}_{(-n,-m)} \, \omega \, = \, \frac{(-1)^n}{n!\Gamma(1-\gamma\frac{n-m-1}{\alpha})} \left(-[\log]-\mu\tau \right)^{n} (-\mu\tau^\gamma)^{\frac{1+m-n}{\alpha}}\, . \end{equation} From the residue theorem for Mellin-Barnes integrals in $\mathbb{C}^2$ (see equation \eqref{res_thm_C2}), we know that the integral \eqref{Call_C2} is equal to the sum of the residues in the whole cone $\Pi$: \begin{multline} C_{\alpha,\gamma}(S,K,r,\mu,\tau) \, = \, \\ \frac{Ke^{-r\tau}}{\alpha} \, \sum\limits_{\substack{n = 0 \\ m = 0}}^{\infty} \, \frac{(-1)^n}{n!\Gamma(1-\gamma\frac{n-m-1}{\alpha})} (-[\log]-\mu\tau)^{n} (-\mu\tau^{\gamma})^{\frac{1+m-n}{\alpha}}\, . \end{multline} Performing the index substitution $m\rightarrow m+1$ yields the representation \eqref{Formula} and completes the proof.
\end{proof} \subsection{Special cases} Let us discuss several special parameter choices corresponding to well-known diffusion models: \begin{itemize} \item \underline{Space-fractional diffusion:} When $\gamma=1$, the series \eqref{Formula} simplifies and corresponds to the call price under the so-called Finite Moment L\'evy Stable model \cite{Carr03} \begin{equation} C_{\alpha,1} \, = \, \frac{Ke^{-r\tau}}{\alpha} \, \sum\limits_{\substack{n = 0 \\ m = 1}}^{\infty} \, \frac{(-1)^n}{n!\Gamma(1-\frac{n-m}{\alpha})} (-[\log]-\mu\tau)^{n}(-\mu\tau)^{\frac{m-n}{\alpha}} \end{equation} where $\mu = \mu_1 \, = \, (\sigma/\sqrt{2})^\alpha \sec\frac{\pi\alpha}{2}$. When, additionally, $\alpha=2$, we get the series expansion for the Black-Scholes (BS) price \begin{equation} C_{2,1} \, = \, \frac{Ke^{-r\tau}}{2} \, \sum\limits_{\substack{n = 0 \\ m = 1}}^{\infty} \, \frac{(-1)^n}{n!\Gamma(1-\frac{n-m}{2})} \left(-[\log]+\frac{\sigma^2}{2}\tau \right)^{n} \left(\frac{\sigma^2}{2}\tau \right)^{\frac{m-n}{2}} \end{equation} \item \underline{Neutral diffusion:} For $\alpha=\gamma$, the diffusion equation becomes a generalization of the wave equation \cite{luchko13} (obtained for $\alpha=2$). In this case, the ratio $\gamma/\alpha=1$, and therefore the formula can be expressed as \begin{equation} C_{\alpha,\alpha} = \frac{Ke^{-r\tau}}{\alpha} \sum\limits_{\substack{n = 0 \\ m = 1 \\ m\geq n}}^{\infty} \, \frac{(-1)^n}{n! (m-n)!} (-[\log]-\mu\tau)^{n}((-\mu)^{1/\alpha} \tau)^{m-n} \end{equation} Let us note that neutral diffusion is not typical of financial processes and is mainly of theoretical interest, as it represents the borderline case of fractional diffusion.
\item \underline{Time-fractional diffusion:} Taking $\alpha=2$ in \eqref{Formula} yields the series expansion for the time-fractional BS price: \begin{equation}\label{BS_frac} C_{2,\gamma} \, = \, \frac{Ke^{-r\tau}}{2} \, \sum\limits_{\substack{n = 0 \\ m = 1}}^{\infty} \, \frac{(-1)^n}{n!\Gamma(1-\gamma\frac{n-m}{2})} (-[\log]-\mu\tau)^{n}(-\mu\tau^{\gamma})^{\frac{m-n}{2}} \end{equation} \item \underline{At-the-money forward approximation of time-fractional diffusion:} Assuming that the asset is at-the-money forward, that is: \begin{equation} S \, = \, Ke^{-r\tau} \end{equation} then, by definition, $[\log]=0$ and the fractional BS price \eqref{BS_frac} becomes: \begin{equation}\label{BS_frac_ATMF} C_{2,\gamma}^{(ATMF)} \, = \, \frac{S}{2} \, \sum\limits_{\substack{n = 0 \\ m = 1}}^{\infty} \, \frac{(-1)^n}{n!\Gamma\left(1-\gamma\frac{n-m}{2}\right)} (-\mu)^{\frac{n+m}{2}}\tau^{\frac{(2-\gamma)n+\gamma m}{2}} \end{equation} As $\gamma<2$, the series \eqref{BS_frac_ATMF} is a power series (which is not the case for the general series \eqref{BS_frac}, where negative powers arise); it starts at $n=0$, $m=1$ and goes as: \begin{equation} C_{2,\gamma}^{(ATMF)}\, = \, \frac{S}{2} \, \left[ \frac{\sigma}{\Gamma(1+\frac{\gamma}{2})} \, \sqrt{\frac{\tau^\gamma}{\Gamma(1+2\gamma)}} \, + \, \mathcal{O}(\sigma^2) \right] \end{equation} where we have used the approximation~\eqref{mu_series_BS} for the risk-neutral parameter. Taking $\gamma=1$ and recalling $\Gamma(\frac{3}{2})=\frac{\sqrt{\pi}}{2}$, we are left with \begin{equation} C_{2,1}^{(ATMF)} \, = \, \frac{S}{\sqrt{2\pi}} \, \sigma\sqrt{\tau} \, + \mathcal{O}(\sigma^2) \, \simeq \, 0.4 S \sigma\sqrt{\tau}\, , \end{equation} which is the well-known Brenner-Subrahmanyam approximation for the BS price \cite{BS94}.
\end{itemize} \subsection{Convergence of the double-sum representation} In order to demonstrate the speed of convergence, let us calculate the contributions of each term in the double sum \eqref{Formula} for a typical option price. Table \ref{fig:series} provides an example of the series convergence for an option with realistic parameters $S=3800, K=4000, r=1\%, \sigma= 20\%, \tau = 1, \alpha = 1.7, \gamma =0.9$; we observe that the convergence is very fast: for a numerical precision of three decimal places, it is only necessary to sum up to $n=6$ and $m=6$. \begin{table}[h!] \centering \begin{scriptsize} \begin{tabular}{|c||ccccccc|} \hline {\quad \bfseries n \textbackslash \ m } & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline \hline 0 & 429.751 & 60.850 & 7.216 & 0.749 & 0.070 & 0.006 & 0.000 \\ 1 & -203.666 & -37.572 & -5.320 & -0.6315 & -0.065 & -0.006 & -0.000 \\ 2 & 28.893 & 8.903 & 1.642 & 0.233 & 0.028 & 0.003 & 0.000 \\ 3 & 0.549 & -0.842 & -0.259 & -0.048 & -0.007 & -0.000 & -0.000 \\ 4 & -0.352 & -0.012 & 0.018 & 0.006 & 0.001 & 0.000 & 0.000 \\ 5 & -0.016 & 0.006 & 0.000 & -0.000 & -0.000 & -0.000 & -0.000 \\ 6 & 0.005 & 0.000 & -0.000 & -0.000 & 0.000 & 0.000 & 0.000 \\ 7 & 0.000 & -0.000 & -0.000 & 0.000 & 0.000 & -0.000 & -0.000 \\ \hline Call & 255.162 & 286.495 & 289.792 & 290.090 & 290.126 & 290.128 & 290.128 \\ \hline \end{tabular} \end{scriptsize} \caption{Numerical values of the $(n,m)$-term in the series (\ref{Formula}) for the option price ($S=3800, \, K=4000, \, r=1\%, \sigma=20\%, \, \tau=1Y, \, \alpha=1.7$, $\gamma=0.9$). The call price converges to a precision of $10^{-3}$ after summing only very few terms of the series.} \label{fig:series} \end{table} \section{Conclusions} In this paper, we have introduced a new representation for the European option driven by a space-time fractional diffusion equation in the form of the rapidly convergent double series \eqref{Formula}.
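Both the double series \eqref{Formula} and the convergence behaviour of Table \ref{fig:series} are easy to reproduce numerically. A minimal Python sketch (function names and truncation orders are our choices; scipy's reciprocal Gamma function is used to handle the poles of $\Gamma$):

```python
import math
from scipy.special import rgamma  # rgamma(z) = 1/Gamma(z); vanishes at the poles of Gamma

def risk_neutral_mu(sigma, alpha, gam, n_terms=40):
    """Risk-neutral factor mu via the series (mu_series); log-Gamma avoids overflow."""
    mu1 = (sigma / math.sqrt(2.0)) ** alpha / math.cos(math.pi * alpha / 2.0)
    total = 0.0
    for n in range(n_terms):
        total += ((-1.0) ** n * mu1 ** n
                  * math.exp(math.lgamma(1.0 + alpha * n) - math.lgamma(n + 1.0)
                             - math.lgamma(1.0 + gam * alpha * n)))
    return -math.log(total)

def call_series(S, K, r, sigma, tau, alpha, gam, N=50, M=50):
    """European call via the double series (Formula); requires -[log] - mu*tau > 0."""
    mu = risk_neutral_mu(sigma, alpha, gam)
    log_fwd = math.log(S / K) + r * tau            # the quantity [log]
    a = -log_fwd - mu * tau                        # (-[log] - mu*tau)
    b = -mu * tau ** gam                           # (-mu * tau^gamma)
    total = 0.0
    for n in range(N):
        for m in range(1, M):
            total += ((-1.0) ** n / math.factorial(n)
                      * rgamma(1.0 - gam * (n - m) / alpha)
                      * a ** n * b ** ((m - n) / alpha))
    return K * math.exp(-r * tau) / alpha * total
```

For $\alpha=2$, $\gamma=1$ the routine can be checked against the closed-form Black-Scholes price, and for the parameters of Table \ref{fig:series} it converges to the value $290.128$ quoted there.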
This double series can be derived from the Mellin-Barnes integral representation of the European option with the help of residue summation in $\mathbb{C}^2$. The series representation of the double-fractional option pricing model, which incorporates risk redistribution in both the spatial and temporal domains, might be useful in real trading, since the formula can be easily used by practitioners without any deeper knowledge of advanced mathematical techniques (such as the Mellin transform or residue summation in multidimensional complex analysis), and is an explicit function of observable market parameters. Moreover, it is possible to control the numerical precision of the pricing formula. Contrary to other representations, no numerical technique is needed to evaluate the option, which would typically be the case for complicated integral representations, where the integrals cannot be expressed analytically. Interestingly, the residue summation technique can be used in further applications, as shown for the case of the risk-neutral factor of the space-time fractional Green function obtained by the Esscher transform. One can envisage further applications, such as expressions for optimal hedging policies, optimal exercise times for American options, etc. One could even go beyond the field of financial processes and use the residue summation techniques to calculate general functions of random variables driven by space-time fractional diffusion, or, more generally, by any process whose Green function can be expressed by means of a Mellin-Barnes integral representation. \section*{Acknowledgements} J. K. acknowledges support from the Austrian Science Fund, Grant No. I 3073, and from the Czech Science Foundation, Grant No. 17–33812L.
\section{Introduction} Modern life is influenced by an immense variety of motors in all forms and sizes, driven by energy sources ranging from heat, as in a combustion engine, through chemical energy in biological motors to electrical energy in electric motors. As the miniaturization of modern devices moves towards ever smaller scales, the need for control over mechanical motion at these scales becomes increasingly pressing. Directed nanomechanical motion was realized using chemical energy \cite{Collins2016,Wilson2016}, light \cite{koumura1999,Klok2008}, and electrons \cite{Tierney2011,Kudernac2011} as driving agents. We focus here on nanodevices, in which the motion of slow mechanical degrees of freedom is controlled by their coupling to electronic transport through the device -- forming a nano-electromechanical motor. One model for such a motor is based on an electron pump operating in reverse \cite{Qi2009,Bustos-Marun2013,Fernandez-Alcazar2015,Fernandez-Alcazar2017}. In such a pump, the cyclic variations of parameters, here effected by a mechanical degree of freedom, lead to a net charge transport through the device \cite{Brouwer1998}. In the reverse mode a \textit{dc} bias is applied and the forces exerted by the scattered electrons drive the coupled mechanical degree of freedom \cite{BodePRL, Bode2012, Thomas2012, Thomas2015}, realizing a motor. As an example, consider a device, in which electrons in a one-dimensional (1d) wire are coupled to a slowly sliding periodic potential, which is associated with the mechanical degree of freedom of the motor as depicted in Fig.\ \ref{fig1}. This device exhibits the essential features of the ancient Archimedean screw. When operating the Archimedean screw as a pump, turning of the screw leads to water transport. The same happens in the electronic system, where the sliding periodic potential pumps electrons through the 1d conductor, forming a Thouless pump \cite{thouless1983quantization}. 
When the Archimedean screw is operated in reverse, the water pushed through makes it work as a turbine. Similarly, in the electronic system a current pushed through the 1d conductor by an applied \textit{dc} bias voltage slides the periodic potential associated with the slow mechanical degree of freedom, turning the device into a Thouless motor \cite{Qi2009,Bustos-Marun2013}. \begin{figure}[b] \includegraphics[width=7.0 cm,keepaspectratio]{ThoulessPumpMagnetEdge2Dc.pdf} \caption{\label{fig1} a) Model for a Thouless motor based on a single channel quantum wire in proximity to a chain of alternating charges. The sliding periodic potential $U(x)$ is associated with the rotational degree of freedom $\vartheta(t)$ of the quantum motor. b) A nanomagnet with magnetization $\bf M$ is coupled to a single edge of a quantum spin Hall insulator in the x-y-plane, where $\vartheta_M(t)$ (angle of the in-plane magnetization) is associated with the motor degree of freedom. } \end{figure} Possible physical realizations of the Thouless motor were proposed based on a nanoscale helical wire placed in between capacitor plates \cite{Qi2009} and on a quantum spin Hall (QSH) edge coupled to a nanomagnet \cite{Meng2014,Arrachea2015a,Silvestrov2016}. In the case of the helical wire, a slowly rotating transverse electric field leads to charge pumping, while in the inverse mode an applied \textit{dc} bias in presence of a static field leads to a rotation of the helix. Similarly the precession of the magnetization of the nanomagnet pumps charge along the QSH edge, while in the inverse mode an applied bias leads to a spin transfer torque acting on the nanomagnet and driving its precession, cf. Fig.\ \ref{fig1}b). The earlier theoretical description of adiabatic quantum motors assumed noninteracting electrons. When the electrons are confined to 1d, as in the present case of the quantum wire, the low energy behavior is modified by electron-electron interactions in essential ways. 
In this paper we investigate how these interaction effects modify the dynamics of adiabatic quantum motors. We describe the 1d electronic system as a Luttinger liquid (LL), which provides an exact description of its low energy excitations in terms of bosonic collective excitations \cite{giamarchi2004quantum}. LL theory has proven to be a useful description of both quantum wires \cite{Fisher1997,Auslaender2005} and QSH edges \cite{Wu2006}, covering the possible physical realizations of the Thouless motor mentioned above. Furthermore, our LL approach leads to a field theoretic description of quantum motors, complementing the earlier analysis on the basis of Landauer-B\"uttiker theory \cite{Bustos-Marun2013}. For definiteness we base our discussion on the Thouless motor and give an explicit translation of the results to the magnetic system in Sec.\ \ref{Translation}. We introduce the model of the Thouless pump in Sec.\ \ref{Model}. In Sec.\ \ref{CouplingPeriodPot} we investigate the coupling of the LL to the periodic potential and derive the effective gap size in the presence of electron-electron interactions. Section \ref{RedDyn} is devoted to the derivation of the effective field theory of the motor degree of freedom that leads to an interaction dependent effective Langevin equation for the motor dynamics. In the case of an infinite LL the friction is enhanced by repulsive interactions, as shown in Sec.\ \ref{infiniteLL}. The connection to Fermi-liquid (FL) leads yields an effective equation of motion including memory and restores the reduced noninteracting dissipation at steady velocity, as presented in Sec.\ \ref{ContactFL}. In the final Sec.\ \ref{Translation} we give the explicit translation of the obtained results to the nanomagnet coupled to a QSH edge. \section{Model}\label{Model} Our model of a quantum motor is based on a finite length Thouless pump operating in reverse. A toy model realizing such a pump is sketched in Fig.\ \ref{fig1}.
A single channel quantum wire is placed next to a chain of fixed, periodically alternating charges. These charges move with respect to the quantum wire when turning the wheel and advancing the angular degree of freedom $\vartheta$. This causes a slowly sliding periodic potential for the electrons, thereby forming a Thouless pump \cite{thouless1983quantization}. The sliding periodic potential $U$ (cf. Fig.\ \ref{fig1}) is of the form \begin{equation} U(x)=2\, V_0 \cos\left(q\, x-\vartheta(t)\right)\,\Theta\left(\frac{L}{2}-\abs{x}\right)\,,\label{eq:U} \end{equation} where $q$ is the wavevector of the potential of strength $2V_0$. For $q\approx 2k_{F}$ ($k_F$ is the Fermi momentum), the periodic potential causes backscattering between right and left moving electrons in the wire. The analysis of the system on the basis of Landauer-B\"uttiker theory for noninteracting electrons showed that, to exponential accuracy in the length $L$, the backscattering induced gap leads to a vanishing normal conductance, quantized charge pumping per cycle, and unit efficiency, i.e., a conversion of the entire electronic energy provided by the bias into mechanical energy associated with the degree of freedom $\vartheta$ \cite{Bustos-Marun2013}. To include the interaction effects when confining the electrons to the 1d quantum wire, we model the electrons as a spinless LL \cite{Haldane1981,giamarchi2004quantum}. The Hamiltonian of the bare electronic system (i.e.\ without the periodic potential) can then be expressed in terms of the bosonic displacement field $\phi(x)$ and phase field $\theta(x)$, \begin{equation} H=\frac{v_{c}}{2\pi}\int d x\left\{ \frac{1}{K}\left(\partial_{x}\phi(x)\right)^{2}+K\left(\partial_{x}\theta(x)\right)^{2}\right\} \,,\label{eq:H Fields LL-1} \end{equation} where $K$ is the dimensionless interaction parameter, with $K<1$ for repulsive electron-electron interactions ($K=1 $ for a noninteracting system), and $v_{c}$ is the charge velocity. 
The displacement field $\phi(x)$ describes the local density fluctuations through \begin{equation}\label{NormOrdDensityPhi} \normOrd{n_{R}(x)+n_{L}(x)}=\frac{\partial_{x}\phi(x)}{\pi} \end{equation} and the phase field $\theta(x)$ is associated with the difference in density between right and left movers, \begin{equation}\label{LocalDiffernenceTheta} \normOrd{n_{R}(x)-n_{L}(x)}=\frac{\partial_{x}\theta(x)}{\pi}\,. \end{equation} Here, $n_R$ and $n_L$ are the densities of right and left movers, respectively, and $\normOrd{\,...\,}$ denotes normal ordering. The bosonic fields fulfill the commutation relation $[\phi(x),\theta(x')]=i\pi\,\text{sgn}(x-x')/2$. One can express the fermionic fields in terms of the bosonic ones via \begin{align} \psi (x)&= \psi_R (x)+ \psi_L (x) \label{LeftRightMover}\,,\\ \psi_{R/L}(x)&= \frac{1}{\sqrt{2\pi\lambda}} \text{e}^{\pm i k_{F} x} \text{e}^{i \left[\theta(x) \pm \phi(x)\right]}\,, \label{PsiBosonized} \end{align} where we ignore the Klein factors and $\lambda$ is a short distance cutoff due to the finite band width \cite{giamarchi2004quantum}. The Euclidean (imaginary time) action of the bare LL in the $\phi$-representation takes the form \cite{Fisher1997,kane1992transport} \begin{equation} S_0=\int d{\bf r}\frac{1}{2\pi K}\left[\frac{1}{v_{c}}\left(\partial_{\tau}\phi\right)^{2}+v_{c}\left(\partial_{x}\phi\right)^{2}\right]\label{eq:Phi-Repr} \end{equation} in terms of the short hand notations $(x,\tau)=\bf r $ and $\int_{0}^{\beta} d \tau\int d x=\int d{\bf r}$. Using the bosonized fermionic fields in Eqs.\ \eqref{LeftRightMover} and \eqref{PsiBosonized}, the sliding periodic potential in Eq.\ \eqref{eq:U} contributes the sine-Gordon term \begin{align} S_{U} = \frac{2V_0}{2\pi\lambda}\int d{\bf r}\,\text{cos}\left[2\phi(x)+(2k_{F}-q)\, x+\vartheta(t)\right]\,\label{H_U} \end{align} for $x\in [-L/2,L/2]$ to the action. 
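To make the origin of the sine-Gordon term explicit, one may insert the bosonization formulas \eqref{LeftRightMover} and \eqref{PsiBosonized} into the potential term $\int d x\, U(x)\,\psi^{\dagger}(x)\psi(x)$ and keep only the slowly varying products (a sketch, ignoring Klein factors, normal-ordering corrections, and the rapidly oscillating terms $\propto e^{\pm i(2k_{F}+q)x}$):
\begin{align*}
U(x)\,\psi^{\dagger}(x)\psi(x) \, &\supset \, V_0\, e^{-i\left(q x-\vartheta(t)\right)}\,\psi_{L}^{\dagger}(x)\psi_{R}(x) \, + \, \text{h.c.} \\
&= \, \frac{V_0}{2\pi\lambda}\, e^{i\left[2\phi(x)+(2k_{F}-q)\,x+\vartheta(t)\right]} \, + \, \text{h.c.} \\
&= \, \frac{2V_0}{2\pi\lambda}\, \cos\left[2\phi(x)+(2k_{F}-q)\,x+\vartheta(t)\right]\,,
\end{align*}
where we used $\psi_{L}^{\dagger}(x)\psi_{R}(x)=e^{2ik_{F}x}e^{2i\phi(x)}/(2\pi\lambda)$. Its imaginary-time integral over $x\in[-L/2,L/2]$ reproduces $S_U$ in Eq.\ \eqref{H_U}.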
\section{Coupling to periodic potential} \label{CouplingPeriodPot} \subsection{Energy gap}\label{CalcEnergyGap} The unit efficiency of the Thouless motor depends crucially on the presence of an energy gap at the Fermi energy. In the absence of interactions, this gap has size $\Delta_\text{non-int.} =2V_0$. Interactions modify this gap. To start with, the sine-Gordon term is a relevant perturbation over a wide range of interaction strengths, indicating the formation of a gap. Consider $\vartheta(t)=0$ and perfect backscattering, $q=2k_{F}$, and employ the usual momentum shell renormalization group (RG) procedure for the sine-Gordon term in Eq.\ \eqref{H_U} \cite{giamarchi2004quantum}. Integrating out the fast modes of the action $S_0+S_U$ in Eqs.\ \eqref{eq:Phi-Repr} and \eqref{H_U} in a momentum shell $\gamma/b<\abs{q}<\gamma$ ($\gamma$ is the momentum cutoff, see Appendix \ref{RGperfectBS}) and rescaling time $\tau'=\tau/b$ and space $x'=x/b$, with $b=\e^l$, yields the familiar flow equation \begin{equation} \frac{ d V(l)}{ d l}=\left(2-K\right)V(l)\,,\label{eq:Delta-RG} \end{equation} for the strength of the periodic potential, while the free action $S_{0}$ remains unchanged to first order in the cumulant expansion. Thus the periodic potential is a relevant perturbation for all $K<2$ and the system flows to strong coupling. For a large coupling strength $V$, the displacement field $\phi$ is trapped near a minimum of the cosine. The electron density is commensurate and oscillates about the minima of the periodic potential in Fig.\ \ref{WignerCrystal}. \begin{figure} \includegraphics[width=8.5cm,keepaspectratio]{WignerCrystalf.pdf} \caption{\label{WignerCrystal} Oscillations of the electron density around the minima of the periodic potential corresponding to the action Eq.\ \eqref{SMassivePhase}. 
Quantum fluctuations of the electrons around the positions of the minima lead to a down-scaling of the strength of the periodic potential $V$ as described by Eq.\ \eqref{eq:Delta-EffektiveLowEnergy}, resulting in the renormalized gap given by Eq.\ \eqref{eq:DeltaInteractingFinal}.} \end{figure} The effective dynamics of $\phi$ can be obtained by expanding the action about this minimum, \begin{align} S &\simeq S_0 + \int d{\bf r}\frac{2V_0}{2\pi\lambda}\,\phi^{2} \nonumber \\ &= \sum_{n,m}\frac{1}{2\pi K}\left(\frac{1}{v_{c}}\omega_{n}^{2}+v_{c}q_{m}^{2}+\frac{4V_0K}{\lambda}\right)\abs{\phi_{n,m}}^{2} \,,\label{SMassivePhase} \end{align} where $\omega_n$ is a bosonic Matsubara frequency and $q_m$ is the wave vector. Thus, the system has a bare energy gap of size \begin{equation}\label{Delta bare} \Delta_0=\sqrt{\frac{4V_0K\, v_{c}}{\lambda}}\,, \end{equation} which can be understood as the pinning frequency of the classical Wigner crystal. Indeed, expanding the bare potential Eq.\ \eqref{eq:U} (with wavevector $2k_F$ and $\vartheta=0$) around a minimum yields $U\simeq V_{0}\left[2k_{F}x\right]^{2}$. This leads to a pinning frequency \begin{align} \omega_\text{pin} = \sqrt{ \frac{8k_{F}^{2}}{m}V_{0}}\,. \end{align} Using $v_c K\simeq v_F $, this reproduces the bare gap $\Delta_0$ in Eq.\ \eqref{Delta bare} up to a numerical prefactor stemming from the uncertainty in choosing the cutoff $\lambda\sim 2 \pi/k_F $. Quantum fluctuations of the electron density about the commensurate configuration (cf. Fig.\ \ref{WignerCrystal}) effectively decrease the restoring force of the potential and thus result in a downscaling of the effective gap. This effect is present even for noninteracting electrons. Indeed, for noninteracting electrons $K\rightarrow1$, the bare gap in Eq.\ \eqref{Delta bare} is different from $\Delta_\text{non-int.} =2V_0$.
Repulsive interactions suppress the density fluctuations so that the downward renormalization becomes weaker as the repulsive interactions increase. In the Wigner crystal limit $K\rightarrow0$ fluctuations are fully suppressed, so that the bare gap Eq.\ \eqref{Delta bare} represents the actual gap of the system. We account for the quantum fluctuations by integrating out the high energy modes while retaining the original units, so energies can be compared. This leads to \begin{equation} \frac{ d V(l)}{ d l}=-KV(l)\,.\label{eq:Delta-EffektiveLowEnergy} \end{equation} We can see that, as anticipated, the downward scaling is stronger for less repulsively interacting systems. Integrating out modes down to the gap leads to a self-consistent equation for the renormalized energy gap $\Delta=\sqrt{4V K\, v_{c}/\lambda}$, with $V$ obtained by integrating the flow equation \eqref{eq:Delta-EffektiveLowEnergy}, \begin{align}\label{VFlow} V = V_{0}\left(\frac{2 \pi v_{c}}{ \lambda \Delta }\right)^{-K}\,. \end{align} The resulting self-consistent equation for $\Delta$ has the solution \begin{equation} \Delta =\left(\frac{4V_{0}K\, v_{c}}{\left(2 \pi v_{c} \lambda ^{-1} \right)^{K}\lambda}\right)^{1/(2-K)}\,.\label{eq:DeltaInteractingFinal} \end{equation} This formula reproduces $\Delta_\text{non-int.} = 2 V_0 $ for noninteracting electrons (up to a numerical prefactor, as before). We also see explicitly that the gap is enhanced for repulsive electron-electron interactions ($K<1$), \begin{align}\label{GapEnhancement} \frac{\Delta(K)}{\Delta\left(K=1\right)} = K \left(\frac{\pi^2 v_c/\lambda }{V_{0}K}\right)^{(1-K)/(2-K)}>1\,. \end{align} Here we used that $\pi v_c/\lambda\gg V_0$ is an energy of the order of the Fermi energy. \subsection{Changes of the chemical potential}\label{deviationsPerfectBS} The previous section considered the case of perfect commensurability $q=2k_{F}$ at the center of the gap $\mu=0$.
The noninteracting Thouless motor maintains optimal efficiency as long as the chemical potential falls into the gap $\abs{\mu}\lesssim V_{0}$ \cite{Bustos-Marun2013}. We now investigate the robustness of the interacting system against changes of the chemical potential. A uniform chemical potential term $H_{\mu}=-\mu\int dx\,\partial_{x}\phi(x)/\pi$ can be absorbed into the free LL Hamiltonian Eq.\ \eqref{eq:H Fields LL-1} by shifting the field \begin{align} \tilde{\phi}(x)=\phi(x)-\mu\frac{K}{v_{c}}x\,. \end{align} This changes the coupling in Eq.\ \eqref{H_U} to \footnote{The electronic dispersion is linearized around $\pm k_F=\pm q/2$. Hence $2k_F-q=0$ in Eq.\ \eqref{H_U}.} \begin{align} S_{U}\left[\phi\right]=\frac{2V}{2\pi\lambda}\int d{\bf r}\,\text{cos}\left[2\tilde{\phi}(x)+2\mu\frac{K}{v_{c}}\,x\right]\,.\label{S-Deviation} \end{align} The chemical potential $\mu$ thus introduces a constant gradient $\nabla \tilde \phi =-\mu K/v_c$ into the configurations of $\tilde{\phi}$ that minimize the sine-Gordon term. Physically, this reflects the fact that the Luttinger liquid tries to adapt to a density which is commensurate with the periodic potential. The Luttinger liquid Hamiltonian in Eq.\ \eqref{eq:H Fields LL-1} gives the associated elastic energy cost per unit length \begin{align} \epsilon_{el}=\frac{v_{c}}{2\pi K}\left(\mu\frac{K}{v_{c}}\right)^{2}\,. \label{ElasticEnergy} \end{align} This cost increases with $\mu$ and eventually leads to depinning beyond a critical $\mu_{c}$, when adapting to the periodic potential becomes too costly. To take proper account of the renormalization of the potential due to quantum fluctuations, we use the effective low energy theory developed in Sec.\ \ref{CalcEnergyGap}. 
Since $\mu$ does not alter the renormalization of the strength of the periodic potential (up to first order in the cumulant expansion), we can express the effective potential $V$ in terms of the effective gap size $\Delta=\sqrt{4VK\,v_{c}/\lambda}$, with $\Delta$ given in Eq.\ \eqref{eq:DeltaInteractingFinal}. This leads to the effective low energy action \begin{align} S_{\text{eff}}[\tilde{\phi}]=S_{0}[\tilde{\phi}]+\int d{\bf r}\frac{\Delta^{2}}{4\pi Kv_{c}}\,\text{cos}\left[2\tilde{\phi}(x)+2\mu\frac{K}{v_{c}}\,x\right]\,.\label{SeffDeviations} \end{align} The elastic energy cost in Eq.\ \eqref{ElasticEnergy} can be reduced by inserting a finite density $n_{s}$ of $\pi$ phase slips into $\tilde{\phi}$, which are described by soliton solutions of $\tilde{\phi}$. With phase slips, the gradient of $\tilde \phi$ is no longer constant and has a reduced magnitude on average. We approximate the elastic energy cost $\epsilon_{el}$ of this configuration by calculating $\epsilon_{el}$ associated with the reduced average gradient, which yields \begin{align} \epsilon_{el}=\frac{v_{c}}{2\pi K}\left(\pi n_{s}-\mu\frac{K}{v_{c}}\right)^{2}\,. \end{align} With the assumption of a low soliton density the total energy cost can then be estimated as the sum of the elastic energy cost and the cost of $n_{s}$ solitons, \begin{align} \epsilon=\epsilon_{el}+n_{s}E_{sol}\,.\label{ETotSol} \end{align} The soliton solution and its energy $E_{sol}=2\Delta/(\pi K)$ can be derived from Eq.\ \eqref{SeffDeviations} in the standard way \cite{Rajaraman1987}. We find the optimal soliton density by minimizing the total energy cost for a given chemical potential $\mu$, \begin{align} n_{s,\text{opt}}=\frac{\mu K}{\pi v_{c}}-\frac{2\Delta}{\pi^{2}v_{c}}\,. \end{align} This soliton density becomes positive at the critical chemical potential \begin{align} \mu_{c}=\frac{2}{\pi}\frac{\Delta}{K}\,,\label{DeltaCritical} \end{align} beyond which the system leaves the pinned regime. 
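The minimization leading to $n_{s,\text{opt}}$ and $\mu_c$ can be reproduced numerically. The following Python sketch (illustrative values of $K$, $v_c$, $\Delta$; not part of the derivation) minimizes the total cost of Eq.\ \eqref{ETotSol} on a grid and compares with the closed-form optimal soliton density:

```python
import math

def energy_cost(ns, mu, K, vc, Delta):
    """Total cost, Eq. (ETotSol): elastic part plus ns solitons of energy E_sol."""
    E_sol = 2 * Delta / (math.pi * K)
    return vc / (2 * math.pi * K) * (math.pi * ns - mu * K / vc)**2 + ns * E_sol

K, vc, Delta = 0.5, 1.0, 0.1            # illustrative values
mu_c = 2 * Delta / (math.pi * K)        # critical chemical potential, Eq. (DeltaCritical)
mu = 3 * mu_c                           # deep in the depinned regime
ns_opt = mu * K / (math.pi * vc) - 2 * Delta / (math.pi**2 * vc)
grid = [i * 1e-5 for i in range(20001)]
ns_num = min(grid, key=lambda ns: energy_cost(ns, mu, K, vc, Delta))
```

At $\mu=\mu_c$ the optimal density vanishes identically, so the system stays pinned for $\abs{\mu}<\mu_c$.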
Since repulsive electron-electron interactions enhance the effective gap size $\Delta$ according to Eq.\ \eqref{eq:DeltaInteractingFinal}, they also increase the robustness of the system against changes of the chemical potential. Note that the limit $K\rightarrow1$ of vanishing electron-electron interactions reproduces the critical chemical potential $\mu_{c}(K=1)\sim V_{0}$ of the noninteracting case. \subsection{Sliding periodic potential}\label{CouplingTimeDepPot} So far, we considered the motor degree of freedom to be at rest and chose $\vartheta=0$. In the absence of interactions, the adiabatic variation of $\vartheta$ pumps a unit charge between the leads per cycle. The same occurs in the interacting system. Restoring the motor degree of freedom $\vartheta(\tau)$ in Eq.\ \eqref{H_U}, the coupling to the periodic potential is \begin{align}\label{S-timeDepDeviation} S_{U}\left[\phi\right]=\frac{2V}{2\pi\lambda}\int d{\bf r}\,\text{cos}\left[2\phi(x)+\vartheta(\tau)\right]\,. \end{align} This introduces an explicit time dependence into the solutions $\phi_{\text{min}}$ that minimize the cosine \begin{align}\label{PinnedPhiDevBS} \phi_{\text{min}}(x,\tau)=-\frac{\vartheta(\tau)}{2} \end{align} (up to a constant that picks the specific minimum of the cosine). A time-dependent displacement field $\phi$ implies current flow. Using the continuity equation, we obtain the current density \begin{align} j(x,t)=-\frac{e}{\pi}\partial_{t}\phi(x,t)=\frac{e}{2\pi}\partial_{t}\vartheta(t)\,, \end{align} which describes pumping of a quantized charge \begin{equation}\label{PumpedCharge} Q_{P}=\int_{0}^{T}dt\, j(t)=e \end{equation} when advancing the periodic potential by one period. The interaction-enhanced gap implies a larger range of validity of this adiabatic treatment. 
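The quantization in Eq.\ \eqref{PumpedCharge} is purely geometric: only the total advance of $\vartheta$ per period matters, not the ramp shape. A minimal numerical sketch (with a hypothetical, arbitrarily chosen ramp; $e=1$) integrating $j=(e/2\pi)\dot{\vartheta}$ over one cycle:

```python
import math

e, T, N = 1.0, 1.0, 100000
dt = T / N
theta = lambda t: 2 * math.pi * (t / T)**2   # arbitrary monotonic ramp by 2*pi
# Q = integral of j = (e/(2*pi)) * dtheta/dt over one period:
Q = sum(e / (2 * math.pi) * (theta((i + 1) * dt) - theta(i * dt))
        for i in range(N))
```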
Comparing the kinetic term in the Lagrangian to the energy gain due to the gap formation, we conclude that the adiabatic approximation remains valid as long as $\abs{\dot{\vartheta}}\ll \Delta$, where $\Delta$ is the renormalized gap of the interacting system. \section{Reduced dynamics of the motor degree of freedom}\label{RedDyn} \subsection{Bias voltage} \label{BiasVoltage} As long as $\abs{\mu} < \mu_c$ and $\abs{\dot\vartheta} <\Delta$, the electrons within the region of the periodic potential are locked to the minima of the periodic potential (cf. Fig.\ \ref{SinglePoint}) and the electronic dynamics is effectively frozen out. This is reflected in a locked displacement field $\tilde \phi=-\mu K \, x/v_{c}-\vartheta/2$ and a gapped spectrum (from here on we omit the tilde for notational simplicity). Effectively, this allows us to shrink the length of the periodic potential to a single point $x=0$, at which the pinned displacement field $\phi(0,t)=-\vartheta(t)/2$ interacts with the free LLs, as shown schematically in Fig.\ \ref{SinglePoint}. \begin{figure} \includegraphics[width=8.0cm,keepaspectratio] {SinglePoint1.pdf} \caption{\label{SinglePoint} The pinning condition $\phi=-\vartheta/2$ within the area of the periodic potential reduces the coupling between motor degree of freedom and electrons to a free LL with a constrained boundary condition $\phi(0,t)=-\vartheta(t)/2$ when the area of the periodic potential is shrunk to a single point $x=0$.} \end{figure} In a motor setup, the applied bias voltage $V$ is used to drive the motor degree of freedom $\vartheta (t)$. When the electronic dynamics in the region of the periodic potential is frozen out, the voltage can also be taken to drop at the point $x=0$. This yields a contribution to the action \begin{align}\label{Bias} S_{\text{bias}} = -\frac{eV}{2}\int d{\bf r}\,\text{sgn}(x)\frac{\partial_x \phi}{\pi}=\frac{eV}{\pi}\int d\tau \phi(0,\tau )\,.
\end{align} Here we used the bosonized form of the normal ordered electron density given in Eq.\ \eqref{NormOrdDensityPhi}. Integrating out all electronic degrees of freedom away from $x=0$ under the constraint $\phi(0,t)=-\vartheta(t)/2$, analogous to the treatment of a local impurity in a LL \cite{kane1992transport,CastroNetoAH1996}, leads to an effective description of the dynamics of the motor degree of freedom, including a non-conservative mean force stemming from the electronic bias, friction, and a fluctuating force. \subsection{Motor dynamics for an infinite Luttinger liquid} \label{infiniteLL} We first treat the coupling to an infinite LL. Integrating out the LL (see \cite{kane1992transport} and App. \ref{SinglePointAction}), we obtain the effective action \begin{align}\label{SeffMatsubaraInfiniteLL} S_{\text{eff}}=\sum_{n}\left(\frac{{\cal I} \omega_{n}^2}{2}+ \frac{\abs{\omega_{n}}}{4\pi K} \right)\abs{\vartheta_{n}}^{2}-\int_{0}^{\beta} d \tau\frac{eV}{2\pi}\vartheta\,, \end{align} for $\vartheta (t)$. Here, we added the kinetic energy of the motor with its moment of inertia ${\cal I}$. The second term describes a dissipative contribution to the motor dynamics and the third term a potential induced by the applied bias. To obtain the explicit equation of motion in real time, we analytically continue the action to the Keldysh contour \cite{KamenevFieldTheory}. 
The effective action then acquires the form \begin{align} S_{\text{eff}}&=\int\frac{ d \omega}{2\pi}(\bar{\vartheta}_{\omega}^{cl},\bar{\vartheta}_{\omega}^{q})\,\hat{K}(\omega)\,\left(\begin{array}{c} \vartheta_{\omega}^{cl}\\ \vartheta_{\omega}^{q} \end{array}\right)\,\,+\frac{eV}{\pi}\int d t\, \vartheta^q(t) \nonumber \\ \hat{K}(\omega)&= \left(\begin{array}{cc} 0 & K^{A}(\omega)\\ K^{R}(\omega) & K^{K}(\omega) \end{array}\right)\,.\label{KeldyshKernel} \end{align} We performed a Keldysh rotation of $\vartheta(t)$ into the quantum and classical components $\vartheta^{q}=(\vartheta^{+}-\vartheta^{-})/2$ and $\vartheta^{cl}=(\vartheta^{+}+\vartheta^{-})/2$, respectively. The kernels $K^{R(A)}(\omega)$ are the analytical continuations of the Matsubara correlator $\mathcal{K}(\omega_{n})={\cal I} \omega_{n}^2 /2+\abs{\omega_{n}}/(4\pi K) $ in Eq.\ \eqref{SeffMatsubaraInfiniteLL} to real frequencies $K^{R(A)}(\omega)=-2\mathcal{K}(i\omega_{n}\rightarrow \omega \pm i \eta )$ \cite{KamenevFieldTheory}. The Keldysh component follows from the fluctuation dissipation theorem, $K^{K}(\omega)=\left(K^{R}(\omega)-K^{A}(\omega)\right)\coth(\omega/2T)$ \footnote{Due to the pinned LL on the length of the periodic potential, the free LLs on the left and right side are isolated and act as independent equilibrium baths for the motor degree of freedom.}. Fourier transforming the action Eq.\ \eqref{KeldyshKernel} to real time we obtain \begin{align} S=\int d t \,\bigg\{ & -2\vartheta^{q}(t)\left[{\cal I} \ddot{\vartheta}^{cl}(t)+\frac{\dot{\vartheta}^{cl}(t)}{2\pi K} -\frac{eV}{2\pi}\right]\nonumber \\ & +\int dt'K^{K}(t-t')\vartheta^{q}(t)\vartheta^{q}(t')\bigg\} \,,\label{SeffRealTimeInfiniteLL} \end{align} where we performed an integration by parts. The Fourier transform of the Keldysh component reads \begin{equation} K^{K}(t)=\frac{iT^{2}}{K\cosh^2\left(\pi Tt\right)}\,, \end{equation} yielding a coupling of the quantum fields which is nonlocal in time. 
This action determines the reduced dynamics of the motor degree of freedom including all quantum fluctuations. The contribution quadratic in the quantum components leads to the fluctuating Langevin force in the classical equation of motion of the motor. Its explicit form can be obtained by decoupling the quantum components by a Hubbard-Stratonovich transformation \cite{KamenevFieldTheory} \begin{align} &\exp \left( i\int\frac{ d \omega}{2\pi}\, K^{K}(\omega)\abs{\vartheta^{q}(\omega)}^{2} \right)= \nonumber \\ &\int\mathbf{D}\left[\xi\right]\exp\left(\int\frac{ d \omega}{2\pi}\left[\frac{\abs{\xi(\omega)}^{2}}{i\, K^{K}(\omega)}+2i\bar{\xi}(\omega)\vartheta^{q}(\omega)\right]\right)\,, \label{HubbardStratLL} \end{align} where $\xi(t)$ is a real field. Introducing the integral over $\xi$ into the Keldysh partition function $Z=\int\mathbf{D}\left[\vartheta\right]\exp(iS)$ with the action $S$ given in Eq.\ \eqref{SeffRealTimeInfiniteLL} leads to the classical saddle point equation for $\vartheta$ \begin{equation} {\cal I} \ddot{\vartheta}^{cl}(t)=\frac{eV}{2\pi}-\frac{1}{2\pi K}\dot{\vartheta}^{cl}(t)+\xi(t)\label{EOMInfiniteLL} \end{equation} with \begin{equation} \braket{\xi(t)\xi(t')}=\frac{K^{K}(t'-t)}{2i}=\frac{T^2}{2K\cosh^2\left(\pi T\left[t'-t\right]\right)}\,. \end{equation} Note that the friction coefficient $\gamma$ takes the value $\gamma=\hbar / 2\pi K$ when we reinsert $\hbar$. In the classical limit of large $T$ we can approximate $\cosh^{-2}\left(\pi T\left[t'-t\right]\right)\simeq 2(\pi T)^{-1}\delta(t-t')$ and the fluctuating force becomes $\delta$-correlated, with the magnitude of the correlator determined by temperature and the friction coefficient, \begin{equation} \braket{\xi(t)\xi(t')}=2\gamma T\delta(t-t')\,. 
\end{equation} We see that the mean force is unaffected by the electron-electron interactions, while the friction $\gamma=(2 \pi K)^{-1}$ and with it the correlator of the fluctuating force are enhanced by repulsive electron-electron interactions. For $K\rightarrow1$ the effective dynamics in Eq.\ \eqref{EOMInfiniteLL} reproduces the noninteracting result of Ref. \onlinecite{Bustos-Marun2013}. The time-averaged steady state velocity of the motor follows from the equation of motion \eqref{EOMInfiniteLL} which yields $\dot{\vartheta}=K\,eV$. Since the pumped charge in Eq.\ \eqref{PumpedCharge} is the only charge transported across the periodic potential, we can directly calculate the current $I$ as pumped charge per unit time \begin{align}\label{CurrentInfiniteLL} I=\frac{e\dot{\vartheta}}{2\pi}=\frac{K e^2}{ 2\pi \hbar}V\,, \end{align} where we reinserted $\hbar$ to bring the current into the usual form in terms of the conductance quantum. We can use this current at steady state to define the \textit{dc} conductance of the motor $g_M=K e^2/h$, which takes the value of an infinite, ideal LL \cite{kane1992transport}. We use these results to investigate the efficiency $\eta$ of the interacting Thouless motor to perform work against an external load $F_{\,\text{load}}$. In the simple case that the load is independent of $\vartheta$, the steady velocity can be derived from Eq.\ \eqref{EOMInfiniteLL} via \begin{align}\label{steadyVelocityWithLoad} \frac{ \dot \vartheta}{2\pi K}=\frac{eV}{2\pi}-F_{\,\text{load}}\,. \end{align} At this velocity, the work performed on the load per unit time takes the form \begin{align} P_{\,\text{load}}=\dot \vartheta F_{\,\text{load}}=2\pi K \left(\frac{eV}{2\pi }-F_{\,\text{load}} \right) F_{\,\text{load}}\,. \end{align} This output power reaches a maximum \begin{align}\label{MaximumPower} P_{\,\text{load,max}}=2\pi K \left(\frac{eV}{4\pi }\right)^2 \end{align} at the load $F_{\,\text{load,max}}=eV/(4\pi)$. 
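Both the steady velocity $\dot\vartheta=K\,eV$ and the maximum-power load $F_{\,\text{load,max}}=eV/(4\pi)$ can be checked with a minimal Python sketch (deterministic limit $\xi=0$, illustrative values of ${\cal I}$, $K$, and $eV$ in units with $\hbar=e=1$; not part of the derivation):

```python
import math

I_mom, K, eV = 1.0, 0.5, 1.0          # illustrative values, hbar = e = 1

# Deterministic part of Eq. (EOMInfiniteLL), xi = 0, integrated by Euler steps;
# thd relaxes to the steady velocity K*eV on the timescale 2*pi*K*I_mom.
thd, dt = 0.0, 1e-3
for _ in range(100000):
    thd += dt / I_mom * (eV / (2 * math.pi) - thd / (2 * math.pi * K))

# Output power against a constant load, maximized on a grid of loads:
P_load = lambda F: 2 * math.pi * K * (eV / (2 * math.pi) - F) * F
F_grid = [i * eV / (2 * math.pi) / 10000 for i in range(10001)]
F_best = max(F_grid, key=P_load)
```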
The efficiency at maximum power is defined as the ratio between $P_{\,\text{load,max}}$ and the electrical input power $P_{in}=IV$ provided by the bias, which is determined by the pumped current in Eq.\ \eqref{CurrentInfiniteLL}. With the velocity at $F_{\,\text{load,max}}$, this yields an efficiency \begin{equation}\label{EfficiencyInfiniteLL} \eta=\frac{P_{\,\text{load,max}}}{P_{in}}=\frac{1}{2}\, \end{equation} at maximum power. We see that the maximum output power is reduced by repulsive electron-electron interactions ($K<1$) due to the increased dissipation. The reduced mean velocity for interacting systems also decreases the input power, which yields an interaction-independent efficiency at maximum power. \subsection{Friction and energy current in an infinite Luttinger liquid} It is interesting to obtain a more explicit description of the friction coefficient $\gamma$. To this end, we compute the energy current carried by the LL for the time dependent boundary condition $\phi(0,t)=-\vartheta(t)/2$. The solution of $\phi$ under this time dependent constraint is shown in Eq.\ \eqref{RealTimeSolutionInfiniteLL} in Appendix \ref{SinglePointAction}. Assuming a steady velocity, it takes the form \begin{align}\label{PhiSteadyVelocity} \phi(x,t) =\frac{-\dot{\vartheta}}{2}\left(t-\frac{\abs{x}}{v_{c}}\right)\,. \end{align} To see how this solution carries the dissipated energy away from the motor, we investigate the energy current density $j^{E}$ corresponding to this solution.
$j^{E}$ can be derived from the Heisenberg equation of motion for the energy density \begin{align} \rho^{E}=\frac{v_{c}}{2\pi}\left[ \frac{1}{K}\left(\partial_{x}\phi(x)\right)^{2}+K\left(\partial_{x}\theta(x)\right)^{2}\right]\,, \end{align} which yields \begin{align} \partial_{t}\rho^{E} = i\left[H,\rho^{E}\right] = -\frac{v_{c}^{2}}{2\pi}\partial_{x}\left\{ \partial_{x}\theta(x),\partial_{x}\phi(x)\right\} \,, \end{align} where we used the commutation relations of the bosonic fields introduced above in Sec.\ \ref{Model} and $ \{.,.\} $ denotes the anticommutator. We can now directly deduce the energy current via the continuity equation \begin{equation} \partial_{t}\rho^{E} = -\nabla j^{E} \end{equation} which leads to \begin{equation}\label{jE} j^{E}=\frac{v_{c}^{2}}{2\pi}\left\{ \partial_{x}\theta(x),\partial_{x}\phi(x)\right\} \,. \end{equation} Since the gradient of $\theta$ is fully determined by the time dependence of $\phi$, i.e., $\partial_{t}\phi(x,t)=i\left[H,\phi(x,t)\right]=-v_{c}K\partial_{x}\theta(x)$, we can write down the energy current corresponding to the solution in Eq.\ \eqref{PhiSteadyVelocity} as \begin{equation} j^{E}= \frac{\dot{\vartheta}^{2}}{4\pi K} \text{sgn}(x)\,. \end{equation} Thus we can see that the dissipated power \begin{equation} -P_{diss}=\gamma\dot{\vartheta}^{2}=j^{E}(x>0)-j^{E}(x<0)\, \end{equation} is evenly split between the two sides and sent to $x=\pm \infty$. \subsection{Contact to Fermi liquid leads} \label{ContactFL} \begin{figure} \includegraphics[width=8.5cm,keepaspectratio] {LLFL2.pdf} \caption{\label{LLFL}Connecting FL leads causes backscattering of plasmons at the FL-LL boundary.} \end{figure} In the previous section we assumed an infinite LL which leads to enhanced dissipation and a reduced motor conductance due to repulsive electron-electron interactions.
It is well known that when contacting a LL by FL leads, the \textit{dc} conductance of the wire takes the value of an ideal noninteracting channel $g=e^2/h$ \cite{Maslov1995, Ponomarenko1995, Safi1995}. In this section we investigate whether attaching FL leads reduces the dissipation of the Thouless motor to the noninteracting value and reproduces the noninteracting motor conductance $g_M=e^2/h$. The FL leads generate backscattering of plasmons at the FL-LL boundary. This introduces memory into the effective equation of motion of the motor and reduces the dissipation to the noninteracting value at steady velocity, cf. Fig.\ \ref{LLFL}. The transition between LL and FL can be modeled as a change of the interaction parameter $K\rightarrow1$ and an associated change of the charge velocity $v_c\rightarrow v_F$ \cite{Maslov1995,Karzig2011}. The LL is connected to FL reservoirs at $x=\pm D/2$, which yields \begin{align} S_0&= \int d{\bf r}\frac{1}{2\pi}\left[\frac{\left(\partial_{\tau}\phi\right)^{2}}{ K(x)\,v_c(x)}+\frac{v_c(x)\left(\partial_{x}\phi\right)^{2}}{K(x)}\right]\,, \end{align} for the action in the $\phi$-representation, with \begin{align}\label{SpaceDependentLL} K(x)=\begin{cases} 1 & \abs{x}\geq D/2\\ K & \abs{x}<D/2 \end{cases} \end{align} and \begin{align} v_{c}(x)=\begin{cases} v_{F} & \abs{x}\geq D/2\\ v_{c} & \abs{x}<D/2\,. \end{cases} \end{align} We again obtain the effective action of the motor by integrating out the electronic degrees of freedom under the constraint $\phi(0,t)=-\vartheta(t)/2$. The procedure amounts to solving the saddle point equation for the $\phi$-field in the presence of the appropriate boundary conditions for $\phi$ and $\partial_x\phi$, as shown in Appendix \ref{SinglePointAction}.
This yields the effective action \begin{align}\label{SeffMatsubaraLLFL} S_{\text{eff}}[\vartheta]=\sum_{n}\frac{M(\omega_n)}{4\pi K}\abs{\omega_{n}}\,\abs{\vartheta_{n}}^{2}-\int_{0}^{\beta} d \tau\frac{eV}{2\pi}\vartheta\,, \end{align} with \begin{align}\label{GammaLLFL} M(\omega_n)=\left(1+2\sum_{m=1}^{\infty}\e^{-m\abs{\omega_n}\mathcal{T}}r_p^{m}\right)\,, \end{align} where $\mathcal{T} =D/ v_c $ is the traversal time of the plasmons from $x=0$ to the FL-LL boundary and back and $r_p=\frac{K-1}{K+1}$ is the plasmon reflection amplitude. To obtain the real time dynamics we analytically continue to the Keldysh contour analogous to the infinite LL case above. The kernels now take the form \begin{align} K^{R(A)}(\omega)&=\frac{\pm i\omega}{2\pi K}\left(1+2\sum_{n=1}^{\infty}\e^{\pm ni\omega \cal{T}}r_p^{n}\right)\,, \end{align} and $K^K(\omega)=\left(K^{R}(\omega)-K^{A}(\omega)\right)\coth(\omega/2T)$ \cite{Note1}. Fourier transforming to real time shows that the plasmon scattering at the LL-FL boundary induces a coupling of the quantum field to earlier classical velocities, \begin{align} S_{\text{diss}}= S_{qq}- &\int dt\, \frac{2\vartheta^{q}(t)}{2\pi K} \\ \times& \left(\dot{\vartheta}^{cl}(t)+2\sum_{n=1}^{\infty}\dot{\vartheta}^{cl}(t-n\mathcal{T})r_p^{n}\right)\,.\nonumber \end{align} Here \begin{align} S_{qq}=\int d t dt'K^{K}(t-t')\vartheta^{q}(t)\vartheta^{q}(t') \end{align} is the contribution of the dissipative action which is quadratic in the quantum fields. In contrast to the infinite LL case above, the Keldysh kernel \begin{align} K^K (t)= \frac{iT^{2}}{K}\left[\sum_{n=-\infty}^{\infty}\frac{1}{\cosh^{2}\left(\pi T\left[t+n{\cal T}\right]\right)}r_{p}^{\abs{n}}\right] \end{align} leads to a nonlocal coupling of the quantum fields also in the high temperature limit, resulting in correlations of the fluctuating force which are nonlocal in time.
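Although the kernel is nonlocal in time, its \textit{dc} limit is simple: for $\omega_n\rightarrow0$ the reflection series in Eq.\ \eqref{GammaLLFL} is geometric and sums to $(1+r_p)/(1-r_p)=K$, so the dc friction $M(0)/(4\pi K)$ reduces to the noninteracting value $1/(4\pi)$. A minimal numerical sketch of this limit (truncating the series at a large but finite order):

```python
def M0(K, n_terms=400):
    """dc limit of the memory kernel of Eq. (GammaLLFL), truncated series."""
    r_p = (K - 1) / (K + 1)
    return 1 + 2 * sum(r_p**m for m in range(1, n_terms))

# (1 + r_p)/(1 - r_p) = K for any interaction strength:
vals = {K: M0(K) for K in (0.3, 0.5, 0.8)}
```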
As before we combine the dissipative action with the free part and the bias induced mean force and decouple the quantum fields via a Hubbard-Stratonovich transformation. This yields the nonlocal classical saddle point equation \begin{align}\label{EOMLLFL} {\cal I} \ddot{\vartheta}^{cl}(t)&=\frac{eV}{2\pi}+\xi(t) \\ -&\frac{1}{2\pi K}\left[\dot{\vartheta}^{cl}(t)+2\sum_{n=1}^{\infty}\dot{\vartheta}^{cl}(t-n\mathcal{T})r_p^{n}\right]\,.\nonumber \end{align} The correlator of the fluctuating force is given by \begin{align} \braket{\xi(t)\xi(t')}=\frac{T^{2}}{2K}\sum_{n=-\infty}^{\infty}\frac{1}{\cosh^{2}\left(\pi T\left[t-t'+n{\cal T}\right]\right)}\,r_{p}^{\abs{n}}\,. \end{align} In the high temperature limit this leads to finite correlations at all multiples of the traversal time $\cal{T}$ \begin{align} \braket{\xi(t)\xi(t')}\simeq\frac{2T}{2\pi K}\sum_{n=-\infty}^{\infty}\delta(t-t'+n{\cal T})\, r_{p}^{\abs{n}}\,. \end{align} Since $r_p<0$, the nonlocal couplings to the velocity, i.e.\ the contribution $\propto r_p^n$ in Eq.\ \eqref{EOMLLFL} caused by multiple plasmon reflections at the FL-LL boundary and $x=0$, have alternating signs and a decaying amplitude $\propto \abs{r_p}^n$. Hence this force damps the motion for all even multiples of the traversal time and boosts the motion for all odd ones for a fixed sign of the velocity. How much energy is dissipated in this process depends on the detailed trajectory of $\vartheta$. At constant velocity, the effective dynamics in Eq.\ \eqref{EOMLLFL} leads to reduced dissipation and an enhanced velocity $\dot{\vartheta}=eV$. This results in a larger pumped charge per unit time \begin{align} I=\frac{e\dot{\vartheta}}{2\pi}=\frac{e^2}{ 2\pi \hbar}V\,, \end{align} and hence a \textit{dc} motor conductance which equals that of an ideal \textit{noninteracting} channel. Note that we again reinstated $\hbar$. 
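The enhanced steady velocity $\dot{\vartheta}=eV$ can be reproduced by direct time integration of the deterministic limit ($\xi=0$) of Eq.\ \eqref{EOMLLFL}. In the sketch below, the time step, traversal time, and truncation of the reflection sum are illustrative choices, not values from the text:

```python
import math

# Deterministic limit (xi = 0) of the nonlocal equation of motion (EOMLLFL),
# integrated by Euler steps; the memory sum is truncated at n_max reflections.
I_mom, K, eV = 1.0, 0.5, 1.0           # illustrative values, hbar = e = 1
r_p = (K - 1) / (K + 1)
dt, n_delay, n_max = 1e-2, 50, 40      # traversal time T = n_delay * dt
hist = [0.0] * (n_max * n_delay + 1)   # velocity history, most recent last
for _ in range(40000):
    thd = hist[-1]
    mem = 2 * sum(r_p**n * hist[-1 - n * n_delay] for n in range(1, n_max))
    thdd = (eV / (2 * math.pi) - (thd + mem) / (2 * math.pi * K)) / I_mom
    hist.append(thd + thdd * dt)
thd_steady = hist[-1]
```

The velocity relaxes to $eV$ independently of $K$, in agreement with the noninteracting motor conductance.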
Therefore, analogous to the \textit{dc} conductance of an ideal LL channel in contact to FL reservoirs \cite{Maslov1995}, also the \emph{dc motor conductance} is ultimately governed by the interactions in the attached reservoirs. Correspondingly the maximum output power of the Thouless motor is increased to the noninteracting value $K\rightarrow1$ in Eq.\ \eqref{MaximumPower}. \subsection{Friction and energy current with attached Fermi liquid leads} We now explore explicitly how the energy current is modified by plasmon reflections at the LL-FL boundary. We consider two different trajectories: a constant velocity $\vartheta(t)=\dot{\vartheta}t$, and a sudden step $\vartheta(t)=\vartheta_{0}\Theta(t)$. For both cases, we derive the energy current as given above in Eq.\ \eqref{jE}, based on the solution for $\phi$ in Eq.\ \eqref{PhiRealtimeLLFL}. For a sudden step, the gradient and time derivative of $\phi$ are strongly peaked $\delta$-functions that cannot interfere with each other. In this case, all the energy of the initial excitation is released into the FL reservoirs after integrating over multiple scattering events. Thus, the total dissipated energy \begin{align} E_{diss}=\int dt [ j^{E}(x>0)-j^{E}(x<0) ] \end{align} is determined by the initial plasmon excitation and takes the value of an infinite LL \begin{align} E_{diss}=\frac{\tau \overline{\dot{\vartheta}^{2}}}{2\pi K}\,. \end{align} Here $\int dt \dot{\vartheta}(t)^{2}=\tau \overline{\dot{\vartheta}^{2}}$ determines the dissipation caused by the initial plasmon excitation and $\tau$ is the step duration. In contrast for a constant velocity the reflected plasmons in Eq.\ \eqref{PhiRealtimeLLFL} interfere with each other, leading to a constant gradient $\partial_{x}\phi=K\dot{\vartheta}\,\text{sgn}(x)/(2v_{c})$ and time derivative $\partial_{t}\phi=-\dot{\vartheta}$ in the region $\abs{x}<D/2$. 
This yields a reduced energy current \begin{align}\label{EnergyCurrentFLLeads} j^{E}=\frac{\dot{\vartheta}^{2}}{4\pi}\text{sgn}(x)\,, \end{align} which corresponds to the dissipated power with the reduced \textit{noninteracting} friction $\gamma=(2\pi)^{-1}$ \begin{align} -P_{diss}=\frac{1}{2\pi}\dot{\vartheta}^{2}=j^{E}(x>0)-j^{E}(x<0)\,. \end{align} Therefore it is interference of reflected plasmons (cf. Fig.\ \ref{LLFL}) which reduces the energy current at a constant velocity and thus prevents the system from releasing all the energy into the attached Fermi liquid leads. \section{Magnetic motor}\label{Translation} The counter-propagating states of a single QSH edge (cf.\ Fig.\ \ref{fig1}) can be described as a Luttinger liquid analogous to the spinless quantum wire introduced in Sec.\ \ref{Model} \cite{Wu2006}. The bosonization of the helical channels is obtained from Eqs.\ \eqref{LeftRightMover} and \eqref{PsiBosonized} by replacing $\psi_{R}\rightarrow \psi_{R,\uparrow}$ and $\psi_{L}\rightarrow \psi_{L,\downarrow}$, while the Hamiltonian remains unchanged when written in terms of the bosonic fields. The exchange coupling to the nanomagnet $H_{M}=-J_0 /2 \int \dx \Psi^\dagger \pmb{\sigma} \Psi \cdot{\bf M}$ causes backscattering of the helical channels whenever $\bf M$ has a component in the $x$-$y$ plane. Here $\pmb{\sigma}$ is the vector of Pauli matrices and $\Psi=(\psi_{R,\uparrow},\psi_{L,\downarrow})^T$. For strong easy-plane anisotropy, we can parametrize the magnetization as $M_{x}=M \cos\vartheta_{M}$ and $M_{y}=M \sin\vartheta_{M}$, and the exchange coupling generates a sine-Gordon term \begin{align} S_{M}=-\frac{ J_0 M} {2\pi\lambda}\int d {\bf r} \,\cos\left(2\phi(x)+2k_{F}x+\vartheta_{M}\right)\,. \end{align} Here $k_F$ is measured from the Dirac point $k=0$.
Thus, the coupling of the nanomagnet to the helical edge states takes the same mathematical form as the coupling of the sliding periodic potential to the spinless LL in a quantum wire and we can directly translate the results of Sec.\ \ref{CouplingPeriodPot} and \ref{RedDyn} to the magnetic motor. For $K<2$ the $\phi$-field is locked to $\phi(x)=n\pi-k_{F}x-\vartheta_{M}/2$, which corresponds to alignment of the spin density along the exchange field of the nanomagnet \begin{align} s_{x}(x)=\frac{1}{2\pi\lambda}\cos\vartheta_{M}\quad s_{y}(x)=\frac{1}{2\pi\lambda}\sin\vartheta_{M}\,. \end{align} A full precession of the magnetization leads to quantized charge pumping of one electron across the gapped region coupled to the nanomagnet. Quantum fluctuations lead to an interaction dependent downward scaling of the effective strength of the exchange coupling $J$, which results in an effective gap size for the lowest available modes \begin{equation} \Delta_M =\left(\frac{2J_{0}M K\, v_{c}}{\left(2 \pi v_{c} \lambda ^{-1} \right)^{K}\lambda}\right)^{1/(2-K)}\,.\label{eq:DeltaMagnetic} \end{equation} This formula reproduces the gap $\Delta_\text{non-int.} = J_0 M $ for noninteracting helical edge modes (up to a numerical prefactor, as before) and shows the strong enhancement of the magnetically induced gap by repulsive electron-electron interactions, cf. Eq.\ \eqref{GapEnhancement}. The noninteracting QSH edge remains insulating as long as the chemical potential remains within the gap that is opened by the magnet around the Dirac point $k=0$. Section \ref{deviationsPerfectBS} shows that interactions also make the magnetic system more robust against changes of the chemical potential and demonstrates that it remains gapped as long as $\abs{\mu}$ is smaller than $\mu_{c}=2 \Delta_M /(\pi K) $, cf. Eq.\ \eqref{DeltaCritical}.
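Equation \eqref{eq:DeltaMagnetic} can be checked numerically in the same way as the gap of the periodic potential. The sketch below (illustrative values of $J_0M$, $v_c$, $\lambda$ in the weak-coupling regime; not part of the derivation) evaluates the noninteracting limit and the interaction enhancement:

```python
import math

def gap_M(K, JM, vc, lam):
    """Magnetically induced gap, Eq. (DeltaMagnetic); JM denotes J_0*M."""
    return (2 * JM * K * vc / ((2 * math.pi * vc / lam)**K * lam))**(1 / (2 - K))

JM, vc, lam = 0.01, 1.0, 1.0                 # weak exchange coupling, illustrative
noninteracting = gap_M(1.0, JM, vc, lam)     # = JM/pi, i.e. J_0*M up to a prefactor
enhanced = gap_M(0.5, JM, vc, lam)           # repulsive interactions, K < 1
```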
In the case of a large easy-plane anisotropy energy $D M^2_z/2$ ($D>0$), the Landau-Lifshitz-Gilbert equation governing the time evolution of the magnetization can be reduced to an equation of motion for the angle of the in-plane magnetization $\vartheta_M$, in which the inverse anisotropy constant acts as an effective moment of inertia ${\cal I}=D^{-1}$ \cite{Bode2012spin,Meng2014,Arrachea2015a}. With this we can readily translate the results for the effective dynamics of Sec.\ \ref{RedDyn} to the magnetic case, replacing $\vartheta \rightarrow \vartheta_M$ and ${\cal I}\rightarrow D^{-1}$. Thus, in the case of an infinite helical liquid, one obtains \begin{align} \frac{\ddot{\vartheta}_M^{cl}(t)}{D}=\frac{eV}{2\pi}-\frac{1}{2\pi K}\dot{\vartheta}_M^{cl}(t)+\xi(t)\,. \end{align} The dissipation is enhanced by repulsive interactions, leading to a reduced current and reduced motor conductance $g_M=K e^2/h$ compared to the noninteracting case in Eq.\ \eqref{CurrentInfiniteLL}. When assuming contact of the helical edge to Fermi liquid reservoirs as done in Sec.\ \ref{ContactFL}, the plasmon backscattering at the transition between helical liquid and reservoirs leads to an effective equation of motion including memory in Eq.\ \eqref{EOMLLFL} and the reduced dissipation of a noninteracting helical liquid at steady state.
Thus electron-electron interactions support the working principle of the motor. For infinite LLs with repulsive interactions, the friction experienced by the motor degree of freedom due to the coupling to the electrons is enhanced. When connecting the LL to FL reservoirs, plasmon reflections lead to an effective equation of motion including memory and a reduced dissipation which coincides with the noninteracting result for a constant motor velocity. Consequently, the effective motor conductance is determined by the attached noninteracting reservoirs analogous to the \textit{dc} conductance of an ideal LL. Our result also applies to a nanomagnet coupled to the helical edge of a QSH system. This system can be readily mapped to the Thouless motor, possibly leading to an experimentally more feasible realization of the motor. \section*{Acknowledgments} This work was supported in part by CRC 183 and SPP 1666 of the Deutsche Forschungsgemeinschaft. GR is grateful for support from the Packard Foundation, as well as from the IQIM, an NSF PFC, and to the Aspen Center for Physics, supported by the NSF grant PHY-1607761. \bibliographystyle{apsrev4-1}
\section{Introduction} Transition metal dichalcogenides (TMDs) present a promising class of nanomaterials with a direct band gap, efficient electron-light coupling, and strong Coulomb interaction \cite{THeinz, He2014, splendiani2010emerging, arora2015excitonic,gunnar_nano}. The latter gives rise to a variety of tightly bound excitons, which determine the optical response of TMDs \cite{Chernikov2014, gunnar_prb,steinhoff2014influence}. As atomically thin materials, they show an optimal surface-to-volume ratio and are therefore very sensitive to changes in their surroundings \cite{raja2017coulomb, schmidt2016reversible, conley2013bandgap}. As a consequence, one can tailor the optical fingerprint of these materials through external molecules \cite{voiry2015covalent,yuan2014establishing,yong2014ws,hayamizu2016bioelectronic}. Under non-covalent functionalization \cite{Hirsch2005}, the electronic band structure remains to a large extent unaltered, while the optical properties change significantly.\\ In our previous work, we have proposed a new sensing mechanism for molecules based on the activation of dark excitonic states in monolayer TMDs \cite{maja_sensor, ermin_review}. These dark states can lie energetically below the bright ones but are not directly accessible by light, as they are either spin- or momentum-forbidden \cite{hoegeleMono,hoegeleBi,zhang2015experimental}. We have shown that in the presence of molecules the absorption spectrum exhibits an additional peak at the position of the optically inaccessible $K\Lambda$ exciton. Unfortunately, the peak is only visible considerably below room temperature due to the significant broadening of excitonic transitions \cite{selig2016excitonic}. The aim of this work is to investigate the possibility to achieve room-temperature sensing of molecules.
To reach this goal we investigate photoluminescence (PL) spectra, since in contrast to absorption they are not characterized by a large background signal and thus show a better signal-to-noise ratio. Another important advantage of PL is its strong dependence on the number of excited excitons. After the process of thermalization, the excitons mainly occupy the energetically lowest dark $K\Lambda$ state \cite{selig2017dark}. As a result, the additional dark exciton peak is expected to be very large compared to the bright peak that otherwise strongly dominates absorption spectra. This work reveals the optimal conditions for the maximal visibility of dark excitons in PL spectra, which presents an important step towards a possible technological application of TMDs as molecular sensors. \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{Figure1.pdf} \end{center} \caption{\textbf{Schematic illustration of molecule-induced photoluminescence.} Excitonic dispersion of tungsten based TMDs with the K valley located at the center-of-mass momentum $Q=0$ and energetically lower $\Lambda$ valley located at $Q=\Lambda \approx 6.6 \,\text{nm}{^{-1}} $. (a) Optical excitation induces a microscopic polarization $P^{K}$ in the K valley. Due to exciton-phonon interaction the polarization can decay either within the K valley or to the energetically lower $\Lambda$ valley. At the same time incoherent excitons $N^{KK}$ and $N^{K\Lambda}$ are formed and thermalize until a Bose distribution is reached. (b) The bright excitons $N^{KK}$ located at the K valley decay radiatively by emitting a photon (PL), whereas the dark excitons $N^{K\Lambda}$ at the $\Lambda$ valley require a center-of-mass momentum to decay back to the light cone and emit light. Molecules on the surface of the TMD material can provide this momentum and hence induce photoluminescence (M-PL) from the dark $\Lambda$ valley. 
} \label{schema} \end{figure} \section{Theoretical approach} \subsection{Photoluminescence} To get microscopic access to the optical response of pristine and molecule-functionalized monolayer TMDs after excitation with a laser pulse, we apply the density matrix formalism in combination with the nearest-neighbor tight-binding approach \cite{Kochbuch, Kira2006,carbonbuch, kadi14}. Our goal is to calculate the steady-state photoluminescence $\textnormal{PL}(\omega_q)$, which is given by the rate of emitted photons \begin{equation}\label{SteadyStatePL} \textnormal{PL}(\omega_q) \propto \omega_q \frac{\partial}{\partial t} \langle c^{\dagger}_{\bf q}c_{ \bf q}\rangle \propto \text{Im} \left [ \sum_{\bf{k_1}\bf{k_2}\mu} M_{\bf q \bf{k_1}\bf{k_2}} S^{vc_{\mu}}_{\bf{k_1}\bf{k_2}} (\omega_q )\right ] \end{equation} and is thus determined by the photon-assisted polarization \cite{thranhardt2000quantum} $S^{vc_{\mu}}_{\bf{k_1}\bf{k_2}} = \langle c^{\dagger}_{\bf q} a^{\dagger v}_{\bf{k_1}} a^{c_\mu}_{\bf{k_2}} \rangle $ with electron annihilation (creation) operators $a^{(\dagger)}$ and photon annihilation (creation) operators $c^{(\dagger)}$. This microscopic quantity is a measure for the emission of photons with energy $\hbar \omega_q$ due to relaxation from the state ($c_{ \mu}, \bf{k_2}$) in the conduction band of valley $\mu$ with the electronic momentum $\bf k_2$ to the state ($v, \bf{k_1}$) in the valence band with the momentum $\bf k_1$. Note that we take into account the conduction band minima both at the K and the $\Lambda$ valley, while there is only a valence band maximum at the K valley. We neglect the impact of the energetically lower $\Gamma$ valley. Before we derive the TMD Bloch equations, we account for the crucial importance of excitonic effects \cite{Chernikov2014, gunnar_prb,arora2015excitonic} by transforming the system to the excitonic basis.
We use the relation $X^{cv}_{\bf{k_1 k_2}} \rightarrow X_{\bf{qQ}}^{cv }= \sum_{\mu} \varphi_{\bf q}^{\mu} X_{\bf{Q}}^{\mu}$, where each observable $X_{\bf{qQ}}^{cv }$ is projected to a new excitonic quantity $X_{\bf{Q}}^{\mu}$ that is weighted by the excitonic wave function $\varphi_{\bf q}^{\mu}$. For higher-order correlations it reads accordingly: $X^{cvvc}_{\bf{k_1 k_2 k_3 k_4}} \rightarrow X_{\bf{qQq'Q'}}^{cvvc }= \sum_{\mu\mu'} \varphi_{\bf q}^{\mu} \varphi_{\bf q'}^{\mu' *} X_{\bf{QQ'}}^{\mu\mu'}$. Here, we have introduced the center-of-mass momentum $\bf Q = k_2 - k_1$ and the relative momentum ${\bf q}=\alpha {\bf k_1} + \beta {\bf k_2}$ with $\alpha= \frac{m_h}{m_h + m_e^{\mu}}$ and $\beta=\frac{m_e^{\mu}}{m_h + m_e^{\mu}}$, where $m_{e(h)}^{\mu}$ denotes the electron (hole) mass. The excitonic eigenfunctions $\varphi_{\bf q}^{\mu}$ and eigenenergies $\varepsilon^{\mu}$ are obtained by solving the Wannier equation, which presents an eigenvalue problem for excitons \cite{Kochbuch, Kira2006} \begin{equation}\label{wannier} \frac{\hbar^2 q^{2}}{2m^{\mu}} \varphi_{\bf q}^{\mu} - \sum_{\bf k} V_{\text{exc}}(\bf k) \varphi_{\bf {q-k} }^{\mu}=\varepsilon^{\mu}\varphi_{\bf {q}}^{\mu}. \end{equation} Here, $m^{\mu}=\frac{m_h\cdot m_e^{\mu}}{m_h+m_e^{\mu}}$ is the reduced exciton mass and $V_{\text{exc}}$ describes the attractive part of the Coulomb interaction that is responsible for the formation of excitons. The corresponding Coulomb matrix elements are calculated within the Keldysh potential \cite{Keldysh1978, gunnar_prb,Cudazzo2011}. For a pristine monolayer TMD one obtains the well-known Elliott formula for photoluminescence \cite{hoyer2005many} by solving the semiconductor luminescence equations \cite{kira1999quantum,thranhardt2000quantum} \begin{equation}\label{PLG} I^{\sigma}(\omega_q) \propto \text{Im} \left[ \sum_{\mu} \frac{ |M^{\sigma\mu}|^2 \delta_{\mu,K} \left( |P^{\mu}_0|^2 + N^{\mu}_0\right) } {\varepsilon^{\mu} - \hbar \omega_q - i\gamma^{\mu}} \right].
\end{equation} Here, $M^{\sigma\mu}$ is the exciton-photon matrix element corresponding to the coupling of the exciton to the $\sigma$-polarized light \cite{Kochbuch,thranhardt2000quantum}. In contrast to absorption, which is determined solely by the microscopic polarization $P^{\mu}_{\bf Q}= \sum_{\bf q} \varphi_{\bf q}^{\mu*} \langle a^{\dagger c_{ \mu}}_{{\bf q} +\alpha {\bf Q} } a^{v}_{{\bf q} -\beta {\bf Q} } \rangle $, the PL also shows an incoherent contribution that scales with the exciton occupation $N^{\mu}_{{\bf Q}}= \sum_{{\bf q_1} {\bf q_2}} \varphi_{\bf q_1}^{\mu*} \varphi_{\bf q_2}^{\mu} \delta \langle a^{\dagger c_{\mu}}_{\bf{q_1}+\alpha {\bf Q}} a^{ v}_{\bf{q_1}-\beta {\bf Q}} a^{\dagger v}_{\bf{q_2}-\beta {\bf Q}} a^{c_{\mu}}_{\bf{q_2}+\alpha {\bf Q}} \rangle$. This quantity corresponds to the expectation value of correlated electron-hole pairs, which are referred to as incoherent excitons \cite{thranhardt2000quantum}. The latter cannot be created through optical excitation with a coherent laser pulse, but are formed assisted by, e.g., exciton-phonon scattering \cite{selig2017dark,thranhardt2000quantum}. Finally, the denominator of Eq. (\ref{PLG}) contains the excitonic eigenvalues $\varepsilon^{\mu}$ determining the positions of the excitonic resonances in PL as well as the dephasing rate $\gamma^{\mu}$ responsible for the linewidth of excitonic transitions. The latter will turn out to be crucial for the visibility of dark excitonic states. Therefore, we microscopically calculate $\gamma^{\mu}$ including its radiative and non-radiative part, which is dominated by exciton-phonon scattering in the considered low-excitation limit. We find $\gamma^{KK}= 5$ (10) meV and $\gamma^{K\Lambda}=1$ (8) meV for 77 (300) K \cite{selig2016excitonic} in WS$_2$. Our goal is now to calculate the molecule-induced changes in the photoluminescence, i.e. to what extent \eqref{PLG} changes in the presence of molecules.
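The incoherent part of the Elliott formula \eqref{PLG} is a sum of complex Lorentzians, which is easy to check numerically. The following minimal Python sketch uses a unit oscillator weight as a placeholder for the matrix elements and the quoted WS$_2$ values ($\varepsilon^{KK}\approx 2.0$ eV, $\gamma^{KK}=5$ meV at 77 K) to verify that $\mathrm{Im}[w/(\varepsilon-\hbar\omega-i\gamma)]$ peaks at the exciton energy with half width $\gamma$:

```python
import numpy as np

def elliott_line(hw, eps, gamma, weight=1.0):
    """Incoherent Elliott line shape Im[w / (eps - hw - i*gamma)],
    i.e. a Lorentzian of height w/gamma centered at eps."""
    return np.imag(weight / (eps - hw - 1j * gamma))

# Placeholder numbers in the spirit of the text: bright KK exciton of WS2
# at 2.0 eV with a 5 meV dephasing rate at 77 K.
hw = np.linspace(1.90, 2.05, 3001)   # photon energy grid (eV), 0.05 meV steps
pl = elliott_line(hw, eps=2.0, gamma=0.005)

peak_pos = hw[np.argmax(pl)]         # resonance position (eV)
half_max = pl.max() / 2.0            # reached at eps +- gamma
```

The half width at half maximum is exactly $\gamma$, which is why the smaller dephasing of the dark $K\Lambda$ state produces the much narrower line discussed below.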
In order to calculate molecule-induced photoluminescence, we start with the general relation for the luminescence, i.e. \eqref{SteadyStatePL}, and derive the TMD Bloch equations for our system. Besides the photon-assisted polarization $S^{\mu}$, which is the key quantity for the steady-state luminescence, we will also investigate the molecule-induced changes in the exciton polarization $P^{\mu}$ and the incoherent exciton densities $N^{\mu}$ appearing in \eqref{PLG}. In the following, we refer to $P^{\mu}$ as the quantity describing coherent excitons. To derive the TMD Bloch equations for the microscopic quantities $X=S,P,N$, we exploit the Heisenberg equation of motion $i\hbar \dot{X} = [X,H]$. The Hamilton operator $H=H_0+H_{c-c}+H_{c-p}+H_{c-ph}+H_{c-m}$ describes many-particle interactions and includes the free carrier and phonon contribution $H_{0}$, the carrier-carrier interaction $H_{c-c}$, the carrier-photon interaction $H_{c-p}$, the carrier-phonon interaction $H_{c-ph}$, and the carrier-molecule interaction $H_{c-m}$. A detailed description of the Hamilton operator and the appearing matrix elements can be found in our previous work \cite{selig2016excitonic, majaTechnikPaper, gunnar_prb}. The carrier-molecule interaction is considered to be an interaction between excitons in the TMD and the dipole moment induced by the attached molecules. The molecules disturb the translational symmetry in the TMD lattice and soften the momentum conservation in the system. Depending on the molecular distribution and coverage, certain momenta are favored, offering the possibility to address certain otherwise dark excitonic states.
The exciton-molecule coupling elements read \begin{equation} \label{G} G_{\bf{Qk}}^{\mu\nu}= \sum_{\bf{q}} ( \varphi_{\bf {q}}^{\mu*} g^{cc}_{\bf{q}_\alpha, \bf{q}_\alpha + \bf{k}} \varphi_{\bf{q+\beta k}}^{\nu} - \varphi_{\bf {q}}^{\mu*} g^{vv}_{\bf{q}_\beta - \bf{k}, \bf{q}_\beta} \varphi_{\bf {q-\alpha k}}^{\nu}) \end{equation} with $\bf{q}_\alpha=\bf{q}-\alpha\bf{Q}$ and $\bf{q}_\beta=\bf{q}+\beta\bf{Q}$. They correspond to the carrier-dipole coupling $g_{\bf{kk'}}^{\lambda\lambda'} = \langle \Psi_{\bf k}^{\lambda} ({\bf{r}}) | \sum_l \phi^{\bf d}_l({\bf{r}}) | \Psi_{\bf k'}^{\lambda'} (\bf{r}) \rangle $ sandwiched between the involved excitonic wave functions, i.e. the expectation value of the dipole potential $ \phi^{{\bf d}}_l({\bf{r}}) = \frac{1}{4\pi\epsilon_0} \frac{\bf d \cdot (r-R_l)}{|{\bf r-R_l}|^3} $ formed by all attached molecules, evaluated with the tight-binding wave functions $\Psi^\lambda_{\bf k}(\bf r)$. As a result, the strength of the exciton-molecule interaction is given by the molecular dipole moment $\bf d$ and the distance $\bf R_l$ of the molecules from the TMD surface as well as by the number of attached molecules, which can be translated to a molecular coverage $n_m$. Assuming homogeneously distributed molecules in the x-y direction, one can write ${\bf R_l} = (x\cdot \Delta R, y\cdot \Delta R, R_z)$ with $x,y \in \mathbb{N}$ and $\Delta R$ as the average distance between the molecules.
With this we find for the carrier-dipole coupling elements \begin{eqnarray}\label{gg} g_{\bf{k_{1}k_{2}}}^{\lambda_1 \lambda_2} &=& \frac{e_{0}}{2\pi\varepsilon_{0}\hbar} \sum_{x} n_m \delta_{| {\bf{k_1-k_2}}|, \frac{2\pi x}{\Delta R}} \sum_{j}C_{j}^{\lambda_1 *} ({\bf k_{1}}) C_{j}^{\lambda_2} ({\bf k_{2}}) \notag \\ &\times& \delta_{\bf{k_{1}-k_{2}},\bf{q}} \int d{\bf {q}}\frac{\bf d \cdot \bf{q}}{|\bf q|^{2}} e^{-R_{z}q_z} \label{molDetail} \end{eqnarray} with the tight-binding coefficients $C^\lambda_j(\bf k)$, where $\lambda=v,c$ denotes the valence or the conduction band, while $j$ determines the contribution from different orbital functions \cite{gunnar_prb}. Applying the Heisenberg equation in excitonic basis, we obtain the following TMD Bloch equations \begin{eqnarray} \dot S_{\bf{kQ}}^{\mu} &=&\hspace{-2pt}-i\Delta\tilde{\omega}_{\bf{kQ}}^{\mu} S_{\bf{kQ}}^{\mu} \hspace{-2pt}+\hspace{-2pt} M^{\sigma \mu}_{\bf{kQ}} \delta_{\bf{Q,0}} \hspace{-2pt}\left( |P_{\bf{0}}^{\mu}|^2 \hspace{-2pt}+\hspace{-2pt} N_{\bf{Q}}^{\mu}\right) \notag \\ &+& \sum_{\nu, \bf{Q'}}\hspace{-2pt}G_{\bf{QQ'}}^{\mu \nu} S_{\bf{k, Q-Q'}}^\nu \label{SGleichung} \\ \dot{P}^{\mu}_\mathbf{Q}&=&-i\Delta {\omega}_{\bf{Q}}^\mu P^{\mu}_\mathbf{Q} + \Omega^{\mu}_{{\bf Q}} P^{\mu}_\mathbf{Q} \delta_{{\bf Q, 0}} + \sum_{\nu, \bf{Q'}} G_{\bf{QQ'}}^{\mu \nu} P_{\bf{ Q-Q'}}^\nu \label{PGleichung} \\ \dot{N}^{\mu}_\mathbf{Q}&=& \sum_{\nu, \bf{Q'}} \Gamma_{{\bf Q'}{\bf Q}}^{ \nu \mu , \text{in}} |P^{\nu}_{{\bf Q'}} |^2 \delta_{{\bf Q', 0}} -\Gamma^{\mu}_{\text{rad}} N^{\mu}_{{\bf Q}} \delta_{{\bf Q, 0}} \notag \\ &+& \sum_{\nu, \bf{Q'}} \left( \Gamma_{{\bf Q'}{\bf Q}}^{ \nu \mu , \text{in}} N^{\nu}_{{\bf Q'}} -\Gamma_{{\bf Q}{\bf Q'}}^{ \mu \nu , \text{out}} N^{\mu}_{{\bf Q}} \right) \notag \\ &+& \sum_{\nu,{\bf Q'}} \hspace{-2pt} |G_{\bf{QQ'}}^{\mu \nu}|^2 \hspace{-2pt} \left(\hspace{-2pt} N_{\bf{ Q-Q'}}^\nu \hspace{-2pt} - \hspace{-2pt} N_{\bf{ Q}}^\mu\right) 
\mathcal{L}_{\gamma_{\mu\nu}}\hspace{-2pt}(\varepsilon^{\nu}_{{\bf Q-Q'}}-\hspace{-2pt} \varepsilon^{\mu}_{{\bf Q}}) \label{NGleichung} \end{eqnarray} corresponding to a coupled system of differential equations for the photon-assisted polarization $S_{\bf{Q}}^{\mu}$, the microscopic polarization (coherent excitons) $P^{\mu}_\mathbf{Q}$, and the incoherent exciton occupation $N^{\mu}_\mathbf{Q}$. Here, we have introduced $\varepsilon^{\mu}_{{\bf Q}}=\varepsilon^{\mu}+\frac{\hbar^2 Q^{2}}{2M^\mu}$ with the total mass $M^{\mu}=m_h+m_e^{\mu}$ and $\Delta \omega_{\bf{Q}}^\mu= \frac{1}{\hbar} (\varepsilon^{\mu}_{{\bf Q}} -i\gamma^\mu)$. In Eq. (\ref{SGleichung}), the transition frequency is additionally determined by the photon frequency $\omega_{\bf k}$ and reads $\Delta \tilde{\omega}_{\bf{kQ}}^\mu =\Delta \omega_{\bf{Q}}^\mu - \omega_{\bf k}$. In Eq. (\ref{NGleichung}), $\mathcal{L}_{\gamma_{\mu\nu}}$ represents a Lorentzian function with the width $\gamma_{\mu\nu}=\gamma^\mu+\gamma^\nu$. Furthermore, $G_{\bf{QQ'}}^{\mu \nu}$ is the exciton-molecule matrix element, which enables molecule-induced coupling between different states $\mu$ and $\nu$ for all quantities $S,P,N$. Equations (\ref{PGleichung}) and (\ref{NGleichung}) describe the optical excitation and decay of coherent excitons as well as the formation, thermalization, and decay of incoherent excitons (Fig. \ref{schema}(a)). In contrast, \eqref{SGleichung} describes the radiative decay of the thermalized excitons including the molecule-assisted photoemission process (Fig. \ref{schema}(b)). The coherent excitons are driven by the optical field $\Omega^{\mu}$ and decay radiatively and non-radiatively, which is both covered by the dephasing rate $\gamma^\mu$. The dephasing of coherent excitons leads to the formation of incoherent excitons, which is reflected by the term $\propto |P|^2$ in \eqref{NGleichung}.
The incoherent excitons can also decay radiatively with the rate $\Gamma^{\mu}_{\text{rad}}$, as long as they are located within the light cone with $\textbf Q\approx 0$. Moreover, the incoherent excitons thermalize towards a thermal Bose distribution through exciton-phonon scattering. The corresponding out-scattering rate $\Gamma^{\mu\nu, \text{out}}_{{\bf{QQ'}}}$ describes processes from the state $(\mu, {\bf Q})$ to the state $(\nu, {\bf Q'})$, while the in-scattering rate $\Gamma^{\nu\mu , \text{in}}_{{\bf{Q'Q}}}$ describes the reverse process. More details on the scattering rates can be found in Ref. \onlinecite{selig2017dark}. Since we are interested in the steady-state photoluminescence after exciton formation, we can decouple \eqref{PGleichung} and \eqref{NGleichung} from \eqref{SGleichung}. We first need to solve \eqref{PGleichung} and \eqref{NGleichung} to get access to the thermalized exciton distribution. The results are presented in the next section, in particular focusing on the changes in the exciton dynamics induced by the presence of molecules. With this, we have access to the steady-state photoluminescence by solving \eqref{SGleichung} via Fourier transformation. To get analytic insights, we can restrict the appearing sum over the momentum $\bf Q'$ in \eqref{SGleichung} by taking into account only the most pronounced terms with ${\bf Q'} = 0$. Hence we find for the incoherent contribution of the photoluminescence \begin{equation}\label{PLana} I(\omega) \propto \text{Im} \bigg( \frac{|M_{\omega}^{\sigma K}|^2}{\Delta E_{\omega}^{K} - \frac{|G^{K\Lambda}|^2}{\Delta E_{\omega}^{\Lambda}}} \left[ N_{\bf 0}^{KK} (1-\alpha) + \alpha N_{\bf 0}^{K\Lambda} \right] \bigg) \end{equation} with $\alpha= \frac{|G^{K\Lambda}|^2}{(\Delta E_{\omega}^{\Lambda})(\varepsilon^{\Lambda}-\varepsilon^{K}+i\gamma^{K\Lambda})}$ and $\Delta E_{\omega}^{\mu} = \varepsilon^{\mu} - \hbar \omega - i\gamma^{\mu}$. For $G^{K\Lambda}=0$, i.e. 
the pristine case without molecules, this leads to the well-known Elliott formula from \eqref{PLG} in the incoherent limit with $|P^{\mu}| = 0$. If $G^{K\Lambda}\neq 0$, i.e. in the case of molecules attached to the surface of the TMD, we expect new peaks to appear in the PL whenever $\Delta E_{\omega}^{\Lambda} = 0$. Now, we have all ingredients at hand to investigate molecule-induced changes in the photoluminescence. If not stated otherwise, we use a standard set of molecular parameters: a dipole moment of $d=13$ D corresponding to the exemplary merocyanine molecules \cite{photocrome}, a dipole orientation of \mbox{$90^\circ$}, and a molecular coverage $n_m=1.0 \text{ nm}^{-2}$. The orientation of the molecules is assumed to be perpendicular to the TMD plane, which is the most favorable case for densely packed molecules \cite{tsuboi2003formation}. Moreover, we assume the molecules to be attached non-covalently via van der Waals interaction, leading to a distance between molecules and TMD surface of $R_z =0.36$ nm. We model the realistic situation where TMD monolayers are located on a SiO$_2$ substrate with a dielectric constant of $\epsilon_{\text{bg}} = 3.9$. We assume a typical carrier density of $n_{ex}=10^{11} \text{ cm}^{-2}$. To calculate the relative separation between the bright $KK$ and the dark $K\Lambda$ exciton, we solve the Wannier equation using consistent DFT input parameters for the electronic band structure of TMDs \cite{andor}. We find $\Delta E^{K\Lambda}= E^{KK}-E^{K\Lambda} \approx 50$ meV for WS$_2$, our standard TMD material. Finally, we use an exemplary temperature of $T=77$ K, as the linewidths in this regime are narrow enough to study the molecule-induced changes in the optical response. A detailed temperature study including optimal room-temperature conditions is presented in the last section of this manuscript.
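The qualitative content of \eqref{PLana} can be illustrated with a toy spectrum built from two Lorentzians, where the dark-peak weight scales as $|G^{K\Lambda}|^2 N^{K\Lambda}_{\bf 0}$ in the weak-coupling spirit of the formula. This is a caricature, not the full complex-valued expression; the coupling strength and occupations below are illustrative assumptions, while the resonance energies and dephasing rates follow the quoted WS$_2$ values at 77 K:

```python
import numpy as np

def lorentzian(hw, eps, gamma):
    """Lorentzian line shape gamma / ((eps - hw)^2 + gamma^2)."""
    return gamma / ((eps - hw) ** 2 + gamma ** 2)

# Toy weak-coupling model: the bright KK peak carries the weight N_KK,
# the molecule-activated dark peak a weight ~ (|G|/dE)^2 * N_KL.
# G and the occupations are assumed values for illustration only.
eps_bright, eps_dark = 2.0, 1.947       # eV, bright/dark resonances (text)
gam_bright, gam_dark = 0.005, 0.001     # eV, dephasing rates at 77 K (text)
G = 0.005                               # eV, assumed exciton-molecule coupling
dE = eps_bright - eps_dark              # dark-bright splitting (~53 meV)
N_KK, N_KL = 1.0, 20.0                  # assumed thermalized occupations

w_bright = N_KK
w_dark = (G / dE) ** 2 * N_KL           # molecule-induced oscillator weight

hw = np.linspace(1.90, 2.05, 3001)
pl = w_bright * lorentzian(hw, eps_bright, gam_bright) \
   + w_dark * lorentzian(hw, eps_dark, gam_dark)

# dark/bright peak-height ratio, evaluated at the two resonances
ratio = pl[np.argmin(np.abs(hw - eps_dark))] \
      / pl[np.argmin(np.abs(hw - eps_bright))]
```

Even with a weight two orders of magnitude below $N^{K\Lambda}$, the narrow dark line reaches a peak height of the same order as the bright one, because the peak height scales with weight over linewidth.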
First, we will show the influence of the molecules on the exciton dynamics by solving \eqref{PGleichung} and \eqref{NGleichung} in order to access the steady-state exciton distribution needed for photoluminescence. With this insight, we will then calculate the molecule-assisted photoluminescence by solving \eqref{SGleichung} and exploiting \eqref{SteadyStatePL}. \subsection{Exciton dynamics}\label{ExcDyn} \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{Figure2.pdf} \end{center} \caption{\textbf{Impact of molecules on exciton dynamics.} Molecule-induced changes $\Delta N^{\mu} = (N^{\mu}_{\text{mol}}-N^{\mu}_{0})/N^{tot}_0 $ in the occupation of the $KK$ (orange line) and $K\Lambda$ (purple line) excitonic states in WS$_2$ functionalized with merocyanine molecules with a dipole moment of 13 D at the exemplary temperature of 77 K. Here, $N^{\mu}_{\text{mol}}$ and $N^{\mu}_{0} $ denote the excitonic occupation with and without the presence of molecules, respectively. We observe that the molecules first slightly enhance the formation of $K\Lambda$ excitons ($t<0.2$ ps), while during the exciton thermalization, they increase the population of $KK$ excitons. In general, the molecule-induced changes in the exciton dynamics are rather small, with less than 3$\%$. The inset shows the absolute exciton dynamics for pristine WS$_2$, where after 0.5 ps an equilibrium distribution is reached with the highest occupation of the energetically lowest $K\Lambda$ excitons. } \label{excDynamics} \end{figure} Before we investigate the changes in the optical fingerprint of the TMD material after non-covalent functionalization with molecules, we first study the impact of molecules on the exciton dynamics. Evaluating \eqref{PGleichung} and \eqref{NGleichung}, we have microscopic access to the time- and momentum-resolved dynamics of coherent and incoherent exciton densities and can track the molecule-induced changes in the formation and thermalization of excitons.
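The phonon-driven thermalization encoded in \eqref{NGleichung} can be caricatured by a two-level rate model for the $KK$ and $K\Lambda$ occupations, with in- and out-scattering rates obeying detailed balance. This is a drastic simplification (Boltzmann instead of Bose statistics, an assumed scattering rate), but it reproduces the key point that nearly all excitons end up in the lower dark state:

```python
import math

# Two-level caricature of Eq. (NGleichung): occupations n_kk, n_kl coupled
# by phonon in/out scattering with detailed balance.  g_down is an assumed
# rate; the 50 meV splitting and 77 K follow the text.
dE = 0.050                           # eV, KK - KL splitting (WS2)
kT = 8.617e-5 * 77                   # eV, thermal energy at 77 K
g_down = 10.0                        # 1/ps, KK -> KL scattering (assumed)
g_up = g_down * math.exp(-dE / kT)   # KL -> KK, detailed balance

n_kk, n_kl = 1.0, 0.0                # start: all excitons in the bright state
dt, t_end = 1e-4, 2.0                # ps, explicit Euler time stepping
for _ in range(int(t_end / dt)):
    flow = (g_down * n_kk - g_up * n_kl) * dt   # net KK -> KL transfer
    n_kk -= flow
    n_kl += flow

# The ratio saturates at the Boltzmann factor exp(dE/kT) ~ 1.9e3, i.e. the
# dark KL state hosts nearly all excitons after thermalization.
boltzmann = math.exp(dE / kT)
```

The relaxation time of this toy model, $1/(g_{\downarrow}+g_{\uparrow})\approx 0.1$ ps, is consistent with the sub-picosecond equilibration discussed below.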
For pristine TMDs we find (i) the formation of a coherent exciton density $|P^K|^2$ as the response to the optical excitation of the system with a weak pump pulse, and (ii) the radiative and non-radiative decay of the coherent exciton density and the phonon-assisted formation of an incoherent exciton density $N^{\mu}_{{\bf Q}}$, cf. the inset of Fig. \ref{excDynamics}. The timescale for the decay of coherent and the formation of incoherent excitons is rather fast with $<0.1$ ps after optical excitation, whereas the thermalization is slower and equilibrium is reached after approximately 0.5 ps \cite{selig2017dark}. Interestingly, after thermalization the density of the $KK$ excitons is negligibly small, since most excitons occupy the energetically lowest $K\Lambda$ states (inset of Fig. \ref{excDynamics}). As this state is dark, i.e. optically inaccessible, most incoherent excitons are lost for optics. However, molecules can in principle provide the required center-of-mass momentum $\bf Q$ to reach these dark states. Now, we investigate how the attached molecules influence the processes of exciton formation and thermalization. This is achieved by including the molecules in the calculation of coherent and incoherent exciton densities, cf. the last line in \eqref{PGleichung} and \eqref{NGleichung}. Figure \ref{excDynamics} illustrates the difference of the densities $\Delta N^\mu = (N^\mu_{\text{mol}} - N^\mu_{0})/N^{tot}$ with and without the exemplary merocyanine molecules, normalized to the total occupation $N^{tot}$. We find that within the first \mbox{100 fs} the occupation of the $K\Lambda$ excitons is slightly enhanced, while the occupation of $KK$ excitons is reduced. This means that molecules support the formation of incoherent $K\Lambda$ excitons on the one hand and suppress the formation of incoherent $KK$ excitons on the other hand, as molecule-mediated exciton relaxation to $K\Lambda$ states is very efficient.
For the exciton thermalization, the behavior of $K\Lambda$ and $KK$ excitons is reversed and the molecule-induced changes are more pronounced. Note, however, that the observed changes are generally rather small and in the range of $2 \%$. This justifies the decoupling of \eqref{PGleichung} and \eqref{NGleichung} from \eqref{SGleichung}, as the influence of the molecules on the thermalized distributions is small. Furthermore, since the focus of our work lies on the energy-resolved photoluminescence, it is sufficient to take into account thermalized incoherent exciton occupations. In the following, we investigate to what extent the molecule-induced photoluminescence is sensitive to experimentally accessible knobs, such as carrier excitation density and temperature, as well as molecular characteristics including dipole moment, orientation, distribution, coverage, and distance. We focus on tungsten-based TMDs (WS$_2$ and WSe$_2$), since here the dark $K\Lambda$ exciton is the energetically lowest state, exhibiting a large occupation after the thermalization. As a result, we expect the largest PL signal for dark excitons in W-based TMDs. \section{Excitation density} Here, we study the impact of the excitation density on the PL of TMDs in the presence of molecules. For incoherent excitons after thermalization, we assume a Bose distribution \begin{equation}\label{Bose} N^{\mu}_Q = \left[ \exp{\left(\frac{E_Q^{\mu} - \mu_{\text{chem}}}{k_{\text{B}} T}\right)} - 1 \right]^{-1} \end{equation} with $E_Q^{\mu} = \varepsilon^{\mu} + \frac{\hbar^2Q^2}{2M^{\mu}}$, the Boltzmann constant $k_{\text{B}}$, and the chemical potential \cite{Kochbuch} $ \mu_{\text{chem}} = k_{\text{B}} T \ln \left[ 1-\exp(-\frac{n_{\text{ex}}\hbar^2 2 \pi }{k_{\text{B}} T 3 M^{\mu}} ) \right] $, where $n_{\text{ex}}$ is the excitation density.
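Equation \eqref{Bose} together with the quoted chemical potential can be evaluated directly. The short sketch below assumes a total exciton mass of one free-electron mass as a placeholder for the DFT value, and illustrates why the dark $K\Lambda$ occupation dominates over the bright $KK$ occupation at 77 K:

```python
import math

KB = 8.617e-5        # Boltzmann constant (eV/K)
HBAR2 = 7.62e-2      # hbar^2 / m0 in eV*nm^2 (free-electron-mass units)

def chem_potential(n_ex, T, M):
    """Chemical potential of the thermalized exciton gas (text formula);
    n_ex in nm^-2, T in K, M in units of the free electron mass m0."""
    return KB * T * math.log(
        1.0 - math.exp(-n_ex * HBAR2 * 2 * math.pi / (KB * T * 3 * M)))

def bose_occupation(E, mu, T):
    """Bose distribution N = 1 / (exp((E - mu)/kT) - 1)."""
    return 1.0 / (math.exp((E - mu) / (KB * T)) - 1.0)

# Assumed numbers: M ~ 1 m0 (placeholder), n_ex = 1e11 cm^-2 = 1e-3 nm^-2,
# dark KL state 50 meV below the bright KK state, T = 77 K.
T, M, n_ex = 77.0, 1.0, 1e-3
mu = chem_potential(n_ex, T, M)          # negative: non-degenerate gas
N_dark = bose_occupation(0.000, mu, T)   # KL bottom as energy reference
N_bright = bose_occupation(0.050, mu, T)
```

With these inputs the $K\Lambda$ bottom is populated roughly three orders of magnitude more strongly than the $KK$ bottom, close to the Boltzmann factor $e^{\Delta E/k_{\text{B}}T}$.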
\begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{Figure3.pdf} \end{center} \caption{\textbf{Dependence on excitation density.} (a) The PL spectrum of molecule-functionalized WS$_2$ shows a broad peak at 2.0 eV stemming from bright $KK$ excitons and a narrow peak at 1.947 eV reflecting the molecule-activated dark $K\Lambda$ excitons. The calculation is performed at 77 K and the PL intensity at different excitation densities is normalized to the bright resonance. In the low-excitation regime (purple curve) the dark peak reaches 80\% of the intensity of the bright peak. For excitation densities higher than $10 \times 10^{11} \text{ cm}^{-2} $, the dark peak becomes even more pronounced. (b) Maximum PL intensity of the dark (purple) and the bright (orange) peak as well as (c) their ratio as a function of the excitation density $n_{\text{ex}}$ for WS$_{2}$ (solid) and WSe$_{2}$ (dashed). Both TMDs show qualitatively the same behavior, although in WS$_{2}$ the dark peak is more pronounced even for low excitation densities due to the larger overlap of excitonic wave functions, resulting in more efficient molecule-exciton coupling. } \label{excDensity} \end{figure} Now, we investigate the influence of $n_{\text{ex}}$ on $KK$ and $K\Lambda$ excitons and their optical fingerprint in PL spectra. The incoherent exciton density is the driving mechanism for the photon-assisted polarization and hence for the PL, cf. \eqref{PLana}. For the pristine case, $N_{\bf{0}}^{KK}$ is the crucial quantity, while the relaxation from $K\Lambda$ excitons and hence the occupation $N_{\bf{0}}^{K\Lambda}$ only contributes in the presence of molecules, i.e. for $G^{K\Lambda} \neq 0$. Since the excitation density $n_{\text{ex}}$ directly enters $N_{\bf{Q}}^{\mu}$, we expect it to have a large impact on the PL. Figure \ref{excDensity}(a) shows the PL spectrum for the exemplary monolayer WS$_2$ functionalized with merocyanine molecules.
The broad peak located at 2.0 eV corresponds to photons emitted from the bright $KK$ excitons, whereas the narrow peak at 1.947 eV corresponds to molecule-induced emission of photons from dark $K\Lambda $ excitons, cf. also Fig. \ref{schema}(b) for a schematic view of the process. The linewidth of the energetically lower-lying dark exciton is smaller, since it cannot decay radiatively and since also the non-radiative channels are restricted to less efficient processes involving the absorption of a phonon. As incoherent excitons are the driving force for photoluminescence, one can think about possibilities to optimize the signal from the dark exciton state by changing the excitation density $n_{ex}$. Figure \ref{excDensity}(a) shows the PL spectra for different $n_{ex}$. For a better comparison of the relative intensities, the spectra are normalized to the intensity of the bright peak. We observe that the dark exciton peak becomes more pronounced with increasing excitation density. Figure \ref{excDensity}(b) shows the absolute maximum intensity of the bright (orange line) and the dark (purple line) peak as a function of the excitation density. We find that the bright exciton peak is decreasing, while the dark one is increasing in intensity. This can be ascribed to the fact that with higher excitation even more excitons occupy the energetically lower dark $K\Lambda$ state. Considering the PL intensity ratio of the dark to the bright exciton peak, we even find that for $n_{ex} >10 \cdot 10^{11}$ cm$^{-2}$ the dark peak becomes more pronounced than the bright one, cf. Fig. \ref{excDensity}(c). Even in the low-excitation regime ($n_{ex} < 10^{11} $ cm$^{-2}$ ) the dark exciton is still clearly visible in the PL. For comparison, we also show the PL for WSe$_2$ (dashed lines). We find a similar behavior. The quantitative differences stem from different exciton-molecule coupling strengths in the two materials and different masses $M^{\mu}$ entering the chemical potential.
The exciton-molecule coupling is more efficient in WS$_2$ and hence the visibility of the dark peak is generally stronger. On the other hand, the higher mass $M^{\mu}$ in WS$_2$ reduces the sensitivity to the excitation density, i.e. the Bose distribution changes more slowly than in WSe$_2$, and hence WSe$_2$ shows a larger slope in the intensity ratio, cf. Fig. \ref{excDensity}(c). To sum up, the excitation density is a promising experimental knob to enhance the visibility of the additional peak stemming from the molecule-activated dark excitons. \section{Molecular characteristics} Having revealed the principal mechanism of molecule-induced photoluminescence activating the dark $K\Lambda$ exciton, we now want to investigate the sensitivity of the mechanism to molecular characteristics. We have shown that the energetic position of the dark peak is determined by the internal dark-bright separation within the TMD, whereas the intensity of the dark exciton peak is given by the strength of the coupling to the molecules. In the following, we study the PL intensity ratio between the dark and the bright exciton peak, since this is the key quantity for the efficiency of the activation of dark excitons and thus for the sensitivity of the molecule detection. \subsection{Molecular dipole moment and orientation} \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{Figure4.pdf} \end{center} \caption{\textbf{Dependence on molecular dipole moment and orientation.} (a) Surface plot showing the PL intensity ratio of dark and bright excitons in functionalized WS$_{2}$ as a function of the molecular dipole moment and orientation at 77 K. We find the best visibility of the dark peak for perpendicular dipole orientation and large dipole moments. However, already for molecules with a dipole moment of 5 D, the dark exciton becomes visible in the case of perpendicular dipole orientation.
The dashed white line shows our standard parameters within this manuscript including a fixed orientation of 90$^\circ$ and a fixed dipole moment of 13 D (merocyanine molecule). The corresponding cuts from the surface plot are shown in (b) and (c), respectively, including a direct comparison to WSe$_{2}$ (dashed line). Additionally, we show the dependence on molecular dipole moment for randomized dipole orientation (orange line in (b)). } \label{MoleDipol} \end{figure} First, we study the impact of the molecular dipole moment including its orientation with respect to the TMD surface. Figure \ref{MoleDipol}(a) shows the PL intensity ratio between the dark and the bright exciton peak as a function of the strength and the orientation of the dipole moment in the case of functionalized WS$_2$ at 77 K. We find the best visibility of the dark exciton peak for high dipole moments with a perpendicular orientation. To obtain further insights, we show in Figs. \ref{MoleDipol}(b) and (c) the dependence on the dipole moment for a fixed orientation of 90$^\circ$ and the dependence on dipole orientation for a fixed dipole moment of 13 D (merocyanine molecule), respectively (corresponding to the dashed white lines in Fig. \ref{MoleDipol}(a)). The PL intensity ratio shows a quadratic increase with the dipole moment, and even for relatively low dipole moments of approximately 5 D the visibility of the dark exciton is in the range of 20$\%$. The dipole orientation study reveals a vanishing dark exciton feature for parallel orientation of the dipole moment and a maximum impact for perpendicular orientation. Both observations can be understood in analogy to a classical dipole field. The stronger the dipole moment of the attached molecules, the stronger is the induced dipole field and hence the more efficiently the molecules interact with the TMD.
For the orientation of the dipole, the analogy to a classical dipole field reveals the largest overlap of the induced dipole field with the TMD surface for perpendicular dipole orientation. The study on randomly oriented molecular dipole moments (cf. orange line in Fig. \ref{MoleDipol}(b)) reveals that even though the dark-bright intensity ratio becomes smaller in case of randomization, we still obtain a visibility of $40 \%$ of the dark exciton peak for $d=13$ D. Finally, the dashed lines in (b) and (c) show WSe$_{2}$, which reveals the same trends. The dark exciton peak for $d<15$ D is less pronounced than in WS$_{2}$ due to the less efficient exciton-molecule coupling. Interestingly, for stronger dipole moments ($d>15$ D) the dark peak in WSe$_2$ becomes more visible. This can be traced back to the smaller total mass $M^{\mu}$ in WSe$_2$, which results in a more sensitive behavior to changes and eventually in a higher slope for the dipole dependence. In summary, we predict the best visibility of the dark exciton peak for molecules with a large dipole moment and a perpendicular orientation with respect to the TMD surface. A visibility of up to 10\% is predicted already for molecules with a dipole moment of 3 D. \subsection{Molecule-TMD distance} \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{Figure5.pdf} \end{center} \caption{\textbf{Dependence on molecular distance.} PL of functionalized WS$_{2}$ as a function of the distance $R_{z}$ between the TMD surface and the attached molecules. The PL intensity is shown at 77 K and is normalized to the intensity of the bright peak. We find that the visibility of the dark exciton decreases with larger distance as the exciton-dipole interaction becomes weaker. The distance has no influence on the position of the dark peak. } \label{MoleDistance} \end{figure} Another crucial quantity is the distance $R_z$ between the molecules and the TMD surface.
Here, we assume a non-covalent adsorption of molecules via van der Waals interaction. As a consequence, the minimal $R_z$ is given by the van der Waals radius, which is approximately 0.36 nm. However, due to surface roughness, impurities, or the presence of linker molecules, this distance might be larger in a realistic experimental setup and hence it is important to shed light on the impact of $R_z$ on the visibility of the dark exciton in PL spectra. Fig. \ref{MoleDistance} illustrates the PL intensity as a function of energy and molecule-TMD distance $R_z$. The bright exciton peak is located at 2.0 eV and does not show any noticeable changes as a function of $R_z$. In contrast, the visibility of the dark exciton PL peak at 1.947 eV is significantly reduced with increasing distance. In analogy to the expectations from a classical dipole field, we find an exponential decrease of the exciton-molecule interaction with $e^{-R_z}$, cf. \eqref{molDetail}. In summary, the closer the molecules are attached to the TMD surface, the more pronounced is the dark exciton feature in the PL spectrum. One could in principle exploit the exponential dependence to determine the distance between the attached molecules and the material surface. \subsection{Molecular distribution and coverage} Now, we address the molecule distribution and coverage on the TMD surface and to what extent they influence the visibility of the dark exciton in PL spectra. The molecular coverage $n_m$, corresponding to the number of molecules on a fixed surface area, plays a crucial role for the activation of dark excitons as it determines the induced center-of-mass momentum. Note that we do not consider molecule-molecule interactions, which might become important for a very large number of attached molecules.
Projecting the carrier-dipole coupling from \eqref{gg} into the excitonic basis and assuming the simplest case of a periodic molecular distribution, which allows us to solve the appearing integrals analytically, we find \begin{equation}\label{Geinfach} G_{Q} \propto \sum_{x} \delta_{Q, \frac{2\pi x}{\Delta R}} n_m e^{-Q}. \end{equation} One immediately sees the connection between the distance of molecules $\Delta R$ (reflecting the molecular coverage) in real space and the induced center-of-mass momentum in the Kronecker delta. To reach the $K\Lambda$ exciton, a molecule-induced momentum transfer of approximately $Q\approx \unit[6.6]{nm^{-1}}$ is needed, corresponding to the distance between the $\Lambda$ and K valleys in the Brillouin zone. This translates in real space to $\Delta R=\frac{2\pi}{Q}\approx 1$ nm. For larger distances between the molecules (i.e. smaller molecular coverage), the momentum can still be provided through higher-order terms in the appearing sum in \eqref{Geinfach}; however, the strength of the coupling becomes smaller. Note also that on the one hand the exciton-dipole coupling increases with larger molecular coverage, but on the other hand it also decreases exponentially with the transferred momentum $Q$ (Eq. (\ref{Geinfach})). This results in an optimal molecular coverage, similarly to the already investigated case of carbon nanotubes \cite{malic11}. First, we investigate the case where molecules are periodically distributed on the TMD surface and build a molecular lattice. The corresponding molecular lattice constant determines the momentum that can be provided by the molecules to address dark excitonic states. Figure \ref{MoleCoverage} shows the PL spectra normalized to the bright peak for high ($n_m=1.15 \text{ nm}^{-2}$), standard ($n_m=1.0 \text{ nm}^{-2}$, used throughout this manuscript), medium ($n_m=0.5 \text{ nm}^{-2}$), and low ($n_m=0.25 \text{ nm}^{-2}$) molecular coverage.
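The momentum-matching arithmetic above is simple enough to verify directly. A small Python sketch (using only the values quoted in the text; an illustration, not part of the actual many-body calculation):

```python
import math

# Momentum transfer needed to reach the K-Lambda exciton: the distance
# between the Lambda and K valleys in the Brillouin zone (value from text).
Q_KLambda = 6.6  # nm^-1

# Required real-space molecular spacing, Delta R = 2*pi / Q.
delta_R = 2 * math.pi / Q_KLambda  # nm
print(f"Delta R = {delta_R:.2f} nm")

# A sparser lattice with spacing 2 nm only provides 2*pi/2 ~ 3.1 nm^-1 at
# first order; the K-Lambda momentum is then reached through the higher-order
# term x = 2 in Eq. (Geinfach), at the price of the exponentially reduced
# coupling ~ e^(-Q).
```

The printed spacing of about 1 nm matches the optimal molecular coverage of $n_m\approx 1 \text{ nm}^{-2}$ found in the calculations.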
Our calculations reveal that the visibility of the dark exciton is most pronounced for $n_m=1.0 \text{ nm}^{-2}$, as the provided momentum $Q$ corresponds to the momentum needed to reach the $\Lambda$ valley. If we go to higher coverage, the peak decreases as the transferred momentum does not match the $\Lambda$ valley and the coupling strength decreases exponentially with $Q$, cf. \eqref{Geinfach}. Note, however, that even for small coverage the dark exciton peak is still clearly visible. Its intensity is in the range of 10$\%$ of the intensity of the bright peak. \begin{figure}[t!] \begin{center} \includegraphics[width=\linewidth]{Figure6.pdf} \end{center} \caption{\textbf{Dependence on molecular coverage.} PL spectra of functionalized WS$_{2}$ for a relatively (a) high, (b) perfect, (c) medium, and (d) low molecule coverage $n_m$. The spectra are normalized to the bright peak. The light purple curves show a periodic molecular distribution, whereas the dark purple lines represent the case of randomly distributed molecules on the surface of WS$_2$. We find the best visibility for a molecular coverage of $n_m=1.0$ nm$^{-2}$ and a periodic distribution. In the case of randomly distributed molecules, the dark exciton peak decreases roughly by half and smears out to the higher energy side. } \label{MoleCoverage} \end{figure} We observe that the bright peak shifts to the red at small molecular coverage. The origin of the red shift can be understood as follows: the transferred momentum for $n_m=0.25 \text{ nm}^{-2}$ of $Q\approx 1.65 \text{ nm}^{-1}$ is rather small and can only enable indirect transitions within the dispersion of the $KK$ excitons with an energy $E^{KK}=\varepsilon^{K} + \frac{\hbar^{2}Q^{2}}{2M^{K}}$. These intravalley transitions become more favorable for small molecular coverage. The molecule-induced PL from these states is approximately 100-200 meV above the bright peak (not shown in the spectra).
The coupling to the bright resonance leads to the observed red-shift. A detailed description of intravalley transitions can be found in Refs. \onlinecite{majaTechnikPaper, ermin_prl, gunnar_carbon}. The focus in this manuscript is on higher coverage, where the dark $K\Lambda$ exciton can be reached. Now, we investigate the case of randomly distributed molecules on the TMD surface. We assume a fluctuation of the position of the molecules around their equilibrium position and allow fluctuations of up to 10$\%$, modeled by a Gaussian random distribution. We find that now the dark exciton peak smears out to higher energies and its intensity decreases by approximately half. The dark exciton remains visible if the molecular coverage is not too low, cf. dark purple lines in Fig. \ref{MoleCoverage}. The random distribution of molecules weakens the efficiency of the exciton-molecule coupling and reduces the visibility of the dark exciton. For low molecular coverage, the dark exciton peak even disappears (Fig. \ref{MoleCoverage}(d)), as the probability to find the required center-of-mass momentum to address the intervalley $K\Lambda$ excitons is low. However, the probability for intravalley transitions along the dispersion of the $KK$ exciton increases. These $KK$ transitions are responsible for the red shift (still observable, but less pronounced compared to the periodic case due to the reduced exciton-molecule coupling). The observed peak asymmetry in case of the randomized distribution can be traced back to dark $K\Lambda$ transitions along the dispersion of the $K\Lambda$ exciton corresponding to $E^{\Lambda\Lambda}=\varepsilon^{\Lambda} + \frac{\hbar^{2}Q^{2}}{2M^{\Lambda}}$. Due to the large effective mass $M^{\Lambda}$ the molecule-induced PL from these states is approximately 50 meV above the dark peak and hence it is visible as a high-energy wing.
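The energetic offsets of the molecule-induced intravalley PL quoted above follow directly from the parabolic dispersion $E = \varepsilon + \hbar^{2}Q^{2}/(2M)$. A rough numerical check, assuming total exciton masses of order the free-electron mass (an assumption for illustration only; the actual valley-dependent masses enter the full calculation), reproduces the quoted 100-200 meV range for $Q \approx 1.65$ nm$^{-1}$:

```python
hbar = 1.054571817e-34  # J s
m_e = 9.1093837015e-31  # kg
eV = 1.602176634e-19    # J

Q = 1.65e9  # m^-1, momentum provided at the low coverage n_m = 0.25 nm^-2

def intravalley_offset_meV(M):
    """Kinetic-energy offset hbar^2 Q^2 / (2 M) along a parabolic dispersion."""
    return hbar**2 * Q**2 / (2 * M) / eV * 1e3  # meV

# Assumed range for the total exciton mass, in units of m_e.
offsets = {x: intravalley_offset_meV(x * m_e) for x in (0.5, 1.0)}
for x, E in offsets.items():
    print(f"M = {x} m_e: offset = {E:.0f} meV")
```

The same formula with the heavier mass $M^{\Lambda}$ and the corresponding momentum spread explains why the $K\Lambda$ high-energy wing sits much closer (roughly 50 meV) to the dark peak.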
In summary, low molecular coverage and randomized molecular distributions induce peak broadening and asymmetry as well as a reduction of the dark exciton peak intensity. Nevertheless, the dark exciton still remains visible, confirming that the effect is robust also under realistic conditions. The best visibility is clearly reached for high molecular coverage and periodically distributed molecules. \section{Temperature dependence} \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{Figure7.pdf} \end{center} \caption{\textbf{Dependence on temperature.} PL intensity ratio of the dark-to-bright peak of functionalized (a) WS$_{2}$ and (b) WSe$_{2}$ for different temperatures $T$ and molecular dipole moments $d$. We find the best visibility of the dark exciton for low $T$ and high $d$. Note that in case of WS$_{2}$ even at T=300 K the dark peak is visible for dipole moments down to 10 D. In case of WSe$_{2}$ the peak disappears at 250 K even for high dipole moments due to the broader linewidths in selenium-based TMDs. (c) PL intensity of the dark (dashed line) and bright (solid line) peak as a function of temperature for three exemplary dipole moments. We observe that, depending on the dipole moment, the crossing temperature between dark and bright peak changes. (d) PL intensity ratio of the dark and bright peak in WS$_{2}$ shows an exponential decrease that can be ascribed to the Bose distribution of thermalized excitons. } \label{TempiSurface} \end{figure} The investigations discussed so far have been performed at a temperature of 77 K, since here the exciton-phonon coupling is weak, leading to narrow linewidths and allowing a clear separation of bright and dark exciton peaks in PL spectra. Now, we show a temperature study on the visibility of the dark exciton aiming at the possibility of room-temperature detection of molecules. The temperature affects the Bose-Einstein distributions of both phonons and excitons.
The former has a direct impact on the efficiency of the exciton-phonon coupling and hence the exciton linewidths. The latter determines the relative occupation of dark and bright exciton states, directly influencing the PL spectra. Figures \ref{TempiSurface}(a) and (b) illustrate the PL intensity ratio between the dark and the bright peak as a function of temperature and molecular dipole moment for WS$_{2}$ and WSe$_{2}$, respectively. For low temperatures and high dipole moments, the visibility of the dark exciton is the best, as the peak linewidths are narrow and the exciton-dipole coupling is strong. We find a much broader temperature and dipole moment range with a good visibility of the dark exciton in WS$_{2}$. Surprisingly, we observe a clearly visible dark exciton peak at room temperature for dipole moments down to 10 D. In contrast, for WSe$_2$ dark excitons cannot be efficiently activated at room temperature even for very high dipole moments of above 30 Debye. This is due to the enhanced carrier-phonon coupling, which leads to broader peaks in selenium-based TMDs \cite{selig2016excitonic, DominikPhonons}. For a more quantitative understanding, we show for WS$_{2}$ the maximum PL intensity for the bright (solid lines) and the dark peak (dashed lines) as a function of temperature for three exemplary dipole moments, cf. Fig. \ref{TempiSurface}(c). We see that the bright peak increases in intensity, whereas the dark peak decreases for higher temperatures. This reflects the Bose-Einstein distribution of thermalized excitons, where the occupation of the energetically higher bright state becomes larger with temperature. For our standard set of parameters including a dipole moment of 13 Debye, the bright exciton peak is more pronounced at all temperatures. However, for 20 D (30 D), the dark peak becomes more pronounced, at least at 77 K, and exceeds the bright transition by a factor of 2 (5), reflecting the strong exciton-molecule interaction.
We observe that in the temperature range of $80-100$ K the dark exciton peak decreases quickly for all dipole moments, while the bright peak becomes much more pronounced. The critical temperature at which the bright exciton becomes more pronounced than the dark one is 80 (90) K for 20 (30) D. For $T>100$ K, the intensity of the bright peak saturates to 1, while the dark peak basically vanishes. Figure \ref{TempiSurface}(d) shows the PL intensity ratio of the dark and the bright exciton for WS$_{2}$ in the low temperature range of $70-150$ K. We observe a clear exponential decrease with temperature reflecting the Bose-Einstein distribution of thermalized excitons. The decay rate is the same for all molecular dipole moments, since the exciton-molecule coupling only determines the initial value of the PL intensity ratio, but not its decay. The decay itself is governed by the Bose-Einstein distribution and the energetic difference between the bright and the dark state. The general temperature dependence can be understood on a microscopic footing: At low temperatures, the majority of excitons occupies the energetically lowest $K\Lambda$ states, as the exciton-phonon scattering is too weak to scatter carriers into the higher $KK$ exciton states. The larger the temperature, the more efficient is the exciton-phonon scattering and the larger is the redistribution of excitons among the $K\Lambda$ and $KK$ states, i.e. $N^{K\Lambda}$ decreases and $N^{KK}$ increases. As a direct consequence, the PL intensity of bright $KK$ excitons is enhanced at higher temperatures, while the PL intensity of the dark $K\Lambda$ excitons is reduced. Additionally, the larger the temperature, the broader are the linewidths of both bright and dark excitonic resonances, eventually resulting in a vanishing visibility of the dark exciton. As shown in Fig. \ref{TempiSurface}(a), the dark exciton peak is visible in WS$_{2}$ even at room temperature.
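The exponential decrease of the dark-to-bright ratio can be rationalized with a simple Boltzmann estimate for thermalized excitons, using the bright-dark splitting of roughly 53 meV read off the peak positions (2.0 eV vs. 1.947 eV). The sketch below ignores all prefactors (masses, linewidths, coupling strengths) and is only meant to illustrate the temperature trend:

```python
import math

k_B = 8.617333262e-5  # Boltzmann constant in eV/K
dE = 2.0 - 1.947      # eV, bright-dark splitting from the peak positions

def occupation_ratio(T):
    """Boltzmann estimate of N(dark)/N(bright) for thermalized excitons.

    The dark K-Lambda state lies dE below the bright KK state, so its
    relative occupation grows as exp(dE / k_B T) at low temperature.
    """
    return math.exp(dE / (k_B * T))

for T in (77, 100, 150, 300):
    print(f"T = {T:3d} K: N_dark/N_bright ~ {occupation_ratio(T):.1f}")
```

The ratio drops by orders of magnitude between 77 K and 300 K, consistent with the exponential decay of the PL intensity ratio in Fig. \ref{TempiSurface}(d) and with the decay rate being independent of the dipole moment.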
To get further insights into room temperature conditions, we show the logarithmic PL spectrum at T=300 K for pristine (orange) and functionalized WS$_2$ including three different molecular dipole moments, cf. Fig. \ref{TempiSpektrum}(a). In case of pristine WS$_{2}$, we find only one broad peak at 2.0 eV corresponding to the bright exciton resonance. With molecules, an additional peak at 1.91 eV appears. Here, we investigate free-standing WS$_{2}$ without any substrate. This shifts the dark exciton peak to lower energies, which is favorable at room temperature conditions with large excitonic linewidths. Under these conditions, we find the visibility of the dark exciton peak to be approximately 3$\%$ of the bright exciton peak in the case of our standard merocyanine molecules with 13 D. For higher molecular dipole moments of 20 D (30 D), the intensity of the dark peak increases to 5 $\%$ (9 $\%$) with respect to the pristine peak, which should be resolvable in PL experiments. Even clearer signatures can be seen in the first derivative of the PL spectrum, where we find an oscillation at 1.91 eV corresponding to the position of the dark $K\Lambda$ exciton. This presents a large advantage compared to absorption spectra, where only a small shoulder is visible at room temperature \cite{maja_sensor}. \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{Figure8.pdf} \end{center} \caption{\textbf{Room temperature conditions.} (a) Room temperature PL in logarithmic plot and (b) first derivative of the PL in functionalized and pristine WS$_{2}$ for different molecular dipole moments. The PL is again normalized to the bright peak. We find that the visibility of the dark peak is in the range of 3-9 $\%$ compared to the bright peak. The main limiting factor is the broad excitonic linewidth at room temperature. Hence, we also show the derivative of the PL, which shows clear traces of the dark exciton at room temperature even for 13 D.
Note that the spectra are shifted along the y axis for better visibility. } \label{TempiSpektrum} \end{figure} \section{Discussion and Conclusion} Here, we summarize and discuss the obtained insights for low and room temperature conditions. In the low temperature (77 K) case, the exciton-phonon coupling is relatively weak and PL spectra are characterized by narrow excitonic linewidths. These are good conditions for pronounced features stemming from the dark exciton. We find that molecules with dipole moments larger than 3 Debye can be detected. The larger the dipole moment, the more efficient is the exciton-molecule coupling and the more pronounced is the dark exciton. Furthermore, the orientation of the dipole moment also matters. We find the largest visibility of the dark exciton for the perpendicular orientation with respect to the TMD surface, since here the overlap with the dipole field is the largest. Another important property is the molecular coverage $n_m$, since it determines the possible molecule-induced momentum transfer. We find an optimal response for $n_m\approx 1 \text{ nm}^{-2}$ in the case of periodically distributed molecules. For randomized distributions, the effect becomes smaller, but the signatures of the dark exciton are still visible. Moreover, the smaller the distance of the attached molecules to the TMD surface, the more pronounced is the effect. We have shown that even for distances larger than the van der Waals radius, the molecule signatures still remain visible in PL spectra. In the room temperature case, the exciton-phonon coupling is strong, resulting in a broadening of the excitonic transitions, which strongly restricts the visibility of the dark exciton. However, for molecules with a dipole moment larger than 10 Debye, we can still observe clear signatures in the PL spectra assuming that the dipole orientation, the distance to the TMD surface, and the molecular coverage are optimal.
Moreover, the excitation density can be used as an additional knob to further increase the visibility of the dark peak. In summary, we have revealed a promising potential of monolayer tungsten disulfide (WS$_2$) as a novel nanomaterial for the detection of molecules with a large dipole moment. We have shown that its photoluminescence is very sensitive to external molecules, giving rise to a well-pronounced additional peak that can be ascribed to the activation of dark excitonic states. Depending on different experimentally accessible knobs, even room-temperature detection of molecules becomes possible. \section{Acknowledgement} This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 696656 within the Graphene Flagship and the Swedish Research Council. Furthermore, we acknowledge support by the Chalmers Area of Advance in Nanoscience and Nanotechnology. M.S. acknowledges financial support from the Deutsche Forschungsgemeinschaft (DFG) through SFB 787.
\section{Introduction} \label{intro} Under CPT symmetry, time-reversal-invariance-violating but parity-conserving (TVPC) forces are considered as a possible source of CP-invariance violation, which is required to account for the matter-antimatter asymmetry in the universe \cite{sakharov}. In contrast to effects from time-reversal-invariance violation together with parity violation, such as a permanent electric dipole moment (EDM) of elementary particles, so far much less attention has been paid to TVPC effects. The reason why TVPC effects are interesting is that experimental limits on them are still rather weak, in particular, considerably weaker than those for the EDM. Since the intensity of TVPC interactions within the standard model is extremely small \cite{conti-khripl}, an observation of any effects at the present accuracy level of experiments would be a direct indication of physics beyond the standard model. Indeed, a pertinent measurement is planned at the COSY accelerator at the Research Center J\"ulich \cite{TRIC}. The observable in question is the integrated cross section for scattering of protons with transversal polarization $p_y^p$ on deuterons with tensor polarization $P_{xz}$. It provides a null-test signal for TVPC effects \cite{conzett} and it will be measured in $pd$ scattering at 135 MeV \cite{TRIC}. Theoretical studies of the energy dependence of the expected signal have been performed at energies of the planned experiment \cite{Beyer,Lazauskas,UZTemPRC,Uzdspin15,UZEPJweb,UzTemIJMF2016, UZJHPRC16,UZEPJweb17} on the basis of the spin-dependent Glauber theory and demonstrate several unexpected effects.
Among them are (i) the absence of the contribution from the lowest-mass meson-exchange ($\rho$ meson) in the TVPC $NN$ interaction, caused by its specific isospin, spin and momentum dependence; (ii) a strong impact of the deuteron $D$-wave on the null-test signal due to a destructive interference between the $S$- and $D$-wave contributions, even for zero transferred 3-momentum; (iii) oscillating behaviour of the null-test signal as a function of the beam energy, i.e. the vanishing of the TVPC signal at some specific energies is possible even when the TVPC interaction itself is nonzero; (iv) a very small influence of the Coulomb interaction on the TVPC term of the $pd$ forward scattering amplitude $\widetilde {g}$. Furthermore, certain relations between differential observables of elastic $pd$ scattering caused by time-reversal-invariance requirements were obtained and the degree of their violation by TVPC $NN$ forces was studied~\cite{TUZizv,TUZizv16}. Since the spin structure of the amplitude for $pd$ and $\bar p d$ elastic scattering is the same, it is obvious that the integrated cross section for scattering of a polarized ($p_y^{\bar p}$) antiproton on tensor-polarized ($P_{xz}$) deuterons also provides a null-test signal for TVPC effects. Furthermore, the TVPC $\bar NN$ amplitude for elastic scattering contains the same operator structures as the one for TVPC $NN$ elastic scattering, except for the charge-exchange terms. Therefore, the formalism developed in Refs.~\cite{UZTemPRC,Uzdspin15,UZJHPRC16} within the Glauber theory for the calculation of the null-test signal in $pd$ scattering can be straightforwardly applied to $\bar p d$ scattering too. However, due to differences in the hadronic part of the $pN$ and $\bar p N$ scattering amplitudes and also in the electromagnetic interactions, the energy dependence of the null-test signal in $pd$ and ${\bar p} d$ interactions has to be different.
In the present work the energy dependence of the null-test signal in ${\bar p} d$ scattering is studied on the basis of calculations within the spin-dependent Glauber theory using the spin-dependent $\bar p N $ amplitudes from a recent partial wave analysis of $\bar pp$ scattering \cite{Nijmegen}. \section {Null-test signal for time-reversal-invariance violation} \label{sec-1} The total cross section for $\bar pd$ scattering with TVPC forces included can be written in the same form as for $pd$ scattering \cite{UZTemPRC} \begin{equation} \label{totalspin} { \sigma_{tot}= {\sigma_0^t+\sigma_1^t{{\bf p}^{\bar p}\cdot {\bf p}^d}+ \sigma_2^t {({\bf p}^{\bar p}\cdot {{\bf m}}) ({\bf p}^d\cdot { {\bf m}})}+ \sigma_3^t { P_{zz}}} +{\widetilde \sigma} {p_y^{\bar p} P_{xz}^d} \, . } \end{equation} Here ${\bf p}^{\bar p}$ (${\bf p}^d$) is the vector polarization of the initial antiproton (deuteron), $P_{zz}$ and $P_{xz}$ are the tensor polarizations of the deuteron, and $p_y^{\bar p}$ is the transversal component of the antiproton vector polarization. The OZ axis is directed along the beam direction ${{\bf m}}$, the OY axis is directed along the vector polarization of the antiproton beam ${\bf p}^{\bar p}$ and the OX axis is chosen to form a right-handed reference frame. The integrated cross sections $\sigma_i^t$ ($i=0,1,2,3$) are those which arise from a standard time-reversal invariant and parity conserving interaction, while the last term ${\widetilde \sigma}$ appears only in the presence of the TVPC interactions and constitutes the TVPC null-test signal. The result (\ref{totalspin}) can be derived using phenomenological $\bar p d$ forward scattering amplitudes and the generalized optical theorem. 
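Equation (\ref{totalspin}) also shows how $\widetilde \sigma$ can be isolated experimentally: for a purely tensor-polarized deuteron target (vanishing ${\bf p}^d$), all T-even terms are even under reversal of $p_y^{\bar p}$, so only the TVPC term survives in the spin-flip difference of the total cross section. A schematic numerical illustration with arbitrary placeholder coefficients (not values from this work):

```python
def sigma_tot(p_y, P_xz, sigma_even, sigma_tilde):
    # Reduced version of Eq. (totalspin) for vanishing deuteron vector
    # polarization: sigma_even collects all T-even contributions
    # (sigma_0^t and the P_zz term), which do not depend on p_y.
    return sigma_even + sigma_tilde * p_y * P_xz

# Placeholder numbers, for illustration only.
sigma_even, sigma_tilde = 80.0, 1e-6
p_y, P_xz = 0.8, 0.6

# Flipping the antiproton polarization isolates the TVPC term:
extracted = (sigma_tot(+p_y, P_xz, sigma_even, sigma_tilde)
             - sigma_tot(-p_y, P_xz, sigma_even, sigma_tilde)) / (2 * p_y * P_xz)
print(extracted)
```

This spin-flip construction is why $\widetilde \sigma$ is a null test: any nonzero difference signals TVPC interactions, independent of the (much larger) T-even background.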
The evaluation of the integrated cross sections $\sigma_i^t$ and $\widetilde {\sigma}$ at beam energies $>100$ MeV can be done on the basis of the spin-dependent Glauber theory of ${\bar p}d$ scattering which is formulated similarly to the theory of $pd$ scattering given in Ref.~\cite{PK}. Indeed, as shown in Ref.~\cite{TUZyaf}, this theory allows one to describe rather well available data on differential spin observables of $pd$ scattering in the forward hemisphere at beam energies of $135-200$ MeV. For the antiproton-deuteron scattering this theory can be applied at even lower energies due to the presence of strong annihilation effects. In the Glauber theory one uses the elastic (on-shell) ${\bar N}N$ scattering amplitudes as input. Hadronic amplitudes of the ${\bar p}N$ scattering are taken here in the same form as for $pN$ scattering \cite{PK} \begin{eqnarray} \label{pnamp} M_N({\bf p}, {\bf q};\bfg \sigma, {\bfg \sigma}_N)= A_N+C_N\bfg \sigma \hat{\bf n} +C_N^\prime\bfg \sigma_N \hat{\bf n }+ B_N(\bfg \sigma \hat {\bf k}) (\bfg \sigma_N \hat {\bf k})+\\ \nonumber + (G_N+H_N)(\bfg \sigma \hat {\bf q}) (\bfg \sigma_N \hat {\bf q}) +(G_N-H_N)(\bfg \sigma \hat {\bf n}) (\bfg \sigma_N \hat {\bf n}) \, , \end{eqnarray} where ${\hat {\bf q}}$, ${\hat {\bf k}}$ and ${\hat {\bf n}}$ are defined as unit vectors along the vectors ${ {\bf q}}=({\bf p}-{\bf p}')$, ${ {\bf k}}=({\bf p}+{\bf p}')$ and ${ {\bf n}}=[ {\bf k}\times {\bf q}]$, respectively; ${\bf p}$ (${\bf p}'$) is the initial (final) antiproton momentum. In general, the TVPC $NN$ interaction contains 18 different terms~\cite{herczeg}. In the case of the on-shell $NN$ scattering amplitude there are only three terms with different (independent) spin-momentum structures. 
In the present study we consider the following two terms for the TVPC (on-shell) $t$-matrix of elastic ${\bar p}N$ scattering which have the same structure as those in TVPC $pN$ scattering \begin{eqnarray} \label{TVbNN} t_{{\bar p}N}= {h_N[({\bfg \sigma} \cdot {\bf k})({\bfg \sigma}_N \cdot {\bf q})+ ({\bfg \sigma}_N \cdot {\bf k})({\bfg \sigma} \cdot {\bf q})- \frac{2}{3}({\bfg \sigma}_N \cdot{\bfg \sigma}) ({\bf k}\cdot {\bf q}) ]}/m_p^2 + \\ \nonumber +g_N [{\bfg \sigma} \times {\bfg \sigma}_N]\cdot [{\bf q }\times{\bf k}] [{\bfg \tau} -{\bfg \tau}_N]_z/m_p^2. \end{eqnarray} Here ${\bfg \sigma}$ (${\bfg \sigma}_N$) is the Pauli matrix acting on the spin state of the antiproton (nucleon $N=p,n$) and ${\bfg \tau}$ (${\bfg \tau}_N$) is the isospin matrix acting on the isospin state of the antiproton (nucleon). The momenta $\bf q$ and $\bf k$ were already defined above in the context of Eq.~(\ref{pnamp}). Both terms in Eq.~(\ref{TVbNN}), $h_N$ and $g_N$, occur in the TVPC $pn$ interaction. The TVPC $pN$ scattering amplitude contains also the charge-exchange term \begin{eqnarray} \label{chargex} t^{ch}= {g_N^\prime ({\bfg \sigma} - {\bfg \sigma}_N)\cdot i\,[{\bf q}\times {\bf k}] [{\bfg \tau} \times{\bfg \tau}_N]_z}/m_p^2, \end{eqnarray} which describes the elastic transitions $pn\to np$ and $np\to pn$. Within a picture of one-meson-exchange interaction this $g^\prime$-term corresponds to the charged $\rho$-meson exchange \cite{simonius75}. The same term (\ref{chargex}) corresponds to the charge-exchange processes $\bar p p\to \bar n n$ or $ \bar n n \to \bar p p$. However, in contrast to $pn$ scattering these processes are inelastic and therefore the operation of time-reversal invariance transforms, for example, the $\bar p p\to \bar n n$ amplitude to the $\bar n n \to \bar p p$ amplitude and does not impose any restrictions on these amplitudes. The $h_N$-term in Eq.~(\ref{TVbNN}) can be associated with the axial $h_1$-meson exchange. As shown in Ref. 
\cite{simonius75}, contributions of the $\pi$- and $\sigma$-meson to the TVPC $NN$ interaction are excluded, which is obviously true for the TVPC $\bar NN$ interaction as well. \subsection{TVPC amplitude of ${\bar p}d$ forward scattering} \label{widetildeg} One can write the ${\bar p}d$ forward elastic scattering amplitude in general form taking into account the TVPC $\bar N N$ interactions, as it was done for $pd$ elastic scattering \cite{TUZyaf,UZTemPRC}, and then apply the generalized optical theorem to derive Eq.~(\ref{totalspin}) for the total ${\bar p}d$ scattering cross section. As in Ref. \cite{UZTemPRC}, the integrated cross section ${\widetilde \sigma}$ is related to the TVPC term $\widetilde g$ of the ${\bar p}d$ forward elastic scattering amplitude by ${\widetilde\sigma}=-4\sqrt{\pi}\,{\rm Im}\frac{2}{3}{\widetilde g}$. Furthermore, the TVPC forward amplitude of $\bar p d$ elastic scattering $\widetilde g$ can be found within the Glauber theory \cite{UZTemPRC}. We consider the $h_N$- and $g_N$-terms and take into account both the $S$- and $D$-wave components of the deuteron. Since the $g_N$-term is excluded in the process $\bar p n\to \bar p n$ due to the isospin operator in Eq.~(\ref{TVbNN}), we obtain the following result for the TVPC forward amplitude from the corresponding equation in Ref.~\cite{UZJHPRC16}: \begin{eqnarray} \label{g5} {\widetilde g}=\frac{i}{4{\pi}m_p} \int_0^\infty dq q^2 \Bigl[S_0^{(0)}(q)-\sqrt{8} S_2^{(1)}(q) -4 S_0^{(2)}(q)+ \sqrt{2}\frac{4}{3} S_2^{(2)}(q)+ 9 S_1^{(2)}(q)\Bigr]\\ \nonumber [-C^\prime_n(q)(h_p+g_p)-C^\prime_p(q)h_n] \ . \end{eqnarray} Here $S_i^{(j)}$ are the elastic form factors of the deuteron defined in Ref.~\cite{UZJHPRC16}. The first term in the square brackets in Eq.~(\ref{g5}), $S_0^{(0)}(q)$, corresponds to the $S$-wave approximation, the second term, $S_2^{(1)}(q)$, accounts for the $S$-$D$ interference, and the last three terms contain the pure $D$-wave contributions.
As was shown in Ref.~\cite{UZJHPRC16}, the contribution of the $g'$-term to the null-test signal vanishes in $pd$ scattering due to the specific spin-isospin structure of the $g'$-interaction. Formally, for the same reason the charge-exchange $g'$-term given by Eq.~(\ref{chargex}) vanishes in the $\bar p d$ forward elastic scattering amplitude. In the first theoretical work \cite{Beyer}, where the null-test signal was calculated within the impulse approximation, the Coulomb interaction was not considered. In Ref.~\cite{Lazauskas} Faddeev calculations were performed, but only for $nd$ scattering and at rather low energies of $\sim 100$~keV. The Coulomb interaction was taken into account for the first time in Ref.~\cite{UZTemPRC} in a calculation of the null-test signal of $pd$ scattering within Glauber theory and found to be negligible. A similar result was found in Ref.~\cite{LazGud2016} using Faddeev calculations. \begin{figure} \centering \includegraphics[width=10cm,clip]{antsigFc.eps} \caption{The TVPC signal $\widetilde\sigma$ for the $h$-term, in units of the ratio ($\phi_h$, see Ref.~\cite{UZTemPRC}) of the TVPC and the strong $h_1NN$ coupling constants, versus the antiproton beam energy $T$. Results of our calculations accounting for different terms of the deuteron wave function in Eq.~(\ref{g5}) are shown, based on the deuteron wave function of the CD Bonn potential and the hadronic $\bar p N$ amplitudes from Ref.~\cite{Nijmegen}: $S$-wave (black), $S$-$D$ interference (blue), $S$ + $S$-$D$ waves (green), full result (red). } \label{fig1} \end{figure} \subsection{Numerical results} \label{numresults} Results of numerical calculations of the energy dependence of the null-test signal for the $h$-term are presented in Fig.~\ref{fig1}, in units of the unknown TVPC coupling strength. One can see from this figure that the deuteron $S$-wave contribution (dashed line) leads to a smooth energy dependence and has a node at an antiproton beam energy of about $50$ MeV.
The inclusion of the $D$-wave changes this behaviour considerably (solid line) due to a destructive $S$-$D$ interference (cf. dash-dotted line). As a result, a second zero of the null-test signal $\widetilde {\sigma}$ appears at higher energies, i.e. at $T\approx 300$ MeV. The maximal value of $\widetilde {\sigma}$ is expected at $100-150$ MeV. Note that the actual position of the nodes changes only slightly when deuteron wave functions from other $NN$ models are used for the calculation. Let us consider possible spurious effects that could mimic a TVPC signal. One source for a spurious signal is associated with a nonzero deuteron vector polarization $p_y^d\not =0$ (in the direction of the incident-antiproton-beam polarization ${\bf p}^{\bar p}$). In this case, the term $\sigma_1P_y^{\bar p}p_y^d$ in Eq.~(\ref{totalspin}) contributes to the asymmetry corresponding to the difference of the event counting rates for the cases of $p_y^{\bar p}P_{xz}>0$ and $p_y^{\bar p}P_{xz}<0$ (with the fixed sign of $P_{xz}$), which is planned to be measured at COSY \cite{TRIC}. According to our calculations, the integrated cross section $\sigma_1$ could be equal to zero at antiproton beam energies of $\sim 100$~MeV (see results for the J\"ulich $\bar NN$ interaction model in Refs.~\cite{YUJH-PRC87,YUJH-PRC88}). Therefore, at this energy the spurious signal caused by a nonzero value of the deuteron vector polarization $p_y^d$ could be minimized. \section{Concluding remarks} \label{conclusion} We have performed a study of time-reversal-invariance violating but parity conserving effects in antiproton-deuteron scattering. Specifically, we have evaluated the null-test TVPC signal for scattering of antiprotons with transverse polarization $p_y^{\bar p}$ on deuterons with tensor polarization $P_{xz}$ on the basis of the spin-dependent Glauber theory.
The observed effects turned out to be similar to those in $pd$ scattering: (i) There is a strong impact of the deuteron $D$-wave on the null-test signal that arises from a destructive interference between the $S$- and $D$-wave contributions; (ii) There is an oscillating behaviour of the null-test signal as a function of the beam energy. Accordingly, it is possible that the signal for TVPC effects is zero at some specific energies, even when the TVPC interaction itself is nonzero. \vskip 0.1cm {\bf Acknowledgement.} This work was supported in part by the Heisenberg-Landau program.
\section{Introduction} Early-type dwarf galaxies (dEs) play a key role in understanding galaxy cluster evolution. dEs\footnote[16]{The term dE has traditionally been used to refer to dwarf elliptical galaxies, whereas we loosely use the term here to include dwarf ellipticals and dwarf lenticulars (dS0).}, the low luminosity ($M_\text{B} > -18$) and low surface brightness ($\mu_\text{B} > 22$\,mag\,arcsec$^{-2}$) population of the early-type galaxy (ETG) class, are found in high-density environments and are very rare in isolation \citep{2010AA...517A..73G,2012ApJ...757...85G,blantonetal.2005ApJ...629..143B}. dEs are found abundantly in groups and clusters of galaxies, where they dominate in numbers \citep{binggelietal.1988ARA&A..26..509B}. The $\Lambda$ cold dark matter ($\Lambda$CDM) hierarchical merging scenario predicts that CDM haloes are formed because of gravitational instabilities and evolve hierarchically via mergers (\citealt*{1978MNRAS.183..341W}; \citealt{1988ApJ...327..507F}; \citealt*{1991ApJ...379...52W,1993ASPC...51...192L}; \citealt{2000MNRAS.319..168C}). These models predict that dwarf-size dark matter haloes form first and then merge, forming more massive haloes. The class of dEs contains objects covering a wide range of internal properties, with sometimes rather complicated structures. Taking advantage of deep photometric studies, we know that several of them contain substructures such as disks, spiral arms and irregular features (e.g. \citealt{jerjenetal.2000A&A...358..845J,barazzaetal.2002A&A...391..823B,gehaetal.2003AJ....126.1794G}; \citealt*{grahamandguzman.2003AJ....125.2936G}; \citealp{derijckeetal.2003A&A...400..119D,liskeretal.2006AJ....132..497L,ferrareseetal.2006ApJS..164..334F,janzetal.2012ApJ...745L..24J,janzetal.2014ApJ...786..105J}). Apart from this, dEs also show a complicated variety of internal kinematics and dynamics.
dEs with similar photometric properties can have different stellar populations (\citealt{michielsenetal.2008MNRAS.385.1374M,paudeletal.2010MNRAS.405..800P,kolevaetal.2009MNRAS.396.2133K,kolevaetal.2011MNRAS.417.1643K,rysetal.2015MNRAS.452.1888R}) and different rotation speeds (\citealt{pedrazetal.2002MNRAS.332L..59P}; \citealt*{simienandprugniel.2002A&A...384..371S}; \citealt{gehaetal.2002AJ....124.3073G,gehaetal.2003AJ....126.1794G,vanzeeetal.2004AJ....128.2797V,chilingarian.2009MNRAS.394.1229C,tolobaetal.2009ApJ...707L..17T,tolobaetal.2011A&A...526A.114T,tolobaetal.2014ApJ...783..120T,tolobaetal.2015ApJ...799..172T,kolevaetal.2009MNRAS.396.2133K,kolevaetal.2011MNRAS.417.1643K,rysetal.2013MNRAS.428.2980R,rysetal.2014MNRAS.439..284R}). \citet{kormendy.1985ApJ...295...73K} suggested that they developed their spheroidal, non-star-forming appearance, which is probably highly flattened (\citealt{liskeretal.2006AJ....132..497L,liskeretal.2007ApJ...660.1186L}), during a transformation from a late-type galaxy that fell into a cluster; this transformation is thought to be induced by the environment, since the morphology-density relation depends largely on the environment (e.g. \citealt*{boselliandgavazzi_2014A&ARv..22...74B}). Processes for that transformation include ram-pressure stripping (\citealt*{gunnandgott.1972ApJ...176....1G,linandfaber.1983ApJ...266L..21L}) and galaxy harassment (\citealt{mooreetal.1998ApJ...495..139M}). Ram-pressure stripping should be able to remove the galaxy's remaining gas from the system on short time scales, so that star formation stops quickly.
The effect of ram-pressure stripping depends strongly on the density of the environment, and it is expected that the angular momentum and structure of the galaxy should be preserved (\citealt{rysetal.2014MNRAS.439..284R}), while galaxy harassment by tidal interactions between a galaxy and the potential of the cluster can heat up the object, increase the velocity dispersion, slow its rotation down and remove stellar mass, so that disks are transformed into more spheroidal objects (\citealt{mooreetal.1998ApJ...495..139M}). In this case, a galaxy can lose some of its intrinsic angular momentum. For a more detailed review on these effects, see \citet*{boselliandgavazzi.2006PASP..118..517B,boselliandgavazzi_2014A&ARv..22...74B}. How exactly ram-pressure stripping and harassment transform objects is still rather unclear. \citet{rysetal.2013MNRAS.428.2980R} concluded that a transformation mechanism should be able to not only lower the angular momentum but also increase the stellar concentration of dEs compared to their presumed progenitors. \citet{tolobaetal.2015ApJ...799..172T} show that even a combination of these two mechanisms cannot easily remove all of the angular momentum, something which is needed to explain some observations. Since ram-pressure stripping happens on short timescales, it might be a standard mechanism to transform late-type star-forming galaxies into dwarf early-type galaxies. After being in the cluster for a long time, the galaxy passes through its center several times, during which it can heat up, lose stellar rotation and also lose its disky structure. The fact that fast rotators in the outer parts of the cluster rotate faster than those found in the inner part of the cluster (\citealt{tolobaetal.2014ApJS..215...17T}, hereafter T14) supports the suggestion of \citet{springeletal.2005Natur.435..629S} that clusters were formed by the accretion of small groups of galaxies.
According to this scenario, the properties of slow and non-rotating dEs in the center of the cluster can be explained, as well as the existence of kinematically decoupled cores observed in some of the SMAKCED (Stellar content, MAss and Kinematics of Cluster Early-type Dwarfs) dEs (\citealt{tolobaetal.2014ApJ...783..120T}). Galaxy clusters, hosting many dwarf ellipticals covering a range of environmental conditions, are therefore excellent places to study the evolution and formation of dEs. Morphology and kinematics are not the only tools to study the evolution of galaxies: more detailed information can be obtained by studying the stellar populations, since the distributions of ages, metallicities and abundance ratios provide important information on the evolutionary history of galaxies, given that the chemical abundances of the gas are locked into the stars when they form. This information can be obtained via two general techniques. The first is by studying ages and abundances from observations of individual stars, which can be done for nearby galaxies where individual stars can be resolved. The second is by studying the integrated light from more distant galaxies to derive star formation histories and abundance distributions. This second technique is the only one currently available for galaxies at the distance of the Virgo cluster (16.5 Mpc, \citealt{meietal.2007ApJ...655..144M}). A non-trivial problem when analyzing the spectra of galaxies is the degeneracy between age and metallicity. One can break the age-metallicity degeneracy using a wide wavelength baseline, a combination of line indices, and accurate data.
These are then compared to evolutionary stellar population models (e.g., \citealt{vazdekis.1999ApJ...513..224V}; \citealt*{bruzualandcharlot.2003MNRAS.344.1000B}; \citealt{thomasetal.2003MNRAS.343..279T,maraston.2005MNRAS.362..799M,schiavon.2007ApJS..171..146S,marastonetal.2009A&A...493..425M,vazdekisetal.2010MNRAS.404.1639V}; \citealt*{conroyandvandokkum.2012ApJ...747...69C}). By comparing model predictions with observational galaxy parameters, age and metallicity distributions of the stars in that galaxy can be obtained. One can compare observations with models of a single age and metallicity, obtaining SSP-equivalent parameters. More complicated approaches (e.g. STECKMAP, \citealt{ocvirketal.2006MNRAS.365...46O}; STARLIGHT, \citealt{cidfernandesetal.2005MNRAS.358..363C}) provide full star formation histories. A remaining problem, however, is the uniqueness of the solutions. As a rule of thumb, one can say that the larger the wavelength range, the more unique the solution. Stellar population studies show that dEs have on average a lower metal content than giant ellipticals, as expected from the metallicity-luminosity relation (\citealt{michielsenetal.2008MNRAS.385.1374M,skillmanetal.1989ApJ...347..875S,sybliskaetal.2017MNRAS}). They are also on average somewhat younger. However, recent studies show that the stellar populations of dEs show indications of both young and old ages and a range in gradients (e.g. \citealt{kolevaetal.2009MNRAS.396.2133K,kolevaetal.2011MNRAS.417.1643K,denbroketal.2011MNRAS.414.3052D}; \citealt{rysetal.2015MNRAS.452.1888R}). Studies of detailed abundance ratios in dEs are scarce. \citet{gorgasetal.1997ApJ...481L..19G,michielsenetal.2008MNRAS.385.1374M} and \citet{sybliskaetal.2017MNRAS} show that [Mg/Fe] is similar to solar, lower than what is found in giant ellipticals. [Mg/Fe] can, however, provide important information about the formation of a galaxy.
Individual galaxy abundances are the result of chemical evolution, involving element enrichment in stars, supernova explosions and winds from, e.g., AGB stars. As a result, the measurement of abundances of many elements can give us a very detailed picture of the formation and evolution of a galaxy (see e.g. \citealt{tolstoyetal.2009ARA&A..47..371T} for Local Group galaxies). We often call this way of studying galaxies galactic archaeology. Because of their different nucleosynthetic origins, measured abundances of various elements allow us in principle to understand which enrichment processes have been dominant at different epochs of galaxy formation. It is thought that a group of lighter elements, the so-called $\alpha$-elements, such as O and Mg, are produced by type II supernovae, i.e. supernovae originating from massive stars, which therefore occur on short timescales (\citealt*{wortheyetal.1992ApJ...398...69W}). Most of the Fe, on the other hand, is predominantly produced by a different group of supernovae, those of type Ia, which occur on a much longer timescale. As a result, elemental ratios of [$\alpha$/Fe] give us information about the relative contribution from the two types of supernovae at a given time, i.e., about the timescale of star formation. The observed correlation of the [$\alpha$/Fe] abundance ratio with galaxy mass is an indication of downsizing (\citealt*{vazdekisetal.2004ApJ...601L..33V}; \citealt{nelanetal.2005ApJ...632..137N,thomasetal.2005ApJ...621..673T}). For dEs, \citet{gorgasetal.1997ApJ...481L..19G} found that Virgo dEs are consistent with a solar [$\alpha$/Fe] abundance ratio and showed that star formation must have happened on longer time scales in these systems. Several works confirmed these results and found that dEs have younger ages and lower metallicities than normal Es (\citealt{gehaetal.2003AJ....126.1794G,vanzeeetal.2004AJ....128.2797V}).
Interpretation of abundance ratios of other elements is more complicated, and has been limited mostly to the Local Group. They are also used to obtain a more detailed picture and information on the initial mass function (IMF) and star formation history (SFH) (\citealt{mcwilliam.1997ARA&A..35..503M}). In dEs, at the moment, very little information is available on abundance ratios of elements (apart from Mg and Fe), mainly because of the lack of high S/N spectra, but also because of the lack of methods to analyse them. Since this has changed in recent years, we have been able to start a program to obtain and analyze abundance ratios in dwarf ellipticals. The results on the pilot galaxy NGC 1396 are presented in \citet{mentzetal.2016MNRAS.463.2819M}. In the current paper, a sample of 37 galaxies from the SMAKCED sample is analyzed. To determine the abundance ratios from integrated spectra, we use the hybrid model calibration by \citet*{conroyetal.2014ApJ...780...33C} (CvD hereafter). They calculate spectra using standard stellar population models with solar abundance ratios, which are modified by using theoretical responses of spectra to abundance ratio variations, following a method developed by \citet{walcheretal.2009MNRAS.398L..44W}. Here, we will study the abundance ratios of a few elements in dwarf ellipticals, obtaining data which allow us to compare the formation history of dEs with those of giant ellipticals, the Milky Way, and other galaxies of the Local Group. We will focus on the Na doublet absorption feature in the optical wavelength range at 5890 and 5896 \AA\ (NaD hereafter) and the Ca4227 line-strength index, to study the abundances of Na and Ca, as well as the better studied Mg. Although interpreting the observational results is at present extremely difficult, many conclusions can already be derived by comparing them to those for other types of galaxies. This work is part of the SMAKCED project, aimed at studying the nature of dEs in the Virgo Cluster.
More details about the galaxies discussed here, and their properties, can be found in the other SMAKCED papers (\citealp{janzetal.2012ApJ...745L..24J,janzetal.2014ApJ...786..105J, tolobaetal.2014ApJ...783..120T, tolobaetal.2014ApJS..215...17T, tolobaetal.2015ApJ...799..172T}). In \citet{tolobaetal.2014ApJS..215...17T} a description is given of the spectroscopic part of the survey, while the H-band photometry is described in \citet{janzetal.2014ApJ...786..105J}. In \citet{tolobaetal.2014ApJS..215...17T}, kinematics of two dEs are presented that show kinematically decoupled cores. In \citet{tolobaetal.2015ApJ...799..172T} the stellar kinematics of dEs is presented as a function of projected distance to the center of the Virgo cluster. The Virgo cluster is an ideal laboratory to study dEs because it contains hundreds of them, is close enough to resolve their detailed structure, and is a dynamically young cluster that is still evolving today \citep{binggellietal.1993A&AS...98..275B,bosellietal.2014A&A...570A..69B}. In this paper we focus on the abundance ratio distribution of dEs and compare them with other types of galaxies. This paper is organized as follows. In Section 2, we present the general properties of our samples, the observations, and the main data reduction steps. We describe the measurements of age-sensitive and metallicity-sensitive Lick indices. In Section 3, we derive the ages and metallicity based on the Lick indices and the abundance ratios. In Section 4, our results are summarized and discussed. In Section 5, conclusions are given. \section{Observations and Data Reduction} Our sample consists of 37 galaxies, the full spectroscopic sample of the SMAKCED project. For each of them, we obtained data suitable for a detailed stellar population study, with relatively high spectral resolution and high signal-to-noise (S/N) ratio. 
Two galaxies in the sample (VCC1684 and VCC2083) were not included, since no ages could be determined because of the lack of observed Balmer lines. The spectroscopic data were obtained at three different telescopes. Twenty-six dEs were observed at the 4.2m WHT telescope using the double-arm ISIS spectrograph, whose blue arm covered the wavelength range 4200 - 5000 \AA\ and whose red arm covered the wavelength range 5500 - 6700 \AA. Ten dEs were observed at the 2.5m INT telescope using the IDS spectrograph covering the wavelength range 4600 - 5600 \AA, while the remaining three dEs were observed at the 8m VLT telescope using the FORS2 spectrograph that covers the wavelength range 4500 - 5600 \AA. Table~\ref{tab:properties} summarizes the main properties of these 37 dEs (see also T14). The data were reduced following the standard procedure for long-slit spectra using the package REDUCEME (\citealp{cardiel.1999PhDT........12C}). Details on sample selection, observations and data reduction are presented in T14. \begin{table*} \caption{Properties of the SMAKCED dEs. Column 1: galaxy name. Columns 2 and 3: right ascension and declination in J2000. Columns 4 and 5: r-band magnitude (in the AB system) and half-light radius (\protect\citealp{janzandlisker.2008ApJ...689L..25J,janzandlisker.2009ApJ...696L.102J}).
Column 6: velocity dispersion.} \label{tab:properties} \begin{tabular}{cccccc} \hline Galaxy & RA & Dec & M$_{r}$ & R$_{e}$ & $\sigma_\text{e}$ \\ ~ & (J2000) & (J2000) & (mag) & (arcsec) & (km/s) \\ \hline VCC0009 & 12:09:22.25 & 13:59:32.74 & -18.2 & 37.2 & 26.0$\pm3.9$\\ VCC0021 & 12:10:23.15 & 10:11:19.04 & -17.1 & 15.2 & 28.9$\pm2.9$\\ VCC0033 & 12:11:07.79 & 14:16:29.19 & -16.9 & 09.8 & 20.8$\pm4.9$\\ VCC0170 & 12:15:56.34 & 14:26:00.33 & -17.6 & 31.3 & 26.6$\pm4.6$\\ VCC0308 & 12:18:50.90 & 07:51:43.38 & -18.0 & 18.6 & 24.1$\pm2.4$\\ VCC0389 & 12:20:03.29 & 14:57:41.70 & -18.1 & 18.0 & 30.9$\pm1.2$\\ VCC0397 & 12:20:12.18 & 06:37:23.51 & -16.8 & 13.6 & 35.7$\pm1.9$\\ VCC0437 & 12:20:48.10 & 17:29:16.00 & -18.0 & 29.5 & 40.9$\pm4.0$\\ VCC0523 & 12:22:04.14 & 12:47:14.60 & -18.7 & 26.1 & 42.2$\pm1.0$\\ VCC0543 & 12:22:19.54 & 14:45:38.59 & -17.8 & 23.6 & 35.1$\pm1.4$\\ VCC0634 & 12:23:20.01 & 15:49:13.25 & -18.5 & 37.2 & 31.3$\pm1.6$\\ VCC0750 & 12:24:49.58 & 06:45:34.49 & -17.0 & 19.5 & 43.5$\pm2.9$\\ VCC0751 & 12:24:48.30 & 18:11:47.00 & -17.5 & 12.3 & 32.1$\pm2.4$\\ VCC0781 & 12:25:15.17 & 12:42:52.59 & -17.2 & 13.4 & 38.0$\pm2.8$\\ VCC0794 & 12:25:22.10 & 16:25:47.00 & -17.3 & 37.0 & 29.0$\pm3.9$\\ VCC0856 & 12:25:57.93 & 10:03:13.54 & -17.8 & 16.5 & 31.3$\pm4.1$\\ VCC0917 & 12:26:32.39 & 13:34:43.54 & -16.6 & 09.9 & 28.4$\pm1.4$\\ VCC0940 & 12:26:47.07 & 12:27:14.17 & -17.4 & 19.8 & 40.4$\pm1.3$\\ VCC0990 & 12:27:16.94 & 16:01:27.92 & -17.5 & 10.2 & 38.7$\pm1.3$\\ VCC1010 & 12:27:27.39 & 12:17:25.09 & -18.4 & 22.2 & 44.6$\pm0.9$\\ VCC1087 & 12:28:14.90 & 11:47:23.58 & -18.6 & 35.4 & 42.0$\pm1.5$\\ VCC1122 & 12:28:41.71 & 12:54:57.08 & -17.2 & 17.3 & 32.1$\pm1.7$\\ VCC1183 & 12:29:22.51 & 11:26:01.73 & -17.9 & 21.1 & 44.3$\pm2.4$\\ VCC1261 & 12:30:10.32 & 10:46:46.51 & -18.5 & 23.8 & 44.8$\pm1.4$\\ VCC1304 & 12:30:39.90 & 15:07:46.68 & -16.9 & 16.5 & 25.9$\pm2.7$\\ VCC1355 & 12:31:20.21 & 14:06:54.93 & -17.6 & 30.3 & 20.3$\pm4.7$\\ VCC1407 & 12:32:02.73 & 
11:53:24.46 & -17.0 & 12.1 & 31.9$\pm2.1$\\ VCC1431 & 12:32:23.41 & 11:15:46.94 & -17.8 & 09.8 & 52.4$\pm1.6$\\ VCC1453 & 12:32:44.22 & 14:11:46.17 & -17.9 & 18.9 & 35.6$\pm1.4$\\ VCC1528 & 12:33:51.61 & 13:19:21.03 & -17.5 & 09.6 & 47.0$\pm1.4$\\ VCC1549 & 12:34:14.83 & 11:04:17.51 & -17.3 & 12.1 & 36.7$\pm2.3$\\ VCC1695 & 12:36:54.85 & 12:31:11.93 & -17.7 & 24.0 & 24.4$\pm2.2$\\ VCC1861 & 12:40:58.57 & 11:11:04.34 & -17.9 & 19.0 & 31.3$\pm1.5$\\ VCC1895 & 12:41:51.97 & 09:24:10.28 & -17.0 & 16.3 & 23.8$\pm3.0$\\ VCC1910 & 12:42:08.67 & 11:45:15.19 & -17.9 & 13.4 & 37.0$\pm1.2$\\ VCC1912 & 12:42:09.07 & 12:35:47.93 & -17.9 & 22.5 & 36.0$\pm1.5$\\ VCC1947 & 12:42:56.34 & 03:40:35.78 & -17.6 & 09.3 & 48.3$\pm1.3$\\ \hline \end{tabular} \end{table*} \subsection{Line-strength measurements} \label{sec:maths} Observed spectral data can be studied either by fitting the full spectrum or by focusing on selected line indices. In this work we study selected line indices. We measured Lick indices (\citealp{wortheyetal.1994ApJS...94..687W}) in the LIS-5 \AA\ flux calibrated system (\citealp{vazdekisetal.2010MNRAS.404.1639V}). The new Line Index System (LIS) has numerous advantages over the Lick system. It is defined at three different resolutions, namely 5.0 \AA, 8.4 \AA, and 14.0 \AA. As such, it is particularly useful for analyzing small galaxies and globular clusters. As the resolution of the Lick/IDS library (FWHM $\sim$ 8-11 \AA) is much lower than what is available with our high resolution spectra, we broadened the spectra to the LIS-5 \AA\ system. The LIS-5 \AA\ system choice expresses a factor of 2 improvement in resolution over Lick/IDS system. Also, the fact that the LIS system is flux calibrated, makes it easier to reproduce data from other authors. The spectra were broadened to the LIS-5 \AA\ system taking into account the velocity dispersion of the spectra. 
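Schematically, this broadening amounts to convolving each spectrum with a Gaussian whose width makes up the quadrature difference between the target resolution and what is already present in the data (instrumental resolution plus velocity-dispersion broadening). A minimal Python sketch, assuming linear wavelength sampling and using the mean wavelength to convert the velocity dispersion to \AA\ (the function name and interface are illustrative, not the pipeline actually used):

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def broaden_to_target(wave, flux, fwhm_instr, sigma_gal, fwhm_target=5.0):
    """Broaden a spectrum to a common target resolution (e.g. LIS-5 A).

    wave, flux : wavelength [A] and flux arrays (linear sampling assumed)
    fwhm_instr : instrumental resolution FWHM [A]
    sigma_gal  : galaxy velocity dispersion [km/s]
    """
    f2s = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))      # FWHM -> sigma
    lam0 = np.mean(wave)                                # reference wavelength
    sig_eff = np.hypot(fwhm_instr * f2s, lam0 * sigma_gal / C_KMS)
    sig_tgt = fwhm_target * f2s
    if sig_tgt <= sig_eff:
        return flux.copy()   # already at (or below) the target resolution
    # kernel width that brings the effective resolution up to the target
    sig_pix = np.sqrt(sig_tgt**2 - sig_eff**2) / np.median(np.diff(wave))
    x = np.arange(-int(4 * sig_pix) - 1, int(4 * sig_pix) + 2)
    kernel = np.exp(-0.5 * (x / sig_pix) ** 2)
    return np.convolve(flux, kernel / kernel.sum(), mode="same")
```

The quadrature subtraction is what makes galaxies with different velocity dispersions directly comparable at the common LIS resolution.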
This ensures that every galaxy spectrum has the same spectral resolution, which is necessary to compare them with each other. We measured a total of 23 Lick indices (\citealp{faberetal.1985ApJS...57..711F, gorgasetal.1993ApJS...86..153G, wortheyetal.1994ApJS...94..687W}, \citealp*{wortheyandottaviani.1997ApJS..111..377W}). In this paper we use ${H}\gamma_\text{F}$, ${H}\beta$, Fe4383, Fe4531, Fe5270, Fe5335, Fe5406, Fe5709, Mgb, Ca4227 and NaD. The indices used are summarized in Table~\ref{tab:indicator}. However, because of the different wavelength ranges covered by our spectra, not all lines are used for all galaxies. Galaxies observed at the VLT do not cover the ${H}\gamma_\text{F}$, Fe4383 and NaD lines. Also, in some spectra obtained at the WHT, ${H}\beta$ is located at the end of the spectrum, where the spectra suffer from vignetting, so we could not determine this index in those cases. In Table~\ref{tab:results_WHT} and Table~\ref{tab:results_INTandVLT}, we show the line-strength index measurements for each galaxy. \begin{table} \caption{A summary of the indices used} \label{tab:indicator} \begin{tabular}{ccc} \hline Telescope & WHT & INT and VLT \\ \hline Age indicator & ${H}\gamma_\text{F}$ & ${H}\beta $ \\ Metal indicator & Fe4383, Fe4531 & Fe4531, Fe5270, Fe5335 \\ & Fe5709 & Fe5406, Fe5709, Mgb \\ Na abundance & NaD & . . . \\ Ca abundance & Ca4227 & . . . \\ Mg abundance & . . . & Mgb \\ \hline \end{tabular} \end{table} \section{Results} \subsection{Derived ages and metallicities} Luminosity-weighted ages and metallicities are estimated using age-sensitive (${H}\beta$ and ${H}\gamma_\text{F}$) and metallicity-sensitive (Fe4383, Fe4531, Fe5270, Fe5335, Fe5406, Fe5709 and Mgb) Lick spectral indices (\citealp{worthey.1994ApJS...95..107W}) measured in the LIS-5 \AA\ system (\citealp{vazdekisetal.2010MNRAS.404.1639V}).
For this we used an MCMC (Markov Chain Monte Carlo) code to derive the age and metallicity of the best-fitting MILES single stellar population model. We estimate the best luminosity-weighted age and metallicity from all available index-index combinations by effectively computing the ``distance'' from our measured indices to all predicted values on the model grids, and finding the age and metallicity combination with the minimum total distance. The age and metallicity values for each index-index diagram and their associated uncertainties were derived using 1000 MCMC iterations of the fit. To reduce the effects of the grid discretization, the two relevant parameters (i.e. age and metallicity) were interpolated. Uncertainties were calculated by performing Monte Carlo simulations, making use of the observational error in each index. In Table~\ref{tab:abundances} we list the best-fitting parameters for ages and metallicities that are determined using combinations of all age-sensitive lines and all metal indicators. Figure~\ref{fig:index-index} shows index-index plots where we have restricted the age to the interval 1.0 - 14.0 Gyr, and the metallicity range from -1.76 to 0.26, which includes the range covered by the galaxies in our sample. We use the solar-scaled theoretical isochrones in the model grids from \citet{vazdekisetal.2010MNRAS.404.1639V} in Figure~\ref{fig:index-index}.
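The grid fit just described can be sketched as follows. This is a simplified stand-in for the MCMC code used in the paper: a brute-force $\chi^2$ search on a bilinearly refined index grid, with Monte Carlo resampling of the observed indices for the uncertainties; all names and inputs are placeholders:

```python
import numpy as np

def fit_age_met(obs, err, ages, mets, grids, n_mc=200, nfine=100, seed=0):
    """Best-fitting SSP age and metallicity from a set of Lick indices.

    obs, err : dicts, index name -> measured value and 1-sigma error
    grids    : dict, index name -> model predictions, shape (len(ages), len(mets))
    Returns ((age, met), (age_err, met_err)).
    """
    fine_a = np.linspace(ages[0], ages[-1], nfine)
    fine_m = np.linspace(mets[0], mets[-1], nfine)

    def chi2(values):
        c = np.zeros((nfine, nfine))
        for name, v in values.items():
            g = grids[name]
            # bilinear refinement: first along age, then along metallicity
            ga = np.array([np.interp(fine_a, ages, g[:, j])
                           for j in range(len(mets))]).T
            gf = np.array([np.interp(fine_m, mets, ga[i]) for i in range(nfine)])
            c += ((gf - v) / err[name]) ** 2
        return c

    i, j = np.unravel_index(np.argmin(chi2(obs)), (nfine, nfine))
    best = (fine_a[i], fine_m[j])
    # Monte Carlo: perturb the observed indices by their errors and refit
    rng = np.random.default_rng(seed)
    draws = []
    for _ in range(n_mc):
        pert = {k: rng.normal(v, err[k]) for k, v in obs.items()}
        ii, jj = np.unravel_index(np.argmin(chi2(pert)), (nfine, nfine))
        draws.append((fine_a[ii], fine_m[jj]))
    return best, tuple(np.std(np.array(draws), axis=0))
```

The refinement of the grid before the minimization is what mitigates the discretization effects mentioned above.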
\begin{table*} \caption{Lick spectral indices measured at LIS-5 \AA\ resolution for WHT objects.} \label{tab:results_WHT} \begin{tabular}{ccccccc} \hline Galaxy & Ca4227 & ${H}\gamma_\text{F}$ & Fe4383 & Fe4531 & Fe5709 & NaD \\ & (\AA) & (\AA) & (\AA) & (\AA) & (\AA) & (\AA) \\ \hline VCC0033 & 1.25$\pm0.38$ & 1.55 $\pm0.39$ & 2.62$\pm0.85$ & 3.09$\pm0.65$ & 0.60$\pm0.22$ & 0.83$\pm0.373$ \\ VCC0170 & 0.75$\pm0.21$ & 2.43 $\pm0.22$ & 1.48$\pm0.51$ & 1.70$\pm0.40$ & 0.48$\pm0.09$ & 1.21$\pm0.173$ \\ VCC0308 & 0.99$\pm0.12$ & 1.69 $\pm0.14$ & 2.53$\pm0.32$ & 3.02$\pm0.24$ & 0.85$\pm0.09$ & 1.57$\pm0.125$ \\ VCC0389 & 1.17$\pm0.14$ & 0.33 $\pm0.14$ & 3.90$\pm0.29$ & 2.87$\pm0.21$ & 0.99$\pm0.07$ & 1.68$\pm0.095$ \\ VCC0397 & 1.11$\pm0.12$ & 1.33 $\pm0.14$ & 3.92$\pm0.31$ & 3.50$\pm0.24$ & 0.94$\pm0.11$ & 1.92$\pm0.136$ \\ VCC0437 & 0.87$\pm0.25$ & -0.76$\pm0.27$ & 2.49$\pm0.57$ & 2.01$\pm0.41$ & 0.80$\pm0.10$ & 2.08$\pm0.165$ \\ VCC0523 & 1.13$\pm0.07$ & 0.93 $\pm0.09$ & 3.25$\pm0.19$ & 3.14$\pm0.15$ & 0.91$\pm0.10$ & 1.50$\pm0.177$ \\ VCC0543 & 1.41$\pm0.13$ & -0.17$\pm0.16$ & 3.96$\pm0.32$ & 3.23$\pm0.23$ & 0.77$\pm0.06$ & 1.95$\pm0.093$ \\ VCC0634 & 1.21$\pm0.15$ & 0.55$\pm0.16$ & 3.70$\pm0.33$ & 3.05$\pm0.25$ & 0.75$\pm0.13$ & . . . \\ VCC0750 & 1.09$\pm0.20$ & 0.75 $\pm0.21$ & 3.36$\pm0.44$ & 3.00$\pm0.33$ & 0.80$\pm0.08$ & 1.61$\pm0.122$ \\ VCC0751 & 1.62$\pm0.25$ & -0.23$\pm0.27$ & 4.80$\pm0.53$ & 3.93$\pm0.40$ & 1.16$\pm0.16$ & . . . \\ VCC0781 & 0.66$\pm0.22$ & 2.74 $\pm0.21$ & 1.31$\pm0.49$ & 2.19$\pm0.37$ & . . . & . . . \\ VCC0794 & 1.06$\pm0.24$ & 0.43 $\pm0.26$ & 2.84$\pm0.55$ & 2.18$\pm0.41$ & 0.62$\pm0.08$ & 1.45$\pm0.117$ \\ VCC0917 & 0.99$\pm0.10$ & 0.41 $\pm0.13$ & 3.28$\pm0.27$ & 2.96$\pm0.21$ & 0.65$\pm0.09$ & 1.26$\pm0.129$ \\ VCC1010 & 1.42$\pm0.07$ & -0.83$\pm0.09$ & 4.49$\pm0.17$ & 3.25$\pm0.13$ & 0.85$\pm0.04$ & 2.33$\pm0.054$ \\ VCC1087 & 1.33$\pm0.10$ & -0.51$\pm0.12$ & 4.79$\pm0.23$ & 2.98$\pm0.18$ & 0.79$\pm0.08$ & . . . 
\\ VCC1122 & 1.11$\pm0.10$ & 0.86 $\pm0.12$ & 3.20$\pm0.26$ & 2.52$\pm0.21$ & 0.71$\pm0.20$ & . . . \\ VCC1304 & 0.81$\pm0.13$ & 1.72 $\pm0.15$ & 2.31$\pm0.34$ & 2.60$\pm0.26$ & 0.52$\pm0.07$ & 1.95$\pm0.097$ \\ VCC1355 & 1.39$\pm0.33$ & 0.59 $\pm0.35$ & 2.77$\pm0.74$ & 3.06$\pm0.55$ & 0.78$\pm0.10$ & 1.32$\pm0.188$ \\ VCC1407 & 0.83$\pm0.18$ & 0.12 $\pm0.20$ & 3.35$\pm0.40$ & 2.81$\pm0.30$ & 0.53$\pm0.08$ & 1.55$\pm0.129$ \\ VCC1453 & 1.52$\pm0.13$ & -0.43$\pm0.14$ & 4.38$\pm0.29$ & 3.40$\pm0.21$ & 0.99$\pm0.06$ & 2.08$\pm0.065$ \\ VCC1528 & 1.27$\pm0.18$ & -0.40$\pm0.19$ & 4.49$\pm0.38$ & 3.28$\pm0.28$ & 0.96$\pm0.07$ & 2.33$\pm0.093$ \\ VCC1695 & 1.03$\pm0.09$ & 1.47 $\pm0.10$ & 2.82$\pm0.23$ & 2.75$\pm0.18$ & 0.80$\pm0.08$ & 1.63$\pm0.123$ \\ VCC1861 & 1.43$\pm0.11$ & -0.66$\pm0.14$ & 3.71$\pm0.29$ & 2.87$\pm0.22$ & 0.70$\pm0.10$ & . . . \\ VCC1895 & 1.09$\pm0.20$ & 0.83 $\pm0.23$ & 2.73$\pm0.49$ & 2.75$\pm0.37$ & 0.70$\pm0.10$ & 1.26$\pm0.155$ \\ \hline \end{tabular} \end{table*} \begin{table*} \caption{Lick spectral indices measured at LIS-5 \AA\ resolution for INT and VLT objects.} \label{tab:results_INTandVLT} \begin{tabular}{cccccccc} \hline Galaxy & Fe4531 & ${H}\beta $ & Mgb & Fe5270 & Fe5335 & Fe5406 & Fe5709\\ & (\AA) & (\AA) & (\AA) & (\AA) & (\AA) & (\AA) & (\AA) \\ \hline VCC0021 & 1.23$\pm0.40$ & 3.98$\pm0.19$ & 0.96$\pm0.23$ & 1.25$\pm0.25$ & 1.52$\pm0.29$ & 0.51$\pm0.23$ & 0.11$\pm0.20$\\ VCC0856 & 3.35$\pm0.62$ & 2.30$\pm0.29$ & 2.44$\pm0.33$ & 2.38$\pm0.36$ & 1.87$\pm0.41$ & 0.67$\pm0.32$ & 1.39$\pm0.27$\\ VCC0940 & 2.33$\pm0.17$ & 2.22$\pm0.09$ & 2.64$\pm0.08$ & 2.45$\pm0.09$ & 1.78$\pm0.10$ & 1.30$\pm0.07$ & ...\\ VCC0990 & 3.20$\pm0.30$ & 2.81$\pm0.15$ & 2.49$\pm0.17$ & 2.62$\pm0.19$ & 2.29$\pm0.21$ & 1.58$\pm0.16$ & 0.83$\pm0.15$\\ VCC1183 & 3.49$\pm0.35$ & 2.61$\pm0.16$ & 2.95$\pm0.18$ & 2.82$\pm0.20$ & 2.23$\pm0.23$ & 1.59$\pm0.17$ & 0.95$\pm0.15$\\ VCC1261 & 2.33$\pm0.21$ & 2.47$\pm0.11$ & 2.19$\pm0.12$ & 2.52$\pm0.13$ & 2.13$\pm0.15$ & 
1.47$\pm0.12$ & 0.94$\pm0.11$\\ VCC1431 & 3.53$\pm0.33$ & 1.95$\pm0.16$ & 3.17$\pm0.18$ & 2.46$\pm0.19$ & 1.93$\pm0.22$ & 1.41$\pm0.17$ & 0.46$\pm0.15$\\ VCC1549 & 2.72$\pm0.46$ & 1.73$\pm0.22$ & 3.02$\pm0.25$ & 2.85$\pm0.26$ & 2.48$\pm0.30$ & 1.74$\pm0.22$ & 0.99$\pm0.20$\\ VCC1910 & 3.24$\pm0.33$ & 1.75$\pm0.15$ & 2.82$\pm0.16$ & 2.50$\pm0.18$ & 2.84$\pm0.19$ & 2.06$\pm0.15$ & 0.30$\pm0.13$\\ VCC1912 & 2.31$\pm0.24$ & 3.66$\pm0.11$ & 1.32$\pm0.13$ & 2.21$\pm0.14$ & 2.30$\pm0.16$ & 1.15$\pm0.13$ & 0.16$\pm0.11$\\ VCC1947 & 3.26$\pm0.32$ & 1.82$\pm0.15$ & 3.18$\pm0.17$ & 2.99$\pm0.18$ & 2.69$\pm0.20$ & 1.85$\pm0.16$ & 0.94$\pm0.14$\\ \hline \end{tabular} \end{table*} \begin{figure*} \includegraphics[width=1\textwidth]{merged.pdf} \caption{Spectral index-index diagrams used to estimate the stellar populations using solar-scaled theoretical isochrone grids with IMF slope of 1.3 from Vazdekis et al. (2010) in the system LIS-5 \AA, solid lines indicate constant age 1.0, 2.0, 3.5, 5.5, 10.0 and 14.0 Gyr, respectively while dotted lines indicate constant [M/H] -1.76, -1.26, -0.65, -0.35, +0.06 and +0.26, respectively.} \label{fig:index-index} \end{figure*} \subsection{Elemental abundance ratios} {To calculate the abundance ratios [E/Fe] from the indices we first calculate age and metallicity as described above, assuming that the galaxies can be represented by an SSP model. We then measure the difference between the observed index related to the element E and the index value expected from stellar population models for the age and metallicity that we measure for the galaxy, and divide this difference by the sensitivity of the index to variations in [E/Fe] (equation 1). For Na (NaD) and Mg (Mgb) we use the Na-MILES model of (\citealt{labarberaetal.2017MNRAS.464.3597L}) based on Teramo isochrones, with variable Mg and Na abundance ratios, and with a (standard) bimodal IMF with $\Gamma_b$=1.3. 
Unfortunately, Mgb was measured only in a few galaxies, those observed with the VLT and the INT. For Ca, we do the same using the Ca4227 index. The problem here is that hardly any stellar population models are available for this element. The only available model, by CvD, has solar metallicity and a fixed, old age of 13.5 Gyr, which is different from the ages and metallicities of the objects we discuss here. In this paper we will use this latter model, but move the analysis and discussion to Appendix A, since we do not know how appropriate this is. So, for element {\it E} based on index {\it i}, the elemental abundance ratios were calculated using \begin{equation} [E_{i}/Fe]=\frac{i_{observed} - i_{model} }{\frac{\Delta{i_{model}}}{\Delta{[E_{i}/Fe]_{model}}}}, \label{eq:quadratic} \end{equation} where $i_{observed}$ is the value of the index measured from the observations, $i_{model}$ is the index expected from the model and $E_{i}$ is the elemental abundance. In Table~\ref{tab:abundances}, we provide the elemental abundances for each galaxy determined according to the procedure described above. The spectral range covered allowed the measurement of [Ca/Fe] and [Na/Fe] for most galaxies observed with the WHT, and of [Mg/Fe] for some galaxies observed with the INT and the VLT. We do not have galaxies for which all three abundance ratios have been measured. To establish the reliability of these abundance ratios, we compared the sensitivity of $\Delta{NaD}$\ to variations in [Na/Fe] and of $\Delta{Mgb}$\ to variations in [Mg/Fe] between the SSP models of CvD and MILES. For a meaningful comparison, we have computed some additional $\alpha$-enhanced SSP models for a total metallicity of [M/H]=0.3 (for [Fe/H]$\sim$0.0, from equation 4 of \citealt{vazdekisetal.2012MNRAS.424..157V}). For the Na comparison, a Na-MILES model (\citealt{labarberaetal.2017MNRAS.464.3597L}) is computed for the same age and metallicity as CvD.
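The calculation in equation~(\ref{eq:quadratic}) amounts to a single division per index. A minimal sketch (the numbers below are purely illustrative and are not measurements from this paper):

```python
def abundance_ratio(i_observed, i_model, sensitivity):
    # [E/Fe] from equation (1): the offset of the observed index from the
    # SSP-model prediction at the galaxy's age and metallicity, divided by
    # the model sensitivity Delta(index) / Delta([E/Fe]).
    return (i_observed - i_model) / sensitivity

# Illustrative values only: an observed NaD index 0.8 A below the model
# prediction, with a sensitivity of 3.2 A per dex in [Na/Fe], would give
# [Na/Fe] = -0.25.
na_fe = abundance_ratio(i_observed=2.4, i_model=3.2, sensitivity=3.2)
print(round(na_fe, 3))   # -0.25
```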
We find that the Na-MILES model gives a slightly larger sensitivity for Na than that of CvD, and that we get similar results from both CvD and MILES for Mgb. Note, however, that the sensitivities vary considerably when going to lower ages and metallicities. We report the values of the $\Delta{NaD}$\ and $\Delta{Mgb}$\ variations as a function of these elemental abundance ratios for models with different ages and metallicities in Table~\ref{tab:comparision}. When we compare the sensitivities at the same age, we find that the NaD sensitivity generally increases with metallicity, while that of Mgb generally decreases. When one increases the age, both sensitivities tend to increase. \begin{figure*} \includegraphics[width=.95\textwidth]{index_index_1aug_hgf_nad-eps-converted-to.pdf} \caption{Example of a spectral index-index diagram for ${H}\gamma_\text{F}$ versus NaD, showing that the NaD values are much lower than predicted by the models. Models from Vazdekis et al. (2010). Constant age and [M/H] as in Figure~\ref{fig:index-index}.} \label{fig:index_index_1aug_hgf_nad} \end{figure*} \begin{figure*} \includegraphics[width=.93\textwidth]{index_index_1aug_hgf_ca4227-eps-converted-to.pdf} \caption{Example of a spectral index-index diagram for ${H}\gamma_\text{F}$ versus Ca4227, showing that the Ca4227 values are slightly higher than predicted by the models. Models from Vazdekis et al. (2010). Constant age and [M/H] as in Figure~\ref{fig:index-index}.} \label{fig:index_index_1aug_hgf_ca4227} \end{figure*} \begin{figure*} \includegraphics[width=0.95\textwidth]{abundances_error_newnaandmg.pdf} \caption{Mg and Na abundances as a function of metallicity [Fe/H]. Blue pluses are the Milky Way stars from Venn et al. (2004), red crosses are stars in the Fornax dwarf from \citet{Letarteetal.2010A&A...523A..17L} and \citet{2003AJ....125..684S}, green asterisks are the LMC red giants from Pomp\'eia et al. (2008), purple triangles are giant ellipticals from Conroy et al.
(2014), orange circles are various radial bins of the NGC1396 in the Fornax cluster from Metz et al. (2016) and black circles are the dEs in Virgo cluster analyzed here.} \label{fig:abundances_error} \end{figure*} \begin{table*} \caption{Elemental abundances, metallicity and ages. Column 1: galaxy name. Column 2 and 3: measurement of [Ca/Fe] and [Na/Fe] for galaxies that are observed with the WHT. Column 4: measurement of [Mg/Fe] for galaxies that are observed using the INT and VLT. Column 5 and 6: metallicity [M/H] and logarithmic ages in Gyr with errors.} \label{tab:abundances} \begin{tabular}{cccccc} \hline Galaxy & [Ca/Fe] & [Na/Fe] & [Mg/Fe] & [Fe/H] & log (age) \\ & & & & & (Gyr)\\ \hline VCC0009 & 0.05 & -0.52 & . . . & -1.06 $\pm 0.15 $ & 0.94 $\pm 0.03 $ \\ VCC0021 & . . . & . . . & 0.03 & -1.17 $\pm 0.18 $ & 0.20 $\pm 0.07 $ \\ VCC0033 & 0.32 & -0.57 & . . . & -1.00 $\pm 0.01 $ & 0.83 $\pm 0.17 $ \\ VCC0170 & 0.17 & -0.32 & . . . & -1.05 $\pm 0.08 $ & 0.47 $\pm 0.09 $ \\ VCC0308 & 0.14 & -0.24 & . . . & -0.40 $\pm 0.24 $ & 0.34 $\pm 0.04 $ \\ VCC0389 & 0.15 & -0.23 & . . . & -0.37 $\pm 0.16 $ & 0.70 $\pm 0.04 $ \\ VCC0397 & 0.05 & -0.20 & . . . & 0.06 $\pm 0.35 $ & 0.23 $\pm 0.04 $ \\ VCC0437 & 0.12 & -0.06 & . . . & -0.71 $\pm 0.01 $ & 0.98 $\pm 0.07 $ \\ VCC0523 & 0.17 & -0.33 & . . . & -0.19 $\pm 0.29 $ & 0.41 $\pm 0.03 $ \\ VCC0543 & 0.21 & -0.17 & . . . & -0.44 $\pm 0.07 $ & 0.83 $\pm 0.05 $ \\ VCC0634 & 0.30 & . . . & . . . & -0.93 $\pm 0.35 $ & 0.84 $\pm 0.30 $ \\ VCC0750 & 0.13 & -0.24 & . . . & -0.47 $\pm 0.10 $ & 0.62 $\pm 0.06 $ \\ VCC0751 & 0.45 & . . . & . . . & -0.93 $\pm 0.36 $ & 0.85 $\pm 0.29 $ \\ VCC0781 & 0.11 & . . . & . . . & -0.90 $\pm 0.24 $ & 0.84 $\pm 0.30 $ \\ VCC0794 & 0.25 & -0.31 & . . . & -0.81 $\pm 0.03 $ & 0.90 $\pm 0.08 $ \\ VCC0856 & . . . & . . . & 0.05 & -0.64 $\pm 0.31 $ & 0.92 $\pm 0.16 $ \\ VCC0917 & 0.08 & -0.43 & . . . & -0.47 $\pm 0.12 $ & 0.69 $\pm 0.05 $ \\ VCC0940 & . . . & . . . 
& 0.40 & -0.97 $\pm 0.35 $ & 0.84 $\pm 0.29 $ \\ VCC0990 & . . . & . . . & 0.07 & -0.23 $\pm 0.12 $ & 0.43 $\pm 0.08 $ \\ VCC1010 & 0.18 & -0.07 & . . . & -0.31 $\pm 0.10 $ & 0.94 $\pm 0.03 $ \\ VCC1087 & 0.35 & . . . & . . . & -0.94 $\pm 0.25 $ & 0.84 $\pm 0.30 $ \\ VCC1122 & 0.27 & . . . & . . . & -0.93 $\pm 0.30 $ & 0.85 $\pm 0.29 $ \\ VCC1183 & . . . & . . . & 0.15 & -0.26 $\pm 0.02 $ & 0.67 $\pm 0.11 $ \\ VCC1261 & . . . & . . . & 0.04 & -0.33 $\pm 0.13 $ & 0.52 $\pm 0.06 $ \\ VCC1304 & 0.15 & 0.06 & . . . & -0.68 $\pm 0.10 $ & 0.55 $\pm 0.07 $ \\ VCC1355 & 0.34 & -0.32 & . . . & -0.64 $\pm 0.01 $ & 0.75 $\pm 0.12 $ \\ VCC1407 & 0.11 & -0.27 & . . . & -0.73 $\pm 0.07 $ & 0.95 $\pm 0.05 $ \\ VCC1431 & . . . & . . . & 0.11 & -0.49 $\pm 0.02 $ & 1.07 $\pm 0.06 $ \\ VCC1453 & 0.23 & -0.15 & . . . & -0.27 $\pm 0.13 $ & 0.76 $\pm 0.05 $ \\ VCC1528 & 0.14 & -0.05 & . . . & -0.25 $\pm 0.05 $ & 0.79 $\pm 0.06 $ \\ VCC1549 & . . . & . . . & 0.08 & -0.41 $\pm 0.09 $ & 1.04 $\pm 0.09 $ \\ VCC1695 & 0.15 & -0.21 & . . . & -0.40 $\pm 0.22 $ & 0.39 $\pm 0.05 $ \\ VCC1861 & 0.38 & . . . & . . . & -0.90 $\pm 0.57 $ & 0.85 $\pm 0.29 $ \\ VCC1895 & 0.22 & -0.37 & . . . & -0.77 $\pm 0.15 $ & 0.82 $\pm 0.11 $ \\ VCC1910 & . . . & . . . & 0.05 & -0.37 $\pm 0.40 $ & 0.93 $\pm 0.11 $ \\ VCC1912 & . . . & . . . & -0.07 & -0.07 $\pm 0.28 $ & 0.14 $\pm 0.01 $ \\ VCC1947 & . . . & . . . & -0.09 & -0.93 $\pm 0.47 $ & 0.88 $\pm 0.11 $ \\ \hline \end{tabular} \end{table*} \begin{table}[H] \caption{Comparison of MILES and CvD model predictions (column 1) with varying metallicity (column 2) and age (column 3). 
Columns 4--6: the sensitivities $\Delta{NaD}/\Delta{[Na/Fe]}$, $\Delta{Mgb}/\Delta{[Mg/Fe]}$ and $\Delta{Ca4227}/\Delta{[Ca/Fe]}$, based on the models shown in column 1.} \label{tab:comparision} \begin{tabular}{cccccc} \hline Model & [M/H] & Age & $\frac{\Delta{NaD_{*}}}{\Delta{[Na/Fe]_{*}}} $ & $\frac{\Delta{Mgb_{*}}}{\Delta{[Mg/Fe]_{*}}} $ & $\frac{\Delta{Ca4227_{*}}}{\Delta{[Ca/Fe]_{*}}} $ \\ (*) & dex & (Gyr) & & &\\ \hline CvD & 0.00 & 13.50 & 3.085 & 4.260 & 2.790 \\ MILES & 0.06 & 14.00 & 3.547 & 4.350 & ... \\ \hline MILES & -0.35 & 1.00 & 0.973 & 3.133 & ... \\ MILES & -0.35 & 7.00 & 2.317 & 4.830 & ... \\ MILES & -0.35 & 14.00 & 2.683 & 5.609 & ... \\ MILES & 0.06 & 1.00 & 1.453 & 3.009 & ... \\ MILES & 0.06 & 7.00 & 3.167 & 3.298 & ... \\ MILES & 0.26 & 1.00 & 1.583 & 3.463 & ... \\ MILES & 0.26 & 7.00 & 3.380 & 3.336 & ... \\ MILES & 0.26 & 14.00 & 3.830 & 3.768 & ... \\ \hline \end{tabular} \end{table} \section{Discussion} We present ages, metallicities and abundance ratios obtained for 37 dEs within an aperture of R$_e$/8. This aperture size is commonly used in conventional long-slit studies. The radius has been chosen to provide a measure of the stellar populations in the central regions of these galaxies, in a region with a constant relative size. Following the classification of \citet{liskeretal.2006AJ....132..497L,liskeretal.2006AJ....132.2432L,liskeretal.2007ApJ...660.1186L} using high-pass filtered Sloan Digital Sky Survey (SDSS; \citealt{2006ApJS..162...38A}) images, 36 of the galaxies of our sample are classified as nucleated. VCC0021 and VCC1431 are galaxies with a large blue core. All the analysis comes from the central regions, in which the light of the nuclear cluster contributes significantly. The resulting abundance ratios can thus provide insight into the luminosity-weighted stellar population in this region.
Although we could have tried to derive two-burst or more complicated star formation histories, it would have been very hard to distinguish between a one- and a two-burst solution with the current low-resolution data (see e.g. \citealt{rysetal.2015MNRAS.452.1888R,mentzetal.2016MNRAS.463.2819M}). For that reason, we postpone this to a future paper, in which we analyze spectra at a spectral resolution of R=5000 using the SAMI instrument at the AAT. One of the main results of this paper is the unusual behaviour of the Na abundances in dEs when compared with massive ellipticals and Local Group dwarfs. For our sample of quiescent dwarfs, with effective velocity dispersions in the range 20--55\,km\,s$^{-1}$ and absolute r-band magnitudes ranging from -19 to -16, we find an underabundance in Na when compared to the solar neighbourhood (Figure~\ref{fig:abundances_error}). At the same time, the Mg abundances are around solar. In the following, we discuss what this means for the evolution of dwarf ellipticals, based on what we know from theory, from observations of individual stars in the Milky Way and the Local Group, and from integrated abundance ratios in giant elliptical galaxies. For dwarf ellipticals, such an analysis using integrated light has not been done before, although several papers have derived abundance ratios in giant ellipticals (e.g. \citealt{worthey.1998PASP..110..888W, thomasetal.2010MNRAS.404.1775T,wortheyetal.2011ApJ...729..148W,conroyetal.2014ApJ...780...33C,spinielloetal.2014MNRAS.438.1483S,smithetal.2015MNRAS.454L..71S,yamadaetal.2006ApJ...637..200Y}). In this paper we analyze light elements. It is our current belief that in the most-studied environments the relative abundance [X/Fe] of an element X at low [Fe/H] represents the abundance ratio from only those sources of material processed through the nucleosynthesis channels that were active at very early times, i.e.
SNII from core collapse of massive stars, events characteristic of the earliest epochs of star formation \citep*{cohenandhuang.2009ApJ...701.1053C}. After these early times, [X/Fe] is modified by other sources, such as SNIa, SNII, AGB stars, novae, etc., and also by the accretion of primordial material and by galactic winds. This gives a knee-like pattern in the relation between [X/Fe] and [Fe/H], in which the position of the bend is determined by the delay time between the core collapse SNII and the other processes. This delay depends on various parameters (IMF, star formation efficiency, the rate of mass loss in stars, and the rate of accretion of primordial gas consisting mostly of H), as well as on the element production yields (e.g. \citealt{greggioetal.2008MNRAS.388..829G}). These processes lead to the characteristic knee in the Milky Way, where low-metallicity halo stars generally have [Mg/Fe] values around 0.4, a value reached by SNII enrichment alone, going down to solar-metallicity disk stars, with [Mg/Fe] values around solar, an equilibrium value reached when all processes have contributed. From these solar abundance ratios in the disk of our Milky Way, one can then conclude that star formation has taken place over long timescales compared to the halo. We will now have a more detailed look at the individual abundance ratios. \subsection{Na abundances} Na is believed to be made in the interiors of massive stars and to depend on the neutron excess, which in turn depends on the initial heavy element abundance in the star. Na thus has both a primary and a secondary nucleosynthesis channel (\citealt{arnett.1971ApJ...166..153A,clayton.2003Ap&SS.285..353C}). Ni is assumed to originate predominantly from SNe Ia. However, the production of Ni might also be linked to the production of Na in SNe II (\citealt{thielemannetal.1990ApJ...349..222T,timmesetal.1995ApJS...98..617T}).
The amount of Na produced is controlled by the neutron excess, where ${}^{23}$Na is the only stable neutron-rich isotope produced in significant quantity during the C and O burning stages. The production of Ni depends on the neutron excess, and the neutron excess will depend primarily on the amount of ${}^{23}$Na previously produced. Hence, a Na-Ni correlation is expected when the chemical enrichment is dominated by SNe II. The advent of SNe Ia explosions can break (or flatten) this relationship, as Ni is produced without Na in the standard model of SN Ia (e.g., \citealt{Iwamotoetal.1999ApJS..125..439I}). Since the neutron excess is strongly metallicity-dependent, this could explain the low [Na/Fe] values found for the Fornax dwarf (\citealt{Letarteetal.2010A&A...523A..17L}). At the same time, it would explain the high values in giant ellipticals (see later). It is important to mention that the abundance ratios found here are very different from those in other stellar systems, such as the most massive Galactic globular clusters and giant elliptical galaxies. In the most massive globular clusters a strong Na-O anticorrelation is observed in the RGB stars (\citealt{kraft.1994PASP..106..553K}, and see the reviews of \citealt{carretta.2016arXiv161104728C,grattonetal.2001A&A...369...87G}). For these stars, oxygen is depleted and Na is enhanced, just like N. Since this effect cannot originate in the interiors of the presently observed low-mass GC stars, it is thought that this is a second-generation enrichment effect from massive stars, which have enhanced their Na abundance during C-burning through the NeNa cycle (\citealt{langeretal.1993PASP..105..301L}). It is thought that second generation (SG) stars are formed from nuclear ejecta processed in the most massive first generation (FG) stars, diluted with different amounts of unprocessed gas, generating a number of anticorrelations, including that of Na and O.
This process, as far as we know, does not take place in the halo of our Milky Way, or in field stars of Local Group galaxies. In this paper we see that this is also not the case in field stars of dwarf ellipticals in nearby galaxy clusters. Four strong Na absorption features can be found in the optical and NIR wavelength range: NaD (5890 and 5896 \AA), NaI0.82 (8183 and 8195 \AA), NaI1.14 (11400 \AA) and NaI2.21 (22100 \AA). The equivalent widths (EWs) of NaD were first studied by \citet{oconnel.1976ApJ...206..370O} and \citet{peterson.1976ApJ...210L.123P}, who reported that NaD was much stronger than expected from Ca and Fe indices in giant early-type galaxies. \citet{spinielloetal.2015ApJ...803...87S} reported that the NaD feature is very sensitive to [Na/Fe] variations, while the NaI index seems to depend mainly on the IMF (e.g. \citealt{vazdekisetal.2012MNRAS.424..157V}). \citet{labarberaetal.2017MNRAS.464.3597L} perform detailed fits to all four Na lines in a number of giant ellipticals, and find that [Na/Fe] needs to be considerably larger than solar and that the IMF slope needs to be dwarf-enhanced (\citealt{smithetal.2015MNRAS.454L..71S}). For dwarf ellipticals, however, there are no indications that the IMF slope is different from that of our Galaxy (\citealt{mentzetal.2016MNRAS.463.2819M}). Here we find that [Na/Fe] is lower than solar, opposite to the behaviour in giant ellipticals. Remarkable is the strong trend between [Na/Fe] and [Fe/H], already mentioned by \citet{mentzetal.2016MNRAS.463.2819M}, when joining the Fornax dwarf with our dwarf ellipticals and the giant ellipticals. For [Fe/H] values of $\sim -0.8$, very low [Na/Fe] values of $\sim -0.6$ to $-0.8$ are obtained in the Fornax dwarf. This contrasts with the high, positive [Na/Fe] values that are found in giant ellipticals. Such a strong correlation is what one would expect if the Na abundance depends strongly on the neutron excess or, equivalently, the metallicity.
\subsection{Abundance ratios and the formation of dEs} In this paper we find [Mg/Fe] values that are very close to solar (the mean [Mg/Fe] is 0.07), or slightly larger. [Na/Fe], however, is considerably lower than solar (the mean [Na/Fe] is -0.25). Stellar populations that show similar abundance patterns are not common, but one could consider the younger and more metal-rich stellar populations in the center of the Fornax dwarf galaxy, published by \citet{Letarteetal.2010A&A...523A..17L}. The majority of those stars are 1-4 Gyr old, have unusually low [Mg/Fe], [Ca/Fe] and [Na/Fe] compared to the Milky Way stellar populations at the same [Fe/H], and are therefore at the end of their chemical evolution. The difficulty is that, although [Mg/Fe] is approximately solar for these stars, [Ca/Fe] lies considerably below this value. \citet{Letarteetal.2010A&A...523A..17L} hypothesize that this means that a large fraction of Ca and Ti is produced in processes that do not produce much Mg, such as SNe Ia. In this way, the low [Ca/Fe] and [Ti/Fe] could be a consequence of the low metallicity of their progenitors compared to the Milky Way. At the same time, [Na/Fe] is found to be rather low (between -0.9 and -0.4), but correlating strongly with [Ni/Fe] \citep*{nissenandschuster.1997A&A...326..751N,nissenandschuster.2009IAUS..254..103N}. For the dwarf ellipticals in Virgo analyzed in this paper, we conclude, analogously to \citet{Letarteetal.2010A&A...523A..17L}, that the low [Mg/Fe] values (w.r.t. the thick disk of the Milky Way and giant ellipticals) show that the galaxies have undergone a considerable amount of chemical evolution. This means that the galaxies are not uniformly old, but have extended star formation histories, similar to many of the Local Group galaxies. [Na/Fe] is lower than solar, but still higher than in the Fornax dwarf.
This is to be expected, since the metallicities of the dwarf ellipticals in Virgo are a bit larger, resulting in a larger neutron excess and higher Na abundances. For dEs we find that (a) [Mg/Fe] $\sim$ 0 and (b) [Na/Fe] < 0. Result (a) implies that star formation is slow, as in the Milky Way disk. Result (b) is consistent with the same formation mechanism. Just like the stars of \citet{Letarteetal.2010A&A...523A..17L} in the center of the Fornax dwarf galaxy, the stars in the dEs have undergone a considerable amount of enrichment and have an extended star formation history. The dependence of [Na/Fe] on the neutron excess causes [Na/Fe] to be below 0, since for the Virgo dwarfs [Fe/H] is lower than solar ($\sim$ -0.5). The extended star formation history could then also cause considerable Ca enrichment, leading to larger [Ca/Fe] values for the dwarf ellipticals, since they accrete material from the metal-rich cluster environment; the Fornax dwarf, by contrast, accretes more pristine gas in the Local Group, which causes lower [Ca/Fe] ratios. An important clue might be the strong correlation between [Na/Fe] and [Fe/H] when one considers Local Group dwarf galaxies like Fornax and the LMC, dwarf elliptical galaxies, the disk of the Milky Way, and the centers of giant elliptical galaxies. All this could be due to Na yields that depend strongly on metallicity. The abundance of sodium influences the electron pressure, so that the strengths of many other features are affected. For example, \citet*{conroyandvandokkum.2012ApJ...747...69C} show that for massive galaxies an increase in the sodium abundance causes a decrease in the abundance of CaII, which causes a decrease in the EW of CaT. Increasing the sodium abundance can mimic the effects of a more bottom-heavy IMF. For dwarfs, however, we see a different behavior when comparing Ca and Na, although there is no evidence that the IMF is responsible here.
Also, for the LMC both [Na/Fe] and [Ca/Fe] have the same sign, implying that another parameter has to be responsible for the difference between the LMC and the dEs, presumably the star formation history. However, with this considerable ($\sim$ 0.2) difference in [Ca/Fe] between the LMC and the dEs, it is not so obvious that objects like the LMC should be the progenitors of dEs, unless Ca-enrichment by SN Ia in the cluster environment was particularly effective. What is clear is that the abundance ratio pattern in dwarf ellipticals is very different from that in massive Galactic globular clusters, which show enhanced [Na/Fe], a strong Na-O anticorrelation, etc., in many stars. It is of course still possible that a fraction of the stars displays these effects, but that fraction is so small that it cannot be detected in integrated light. This difference probably indicates that star formation timescales in dwarf ellipticals are long, on the order of Gyrs, since these globular clusters must have been formed on very short timescales, given that their ages are so large (\citealt{grattonetal.2001A&A...369...87G}). \section{Conclusions} \begin{itemize} \item In this paper, we determine abundance ratios of a sample of 37 dEs in the Virgo cluster as part of the SMAKCED project. This sample is representative of the sample used for the analysis of the kinematic properties (\citealp{tolobaetal.2014ApJS..215...17T}) and covers all morphological sub-classes found by \citet{liskeretal.2006AJ....132..497L,liskeretal.2006AJ....132.2432L,liskeretal.2007ApJ...660.1186L}. \item We use optical spectroscopy to measure a total of 23 Lick indices in the LIS-5 \AA\ flux calibrated system and apply the MILES models to interpret them. We derive new age and metallicity estimates for these galaxies. Taking advantage of high-resolution spectral data, we are able to calculate the abundance ratios of Na and Mg using the MILES models. \item We find the unusual behaviour that [Na/Fe] is under-abundant w.r.t. solar.
This is opposite to what is found in massive giant elliptical galaxies. We also find that dEs fall on a relatively tight relation between [Na/Fe] and [Fe/H], which we recently presented in \citet{mentzetal.2016MNRAS.463.2819M}, including Local Group dwarf galaxies, the Milky Way and giant elliptical galaxies. From our results, we try to sketch a possible scenario for the evolution of dEs in the Virgo cluster. We find that dEs show disk-like star formation histories, favouring an origin from star-forming spirals or dwarfs. \item Na yields appear to be strongly metallicity-dependent, in agreement with studies of giant ellipticals, probably due to the large dependence on the neutron excess in stars. \item We conclude that dEs have undergone a considerable amount of chemical evolution; they are therefore not uniformly old, but have extended star formation histories, similar to many of the Local Group galaxies. \end{itemize} \section*{Acknowledgements} We thank the anonymous referee for critical comments that helped to improve the paper. We also thank F. La Barbera for kindly providing the new models. The research was supported by The Scientific and Technological Research Council of Turkey (TUBITAK) under project numbers 1059B141401204 and 1649B031406124. RFP, AV and JF-B acknowledge support from grant AYA2016-77237-C3-1-P from the Spanish Ministry of Economy and Competitiveness (MINECO). Paudel acknowledges support from the Samsung Science \& Technology Foundation under Project Number SSTF-BA1501-0. \bibliographystyle{mnras}
\section{Background} \subsection{Paired Comparison Experiments and the Bradley-Terry Model} A paired comparison experiment is a set of binary comparisons between pairs out of a set of $t$ objects. The Bradley-Terry model \citep{Zermelo1929,BRADLEY01121952} assigns to each object $i$ ($i=1,\ldots,t$) a strength parameter $\pi_i$, and defines \begin{equation} \theta_{ij} = \frac{\pi_i}{\pi_i+\pi_j} \end{equation} as the probability that object $i$ will be preferred in any given comparison with object $j$. Note that $\theta_{ji}=1-\theta_{ij}$. If $n_{ij}$ is the number of comparisons between $i$ and $j$, the probability of any particular set of outcomes $D$ in which object $i$ is chosen over $j$ a total of $w_{ij}$ times is \begin{equation} \label{e:likelihood} p(D|\{\theta_{ij}\}) = p(D|\{\pi_i\}) = \prod_{i=1}^{t} \prod_{j=i+1}^{t} \theta_{ij}^{w_{ij}} (1-\theta_{ij})^{n_{ij}-w_{ij}} \end{equation} The model has been used in a number of contexts, ranging from taste tests between different foods to games between chess players. In the context of the present paper, we are interested in sporting competitions, so we will henceforth refer to the objects as ``teams'' and the comparisons as ``games''. $w_{ij}$ is thus the number of games won by team $i$ against team $j$, and $n_{ij}=w_{ij}+w_{ji}$ is the number of games between them. \subsection{Bayesian Approach} A typical problem is to make inferences about the strengths $\{\pi_i\}$, or equivalently the log-strengths $\{\lambda_i\}$, given the results $D$.
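Before turning to priors, the sampling model above is easy to sketch numerically. The following is a minimal illustration (the strengths and win counts are hypothetical, chosen only for the example; teams are indexed from 0 rather than 1):

```python
import math
from itertools import combinations

def theta(pi_i, pi_j):
    # Probability that team i beats team j under the Bradley-Terry model.
    return pi_i / (pi_i + pi_j)

def log_likelihood(pi, w):
    # log p(D | {pi_i}) from the likelihood above, where w[i][j] is the
    # number of games won by team i against team j (n_ij = w_ij + w_ji).
    ll = 0.0
    for i, j in combinations(range(len(pi)), 2):
        th = theta(pi[i], pi[j])
        ll += w[i][j] * math.log(th) + w[j][i] * math.log(1.0 - th)
    return ll

# Three hypothetical teams: team 0 is twice as strong as team 1, which
# is twice as strong as team 2.
pi = [4.0, 2.0, 1.0]
w = [[0, 3, 4],   # w[i][j]: games won by team i against team j
     [1, 0, 2],
     [0, 2, 0]]
print(theta(pi[0], pi[1]))   # theta_01 = 4/(4+2) = 2/3
print(log_likelihood(pi, w))
```

Note that only the ratios of the strengths matter: multiplying all $\pi_i$ by a common constant leaves every $\theta_{ij}$, and hence the likelihood, unchanged.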
Under a Bayesian approach, in terms of the vector $\boldsymbol{\pteam}$ of team strengths $\{\pi_i|i=1,\ldots,t\}$, the posterior probability distribution will be, up to a $\boldsymbol{\pteam}$-independent normalization constant, \begin{equation} f_{\boldsymbol{\Pteam}|D}(\boldsymbol{\pteam}|D,I) \propto p(D|\{\pi_i\}) f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}|I) \end{equation} We are concerned with choices of the prior distribution $f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}|I)$, with a given choice represented symbolically by $I$. It is useful to define $\lambda_i=\ln\pi_i$ and note that \begin{equation} \ln\frac{\theta_{ij}}{1-\theta_{ij}} = \lambda_i-\lambda_j =: \gamma_{ij} \end{equation} and \begin{equation} \theta_{ij} = (1+e^{-\gamma_{ij}})^{-1}, \qquad (1-\theta_{ij}) = (1+e^{\gamma_{ij}})^{-1} \end{equation} Since the parameters are continuous, the probability density functions transform as follows: \begin{equation} \label{e:fltpt} f_{\Lambda_i}(\lambda_i) = e^{\lambda_i} f_{\Pi_i}(e^{\lambda_i}) \end{equation} \begin{equation} \label{e:fptlt} f_{\Pi_i}(\pi_i) = \frac{1}{\pi_i} f_{\Lambda_i}(\ln\pi_i) \end{equation} and likewise \begin{equation} \label{e:flppp} f_{\Gamma_{ij}}(\gamma_{ij}) = (1+e^{-\gamma_{ij}})^{-1}(1+e^{\gamma_{ij}})^{-1} f_{\Theta_{ij}}([1+e^{-\gamma_{ij}}]^{-1}) \end{equation} \begin{equation} \label{e:fpplp} f_{\Theta_{ij}}(\theta_{ij}) = \theta_{ij}^{-1}(1-\theta_{ij})^{-1} \,f_{\Gamma_{ij}}(-\ln[\theta_{ij}^{-1}-1]) \end{equation} Note that the $t$ strengths $\{\pi_i\}$ are only relevant in their use to determine the probabilities $\{\theta_{ij}\}$ (of which $t-1$ are independent), so we consider two probability distributions $f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}|I_1)$ and $f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}|I_2)$ equivalent if they produce the same marginalized distribution for the $\{\theta_{ij}\}$. 
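As a concrete numerical check of \eqref{e:flppp} (a sketch, not part of the derivation): since $(1+e^{-\gamma_{ij}})^{-1}(1+e^{\gamma_{ij}})^{-1}=\theta_{ij}(1-\theta_{ij})$, a uniform density on $\theta_{ij}$ transforms into the standard logistic density on $\gamma_{ij}$.

```python
import math

def gamma_from_theta(th):
    # gamma_ij = ln(theta_ij / (1 - theta_ij))
    return math.log(th / (1.0 - th))

def f_gamma(g, f_theta):
    # f_Gamma(gamma) = (1+e^-gamma)^-1 (1+e^gamma)^-1 f_Theta((1+e^-gamma)^-1)
    th = 1.0 / (1.0 + math.exp(-g))
    return th * (1.0 - th) * f_theta(th)

uniform = lambda th: 1.0          # f_Theta uniform on (0, 1)
g = 0.7
logistic = math.exp(-g) / (1.0 + math.exp(-g)) ** 2
print(abs(f_gamma(g, uniform) - logistic) < 1e-12)   # True
# gamma_from_theta inverts theta(gamma):
print(abs(gamma_from_theta(1.0 / (1.0 + math.exp(-g))) - g) < 1e-12)   # True
```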
\begin{defn} Let $\boldsymbol{\lpair}$ represent a $(t-1)$-dimensional vector of linearly independent combinations of the log-odds-ratios $\{\gamma_{ij}\}$, from which all $t(t-1)$ of them can be constructed according to \begin{equation} \label{e:lptbasis} \gamma_{ij} = \sum_{\alpha=1}^{t-1} C_{ij,\alpha} \gamma_{\alpha} \end{equation} \end{defn} Possible choices are \begin{equation} \gamma_{12}, \gamma_{23}, \ldots, \gamma_{(t-1),t} \end{equation} or \begin{equation} \gamma_{1t}, \gamma_{2t}, \ldots, \gamma_{(t-1),t} \end{equation} or \begin{equation} \frac{1}{\sqrt{2}}\gamma_{12}, \frac{1}{\sqrt{6}}(\gamma_{13}+\gamma_{23}), \ldots, \frac{1}{\sqrt{t(t-1)}}[\gamma_{12}+\gamma_{23} +\ldots-(t-1)\gamma_{(t-1),t}] \end{equation} The advantage of working with the $\{\gamma_{ij}\}$ is that we need not specify which basis we are using for $\boldsymbol{\lpair}$, because the Jacobian determinants for transformations between different bases are constant. \begin{defn} Two probability distributions are equivalent, $f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}|I_1)\cong f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}|I_2)$ (or $f_{\boldsymbol{\Lteam}}(\boldsymbol{\lteam}|I_1)\cong f_{\boldsymbol{\Lteam}}(\boldsymbol{\lteam}|I_2)$), if and only if $f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I_1) = f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I_2)$.
\end{defn} \begin{lem} \label{l:scaling} A sufficient condition for $f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}|I_1)\cong f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}|I_2)$ is that there exists a scalar function $C(\boldsymbol{\pteam})$ such that the transformation \begin{equation} \boldsymbol{\pteam}' = \boldsymbol{\pteam} C(\boldsymbol{\pteam}) \end{equation} converts the probability density $f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}|I_1)$ into $f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}|I_2)$, i.e., \begin{equation} f_{\boldsymbol{\Pteam}'}(\boldsymbol{\pteam}'|I_1) = \frac{f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}|I_1)} { \det\left\{\frac{\partial\pi'_i}{\partial\pi_j}\right\} } = f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}'|I_2) \end{equation} \end{lem} \begin{proof} The transformation leaves $\theta_{ij}$ unchanged: \begin{equation} \theta'_{ij} = \frac{\pi'_i}{\pi'_i+\pi'_j} = \frac{\pi_iC(\boldsymbol{\pteam})}{\pi_iC(\boldsymbol{\pteam})+\pi_jC(\boldsymbol{\pteam})} = \frac{\pi_i}{\pi_i+\pi_j} = \theta_{ij} \end{equation} and therefore $\gamma'_{ij}=\gamma_{ij}$, the transformation $\boldsymbol{\lpair}\rightarrow\boldsymbol{\lpair}'$ leaves $f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I)$ unchanged, and \begin{equation} f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I_2) = f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}'|I_2) = f_{\boldsymbol{\Lpair}'}(\boldsymbol{\lpair}'|I_1) = f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I_1) \end{equation} \end{proof} \subsection{Motivation and Desiderata} \label{s:motivation} The primary interest motivating this work is the design of rating systems to evaluate teams based on the outcomes of games between them. To that end, prior information which distinguishes between the teams is inappropriate, as it would be considered ``unfair'' to build such information into the rating system. We are interested in rating systems which obey as many as possible of the following desiderata. \begin{des} Invariance under interchange of teams.
\label{d:interchange} A transformation $\boldsymbol{\pteam}\rightarrow\boldsymbol{\pteam}'$ which, for some $i$ and $j$, obeys $\pi'_i=\pi_j$, $\pi'_j=\pi_i$, $\pi'_k=\pi_k$ for all other $k$, should transform $f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}|I_1)$ into an equivalent distribution $f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}|I_2)\cong f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}|I_1)$. \end{des} \begin{des} Invariance under interchange of winning and losing. \label{d:reflection} The transformation $\forall i: \pi_i\rightarrow\pi_i'=\frac{1}{\pi_i}$, which corresponds to $\boldsymbol{\lteam}'=-\boldsymbol{\lteam}$, $\forall i,j:\theta'_{ij}=1-\theta_{ij}$, and $\boldsymbol{\lpair}'=-\boldsymbol{\lpair}$, should transform $f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}|I_1)$ into an equivalent distribution $f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}|I_2)\cong f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}|I_1)$. A distribution obeying this desideratum will satisfy $f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I_1)=f_{\boldsymbol{\Lpair}}(-\boldsymbol{\lpair}|I_1)$. \end{des} \begin{des} Normalizability. \label{d:proper} $f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I)$ should be a proper prior, which can be normalized to $\int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty} d^{t-1}\gamma\,f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I)=1$. \end{des} \begin{des} Invariance under elimination of teams. \label{d:elim} This desideratum assumes that a given principle can be used to generate prior distributions for any number of teams. Let $t>2$, and define $\boldsymbol{\pteam}$ to be the vector of $t$ strengths, and $\boldsymbol{\pteam}'$ to be the $(t-1)$-element vector with $\pi'_i=\pi_i$ for $i=1,\ldots,t-1$.
Suppose the principle generates priors $f_{\boldsymbol{\Pteam}'}(\boldsymbol{\pteam}'|I_{t-1})$ when there are $t-1$ teams and $f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}|I_{t}) = f_{\boldsymbol{\Pteam}',\Pi_{t}}(\boldsymbol{\pteam}',\pi_{t}|I_{t})$ when there are $t$. The prior associated with $I_{t-1}$ should be equivalent to that associated with $I_{t}$, marginalized over $\pi_{t}$, i.e. \begin{equation} f_{\boldsymbol{\Pteam}'}(\boldsymbol{\pteam}'|I_{t-1}) \cong \int_0^{\infty}d\pi_{t} \,f_{\boldsymbol{\Pteam}',\Pi_{t}}(\boldsymbol{\pteam}',\pi_{t}|I_{t}) = \int_0^{\infty}d\pi_{t} \,f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}|I_{t}) \end{equation} \end{des} \subsection{Comparison via Prior Predictive Distribution} A convenient way to quantify the effects of a prior, and thus to compare different priors, is to construct the prior predictive distribution \begin{equation} \label{e:priorpred} p(D|\mathbf{\nnum},I) = \int_{0}^{\infty}\cdots \int_{0}^{\infty} d^{t}\pi\, p(D|\boldsymbol{\pteam},\mathbf{\nnum})\,f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}|I) = \int_{-\infty}^{\infty}\cdots \int_{-\infty}^{\infty} d^{t-1}\gamma\, p(D|\boldsymbol{\lpair},\mathbf{\nnum})\,f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I) \end{equation} where the second equality holds because the $t-1$ log-rating differences in $\boldsymbol{\lpair}$ determine the sampling distribution $p(D|\boldsymbol{\pteam},\mathbf{\nnum})$. \begin{lem} \label{l:pred} For any $\mathbf{\nnum}$, $p(D|\mathbf{\nnum},I)=p(D|\mathbf{\nnum},I')$ is a necessary condition for $f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}|I)\cong f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}|I')$. \end{lem} \begin{proof} Assume $f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}|I)\cong f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}|I')$. Then, by definition $f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I)= f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I')$. 
By \eqref{e:priorpred}, $p(D|\mathbf{\nnum},I)$ can be constructed from $f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I)$ and $p(D|\boldsymbol{\lpair},\mathbf{\nnum})$, and therefore $f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I)= f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I')$ implies $p(D|\mathbf{\nnum},I)=p(D|\mathbf{\nnum},I')$. \end{proof} We can use the prior predictive distribution to check the desiderata. \begin{lem} \label{l:predinterchange} Given a $\mathbf{\nnum}$ where $n_{k\ell}=n$ for all $k\ne\ell$, desideratum~\ref{d:interchange} implies $p(D|\mathbf{\nnum},I)=p(D'|\mathbf{\nnum},I)$ where $D$ and $D'$ differ only by the interchange of a pair of teams $i$ and $j$, i.e., $w'_{ij}=w_{ji}$, $w'_{ik}=w_{jk}$, and $w'_{jk}=w_{ik}$, for all $k\not\in\{i,j\}$. \end{lem} \begin{proof} Defining $n'_{k\ell}=w'_{k\ell}+w'_{\ell k}$ we see that for all $k\ne\ell$, $n'_{k\ell}=n=n_{k\ell}$, i.e., $\mathbf{\nnum}'=\mathbf{\nnum}$. If we define $\boldsymbol{\pteam}\rightarrow\boldsymbol{\pteam}'$ as in the statement of desideratum~\ref{d:interchange} ($\pi'_i=\pi_j$, $\pi'_j=\pi_i$, $\pi'_k=\pi_k$ for all $k\ne i,j$), we can see $p(D'|\boldsymbol{\lpair}',\mathbf{\nnum},I) = p(D'|\boldsymbol{\pteam}',\mathbf{\nnum},I) = p(D'|\boldsymbol{\pteam}',\mathbf{\nnum}',I) = p(D|\boldsymbol{\pteam},\mathbf{\nnum},I) = p(D|\boldsymbol{\lpair},\mathbf{\nnum},I)$ where as usual $\gamma'_{ij}=\ln(\pi'_i/\pi'_j)$. 
If desideratum~\ref{d:interchange} holds, we have $f_{\boldsymbol{\Lpair}'}(\boldsymbol{\lpair}'|I) = f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I)$ and so \begin{equation} \begin{split} p(D'|\mathbf{\nnum},I) &= \int_{-\infty}^{\infty}\cdots \int_{-\infty}^{\infty} d^{t-1}\gamma'\, p(D'|\boldsymbol{\lpair}',\mathbf{\nnum})\,f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}'|I) \\ &= \int_{-\infty}^{\infty}\cdots \int_{-\infty}^{\infty} d^{t-1}\gamma'\, p(D|\boldsymbol{\lpair},\mathbf{\nnum})\,f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I) \\ &= \int_{-\infty}^{\infty}\cdots \int_{-\infty}^{\infty} d^{t-1}\gamma\, p(D|\boldsymbol{\lpair},\mathbf{\nnum})\,f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I) = p(D|\mathbf{\nnum},I) \end{split} \end{equation} because the change of variables $\boldsymbol{\lpair}\rightarrow\boldsymbol{\lpair}'$ has unit Jacobian determinant and leaves the range of the integration variables unchanged. \end{proof} \begin{lem} For any $\mathbf{\nnum}$, desideratum~\ref{d:reflection} implies $p(D|\mathbf{\nnum},I)=p(D'|\mathbf{\nnum},I)$ where $w'_{ij}=n_{ij}-w_{ij}$. \end{lem} \begin{proof} Since $w'_{ij}=n_{ij}-w_{ij}=w_{ji}$, $n'_{ij} = w'_{ij}+w'_{ji} = w_{ji}+w_{ij} = n_{ij}$. The rest of the proof proceeds as with lemma~\ref{l:predinterchange}, but with the appropriate definitions of $D\rightarrow D'$ and $\boldsymbol{\pteam}\rightarrow\boldsymbol{\pteam}'$. \end{proof} \begin{lem} For any $\mathbf{\nnum}$, desideratum~\ref{d:proper} implies $p(D|\mathbf{\nnum},I)>0$ if $D$ is a set of results consistent with $\mathbf{\nnum}$. \end{lem} \begin{proof} Since $p(D|\boldsymbol{\ppair},I)>0$ for all $\boldsymbol{\ppair}$ with $0<\theta_{ij}<1$, and $p(D|\boldsymbol{\lpair},I)=p(D|\boldsymbol{\ppair},I)$, we have $p(D|\boldsymbol{\lpair},I)>0$ for all $\boldsymbol{\lpair}$ with $-\infty<\gamma_{ij}<\infty$. 
Since $f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I)\ge 0$ and $\int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty} d^{t-1}\gamma\,f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I)=1$, the prior density must be positive on a set of positive measure, and since $p(D|\boldsymbol{\lpair},\mathbf{\nnum})>0$ everywhere, we must have \begin{equation} p(D|\mathbf{\nnum},I) = \int_{-\infty}^{\infty}\cdots \int_{-\infty}^{\infty} d^{t-1}\gamma\, p(D|\boldsymbol{\lpair},\mathbf{\nnum})\,f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I) > 0 \end{equation} \end{proof} \begin{lem}\label{l:predelim} Given $t$ teams and a $\mathbf{\nnum}$ with $n_{it}=0$, desideratum~\ref{d:elim} implies $p(D|\mathbf{\nnum},I_{t})=p(D|\mathbf{\nnum},I_{t-1})$ if $D$ is a set of results consistent with $\mathbf{\nnum}$. \end{lem} \begin{proof} If $n_{it}=0$, $\pi_{t}$ is irrelevant to the sampling distribution, and $p(D|\boldsymbol{\pteam},\mathbf{\nnum})=p(D|\boldsymbol{\pteam}',\mathbf{\nnum})$ where $\boldsymbol{\pteam}'$ is the $(t-1)$-element vector with $\pi'_i=\pi_i$ for $i=1,\ldots,t-1$, as in the statement of desideratum~\ref{d:elim}. Thus \begin{equation} \begin{split} p(D|\mathbf{\nnum},I_{t}) &= \int_{0}^{\infty}\cdots \int_{0}^{\infty} d^{t}\pi\, p(D|\boldsymbol{\pteam},\mathbf{\nnum})\,f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}|I_{t}) \\ &= \int_{0}^{\infty}\cdots \int_{0}^{\infty} d^{t-1}\pi'\, p(D|\boldsymbol{\pteam}',\mathbf{\nnum}) \int_{0}^{\infty} d\pi_{t}\,f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}|I_{t}) \\ &= \int_{0}^{\infty}\cdots \int_{0}^{\infty} d^{t-1}\pi'\, p(D|\boldsymbol{\pteam}',\mathbf{\nnum}) f_{\boldsymbol{\Pteam}'}(\boldsymbol{\pteam}'|I_{t}) \end{split} \end{equation} Desideratum~\ref{d:elim} says that $f_{\boldsymbol{\Pteam}'}(\boldsymbol{\pteam}'|I_{t})\cong f_{\boldsymbol{\Pteam}'}(\boldsymbol{\pteam}'|I_{t-1})$, and lemma~\ref{l:pred} states that this implies $p(D|\mathbf{\nnum},I_{t})=p(D|\mathbf{\nnum},I_{t-1})$.
\end{proof} \section{Choice of Prior Distribution} \subsection{General Considerations for Special Cases} \subsubsection{Two Teams} \label{s:2teams} When $t=2$, there is only one independent probability $\theta_{12}=\frac{\pi_1}{\pi_1+\pi_2}$, so any distribution $f_{\boldsymbol{\Pteam}}(\pi_1,\pi_2)$ reduces to a distribution for $\gamma_{12}$ (or, equivalently, for $\theta_{12}$) via the marginalization \begin{equation} f_{\Gamma_{12}}(\gamma_{12}) = \int_{-\infty}^{\infty} d\lambda_2\,f_{\boldsymbol{\Lteam}}(\gamma_{12}+\lambda_2,\lambda_2) \end{equation} where the transformation \eqref{e:fltpt} means \begin{equation} f_{\boldsymbol{\Lteam}}(\lambda_1,\lambda_2) = e^{\lambda_1+\lambda_2} f_{\boldsymbol{\Pteam}}(e^{\lambda_1},e^{\lambda_2}) \end{equation} and \eqref{e:fpplp} means \begin{equation} f_{\Theta_{12}}(\theta_{12}) = \theta_{12}^{-1}(1-\theta_{12})^{-1}\,f_{\Gamma_{12}}(-\ln[\theta_{12}^{-1}-1]) \end{equation} For the case of two teams, desiderata \ref{d:interchange} and \ref{d:reflection} are equivalent, as both transformations reduce to $\theta_{12}\rightarrow 1-\theta_{12}$, or equivalently $\gamma_{12}\rightarrow -\gamma_{12}$. They will be satisfied if and only if $f_{\Gamma_{12}}(\gamma_{12})$ is an even function, or equivalently if $f_{\Theta_{12}}(\theta_{12})=f_{\Theta_{12}}(1-\theta_{12})$.
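The equivalence of the two symmetry statements is easy to check numerically. The following sketch (using arbitrary beta densities purely as test functions, not as proposed priors) verifies that the transformed density $f_{\Gamma_{12}}(\gamma_{12})=\theta_{12}(1-\theta_{12})\,f_{\Theta_{12}}(\theta_{12})$ is even exactly when $f_{\Theta_{12}}$ is symmetric about $\theta_{12}=\frac{1}{2}$:

```python
import math

def theta_of_gamma(g):
    # theta_12 = 1 / (1 + e^{-gamma_12})
    return 1.0 / (1.0 + math.exp(-g))

def f_gamma(g, f_theta):
    # f_Gamma(gamma) = theta (1 - theta) f_Theta(theta); the Jacobian of
    # theta = 1/(1+e^{-gamma}) is theta (1 - theta)
    th = theta_of_gamma(g)
    return th * (1.0 - th) * f_theta(th)

# A symmetric test density on (0, 1): Beta(3, 3), f(th) = f(1 - th)
f_sym = lambda th: 30.0 * th**2 * (1.0 - th)**2
# An asymmetric one: Beta(2, 1)
f_asym = lambda th: 2.0 * th

for g in [0.3, 1.0, 2.5]:
    assert abs(f_gamma(g, f_sym) - f_gamma(-g, f_sym)) < 1e-12
assert abs(f_gamma(1.0, f_asym) - f_gamma(-1.0, f_asym)) > 1e-3
```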
Desideratum~\ref{d:proper} will be satisfied if and only if \begin{equation} \int_0^1 d\theta_{12}\,f_{\Theta_{12}}(\theta_{12}) < \infty \end{equation} or equivalently \begin{equation} \int_{-\infty}^{\infty} d\gamma_{12}\,f_{\Gamma_{12}}(\gamma_{12}) < \infty \end{equation} Suppose $f_{\Theta_{12}}(\theta_{12})$ belongs to the family of beta distributions (which is the conjugate prior family for the likelihood \eqref{e:likelihood}), \begin{equation} \label{e:betadist} f_{\Theta_{12}}(\theta_{12}) \propto \theta_{12}^{\alpha-1}(1-\theta_{12})^{\beta-1} \end{equation} then $f_{\Gamma_{12}}(\gamma_{12})$ is a generalized logistic distribution of Type IV \citep{Prentice:1976,Nassar:2012} \begin{equation} \label{e:betalog} f_{\Gamma_{12}}(\gamma_{12}) \propto (1+e^{-\gamma_{12}})^{-\alpha}(1+e^{\gamma_{12}})^{-\beta} \end{equation} Included in this family are \begin{enumerate} \item The Haldane prior \citep{Haldane1932,Jeffreys1939} $\alpha=\beta=0$, which is uniform in $\gamma_{12}$. This improper prior corresponds to ``total ignorance''. \item The Jeffreys prior \citep{Jeffreys1939,Jeffreys:1946} $\alpha=\beta=\frac{1}{2}$. \item The Bayes-Laplace prior $\alpha=\beta=1$, which is uniform in $\theta_{12}$. This is also the maximum entropy prior, if we assume a measure uniform in $\theta_{12}$. \end{enumerate} For the beta family, desiderata \ref{d:interchange} and \ref{d:reflection} will be satisfied if $\alpha=\beta$. Desideratum \ref{d:proper} will be satisfied if $\alpha,\beta>0$.
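The beta-to-logistic correspondence \eqref{e:betadist}--\eqref{e:betalog} amounts to the exact identity $\theta_{12}^{\alpha}(1-\theta_{12})^{\beta}=(1+e^{-\gamma_{12}})^{-\alpha}(1+e^{\gamma_{12}})^{-\beta}$, which the following sketch checks numerically; the common normalization $1/B(\alpha,\beta)$ is included for concreteness:

```python
import math

def beta_fn(a, b):
    # Euler beta function B(a, b) via log-gammas
    return math.exp(math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b))

def f_gamma_from_beta(g, a, b):
    # Transform the Beta(a, b) density on theta_12 to gamma_12 = logit(theta_12);
    # the Jacobian d(theta)/d(gamma) is theta (1 - theta)
    th = 1.0 / (1.0 + math.exp(-g))
    return th ** (a - 1) * (1 - th) ** (b - 1) / beta_fn(a, b) * th * (1 - th)

def type_iv_logistic(g, a, b):
    # Generalized logistic (Type IV) density, normalized by 1/B(a, b)
    return (1 + math.exp(-g)) ** (-a) * (1 + math.exp(g)) ** (-b) / beta_fn(a, b)

for a, b in [(0.5, 0.5), (1.0, 1.0), (2.0, 3.0)]:
    for g in [-2.0, 0.0, 1.5]:
        assert abs(f_gamma_from_beta(g, a, b) - type_iv_logistic(g, a, b)) < 1e-12
```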
With $t=2$, the prior predictive probability for a set of results which include $w_{12}$ wins for team $1$ and $w_{21}=n_{12}-w_{12}$ wins for team $2$ will be \begin{equation} p(D|n_{12}) = \int_{0}^{1} d\theta_{12}\,\theta_{12}^{w_{12}} (1-\theta_{12})^{w_{21}}\,f_{\Theta_{12}}(\theta_{12}) \end{equation} If $f_{\Theta_{12}}(\theta_{12})$ is in the beta family \eqref{e:betadist}, it will be \begin{equation} \label{e:predbeta} p(D|n_{12},I_{\alpha,\beta}) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} \frac{\Gamma(\alpha+w_{12})\Gamma(\beta+w_{21})} {\Gamma(\alpha+\beta+n_{12})} \end{equation} In particular, for the Bayes-Laplace prior, \begin{equation} p(D|n_{12},I_{1,1}) = \frac{\Gamma(1+w_{12})\Gamma(1+w_{21})} {\Gamma(2+n_{12})} = \left((n_{12}+1)\binom{n_{12}}{w_{12}}\right)^{-1} \end{equation} For the Jeffreys prior, \begin{equation} \label{e:predjeff} p(D|n_{12},I_{1/2,1/2}) = \frac{\Gamma(\frac{1}{2}+w_{12})\Gamma(\frac{1}{2}+w_{21})} {\pi\Gamma(1+n_{12})} = \frac{(2w_{12}-1)!!(2w_{21}-1)!!}{2^{n_{12}}n_{12}!} \end{equation} Viewing the Haldane prior as a limiting case, $p(D|n_{12},I_{0,0})=0$ unless $w_{12}=n_{12}$ or $w_{21}=n_{12}$. The prior predictive probabilities for $w_{12}=n_{12}$ and $w_{21}=n_{12}$ depend on the order in which the limits $\alpha\rightarrow 0$ and $\beta\rightarrow 0$ are taken. \subsubsection{Three Teams} In the case $t=3$, the $\{\theta_{ij}\}$ are related by $\theta_{ji}=1-\theta_{ij}$ as well as \begin{equation} \label{e:pconstraint} \theta_{13}^{-1} - 1 = (\theta_{12}^{-1} - 1)(\theta_{23}^{-1} - 1) \end{equation} Although each $\theta_{ij}$ is confined to the finite range $[0,1]$, the surface defined by \eqref{e:pconstraint} is curved, which makes it difficult to display the two-dimensional probability distribution $f_{\boldsymbol{\Ppair}}(\boldsymbol{\ppair})$ while preserving its intuitive interpretation.
On the other hand, in terms of the $\{\gamma_{ij}\}$ the constraints are $\gamma_{ji}=-\gamma_{ij}$ and \begin{equation} \gamma_{13} = \gamma_{12} + \gamma_{23} \end{equation} An especially convenient set of {co\"{o}rdinate}s for displaying probability distributions $f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair})$ is \begin{equation} \label{e:xycoords} x = \frac{1}{\sqrt{3}}(\gamma_{12}+\gamma_{13}),\qquad y = \gamma_{23}\ , \end{equation} which can be inverted to give \begin{equation} \label{e:gammaxy} \gamma_{12} = \frac{\sqrt{3}}{2}\,x - \frac{1}{2}\,y ,\qquad \gamma_{23} = y ,\qquad \gamma_{13} = \frac{\sqrt{3}}{2}\,x + \frac{1}{2}\,y \end{equation} These two-dimensional {co\"{o}rdinate}s on the space of log-odds-ratios $\boldsymbol{\lpair}$ are also two of the three {co\"{o}rdinate}s on the space of strengths $\boldsymbol{\lteam}$, according to \begin{equation} \label{e:xylteam} x = \frac{1}{\sqrt{3}} (2\lambda_1 - \lambda_2 - \lambda_3) ,\qquad y = \lambda_2 - \lambda_3 ,\qquad z = \sqrt{\frac{2}{3}}\, ( \lambda_1 + \lambda_2 + \lambda_3 ) \end{equation} In these {co\"{o}rdinate}s, to go from a distribution $f_{\boldsymbol{\Lteam}}(\boldsymbol{\lteam})$ to $f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair})$ one simply converts $f_{\boldsymbol{\Lteam}}(\boldsymbol{\lteam})$ into $f_{XYZ}(x,y,z)$ (which involves a constant Jacobian determinant) and then marginalizes over $z$. An example of such a plot is shown in \fref{f:t3ment}. \begin{figure}[t!] \centering \includegraphics[width=0.8\columnwidth]{t3ment.pdf} \caption{Contour plot of the prior probability distribution $f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I_{\text{S}3})$ arising from the Maximum entropy prescription of \sref{s:maxent} when $t=3$. The {co\"{o}rdinate}s for the plot are the $x$ and $y$ defined in \eqref{e:xycoords}, which determine all of the $\{\gamma_{ij}\}$ and thus the predicted probabilities. 
The orthogonal direction $z=\sqrt{\frac{2}{3}}\, (\lambda_1 + \lambda_2 + \lambda_3)$ defined in \eqref{e:xylteam} is irrelevant to the predictions of the model.} \label{f:t3ment} \end{figure} \subsection{Evaluation of Prior Distributions} We now consider several families of prior distributions which have been proposed, and evaluate them according to the desiderata in \sref{s:motivation}. \subsubsection{Haldane Prior} Perhaps the simplest prior that can be chosen is the improper prior \begin{equation} f_{\boldsymbol{\Lteam}}(\boldsymbol{\lteam}|I_0)=\text{constant}\ , \end{equation} uniform in all of the log-strengths, which is the generalization of the Haldane prior considered in \sref{s:2teams}. Then the marginalized prior probability distribution for the log-odds-ratios is $f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I_0)=\text{constant}$. This prior obviously satisfies desiderata \ref{d:interchange} and \ref{d:reflection}, as well as desideratum~\ref{d:elim}. Of course, since the prior $f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I_0)$ is improper, it violates desideratum \ref{d:proper}. The mode of the posterior $f_{\boldsymbol{\Lteam}|D}(\boldsymbol{\lteam}|D,I_0)$ will be the maximum likelihood solution, and the posterior will be normalizable under the conditions given by \citet{Ford:1957} for the existence of the maximum likelihood solution. Note that the improper prior $f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}|I_0')=\text{constant}$ produces a different prior on the individual log-strengths, \begin{equation} f_{\boldsymbol{\Lteam}}(\boldsymbol{\lteam}|I_0') \propto \exp\left(\sum_{i=1}^{t} \lambda_i\right) \end{equation} but the two are equivalent, $f_{\boldsymbol{\Lteam}}(\boldsymbol{\lteam}|I_0')\cong f_{\boldsymbol{\Lteam}}(\boldsymbol{\lteam}|I_0)$, because $f_{\boldsymbol{\Lteam}}(\boldsymbol{\lteam}|I_0')$ depends only on the sum of the log-strengths.
\subsubsection{Maximum Entropy} \label{s:maxent} The next prior of interest is one which maximizes the Shannon entropy. For the case $t=2$, we defined the entropy as \begin{equation} S_2 = - \int_0^1 d\theta_{12}\, f_{\Theta_{12}}(\theta_{12}) \ln f_{\Theta_{12}}(\theta_{12}) \end{equation} which made the maximum entropy prior $f_{\Theta_{12}}(\theta_{12}|I_{\text{S}2})=\text{const}$. In order for the entropy of a continuous distribution $f(x)$ to be invariant, it needs to be defined with a measure $\mu(x)$ which transforms as a density under reparametrization. Thus we can write \begin{equation} S_2 = - \int_0^1 d\theta_{12}\, f_{\Theta_{12}}(\theta_{12}) \ln \frac{f_{\Theta_{12}}(\theta_{12})}{\mu_{\Theta_{12}}(\theta_{12})} = - \int_0^1 d\gamma_{12}\, f_{\Gamma_{12}}(\gamma_{12}) \ln \frac{f_{\Gamma_{12}}(\gamma_{12})}{\mu_{\Gamma_{12}}(\gamma_{12})} \end{equation} where we have assumed that $\mu_{\Theta_{12}}(\theta_{12})=\text{constant}$ and thus, using \eqref{e:flppp} to transform the measure, \begin{equation} \mu_{\Gamma_{12}}(\gamma_{12}) \propto (1+e^{-\gamma_{12}})^{-1}(1+e^{\gamma_{12}})^{-1} \end{equation} If we maximize the entropy of a continuous distribution with normalization as the only constraint, the probability density is proportional to the measure, so \begin{equation} f_{\Gamma_{12}}(\gamma_{12}|I_{\text{S}2}) \propto (1+e^{-\gamma_{12}})^{-1}(1+e^{\gamma_{12}})^{-1} \end{equation} which is indeed of the form \eqref{e:betalog} with $\alpha=\beta=1$. 
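The transformed measure is just the density of the standard logistic distribution, i.e., the derivative of $\theta_{12}=(1+e^{-\gamma_{12}})^{-1}$ with respect to $\gamma_{12}$; a quick numerical sketch:

```python
import math

def mu_gamma(g):
    # The transformed measure: (1+e^{-g})^{-1} (1+e^{g})^{-1}
    return 1.0 / ((1.0 + math.exp(-g)) * (1.0 + math.exp(g)))

def sigma(g):
    # Logistic CDF, i.e. theta_12 as a function of gamma_12
    return 1.0 / (1.0 + math.exp(-g))

# mu_gamma is d(theta)/d(gamma): check against a central finite difference
h = 1e-6
for g in [-3.0, -0.5, 0.0, 1.0, 4.0]:
    fd = (sigma(g + h) - sigma(g - h)) / (2 * h)
    assert abs(fd - mu_gamma(g)) < 1e-8

# ... and it integrates to 1 (trapezoidal rule on a wide grid)
xs = [-40.0 + 0.01 * k for k in range(8001)]
total = sum(0.01 * 0.5 * (mu_gamma(x) + mu_gamma(x + 0.01)) for x in xs[:-1])
assert abs(total - 1.0) < 1e-4
```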
For general $t$, we could define by analogy a measure uniform in the $\{\theta_{ij}\}$, $\mu_{\boldsymbol{\Ppair}}(\boldsymbol{\ppair})=\text{constant}$, and then maximize the entropy \begin{equation} S_{t} = - \int_0^1\cdots\int_0^1 d^{t(t-1)/2}\theta\, f_{\boldsymbol{\Ppair}}(\boldsymbol{\ppair})\ln f_{\boldsymbol{\Ppair}}(\boldsymbol{\ppair}) \end{equation} subject to the constraints that the probability density vanishes unless the arguments satisfy \begin{equation} \theta^{-1}_{ij}-1 = (\theta^{-1}_{ik}-1)(\theta^{-1}_{kj}-1)\qquad i=1,\ldots,t;\ j=i+1,\ldots,t; k=i+1,\ldots,j-1 \end{equation} It is equivalent, and more straightforward, to confine the distribution to the constraint surface by writing it in terms of the $t-1$ unique $\gamma_\alpha$ parameters: \begin{equation} S_{t}' = - \int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty} d^{t-1}\gamma\, f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair})\ln \frac{f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair})}{\mu_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair})} \end{equation} where the measure is \begin{equation} \label{e:meast} \mu_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}) \propto \prod_{i=1}^{t}\prod_{j=i+1}^{t} (1+e^{-\gamma_{ij}})^{-1}(1+e^{\gamma_{ij}})^{-1} \end{equation} As before, the maximum entropy distribution is $f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I_{\text{S}t})\propto\mu_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair})$, or \begin{equation} \label{e:priorme} f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I_{\text{S}t}) \propto \prod_{i=1}^{t}\prod_{j=i+1}^{t} \left[ 1 + \exp \left( -\sum_{\alpha=1}^{t-1} C_{ij,\alpha} \gamma_\alpha \right) \right]^{-1} \left[ 1 + \exp \left( \sum_{\beta=1}^{t-1} C_{ij,\beta} \gamma_\beta \right) \right]^{-1} \end{equation} We can see from the form of \eqref{e:meast} that desiderata \ref{d:interchange} and \ref{d:reflection} are satisfied.
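Both symmetries can be confirmed numerically for $t=3$, where \eqref{e:priorme} is a product of three even factors in $\gamma_{12}$, $\gamma_{13}=\gamma_{12}+\gamma_{23}$, and $\gamma_{23}$; in the independent coordinates, interchanging teams $1$ and $2$ corresponds to $(\gamma_{12},\gamma_{23})\rightarrow(-\gamma_{12},\gamma_{12}+\gamma_{23})$:

```python
import math

def h(g):
    # One pairwise factor of the measure: (1+e^{-g})^{-1} (1+e^{g})^{-1}
    return 1.0 / ((1.0 + math.exp(-g)) * (1.0 + math.exp(g)))

def f_me3(g12, g23):
    # Unnormalized t = 3 maximum entropy prior; gamma_13 = gamma_12 + gamma_23
    return h(g12) * h(g12 + g23) * h(g23)

pts = [(0.7, -1.3), (2.0, 0.4), (-0.9, 2.2)]
for g12, g23 in pts:
    # win-loss inversion: all gamma -> -gamma
    assert abs(f_me3(g12, g23) - f_me3(-g12, -g23)) < 1e-12
    # interchange of teams 1 and 2: gamma'_12 = -gamma_12, gamma'_23 = gamma_13
    assert abs(f_me3(g12, g23) - f_me3(-g12, g12 + g23)) < 1e-12
```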
It is also easy to see that $f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I_{\text{S}t})$ is exponentially suppressed as any linear combination of the $\{\gamma_\alpha\}$ goes to infinity, and therefore satisfies desideratum~\ref{d:proper}. For example, as $\gamma_\alpha\rightarrow\infty$, \begin{equation} f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I_{\text{S}t}) \rightarrow \prod_{i=1}^{t}\prod_{j=i+1}^{t} e^{-\abs{C_{ij,\alpha}}\gamma_\alpha} = \exp \left( -\gamma_\alpha\sum_{i=1}^{t}\sum_{j=i+1}^{t} \abs{C_{ij,\alpha}} \right) \end{equation} However, we can see that desideratum~\ref{d:elim} is \emph{not} satisfied by considering the case $t=3$ and showing that the marginal distribution \begin{equation} f_{\Gamma_{23}}(\gamma_{23}|I_{\text{S}3})\ne f_{\Gamma_{23}}(\gamma_{23}|I_{\text{S}2}) \end{equation} Explicitly, using the {co\"{o}rdinate}s \eqref{e:xycoords}, in which $y=\gamma_{23}$, \begin{equation} \begin{split} f_{\boldsymbol{\Lpair}}(x,y|I_{\text{S}3}) &\propto (1+e^{\frac{\sqrt{3}}{2}x}e^{-y/2})^{-1} (1+e^{-\frac{\sqrt{3}}{2}x}e^{y/2})^{-1} (1+e^{-y})^{-1}(1+e^{y})^{-1} \\ &\phantom{\propto}\times (1+e^{\frac{\sqrt{3}}{2}x}e^{y/2})^{-1} (1+e^{-\frac{\sqrt{3}}{2}x}e^{-y/2})^{-1} \ , \end{split} \end{equation} which is plotted in \fref{f:t3ment}. The marginalization integral can be done by partial fractions to give \begin{equation} f_{\Gamma_{23}}(\gamma_{23}|I_{\text{S}3}) = \int_{-\infty}^{\infty}f_{\boldsymbol{\Lpair}}(x,\gamma_{23}|I_{\text{S}3})\,dx \propto \frac{e^{\gamma_{23}}[2(1-e^{\gamma_{23}})+\gamma_{23}(1+e^{\gamma_{23}})]} {(e^{\gamma_{23}}-1)^3(1+e^{-\gamma_{23}})(1+e^{\gamma_{23}})} \end{equation} which is manifestly different from \begin{equation} f_{\Gamma_{23}}(\gamma_{23}|I_{\text{S}2}) \propto \frac{1}{(1+e^{-\gamma_{23}})(1+e^{\gamma_{23}})} \end{equation} by more than just a normalization constant.
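This failure can be checked by direct numerical quadrature, without relying on the partial-fraction evaluation; the following sketch confirms both the closed-form marginal (up to normalization) and the fact that its ratio to $f_{\Gamma_{23}}(\gamma_{23}|I_{\text{S}2})$ is not constant:

```python
import math

def h(g):
    # one factor (1+e^{-g})^{-1} (1+e^{g})^{-1} of the measure
    return 1.0 / ((1.0 + math.exp(-g)) * (1.0 + math.exp(g)))

def f3_xy(x, y):
    # t = 3 maximum-entropy prior in the (x, y) coordinates
    g12 = math.sqrt(3.0) / 2.0 * x - y / 2.0
    g13 = math.sqrt(3.0) / 2.0 * x + y / 2.0
    return h(g12) * h(g13) * h(y)

def marginal(y, lo=-30.0, hi=30.0, n=6000):
    # trapezoidal quadrature over x at fixed y = gamma_23
    dx = (hi - lo) / n
    s = 0.5 * (f3_xy(lo, y) + f3_xy(hi, y))
    s += sum(f3_xy(lo + k * dx, y) for k in range(1, n))
    return s * dx

def closed_form(y):
    # the partial-fractions result, up to normalization (singular at y = 0)
    ey = math.exp(y)
    return (ey * (2.0 * (1.0 - ey) + y * (1.0 + ey))
            / ((ey - 1.0) ** 3 * (1.0 + math.exp(-y)) * (1.0 + ey)))

# the quadrature agrees with the closed form up to one overall constant ...
c = marginal(1.0) / closed_form(1.0)
assert abs(marginal(2.0) / closed_form(2.0) / c - 1.0) < 1e-3

# ... but the ratio to f(gamma_23 | I_S2), proportional to h(gamma_23),
# is NOT constant, so desideratum (elim) fails
r = [marginal(y) / h(y) for y in (0.0, 1.0, 2.0)]
assert abs(r[1] / r[0] - 1.0) > 0.05
assert abs(r[2] / r[0] - 1.0) > 0.2
```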
\subsubsection{Jeffreys Prior} \label{s:jeff} The Jeffreys prior construction \citep{Jeffreys:1946} can be carried out using the likelihood \eqref{e:likelihood}, to produce a prior \begin{equation} f_{\boldsymbol{\Lteam}}(\boldsymbol{\lteam}|I_{\text{J}}) \propto \sqrt{\mc{I}(\boldsymbol{\lteam})} \end{equation} where $\mc{I}(\boldsymbol{\lteam})$ is the Fisher information associated with the likelihood. Since the likelihood is written in terms of the $\{\theta_{ij}\}$, or equivalently in terms of the $\{\gamma_{ij}\}$, it is simpler to generate \begin{equation} f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I_{\text{J}}) \propto \sqrt{\mc{I}(\boldsymbol{\lpair})} \end{equation} directly. If we write the $t-1$ independent elements of $\boldsymbol{\lpair}$ as \begin{equation} \tag{\ref{e:lptbasis}} \gamma_{ij} = \sum_{\alpha=1}^{t-1} C_{ij,\alpha} \gamma_{\alpha} \end{equation} we can write \begin{equation} \mc{I}(\boldsymbol{\lpair}) = \det\left\{ -E\left[ \frac{\partial^2\ell(\boldsymbol{\lpair};D)} {\partial\gamma_{\alpha}\partial\gamma_{\beta}} \right] \right\} \end{equation} where \begin{equation} \ell(\boldsymbol{\lpair};D) = \ln p_{D|\boldsymbol{\lpair}}(D|\boldsymbol{\lpair}) = \sum_{i=1}^{t}\sum_{j=i+1}^{t} (w_{ij}\gamma_{ij}-n_{ij}\ln[1+e^{\gamma_{ij}}]) \end{equation} The linear form of \eqref{e:lptbasis} allows us to determine the Fisher information matrix for the $\{\gamma_\alpha\}$ from that for the $\{\gamma_{ij}\}$ as \begin{equation} \frac{\partial^2\ell(\boldsymbol{\lpair};D)} {\partial\gamma_{\alpha}\partial\gamma_{\beta}} = \sum_{i=1}^{t}\sum_{j=i+1}^{t} \sum_{i'=1}^{t}\sum_{j'=i'+1}^{t} C_{ij,\alpha} C_{i'j',\beta} \frac{\partial^2\ell(\boldsymbol{\lpair};D)} {\partial\gamma_{ij}\partial\gamma_{i'j'}} \end{equation} Since the log-likelihood is relatively simple written in terms of the $\{\gamma_{ij}\}$, we can write \begin{equation} \frac{\partial\ell(\boldsymbol{\lpair};D)} {\partial\gamma_{ij}} = w_{ij} -
n_{ij}e^{\gamma_{ij}}(1+e^{\gamma_{ij}})^{-1} = w_{ij} - n_{ij}(1+e^{-\gamma_{ij}})^{-1} \end{equation} and \begin{equation} \frac{\partial^2\ell(\boldsymbol{\lpair};D)} {\partial\gamma_{ij}\partial\gamma_{i'j'}} = - \delta_{ii'}\delta_{jj'} n_{ij}e^{-\gamma_{ij}}(1+e^{-\gamma_{ij}})^{-2} = - \delta_{ii'}\delta_{jj'} n_{ij}(1+e^{-\gamma_{ij}})^{-1}(1+e^{\gamma_{ij}})^{-1} \end{equation} so \begin{equation} \label{e:fisheralpha} - \frac{\partial^2\ell(\boldsymbol{\lpair};D)} {\partial\gamma_{\alpha}\partial\gamma_{\beta}} = \sum_{i=1}^{t}\sum_{j=i+1}^{t} C_{ij,\alpha} C_{ij,\beta} n_{ij}(1+e^{-\gamma_{ij}})^{-1}(1+e^{\gamma_{ij}})^{-1} \end{equation} We can see by inspection of \eqref{e:fisheralpha} that the Jeffreys prior always satisfies desideratum~\ref{d:reflection}. We can verify that in the case $t=2$, for which there is only one independent $\gamma_{\alpha}$, the Jeffreys prior becomes \begin{equation} f_{\boldsymbol{\Lpair}}(\gamma_{12}|I_{\text{J}2}) \propto (1+e^{-\gamma_{12}})^{-1/2}(1+e^{\gamma_{12}})^{-1/2} \end{equation} which is of the form \eqref{e:betalog} with $\alpha=\beta=\frac{1}{2}$ as before.
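The derivative formulas above are easy to validate by finite differences; a sketch for $t=2$ (a single $\gamma$), with illustrative values $w=3$, $n=7$:

```python
import math

def loglike(g, w, n):
    # t = 2 log-likelihood: l(gamma) = w*gamma - n*ln(1 + e^gamma)
    return w * g - n * math.log(1.0 + math.exp(g))

def neg_second_deriv(g, n):
    # -d^2 l / d gamma^2 = n (1+e^{-gamma})^{-1} (1+e^{gamma})^{-1}
    return n / ((1.0 + math.exp(-g)) * (1.0 + math.exp(g)))

w, n, h = 3, 7, 1e-4
for g in [-2.0, 0.0, 1.3]:
    fd = -(loglike(g + h, w, n) - 2 * loglike(g, w, n) + loglike(g - h, w, n)) / h**2
    assert abs(fd - neg_second_deriv(g, n)) < 1e-5
```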
Since $(1+e^{-\gamma_{ij}})^{-1}(1+e^{\gamma_{ij}})^{-1} = (e^{\gamma_{ij}/2}+e^{-\gamma_{ij}/2})^{-2}$, we can also write \begin{equation} - \frac{\partial^2\ell(\boldsymbol{\lpair};D)} {\partial\gamma_{\alpha}\partial\gamma_{\beta}} = \sum_{i=1}^{t}\sum_{j=i+1}^{t} C_{ij,\alpha} C_{ij,\beta} n_{ij}(e^{\gamma_{ij}/2}+e^{-\gamma_{ij}/2})^{-2} \end{equation} Note that if we make the specific choice $\gamma_{\alpha}=\gamma_{\alpha,\alpha+1}$, we have \begin{equation} \label{e:gammachoice} \gamma_{ij} = \sum_{\alpha=i}^{j-1} \gamma_\alpha \end{equation} which makes \begin{equation} \label{e:Cijdiff} C_{ij,\alpha} = \begin{cases} 1 & i \le \alpha \le j-1\\ 0 & \text{otherwise} \end{cases} \end{equation} and then the Fisher information matrix \eqref{e:fisheralpha} is \begin{equation} - \frac{\partial^2\ell(\boldsymbol{\lpair};D)} {\partial\gamma_{\alpha}\partial\gamma_{\beta}} = \sum_{\substack{i,j\\ i \le \alpha \le j-1 \\ i \le \beta \le j-1}} n_{ij}(e^{\gamma_{ij}/2}+e^{-\gamma_{ij}/2})^{-2} \end{equation} For $t>2$, the Fisher information matrix \eqref{e:fisheralpha} depends on the number of games $n_{ij}$ to be played between each pair of teams. However, in order to satisfy desideratum~\ref{d:interchange}, we need to have the same $n_{ij}$ for each pair of teams, in which case this $n_{ij}$ becomes a constant which can be absorbed into the normalization, and the prescription becomes unique.
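The choice \eqref{e:gammachoice}--\eqref{e:Cijdiff} can be spot-checked numerically; for $t=4$ and arbitrary log-strengths, the adjacent differences reconstruct every $\gamma_{ij}=\lambda_i-\lambda_j$:

```python
t = 4
lam = [0.9, -0.2, 0.5, -1.1]  # arbitrary illustrative log-strengths lambda_i
# gamma_alpha = gamma_{alpha,alpha+1} = lambda_alpha - lambda_{alpha+1}
g_alpha = [lam[a] - lam[a + 1] for a in range(t - 1)]

def C(i, j, a):
    # eq. (Cijdiff): 1 if i <= alpha <= j-1, else 0 (1-based indices)
    return 1 if i <= a <= j - 1 else 0

for i in range(1, t + 1):
    for j in range(i + 1, t + 1):
        gij = sum(C(i, j, a) * g_alpha[a - 1] for a in range(1, t))
        # telescoping sum reproduces gamma_ij = lambda_i - lambda_j
        assert abs(gij - (lam[i - 1] - lam[j - 1])) < 1e-12
```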
By explicitly examining $t=3$, we will show that the Jeffreys prior with all $\{n_{ij}\}$ equal fails to satisfy desideratum \ref{d:elim}.\footnote{It can be seen to satisfy desideratum~\ref{d:proper} by noting that as a linear combination of the $\{\gamma_\alpha\}$ goes to infinity, at most one of the $\{m_{ij}\}$ can remain nonvanishing; the other two will be exponentially suppressed, and thus each term in the square root in \eqref{e:priorjeff3} will go to zero exponentially.} In the case $t=3$, the vector of independent log-odds-ratios $\boldsymbol{\lpair}$ is two-dimensional, and the elements of the Fisher information matrix are \begin{subequations} \begin{gather} - \frac{\partial^2\ell(\boldsymbol{\lpair};D)} {\partial\gamma_{1}\partial\gamma_{1}} = M_{12} + M_{13} \\ - \frac{\partial^2\ell(\boldsymbol{\lpair};D)} {\partial\gamma_{1}\partial\gamma_{2}} = M_{13} \\ - \frac{\partial^2\ell(\boldsymbol{\lpair};D)} {\partial\gamma_{2}\partial\gamma_{2}} = M_{13} + M_{23} \end{gather} \end{subequations} where \begin{equation} M_{ij} := n_{ij}(e^{\gamma_{ij}/2}+e^{-\gamma_{ij}/2})^{-2} =: n_{ij} m_{ij} \end{equation} The Fisher information is the determinant of this matrix \begin{equation} \mc{I}(\boldsymbol{\lpair}) = (M_{12}+M_{13})(M_{13}+M_{23})-M_{13}^2 = M_{12}M_{13}+M_{12}M_{23}+M_{13}M_{23} \end{equation} so the Jeffreys prior is \begin{equation} \label{e:priorjeff3} f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I_{\text{J}3}) \propto \sqrt{M_{12}M_{13}+M_{12}M_{23}+M_{13}M_{23}} \propto \sqrt{m_{12}m_{13}+m_{12}m_{23}+m_{13}m_{23}} \end{equation} We can show that the Jeffreys prior fails to satisfy desideratum~\ref{d:elim} by using the prior predictive distribution and lemma~\ref{l:predelim}. Suppose $n_{12}=2$ and $n_{i3}=0$.
Then \eqref{e:predjeff} implies \begin{equation} p(D|\mathbf{\nnum},I_{\text{J}2}) = \begin{cases} 0.125 & w_{12}=1 \\ 0.375 & w_{12}=0 \hbox{ or } 2 \end{cases} \end{equation} We can evaluate $p(D|\mathbf{\nnum},I_{\text{J}3})$ numerically, and find \begin{equation} p(D|\mathbf{\nnum},I_{\text{J}3}) \approx \begin{cases} 0.108 & w_{12}=1 \\ 0.392 & w_{12}=0 \hbox{ or } 2 \end{cases} \end{equation} showing explicitly that $p(D|\mathbf{\nnum},I_{\text{J}3})\ne p(D|\mathbf{\nnum},I_{\text{J}2})$ and therefore desideratum~\ref{d:elim} is violated. \subsubsection{Dirichlet Distribution} \label{s:dirichlet} \citet{Chen19849} discuss Bayesian estimators for the Bradley-Terry model starting with a Dirichlet distribution \begin{equation} \label{e:dirichlet} f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}|I_{\text{D}t}) = \frac{\Gamma\left(\sum_{i=1}^{t} \alpha_i\right)} {\prod_{i=1}^{t} \Gamma(\alpha_i)} \left(\prod_{i=1}^{t} \pi_i^{\alpha_i-1}\right) \delta\left(1-\sum_{i=1}^{t} \pi_i\right) \end{equation} where $\delta(x)$ is the Dirac delta function. In particular, they note that the marginal distribution for any of the $\{\theta_{ij}\}$ is a beta distribution with parameters $\alpha=\alpha_i$ and $\beta=\alpha_j$: \begin{equation} \label{e:dirichletmarg} f_{\Theta_{ij}}(\theta_{ij}|I_{\text{D}t}) = \frac{\Gamma(\alpha_i+\alpha_j)}{\Gamma(\alpha_i)\Gamma(\alpha_j)} \theta_{ij}^{\alpha_i-1}(1-\theta_{ij})^{\alpha_j-1} \end{equation} The Dirichlet prior satisfies desideratum~\ref{d:proper} as long as all of the $\{\alpha_i\}$ are positive; it also satisfies desideratum \ref{d:interchange} if all of the parameters $\{\alpha_i\}$ are equal to the same value $\alpha$.
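The marginal property \eqref{e:dirichletmarg} can be illustrated by Monte Carlo, using the standard construction of a Dirichlet vector from independent gamma variates (the sample size and parameter values here are arbitrary):

```python
import random

random.seed(12345)

def dirichlet(alphas):
    # Sample a Dirichlet vector as normalized independent Gamma(alpha_i) draws
    g = [random.gammavariate(a, 1.0) for a in alphas]
    s = sum(g)
    return [x / s for x in g]

a1, a2, a3 = 2.0, 3.0, 1.0
N = 100_000
samples = [dirichlet([a1, a2, a3]) for _ in range(N)]
th12 = [p[0] / (p[0] + p[1]) for p in samples]

mean = sum(th12) / N
var = sum((x - mean) ** 2 for x in th12) / N

# Beta(a1, a2) moments: mean a1/(a1+a2), var a1*a2/((a1+a2)^2 (a1+a2+1))
assert abs(mean - a1 / (a1 + a2)) < 0.005
assert abs(var - a1 * a2 / ((a1 + a2) ** 2 * (a1 + a2 + 1))) < 0.003
```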
Although the delta function enforcing the constraint $\sum_{i=1}^{t} \pi_i=1$ means that the different $\{\pi_i\}$ are not independently distributed under $I_{\text{D}t}$, we can see that desideratum~\ref{d:elim} is satisfied by defining a change of variables \begin{equation} \pi'_i = \frac{\pi_i}{1-\pi_{t}} \qquad i = 1,\ldots,t-1 \end{equation} under which the probability density \eqref{e:dirichlet} becomes \begin{equation} f_{\boldsymbol{\Pteam}',\Pi_{t}}(\boldsymbol{\pteam}',\pi_{t}|I_{\text{D}t}) = \frac{\Gamma\left(\sum_{i=1}^{t} \alpha_i\right)} {\prod_{i=1}^{t} \Gamma(\alpha_i)} \left(\prod_{i=1}^{t-1} {\pi'_i}^{\alpha_i-1}\right) \delta\left(1-\sum_{i=1}^{t-1} \pi'_i\right) (1-\pi_{t})^{\sum_{i=1}^{t-1}\alpha_i-1}{\pi_{t}}^{\alpha_{t}-1} \end{equation} which, when we marginalize over $\pi_{t}$, gives \begin{equation} f_{\boldsymbol{\Pteam}'}(\boldsymbol{\pteam}'|I_{\text{D}t}) = \frac{\Gamma\left(\sum_{i=1}^{t-1} \alpha_i\right)} {\prod_{i=1}^{t-1} \Gamma(\alpha_i)} \left(\prod_{i=1}^{t-1} {\pi'_i}^{\alpha_i-1}\right) \delta\left(1-\sum_{i=1}^{t-1} \pi'_i\right) \end{equation} which is a Dirichlet distribution with the same parameters $\{\alpha_1,\ldots,\alpha_{t-1}\}$.
However, desideratum~\ref{d:reflection} is not satisfied, which we can see explicitly by considering $t=3$ and assuming $\alpha_1=\alpha_2=\alpha_3\equiv\alpha$, so that \begin{equation} f_{\boldsymbol{\Pteam}}(\pi_1,\pi_2,\pi_3|I_{\text{D}3}) \propto (\pi_1\pi_2\pi_3)^{\alpha-1} \,\delta(\pi_1+\pi_2+\pi_3-1) \end{equation} or equivalently \begin{equation} f_{\boldsymbol{\Lteam}}(\lambda_1,\lambda_2,\lambda_3|I_{\text{D}3}) \propto e^{\alpha(\lambda_1+\lambda_2+\lambda_3)} \,\delta\left(e^{\lambda_1}+e^{\lambda_2}+e^{\lambda_3}-1\right) \end{equation} If we write \begin{multline} \pi_1+\pi_2+\pi_3 = (\pi_1\pi_2\pi_3)^{1/3} \left( \left(\frac{\pi_1}{\pi_2}\frac{\pi_1}{\pi_3}\right)^{1/3} + \left(\frac{\pi_2}{\pi_1}\frac{\pi_2}{\pi_3}\right)^{1/3} + \left(\frac{\pi_3}{\pi_1}\frac{\pi_3}{\pi_2}\right)^{1/3} \right) \\ = e^{\lambda_1}+e^{\lambda_2}+e^{\lambda_3} = e^{\frac{\lambda_1+\lambda_2+\lambda_3}{3}} \left( e^{\frac{\gamma_{12}+\gamma_{13}}{3}} + e^{\frac{-\gamma_{12}+\gamma_{23}}{3}} + e^{\frac{-\gamma_{13}-\gamma_{23}}{3}} \right) \end{multline} we can see \begin{equation} \delta\left(e^{\lambda_1}+e^{\lambda_2}+e^{\lambda_3}-1\right) \propto \delta\left(\lambda_1+\lambda_2+\lambda_3 +3\ln\left[ e^{\frac{\gamma_{12}+\gamma_{13}}{3}} + e^{\frac{-\gamma_{12}+\gamma_{23}}{3}} + e^{\frac{-\gamma_{13}-\gamma_{23}}{3}} \right] \right) \end{equation} and so marginalizing over the combination $\lambda_1+\lambda_2+\lambda_3$ leaves a prior distribution \begin{equation} f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I_{\text{D}3}) \propto \left( e^{\frac{\gamma_{12}+\gamma_{13}}{3}} + e^{\frac{-\gamma_{12}+\gamma_{23}}{3}} + e^{\frac{-\gamma_{13}-\gamma_{23}}{3}} \right)^{-3\alpha} \end{equation} We see that, for non-zero $\alpha$, $f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I_{\text{D}3}) \ne f_{\boldsymbol{\Lpair}}(-\boldsymbol{\lpair}|I_{\text{D}3})$. This is illustrated explicitly for $\alpha=1$ in \fref{f:t3dir1}.
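The asymmetry is easy to exhibit numerically by evaluating the closed form above at a point and at its reflection:

```python
import math

def f_lpair_d3(g12, g23, alpha):
    # Unnormalized f(gamma | I_D3); gamma_13 = gamma_12 + gamma_23
    g13 = g12 + g23
    s = (math.exp((g12 + g13) / 3.0)
         + math.exp((-g12 + g23) / 3.0)
         + math.exp((-g13 - g23) / 3.0))
    return s ** (-3.0 * alpha)

alpha = 1.0
a = f_lpair_d3(3.0, 1.5, alpha)
b = f_lpair_d3(-3.0, -1.5, alpha)
# not invariant under lpair -> -lpair: desideratum (reflection) fails
assert abs(a / b - 1.0) > 0.3
```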
Another way of expressing this asymmetry: had we started with an ``anti-Dirichlet'' prior, i.e., required the $\{1/\pi_i\}$ to be Dirichlet distributed, we would have obtained the distribution \begin{equation} f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I_{\text{D}'3}) \propto \left( e^{\frac{-\gamma_{12}-\gamma_{13}}{3}} + e^{\frac{\gamma_{12}-\gamma_{23}}{3}} + e^{\frac{\gamma_{13}+\gamma_{23}}{3}} \right)^{-3\alpha} \end{equation} \begin{figure}[t!] \centering \includegraphics[width=0.8\columnwidth]{t3dir1.pdf} \caption{Contour plot of the prior probability distribution $f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I_{\text{D}3})$ arising from the Dirichlet prior described in \sref{s:dirichlet} when $\alpha=1$ and $t=3$. We see that the distribution is not invariant under the inversion $\boldsymbol{\lpair}\rightarrow -\boldsymbol{\lpair}$, and therefore desideratum~\ref{d:reflection} is not satisfied.} \label{f:t3dir1} \end{figure} \subsubsection{Conjugate Prior Families} \citet{DAVIDSON01121973} construct a conjugate prior family of the form \begin{equation} f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}|I_{\text{C}}) \propto \left(\prod_{i=1}^{t}\pi_i^{v_i^0}\right) \left( \prod_{i=1}^{t}\prod_{j=i+1}^{t}(\pi_i+\pi_j)^{-n_{ij}^0} \right) \delta\left(1-\sum_{i=1}^{t} \pi_i\right) \end{equation} where $n_{ii}^0=0$. In order to satisfy desideratum~\ref{d:interchange} (interchange of teams), we require that $v_i^0=v^0$ and $n_{ij}^0=n^0$ for all $i$ and $j\ne i$, so the prior becomes \begin{equation} f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}|I_{\text{C}}) \propto \left(\prod_{i=1}^{t}\pi_i\right)^{v^0} \left( \prod_{i=1}^{t}\prod_{j=i+1}^{t}(\pi_i+\pi_j) \right)^{-n^0} \delta\left(1-\sum_{i=1}^{t} \pi_i\right) \ .
\end{equation} Note that \citet{DAVIDSON01121973} motivate $v_i^0$ and $n_{ij}^0$ as coming from a matrix $w_{ij}^0$ (with $w_{ii}^0=0$) via $v_i^0=\sum_{j=1}^{t}w_{ij}^0$ and $n_{ij}^0=w_{ij}^0+w_{ji}^0$, which means that in particular $\sum_{i=1}^{t}\sum_{j=1}^{t}n_{ij}^0=2\sum_{i=1}^{t}v_i^0$, which in the case of single $n^0$ and $v^0$ parameters would require $v^0=(t-1)n^0/2$. We will not impose that condition at this stage, however. To convert $f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}|I_{\text{C}})$ into $f_{\boldsymbol{\Lteam}}(\boldsymbol{\lteam}|I_{\text{C}})$, we note that \begin{equation} \sum_{i=1}^{t}\pi_i = \left(\prod_{k=1}^{t}\pi_k\right)^{1/t} \sum_{i=1}^{t}\left( \prod_{j=1}^{t} \frac{\pi_i}{\pi_j} \right)^{1/t} = \exp\left(\frac{1}{t}\sum_{k=1}^{t}\lambda_k\right) \sum_{i=1}^{t} \exp\left(\frac{1}{t}\sum_{j=1}^{t}\gamma_{ij}\right) \end{equation} and since, for $t>0$, \begin{equation} \delta\left(1-e^{u/t}\right) = \frac{\delta(u)}{\frac{1}{t}e^{u/t}} = t\delta(u) \ , \end{equation} \begin{equation} \delta\left(1-\sum_{i=1}^{t} \pi_i\right) = t\ \delta\left( \sum_{k=1}^{t}\lambda_k + t \ln \sum_{i=1}^{t} \exp\left(\frac{1}{t}\sum_{j=1}^{t}\gamma_{ij}\right) \right) \ .
\end{equation} Similarly, we can write \begin{equation} \begin{split} \prod_{i=1}^{t}\prod_{j=i+1}^{t}(\pi_i+\pi_j) &= \frac{1}{\prod_{k=1}^{t}2\pi_k} \sqrt{ \prod_{i=1}^{t}\prod_{j=1}^{t}(\pi_i+\pi_j) } \\ &= \frac{1}{2^{t}\prod_{k=1}^{t}\pi_k} \sqrt{ \left(\prod_{k=1}^{t}\pi_k\right)^{t} \left[ \prod_{i=1}^{t}\prod_{j=1}^{t} \left(1+\frac{\pi_i}{\pi_j}\right) \right] } \\ &= \frac{1}{2^{t}} \exp\left(\left(\frac{t}{2}-1\right)\sum_{k=1}^{t}\lambda_k\right) \left( \prod_{i=1}^{t}\prod_{j=1}^{t} \left[1+e^{\gamma_{ij}}\right] \right)^{1/2} \end{split} \end{equation} which makes the prior (recalling \eqref{e:fltpt}) \begin{equation} \begin{split} f_{\boldsymbol{\Lteam}}(\boldsymbol{\lteam}|I_{\text{C}}) \propto& \exp\left( \left[v^0+1-n^0\left(\frac{t}{2}-1\right)\right] \sum_{k=1}^{t}\lambda_k \right) \left( \prod_{i=1}^{t}\prod_{j=1}^{t} \left[1+e^{\gamma_{ij}}\right] \right)^{-n^0/2} \\ &\times \delta\left( \sum_{k=1}^{t}\lambda_k + t \ln \sum_{i=1}^{t} \exp\left(\frac{1}{t}\sum_{j=1}^{t}\gamma_{ij}\right) \right) \end{split} \end{equation} When we marginalize over $\sum_{k=1}^{t}\lambda_k$, the Dirac delta function sets $\exp\left(\sum_{k=1}^{t}\lambda_k\right)$ to $\left(\sum_{i=1}^{t} \exp\left(\frac{1}{t}\sum_{j=1}^{t}\gamma_{ij}\right)\right)^{-t}$ and the prior becomes \begin{equation} f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I_{\text{C}}) \propto \left[ \sum_{i=1}^{t} \exp\left(\frac{1}{t}\sum_{j=1}^{t}\gamma_{ij}\right) \right]^{-t(v^0+1-n^0[t/2-1])} \left( \prod_{i=1}^{t}\prod_{j=1}^{t} \left[1+e^{\gamma_{ij}}\right] \right)^{-n^0/2} \end{equation} The quantity in large square brackets is in general not symmetric under the transformation $\boldsymbol{\lpair}\rightarrow -\boldsymbol{\lpair}$, while the remainder of the expression is. This means, to satisfy desideratum \ref{d:reflection} (win-loss inversion), we should have $v^0=(t-2)n^0/2-1$. 
Note that this is not the same as the condition $v^0=(t-1)n^0/2$ implied by \citet{DAVIDSON01121973}'s conditions on $v_i^0$ and $n_{ij}^0$. The precise form of their restriction, however, comes from the fact they wrote down a ``natural'' conjugate prior family for $f_{\boldsymbol{\Pteam}}(\boldsymbol{\pteam}|I_{\text{C}})$; if they had started with $f_{\boldsymbol{\Lteam}}(\boldsymbol{\lteam}|I_{\text{C}})$, they would have ended up with, in the present notation, $v^0+1=(t-1)n^0/2$, which would also not satisfy desideratum~\ref{d:reflection}. However, if we hadn't imposed the constraint $\sum_{i=1}^{t}\pi_i=1$ (which is clearly not invariant under $\pi_i\rightarrow 1/\pi_i$) in the first place, the marginalization over $\sum_{k=1}^{t}\lambda_k$, would have rendered $v^0$ irrelevant and left us with \begin{equation} \label{e:priorDS} f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I_{\text{C}}) \propto \left( \prod_{i=1}^{t}\prod_{j=1}^{t} \left[1+e^{\gamma_{ij}}\right] \right)^{-n^0/2} \end{equation} in any event. We therefore take \eqref{e:priorDS} as the form of the conjugate prior $f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I_{\text{C}})$, and note that if $n^0=2$, this reduces to the maximum entropy distribution \eqref{e:priorme} [see also \eqref{e:meast}] considered in \ref{s:maxent}. We can show that \eqref{e:priorDS} violates desideratum~\ref{d:elim} for any $n^0\in(0,\infty)$ by considering the prior predictive distribution and invoking Lemma~\ref{l:predelim}. First, note that for $t=2$, \eqref{e:priorDS} becomes \begin{equation} \label{e:priorDS2} f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I_{\text{C}2}) \propto \left( \left[1+e^{\gamma_{12}}\right] \left[1+e^{\gamma_{21}}\right] \right)^{-n^0/2} \end{equation} which is just the beta/generalized logistic prior \eqref{e:betalog} with $\alpha=\beta=n^0/2$. 
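Since \eqref{e:priorDS2} is a Beta prior with $\alpha=\beta=n^0/2$ on $\theta_{12}$, the prior predictive probability of a particular one-win-each sequence in $n_{12}=2$ games follows from the Beta-function ratio $B(\frac{n^0}{2}+1,\frac{n^0}{2}+1)/B(\frac{n^0}{2},\frac{n^0}{2})$; a quick numerical sketch (function name ours):

```python
from math import gamma

def predictive_split_pair(n0):
    """Prior predictive probability of the ordered results w12 = w21 = 1
    (n12 = 2) under theta_12 ~ Beta(n0/2, n0/2)."""
    a = n0 / 2.0
    return (gamma(2 * a) * gamma(a + 1) ** 2) / (gamma(a) ** 2 * gamma(2 * a + 2))

# agrees with the closed form 1 / (4 (1 + 1/n0)); e.g. n0 = 2 gives 1/6,
# and the value tends to 1/4 as n0 grows (prior concentrating at theta = 1/2)
```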
Therefore, if we define $D$ to be a set of results for which $w_{12}=1=w_{21}$, and let $n_{12}=2$ and $n_{13}=n_{23}=0$, \eqref{e:predbeta} implies \begin{equation} p(D|\mathbf{\nnum},I_{\text{C}2}) = \frac{\Gamma(n^0)\Gamma(\frac{n^0}{2}+1)\Gamma(\frac{n^0}{2}+1)} {\Gamma(\frac{n^0}{2})\Gamma(\frac{n^0}{2})\Gamma(n^0+2)} = \frac{1}{4(1+\frac{1}{n^0})} \end{equation} We evaluate $p(D|\mathbf{\nnum},I_{\text{C}3})$ numerically for a range of $n^0$ values, plotted in \fref{f:predDS}, and find that for any $0<n^0<\infty$, $p(D|\mathbf{\nnum},I_{\text{C}3})>p(D|\mathbf{\nnum},I_{\text{C}2})$. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{predDS.pdf} \caption{Prior predictive probability $p(D|\mathbf{\nnum},I_{\text{C}})$ for the conjugate prior family of \citet{DAVIDSON01121973}, for a sequence of results in which $w_{12}=w_{21}=1$ and $n_{12}=2$. In order to satisfy desiderata~\ref{d:interchange} and~\ref{d:reflection}, we require $n_{ij}^0=n^0$ and $v^0=(t-2)n^0/2-1$, which produces the prior \eqref{e:priorDS}. For any finite positive $n^0$, the prior predictive probability from the three-team prior is larger than that of the two-team prior, indicating that desideratum~\ref{d:elim} is not satisfied. (For $n^0=0$ both versions reduce to the Haldane prior and $p(D|\mathbf{\nnum},I_{\text{C}})=0$.
As $n^0\rightarrow\infty$, they become delta functions at $\boldsymbol{\lpair}=\mathbf{0}$ and $p(D|\mathbf{\nnum},I_{\text{C}})\rightarrow 0.25$.)} \label{f:predDS} \end{figure} \subsubsection{Multivariate Gaussian Distribution} \citet{Leonard1977} proposed a multivariate Gaussian prior on the $\{\lambda_i\}$ of the form \begin{equation} f_{\boldsymbol{\Lteam}}(\boldsymbol{\lteam}|I_{\text{G}}) \propto \exp\left( -\frac{1}{2}\sum_{i=1}^{t}\sum_{j=1}^{t} (\lambda_i-\mu_i)[\sigma^{-2}]_{ij}(\lambda_j-\mu_j) \right) \end{equation} where $\{[\sigma^{-2}]_{ij}\}$ are the elements of the inverse of a positive semi-definite covariance matrix with elements $\{[\sigma^2]_{ij}\}$. Desideratum~\ref{d:proper} will be satisfied if the covariance matrix $\{[\sigma^2]_{ij}\}$ is positive definite, so that the prior is normalizable. Desideratum~\ref{d:interchange} requires that all of the $\{\mu_i\}$ have the same value $\mu$, all of the variances $\{[\sigma^2]_{ii}\}$ have the same value $\sigma^2$, and all of the cross-covariances $\{[\sigma^2]_{ij}|i\ne j\}$ have the same value $\rho\sigma^2$. In order for the matrix $\{[\sigma^2]_{ij}\}$ to be positive definite we must have $\sigma^2>0$ and $-\frac{1}{t-1}<\rho<1$. These conditions guarantee that desideratum~\ref{d:reflection} is satisfied. Since the distribution $f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I_{\text{G}})$ is unchanged by the transformation $\lambda_i\rightarrow\lambda_i-\mu$, we can assume without loss of generality that $\mu=0$. We are thus left with $\rho$ and $\sigma^2$ as the adjustable parameters of the distribution. 
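The allowed range of $\rho$ reflects the spectrum of this exchangeable covariance matrix, which has eigenvalue $\sigma^2(1+(t-1)\rho)$ on the uniform vector and $\sigma^2(1-\rho)$ with multiplicity $t-1$ on its orthogonal complement; a small numerical check (a sketch):

```python
import numpy as np

def exchangeable_cov(t, rho, sigma2=1.0):
    # equal variances sigma2, equal correlations rho
    return sigma2 * ((1 - rho) * np.eye(t) + rho * np.ones((t, t)))

t = 5
for rho in (-1.0 / (t - 1) + 1e-6, 0.0, 0.999):  # inside -1/(t-1) < rho < 1
    assert np.all(np.linalg.eigvalsh(exchangeable_cov(t, rho)) > 0)

# at the boundary rho = -1/(t-1) the matrix becomes singular
assert np.min(np.linalg.eigvalsh(exchangeable_cov(t, -1.0 / (t - 1)))) < 1e-12
```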
However, if we make the transformation $\lambda_i\rightarrow\lambda_i+a\sum_{j=1}^{t}\lambda_j$, which leaves $f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I_{\text{G}})$ unchanged, $f_{\boldsymbol{\Lteam}}(\boldsymbol{\lteam}|I_{\text{G}})$ becomes a multivariate Gaussian with covariance matrix \begin{equation} \sigma^2 \left[ \rho+\delta_{ij}(1-\rho) + a(2+at)(t\rho+1-\rho) \right] \end{equation} If $a=-\frac{1}{t}\left(1+\sqrt{\frac{1-\rho}{t\rho+1-\rho}}\right)$ or $a=-\frac{1}{t}\left(1-\sqrt{\frac{1-\rho}{t\rho+1-\rho}}\right)$, the covariance matrix becomes diagonal, with a variance equal to $(1-\rho)\sigma^2$. Either value for $a$ is guaranteed to be real by the conditions on $\rho$ which ensure a positive definite correlation matrix. Thus $f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I_{\text{G}})$ is equivalent to a product of independent Gaussian distributions for each $\lambda_i$. For simplicity, we refer to the variance of each of these distributions as $\sigma^2$ rather than $(1-\rho)\sigma^2$. Since the prior $f_{\boldsymbol{\Lpair}}(\boldsymbol{\lpair}|I_{\text{G}})$ is equivalent to independent distributions on the $\{\lambda_i\}$, it is invariant under elimination of teams, and satisfies desideratum~\ref{d:elim}. \subsubsection{Separable Priors} Thus far, the only prior considered to satisfy all four of our desiderata is the Gaussian prior, which is equivalent to \begin{equation} f_{\boldsymbol{\Lteam}}(\boldsymbol{\lteam}|I_{\text{G}}) = \frac{1}{(2\pi\sigma^2)^{t/2}} \exp \left( -\sum_{i=1}^{t}\frac{\lambda_i^2}{2\sigma^2} \right) \end{equation} This is an example of a separable prior of the form \begin{equation} f_{\boldsymbol{\Lteam}}(\boldsymbol{\lteam}|I) = \prod_{i=1}^{t}f_{\Lambda_i}(\boldsymbol{\lteam}_i|I) \end{equation} A prior of this form, which assigns prior pdfs to the strengths (or log-strengths) of the teams, is guaranteed by its construction to satisfy desideratum~\ref{d:elim} (invariance under team-elimination).
It will also satisfy desideratum~\ref{d:interchange} (interchange) if the distributions for the different $\{\boldsymbol{\lteam}_i\}$ are identical ($f_{\Lambda_i}(\boldsymbol{\lteam}_i|I)=f_{\Lambda}(\boldsymbol{\lteam}_i|I)$), desideratum~\ref{d:reflection} (win-loss interchange) if the distribution $f_{\Lambda}(\boldsymbol{\lteam}_i|I)$ is even ($f_{\Lambda}(-\boldsymbol{\lteam}_i|I)=f_{\Lambda}(\boldsymbol{\lteam}_i|I)$) and desideratum~\ref{d:proper} (normalizable) if $f_{\Lambda}(\boldsymbol{\lteam}|I)$ is a proper prior. \subsubsection{Beta-Separable Priors} One family of separable priors can be constructed by defining \begin{equation} \zeta_i=\frac{\pi_i}{1+\pi_i}=(1+e^{-\lambda_i})^{-1} \end{equation} and assuming that $\zeta_i$ obeys a Beta distribution. Explicitly, \begin{equation} f_{\mathrm{Z}_i}(\zeta_i|I_{\text{B}}) \propto \zeta_i^{\alpha_i-1}(1-\zeta_i)^{\beta_i-1} \end{equation} Since $\zeta_i=\text{logistic}(\lambda_i)$, just as $\theta_{ij}=\text{logistic}(\gamma_{ij})$, the prior on $\lambda_i$ is a generalized logistic distribution of Type IV \citep{Prentice:1976,Nassar:2012}: \begin{equation} f_{\Lambda_i}(\lambda_i|I_{\text{B}}) \propto (1+e^{-\lambda_i})^{-\alpha_i}(1+e^{\lambda_i})^{-\beta_i} \end{equation} To enforce desideratum~\ref{d:reflection}, we require $\alpha_i=\beta_i$, and for desideratum~\ref{d:interchange}, we require that all $\alpha_i$ and $\beta_i$ parameters are the same, $\alpha_i=\beta_i=\eta$, which makes the prior a generalized logistic distribution of Type III.
\begin{equation} f_{\Lambda_i}(\lambda_i|I_{\text{B}}) \propto (1+e^{-\lambda_i})^{-\eta}(1+e^{\lambda_i})^{-\eta} \end{equation} An appealing feature of the Beta-separable prior is that its functional form is similar to the likelihood \begin{equation} p(D|\boldsymbol{\lteam}) = \prod_{i=1}^{t} \prod_{j=i+1}^{t} (1+e^{-\gamma_{ij}})^{-w_{ij}}(1+e^{\gamma_{ij}})^{-(n_{ij}-w_{ij})} \end{equation} In particular, the posterior is proportional to the likelihood function which would arise by adding to the actual game results a set of ``fictitious games'' corresponding to $\eta$ wins and $\eta$ losses for each actual team against a fictitious team assumed to have a strength of $1$. So any method for obtaining maximum likelihood estimates, such as that of \citet{Ford:1957}, could be adapted to obtaining maximum a posteriori estimates with this prior. This method, with $\eta=\frac{1}{2}$, has been used by \citet{KRACH1993} to ensure regularity of the estimates of Bradley-Terry strengths. Another ``obvious'' choice is $\eta=1$, which is equivalent to a uniform prior on $\zeta_i$. (The Haldane prior is $\eta=0$.) \section{Conclusions} We have considered various families of prior distributions for team strengths in the Bradley-Terry model. Motivated by the application of a Bayesian Bradley-Terry model to rate teams based only on their game results, we have evaluated these priors according to the desiderata of invariance under interchange of teams (\ref{d:interchange}), interchange of winning and losing (\ref{d:reflection}) and elimination of irrelevant teams from the model (\ref{d:elim}), as well as normalizability (\ref{d:proper}). A Haldane-like prior of complete ignorance is not normalizable (violation of \ref{d:proper}), although it satisfies the other desiderata.
A prior based on maximum entropy arguments, as well as one from the conjugate family of \citet{DAVIDSON01121973} which is required to obey the other desiderata, will depend on the number of teams for which it was constructed (violation of \ref{d:elim}). The same is true for the Jeffreys prior. A Dirichlet prior on the team strengths \citep{Chen19849} can be made to satisfy the other desiderata, but will not be invariant under interchange of the definitions of winning and losing (violation of \ref{d:reflection}). Distributions can be constructed which satisfy the desiderata by imposing independent priors on the strengths of all of the teams. In particular, a multivariate Gaussian in the log-strengths \citep{Leonard1977} which satisfies the desiderata is equivalent to identical independent Gaussian priors on each of the log-strengths. Another simple family of separable prior distributions imposes independent generalized logistic distributions on the log-strengths. In each of these last two cases, a single parameter remains. \citet{Phelan2017} consider the relationship between the two, and propose a hierarchical method to estimate these parameters rather than assuming values for them. \section*{Acknowledgments} The author wishes to thank Kenneth Butler and Gabriel Phelan for useful discussions.
\section{Introduction} In recent years and after the standardization of coherent optical transmission systems, phase modulation formats have been widely investigated \cite{Karlsson2009,Kojima2017}. Different studies concluded that designing high-dimensional modulation formats can increase the product of reach and capacity of the transmission system by either increasing the Euclidean distance or mitigating fiber nonlinear impairments \cite{Eriksson2014b,Reimer2016,Millar2014a}. These formats also allow an increase in the spectral efficiency granularity \cite{Reimer2016,Kojima2017}. For instance, Shiner et al. designed an eight-dimensional (8D) nonlinearity-tolerant modulation format using the polarization-balance concept at a spectral efficiency of 2bits/4D-symbol \cite{Shiner2014}. They experimentally demonstrated a 1dB gain in net system margin compared to the standard Polarization-Division-Multiplexed Binary-Phase-Shift-Keying (PDM-BPSK) format. Later, Reimer et al. used the same concept to design a modulation format with a higher spectral efficiency of 3bits/4D-symbol \cite{Reimer2016}. In this paper, we propose a unified approach to construct nonlinearity-tolerant modulation formats in the spectral efficiency range of 2-4bits/4D-symbol by set-partitioning Polarization-Division-Multiplexed Quadrature-Phase-Shift-Keying (PDM-QPSK) in 8D. The aforementioned modulation formats can be seen as special cases of this approach. We construct two new 8D formats at spectral efficiencies of 2.5 and 3.5bits/4D-symbol. Using numerical simulations we demonstrate an increased nonlinearity tolerance and distance-capacity product compared to standard formats. \section{Design of 8D modulation formats based on set-partitioned PDM-QPSK} PDM-QPSK symbols exhibit four distinct States of Polarization (SOPs) and a constant modulus.
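This structure can be made concrete by enumeration: the sketch below (our conventions; unit-modulus QPSK points and a common sign convention for the Stokes vector) confirms that the 16 PDM-QPSK symbols realize only four distinct SOPs, and that pairing two time slots splits the 256 8D symbols into 64 with antipodal, 128 with orthogonal, and 64 with identical SOPs:

```python
import numpy as np
from itertools import product

QPSK = [np.exp(1j * (np.pi / 4 + k * np.pi / 2)) for k in range(4)]

def stokes(ex, ey):
    # Stokes vector (S1, S2, S3) of the dual-polarization field (Ex, Ey)
    return np.array([abs(ex) ** 2 - abs(ey) ** 2,
                     2 * (ex * np.conj(ey)).real,
                     -2 * (ex * np.conj(ey)).imag])

# the 16 PDM-QPSK symbols realize only four distinct SOPs
sops = {tuple(round(float(v), 9) for v in stokes(ex, ey))
        for ex, ey in product(QPSK, repeat=2)}
assert len(sops) == 4

# over two consecutive time slots, classify the 256 8D symbols by the
# angle between their SOPs on the Poincare sphere
counts = {"balanced": 0, "alternating": 0, "identical": 0}
for ex1, ey1, ex2, ey2 in product(QPSK, repeat=4):
    c = np.dot(stokes(ex1, ey1), stokes(ex2, ey2)) / 4.0  # cosine of SOP angle
    key = ("balanced" if np.isclose(c, -1)
           else "identical" if np.isclose(c, 1) else "alternating")
    counts[key] += 1
assert counts == {"balanced": 64, "alternating": 128, "identical": 64}
```

These three groups are exactly the polarization-balanced, polarization-alternating, and polarization-identical sets exploited in the construction below.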
In eight dimensions (considering two consecutive time slots $T_1$ and $T_2$), PDM-QPSK symbols can be grouped into three disjoint sets according to the relative orientation of their SOPs in consecutive time slots: $^{(i)}$ $64$ symbols with opposite SOPs in $T_1$ and $T_2$ (Polarization Balanced, PB); $^{(ii)}$ $128$ symbols with orthogonal SOPs (Polarization Alternating, PA); $^{(iii)}$ and $64$ symbols with identical SOPs (Polarization Identical, PI). We obtain 8D modulation formats with a high nonlinearity tolerance by selecting symbols from the three sets until the desired spectral efficiency is attained, while prioritizing PB over PA over PI symbols. Symbols are further selected such that $^{(a)}$ the minimum Euclidean distance is maximized and $^{(b)}$ the constellation is symmetrical: each constellation point has the same number of neighbors at a given Euclidean distance. \section{PB-5B8D and PA-7B8D modulation formats} The $2^8=256$ symbols of PDM-QPSK in 8D are labeled by $8$ bits ($b_1,~b_2,~...,~b_8$ with $b_1$ being the most significant bit), where the first four bits determine the constellation point of a Gray-mapped standard PDM-QPSK in $T_1$, and the last four the one in $T_2$. We obtain a new modulation format at a spectral efficiency of 2.5~bits/4D-symbol by selecting $2^5=32$ of the 64 PB symbols of set $^{(i)}$. They are labeled using $5$ information bits and 3 overhead bits in 8D. We therefore refer to this format as polarization-balanced 5 bits in 8D (PB-5B8D). The three overhead bits, whose values are determined by choosing symbols from the set $^{(i)}$ according to the constraints $^{(a)}$ and $^{(b)}$, are obtained through the following formulas: \begin{equation} \label{equation:PA-5B8D_parity_bit} b_6~=~b_3\oplus \left( b_4\oplus b_5\right)~~~~;~~~~ b_7~=~\overline{b_2}\oplus \left( b_4\oplus b_5\right)~~~~;~~~~ b_8~=~\overline{b_1}\oplus \left( b_4\oplus b_5\right).
\end{equation} Here, ``$\oplus$" denotes the logical XOR and an overline indicates negation. A spectral efficiency of 3.5bits/4D-symbol or $7$ information bits ($b_1~...~b_7$) in 8D requires $2^7=128$ symbols, which exhausts the available PB symbols. We add $64$ PA symbols and correspondingly refer to this format as polarization alternating 7 bits in 8D (PA-7B8D). Here, selecting the symbols from the sets $^{(i)}$ and $^{(ii)}$ subject to constraints $^{(a)}$ and $^{(b)}$ translates into a single condition on the overhead $b_8$, which is computed as follows: \begin{align} \label{equation:PA-7B8D_parity_bit} \begin{split} b_8={} & \overline{b_1\oplus b_4\oplus b_6\oplus \left( b_1 \cdot b_3\right) \oplus \left( b_1 \cdot b_4\right) \oplus \left( b_1 \cdot b_5\right) \oplus \left( b_1 \cdot b_6\right) \oplus \left( b_2 \cdot b_3\right) \oplus} \\ & \overline{\left( b_2 \cdot b_4\right) \oplus \left( b_2 \cdot b_5\right) \oplus \left( b_2 \cdot b_6\right) \oplus \left( b_3 \cdot b_5\right) \oplus \left( b_3 \cdot b_6\right) \oplus \left( b_4 \cdot b_5\right) \oplus \left( b_4 \cdot b_6\right)}, \end{split} \end{align} where ``$\cdot$" denotes the logical AND operation. The above formulas are examples of nonlinear coding reflecting nonlinear constraints on the symbols. We emphasize that the PDM-QPSK devices and components can be reused in implementing these formats. Complexity is increased at the transmitter because the overhead bits have to be computed according to eq.\eqref{equation:PA-5B8D_parity_bit} or eq.\eqref{equation:PA-7B8D_parity_bit}, respectively. \section{Linear and Nonlinear channel performance} The simulation setup is as follows: five PDM and Wavelength Division Multiplexed (WDM) channels are generated at the transmitter, where a binary sequence of length $2^{16}$ is carried in each polarization. Signals are sampled at 64samples/symbol, and modulated at a baudrate of 32Gbaud with a Root-Raised Cosine (RRC) pulse shape with a roll-off factor of 0.1.
After fixing the center channel at the wavelength 1550nm and multiplexing all channels on a 37.5GHz grid, the resulting optical signal at the output of the multiplexer is launched into the optical link. The latter consists of several spans of a 75km long Large Effective Area Fiber (LEAF), an ideal 100\% in-line chromatic Dispersion Compensation Fiber (DCF) and a noiseless flat-gain in-line amplifier. The Polarization Mode Dispersion (PMD) of the fiber is not considered. In front of the receiver, White Gaussian Noise (WGN) is added to the signal assuming an amplifier Noise Figure of NF=7dB. At the receiver, an RRC matched filter is applied to the digitized signal of the center channel, followed by an 11-tap butterfly-FIR equalizer trained by 1024 symbols, which recovers both polarizations without any phase ambiguity. A Maximum Likelihood (ML) decision is performed in 8D. Finally the $Q^2$ factor is computed using a Monte Carlo loop while counting at least 400 errors.\\ \begin{figure*}[!!!ht] \centering \includegraphics[scale=0.85]{figure1.pdf} \caption{\label{figure1} Transmission system simulation results after 60spans for several values of the fiber launch power per channel, for PDM-BPSK, PB-5B8D, PA-7B8D and PDM-QPSK formats. (a) $Q^2$ factor. (b) Gain (expressed as $Q^2$ difference) of PB-5B8D and PA-7B8D over PDM-BPSK and PDM-QPSK, respectively.} \end{figure*} Figure \ref{figure1}.a shows the bell curves of PDM-BPSK, PB-5B8D, PA-7B8D and PDM-QPSK formats. In the noise-dominated linear regime, PDM-BPSK exhibits higher $Q^2$ factor values than PB-5B8D because of a higher minimum Euclidean distance ($Q^2$ factor higher by 0.4dB at -11dBm). Nevertheless, in the nonlinear regime, PB-5B8D outperforms PDM-BPSK, as shown in fig. \ref{figure1}.b. The $Q^2$ factor gain increases with the fiber input power and therefore with the strength of fiber nonlinear impairments.\\ Comparing PA-7B8D to PDM-QPSK, fig.
\ref{figure1}.a shows around 0.54dB linear performance difference at -11dBm launch power. As these two formats have the same minimum Euclidean distance, this difference comes from a lower bitrate for PA-7B8D. The difference increases slightly in the nonlinear regime as can be seen in fig. \ref{figure1}.b (above 1dBm for minimum launch power of -5dBm), as PA-7B8D is more robust to nonlinear impairments.\\ \begin{figure*}[!!!ht] \centering \includegraphics[scale=0.85]{figure2.pdf} \caption{\label{figure2} Transmission system simulation results versus the number of spans, for PDM-BPSK, PB-5B8D, PA-7B8D and PDM-QPSK formats. (a) $Q^2$ factor. (b) Gain on $Q^2$ factor.} \end{figure*} After fixing the fiber launch power to its optimal value (-7dBm for all formats according to fig. \ref{figure1}.a), we simulate the reach dependence of each modulation format and plot the resulting $Q^2$ factors in fig. \ref{figure2}.\\ At a Soft-Decision Forward-Error-Correcting (SD-FEC) code threshold of $4.9$dB, we calculate the gain in reach of PB-5B8D over PDM-BPSK. A relatively small 1.6\% gain is achieved (fig. \ref{figure2}.a), however at a 25\% higher spectral efficiency. Despite a lower linear channel performance, PB-5B8D still has a longer reach. It can also be observed in fig. \ref{figure2}.b that this gain appears only after a transmission distance of 6750km. As a consequence, a significant gain in reach-capacity product is obtained for PB-5B8D compared to PDM-BPSK.\\ At the same SD-FEC threshold, a gain of 15\% in reach is achieved by PA-7B8D compared to the PDM-QPSK baseline, at a 12.5\% lower spectral efficiency (fig.\ref{figure2}.a). In fig.\ref{figure2}.b we observe a gain in $Q^2$ factor of about 0.9dB after 60 spans. Recalling that PA-7B8D has a gain of around 0.54dB in $Q^2$ factor over PDM-QPSK in the linear channel, it therefore exhibits a higher tolerance to nonlinear impairments compared to PDM-QPSK.
As a consequence, we find a slight increase of the reach-capacity product for PA-7B8D.\\ It is known that the PB constraint mitigates Cross Polarization Modulation (XPolM) effects~\cite{Shiner2014}. This is the reason why PB-5B8D performs better than PDM-BPSK in the nonlinear channel. In simulations we have further observed that adding PA symbols for spectral efficiencies higher than 3bits/4D-symbol can still provide improved nonlinear performance compared to PDM-QPSK. By avoiding strong XPolM-inducing PI symbols contained in PDM-QPSK, PA-7B8D provides a small net gain and a good trade-off between nonlinear performance and implementation complexity. These features make our modulation formats useful in systems where XPolM is a limiting factor. \section{Conclusion} We have designed two new nonlinearity-tolerant 8D modulation formats at spectral efficiencies of 2.5 (PB-5B8D) and 3.5bits/4D-symbol (PA-7B8D). The PB-5B8D format outperforms standard PDM-BPSK by increasing the transmission reach, while offering a substantial 25\% increase in spectral efficiency. The PA-7B8D format can reach distances that are unreachable using PDM-QPSK. Even though the spectral efficiency is 12.5\% lower, it offers additional flexibility for the distance-capacity trade-off with a small net gain. We have provided a simple bit-to-symbol mapping, which shows that these formats can be implemented with slight modifications of the standard PDM-QPSK hardware. Finally, the two new modulation formats are very promising candidates for submarine transmission systems. \bibliographystyle{ieeetr}
\section{Introduction} \label{Sec-1:introduction} The matrix elements of the energy-momentum tensor (EMT) \cite{Pagels} provide the most basic information about a particle: its mass and spin. They also define the $D$-term \cite{Polyakov:1999gs}, a property not known experimentally for any particle. The most direct way to probe EMT matrix elements would be scattering off gravitons, which is impractical. However, information on the EMT form factors can be accessed through generalized parton distribution functions (GPDs) which enter the description of certain hard exclusive reactions \cite{Muller:1998fv,Ji:1996ek,Radyushkin:1996nd,Collins:1996fb, Ji:1998pc,Goeke:2001tz,Diehl:2003ny,Belitsky:2005qn,Guidal:2013rya, Vanderhaeghen:1998uc,Belitsky:2001ns}. The second Mellin moments of unpolarized GPDs yield EMT form factors. This provides the key not only to access information about the nucleon's spin decomposition \cite{Ji:1996ek}, but also about its mechanical properties \cite{Polyakov:2002yz}. The $D$-term determines the behavior of unpolarized GPDs in the asymptotic limit of the renormalization scale $\mu\to\infty$ \cite{Goeke:2001tz}. Aspects of the relation of the $D$-term to GPDs were also investigated in \cite{Teryaev:2001qm}. Similarly to the electric form factors, which provide information on the electric charge distribution \cite{Sachs}, the EMT form factors offer insights into the energy density, orbital angular momentum density, and the distribution of internal forces encoded in the stress tensor and directly related to the $D$-term \cite{Polyakov:2002yz}. The EMT densities allow us to gain insights into particle stability, and may have interesting practical applications \cite{Eides:2015dtr}. For a recent review we refer to \cite{Hudson:2016gnq}.
The nucleon $D$-term has been studied in models, lattice QCD, and dispersion relations \cite{Ji:1997gm,Petrov:1998kf,Schweitzer:2002nm,Ossmann:2004bp, Wakamatsu:2007uc,Goeke:2007fp,Goeke:2007fq,Cebulla:2007ei,Kim:2012ts, Hagler:2003jd,Gockeler:2003jf,Negele:2004iu,Bratt:2010jn,Pasquini:2014vua}. $D$-terms have also been investigated in spin-0 \cite{Novikov:1980fa,Megias:2004uj,Brommel:2005ee, Guzey:2005ba,Liuti:2005qj,Mai:2012yc,Hudson-PS-I} and in higher-spin \cite{Gabdrakhmanov:2012aa,Perevalova:2016dln} systems. In all cases the $D$-terms were found to be negative. The nucleon $D$-term was also studied in chiral perturbation theory, which cannot predict its value \cite{Chen:2001pv}. The fixed poles in virtual Compton amplitudes discussed in the pre-QCD era \cite{Cheng:1970vg} might be related to the $D$-term \cite{Brodsky:2008qu}. With the $D$-term experimentally unknown, theoretical predictions are of importance. A particularly interesting question is: what is the $D$-term of a free particle? The purpose of this work is to address this question for fermions. To illustrate how instructive it is to investigate this question, one may recall that the free Dirac equation predicts the gyromagnetic ratio $g=2$ of a charged point-like fermion, which is derived by coupling the free theory to a weak classical magnetic background field. In principle, the same is implicitly done by defining the EMT through coupling the free theory to a classical background gravitational field, which for Dirac fields yields the symmetric ``Belinfante improved'' EMT. Interactions alter the value $g=2$; little for electrons and muons in QED, far more for protons and neutrons in QCD. But in any case, the free theory provides a valuable benchmark to which we can compare results from theoretical approaches and eventually experiment. In an accompanying work, this question was studied for the bosonic case: free spin-$0$ bosons have an intrinsic $D$-term $D=-1$.
This prediction pertains to free point-like bosons, although interacting theories of extended bosons can be constructed where this value is preserved. In general, however, interactions affect the value of $D$ \cite{Hudson-PS-I}. In this work we will show that free non-interacting fermions have no intrinsic $D$-term. This means that, in contrast to bosons, fermionic $D$-terms are generated by dynamics, which is an unexpected and highly interesting feature. We will illustrate in two simple models how interactions can generate the $D$-term of a fermion. The outline of this work is as follows. After introducing the notation in Sec.~\ref{Sec-2:FF-of-EMT-in-general}, we will compute the EMT form factors for a free spin $\frac12$ particle in Sec.~\ref{Sec-3:free-case} and show that the $D$-term of a non-interacting fermion vanishes, which has implicitly already been stated in the literature, as we became aware after completing this part of our work. In Sec.~\ref{Sec-4:heuristic} we provide a heuristic argument based on the 3D density formalism to explain why the $D$-term must be zero for a free pointlike particle for consistency reasons. In Secs.~\ref{Sec-5:interaction-bag} and \ref{Sec-6:interaction-CQSM} we use two models of the nucleon to demonstrate how interactions generate a non-zero value for the $D$-term. We use the bag model, where the interaction is provided by the bag boundary, which confines the otherwise free and non-interacting fermion(s). We also use the chiral quark soliton model, where the nucleon is described as a solitonic bound state in a strongly interacting theory of quarks, antiquarks and Goldstone bosons. Finally, in Sec.~\ref{Sec-7:conclusions} we summarize our findings and present the conclusions.
\section{Form factors of the energy-momentum tensor} \label{Sec-2:FF-of-EMT-in-general} The energy momentum tensor of a theory described by the Lagrangian ${\cal L}$ is defined by coupling the theory to a background gravitational field and varying the action $S_{\rm grav}=\int\!{\rm d}^4x\sqrt{-g}\,{\cal L}$ with respect to the background field, \begin{equation}\label{Eq:EMT-from-gravity} \hat T_{\mu\nu} = \frac{2}{\sqrt{-g}}\, \frac{\delta S_{\rm grav}}{\delta g^{\mu\nu}}\,, \end{equation} where $g$ denotes the determinant of the metric. The matrix elements of the EMT operator in spin-$\frac12$ states are described by three form factors \cite{Pagels} \begin{eqnarray}\label{Eq:ff-of-EMT} \langle p^\prime| \hat T_{\mu\nu}(0) |p\rangle = \bar u(p^\prime)\biggl[ M_2(t)\,\frac{P_\mu P_\nu}{m} + J(t)\,\frac{i(P_{\mu}\sigma_{\nu\rho}+P_{\nu}\sigma_{\mu\rho})\Delta^\rho}{2m} + D(t)\,\frac{\Delta_\mu\Delta_\nu-g_{\mu\nu}\Delta^2}{4m}\biggr]u(p)\, , \end{eqnarray} with states and spinors normalized by $\langle p^\prime|p\rangle = 2p^0(2\pi)^3\delta^{(3)}({\bf p}^\prime-{\bf p})$ and $\bar u(p) u(p)=2 m$ where $m$ denotes the mass. We suppress spin indices for brevity, and define $P=(p+p')/2$, $\Delta=(p'-p)$, $t=\Delta^2$. The form factors of the EMT in Eq.~(\ref{Eq:ff-of-EMT}) can be interpreted \cite{Polyakov:2002yz} in analogy to the electromagnetic form factors \cite{Sachs} in the Breit frame characterized by $\Delta^0=0$. In this frame one can define the static EMT \begin{equation}\label{Def:static-EMT} T_{\mu\nu}({\bf r},{\bf s}) = \int\frac{\!{\rm d}^3 {\bf\Delta}}{(2\pi)^32E}\;\exp(i {\bf\Delta}{\bf r})\; \langle p^\prime,S^\prime|\hat{T}_{\mu\nu}(0)|p,S\rangle \end{equation} with the initial and final polarization vectors of the nucleon $S$ and $S^\prime$ defined such that they are equal to $(0,{\bf s})$ in the respective rest-frame, where we introduce the unit vector ${\bf s}$ denoting the quantization axis for the nucleon spin. 
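As a cross-check (a standard observation, added here for completeness), EMT conservation requires $\Delta^\mu\langle p'|\hat T_{\mu\nu}(0)|p\rangle=0$, and indeed each tensor structure in Eq.~(\ref{Eq:ff-of-EMT}) is separately transverse:

```latex
\begin{align*}
\Delta^\mu\, P_\mu P_\nu &= (P\cdot\Delta)\,P_\nu = 0\,,
\qquad\text{since } 2\,P\cdot\Delta = p'^2 - p^2 = 0\,,\\
\Delta^\mu\,(P_\mu\sigma_{\nu\rho}+P_\nu\sigma_{\mu\rho})\Delta^\rho
&= (P\cdot\Delta)\,\sigma_{\nu\rho}\Delta^\rho
 + P_\nu\,\sigma_{\mu\rho}\Delta^\mu\Delta^\rho = 0\,,
\qquad\text{by antisymmetry of }\sigma_{\mu\rho}\,,\\
\Delta^\mu\,(\Delta_\mu\Delta_\nu - g_{\mu\nu}\Delta^2)
&= \Delta^2\Delta_\nu - \Delta_\nu\Delta^2 = 0\,.
\end{align*}
```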
The energy density $T_{00}({\bf r})$ yields the fermion mass according to $\int\!{\rm d}^3r\,T_{00}({\bf r},{\bf s})=m$, while $\varepsilon^{i j k} r_j T_{0k}^Q({\bf r},{\bf s})$ corresponds to the distribution of angular momentum inside the fermion. The components of $T_{ik}({\bf r})$ constitute the stress tensor. The form factors $M_2(t)$ and $D(t)$ are related to $T_{\mu\nu}({\bf r},{\bf s})$ by \begin{eqnarray} M_2(t)-\frac{t}{4m^2}\biggl(M_2(t)-2 J(t)+ D(t) \biggr) &=&\;\;\;\frac{1}{m}\,\int\!{\rm d}^3{\bf r}\, e^{-i{\bf r} {\bf\Delta}} \, T_{00}({\bf r},{\bf s})\,, \label{Eq:ff-M2}\\ D(t)+\frac{4t}{3}\, D^\prime(t) +\frac{4t^2}{15}\, D^{\prime\prime}(t) &=& -\frac{2}{5}\,m \int\!{\rm d}^3{\bf r}\,e^{-i{\bf r} {\bf\Delta}}\, T_{ij}({\bf r})\,\left(r^i r^j-\frac{{\bf r}^2}3\,\delta^{ij}\right)\, , \label{Eq:ff-d1} \end{eqnarray} where the primes denote derivatives with respect to the argument. The explicit expressions relating $\varepsilon^{i j k} r_j T_{0k}^Q({\bf r},{\bf s})$ to $J(t)$, see \cite{Polyakov:2002yz,Lorce:2017wkb}, will not be needed in this work. At zero momentum-transfer the form factors satisfy the constraints \begin{eqnarray} && M_2(0) =1, \quad J(0) =\frac12\, , \nonumber\\ && D(0)= -\frac{2}{5}\,m \int\!{\rm d}^3{\bf r}\;T_{ij}({\bf r})\, \left(r^i r^j-\frac{{\bf r}^2}3\,\delta^{ij}\right)\equiv D\, . \label{Eq:M2-J-d1} \end{eqnarray} The form factors $M_2(t)$ and $J(t)$ are constrained at $t=0$ because the total energy of the fermion is equal to its mass and its spin is 1/2, see \cite{Lowdon:2017idv} for a recent rigorous discussion in an axiomatic approach. But the value of $D\equiv D(0)$ is a priori unknown, and must be determined from experiments. The physical interpretation of the $D$-term is the following. 
$D(t)$ is connected to the distribution of pressure and shear forces experienced by the partons in the nucleon \cite{Polyakov:2002yz}: $T_{ij}({\bf r})$ can be decomposed as \begin{equation}\label{Eq:T_ij-pressure-and-shear} T_{ij}({\bf r}) = s(r)\left(\frac{r_ir_j}{r^2}-\frac 13\,\delta_{ij}\right) + p(r)\,\delta_{ij}\, . \end{equation} Hereby $p(r)$ describes the distribution of the ``pressure'' inside the hadron, while $s(r)$ is related to the distribution of the ``shear forces.'' Both functions are related to each other due to the conservation of the EMT \cite{Polyakov:2002yz}. Another important consequence of the EMT conservation is the von Laue condition \cite{von-Laue} \begin{equation}\label{Eq:stability} \int\limits_0^\infty \!\!{\rm d} r\;r^2p(r)=0 \;, \end{equation} which is a necessary (but not sufficient) condition for stability. Further noteworthy properties that follow from the conservation of the EMT are discussed in Ref.~\cite{Goeke:2007fp}. \section{EMT form factors for a free Dirac particle} \label{Sec-3:free-case} The simplest case is the theory of a free spin $\frac12$ fermion described by the Lagrangian \begin{equation}\label{Eq:Lagrangian-free-case} {\cal L} = \bar{\Psi}(i\fslash{\partial}-m)\Psi\,. \end{equation} For a free spin $\frac12$ particle Eq.~(\ref{Eq:EMT-from-gravity}) yields the EMT operator given by \begin{equation}\label{Eq:EMT-free} \hat{T}_{\mu\nu}(x) = \frac{1}{4}\,\bar\Psi(x)\,\biggl( i\gamma_\mu\overrightarrow{\partial}_{\!\nu} +i\gamma_\nu\overrightarrow{\partial}_{\!\mu} -i\gamma_\mu\overleftarrow{ \partial}_{\!\nu} -i\gamma_\nu\overleftarrow{ \partial}_{\!\mu} \biggr)\,\Psi(x)\;, \end{equation} where the arrows indicate on which fields the derivatives act. 
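The interplay of $s(r)$, $p(r)$, the von Laue condition~(\ref{Eq:stability}), and the $D$-term can be checked numerically. The sketch below is our own illustration with an arbitrarily assumed profile $s(r)=r^2e^{-r^2}$: the pressure follows from the EMT-conservation relation $p'(r)=-\frac23\,s'(r)-\frac2r\,s(r)$ \cite{Polyakov:2002yz}, and one verifies both Eq.~(\ref{Eq:stability}) and the equality of the two expressions for $D$ (up to the common factor $4\pi m$) that follow from Eqs.~(\ref{Eq:M2-J-d1}) and (\ref{Eq:T_ij-pressure-and-shear}), namely $\int\!{\rm d}r\,r^4 p(r)=-\frac{4}{15}\int\!{\rm d}r\,r^4 s(r)$:

```python
import math

# Toy shear-force profile (an arbitrary assumption, chosen for smoothness/decay):
def s(r):
    return r**2 * math.exp(-r**2)

# Pressure: analytic solution of p'(r) = -(2/3) s'(r) - (2/r) s(r) that
# vanishes at infinity, for this particular profile.
def p(r):
    return math.exp(-r**2) * (1.0 - (2.0/3.0) * r**2)

def integral(f, a, b, n=200000):
    """Composite trapezoid rule on [a, b]."""
    h = (b - a) / n
    return h * (0.5*f(a) + sum(f(a + i*h) for i in range(1, n)) + 0.5*f(b))

# von Laue condition: integral of r^2 p(r) over (0, infinity) must vanish.
von_laue = integral(lambda r: r**2 * p(r), 0.0, 12.0)

# Two equivalent expressions for the D-term, common factor 4*pi*m dropped:
D_from_p = integral(lambda r: r**4 * p(r), 0.0, 12.0)
D_from_s = -(4.0/15.0) * integral(lambda r: r**4 * s(r), 0.0, 12.0)

print(von_laue, D_from_p, D_from_s)   # ~ 0, and both ~ -sqrt(pi)/4 ~ -0.4431
```

For this profile the integrals are even analytic: $\int_0^\infty r^2 p\,{\rm d}r=\frac{\sqrt\pi}{4}-\frac23\cdot\frac{3\sqrt\pi}{8}=0$, while both $D$ expressions equal $-\sqrt\pi/4$.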
Evaluating the matrix elements yields \begin{eqnarray} \langle p^\prime| \hat T_{\mu\nu}(x) |p\rangle = \frac{1}{4}\,\bar u(p^\prime)\biggl[ \,\gamma_\mu^{ } p_\nu^{ } + p_\mu^{ } \gamma_\nu^{ } + \gamma_\mu^{ }p^\prime_\nu +p^\prime_\mu \gamma_\nu \,\biggr]u(p)\, e^{i(p^\prime-p)x}\,. \label{Eq:ff-of-EMT-free-1} \end{eqnarray} Exploiting the Gordon identity we can rewrite this result as \begin{equation} \langle p^\prime| \hat T_{\mu\nu}(x) |p\rangle = \bar u(p^\prime)\biggl[ \frac{P_\mu P_\nu}{m} + \frac12\; \frac{i(P_{\mu}\sigma_{\nu\rho}+P_{\nu}\sigma_{\mu\rho})\Delta^\rho}{2m} \biggr]u(p)\, e^{i(p^\prime-p)x}\,, \label{Eq:ff-of-EMT-free-2} \end{equation} from which we read off the predictions of the free Dirac theory for the EMT form factors, namely \begin{equation}\label{Eq:ff-of-EMT-free-3} M_2(t) = 1 \, , \;\;\; J(t) = \frac12 \, , \;\;\; D(t) = 0 \, . \end{equation} Several comments are in order. The form factors are constant functions of $t$ as expected for a free point-like particle, and we consistently recover the general constraints for $M_2(t)$ and $J(t)$ at $t=0$ in Eq.~(\ref{Eq:M2-J-d1}). The value of the $D$-term is therefore the only non-trivial result from this exercise: it is remarkable that it vanishes for a free point-like fermion \cite{Footnote-Majorana}. It is important to remark that the vanishing of the $D$-term in the free case was implicitly known in the literature, see e.g.\ \cite{Donoghue:2001qc} where quantum corrections to the metric were studied. Although a quantum gravity theory is not yet known, the leading quantum corrections can be computed from the known low energy structure of the theory \cite{Donoghue:1993eb}. These calculations are challenging \cite{Khriplovich:2002bt,BjerrumBohr:2002kt,Khriplovich:2004cx}. But the ``tree level'' results for EMT form factors were obtained unambiguously already in \cite{Donoghue:2001qc}. Our free field calculation, Eq.~(\ref{Eq:ff-of-EMT-free-3}), agrees with Ref.~\cite{Donoghue:2001qc}. 
The loop corrections to the Reissner-Nordstr\"om and Kerr-Newman metrics \cite{Khriplovich:2002bt,BjerrumBohr:2002kt,Khriplovich:2004cx} show how (QED, gravity) interactions generate quantum long-range contributions to the stress tensor. A consistent description of the $D$-term requires, however, the full picture of the stress tensor including short-distance contributions which cancel exactly the long-distance ones in Eq.~(\ref{Eq:stability}). The results of these works therefore do not allow us to gain insights on how much these corrections contribute to the $D$-terms of elementary (and charged) fermions. \section{\boldmath Heuristic consistency argument why $\bm{D=0}$ for a free fermion} \label{Sec-4:heuristic} The vanishing of the $D$-term of a free fermion can be made plausible on the basis of a heuristic argument which was already helpful in discussing the EMT densities in the bosonic case \cite{Hudson-PS-I}. The argument explores the 3D-density framework which strictly speaking requires the particle to be heavy such that relativistic corrections can be neglected. The argument is based on two assumptions: (i) form factors are $t$-independent constants in the free theory case, and (ii) energy density is formally given by $T_{00}(\vec{r}\,)= m\;\delta^{(3)}(\vec{r}\,)$ for a heavy particle \cite{Footnote-heavy-mass}. Per assumption (i) we can replace the form factors in Eq.~(\ref{Eq:ff-M2}) by their values at zero-momentum transfer. Next, we notice that the result in the square brackets in the following equation must be zero to comply with assumption (ii), \begin{equation} \frac{1}{m}\,\int\!{\rm d}^3{\bf r}\, e^{-i {\bf\Delta}{\bf r}} \, T_{00}({\bf r}) = M_2(0)-\frac{t}{4m^2} \underbrace{\left[M_2(0)-2 J(0)+D(0) \right]}_{ \displaystyle\stackrel{!!}{=} 0} \stackrel{!} = 1 \, . \label{Eq:ff-M2a} \end{equation} With the constraints in Eq.~(\ref{Eq:M2-J-d1}) it is clear that $M_2(0)-2 J(0)=0$. 
From this it then immediately follows that the $D$-term must vanish for a point-like particle for consistency reasons. This is nothing but a heuristic argument. But it is nevertheless helpful in making it plausible why the $D$-term of a free fermion vanishes. From this argument it is also clear why in the interacting case one may in general encounter a non-zero $D$-term: when interactions are present form factors can no longer be expected to be $t$-independent constants, and $D(t)$ in general does not need to be zero. An extended internal structure implies a non-zero $D$-term along the same lines: now $T_{00}(\vec{r}\,)\neq m\;\delta^{(3)}(\vec{r}\,)$ and form factors exhibit a generic $t$-dependence, e.g., $M_2(t)=1+\frac16\,\langle r^2\rangle \,t + {\cal O}(t^2)$ \cite{Goeke:2007fp}. In App.~\ref{App-A} we include another heuristic argument for why the $D$-term vanishes for elementary fermions but not for elementary bosons, based on a simple analysis of the structure of the Lagrangians. \newpage \section{\boldmath Emergence of the $D$-term from bag boundary forces} \label{Sec-5:interaction-bag} The bag model describes one or several non-interacting fermions confined inside a ``bag'' which, in its rest frame, is a spherical region of radius $R$ carrying the energy density $B>0$. If $N_c=3$ quarks or a $\bar qq$-pair are placed inside the bag in a color-singlet state, this yields the popular model of hadrons with confinement simulated by the bag boundary condition \cite{Chodos:1974je}. The Lagrangian of the bag model can be written as \cite{Thomas:2001kw} \begin{equation}\label{Eq:Lagrangian-bag} {\cal L} = \biggl(\bar\psi\, (i\fslash{\partial}-m)\psi-B\biggr)\,\Theta_V + \frac12\,\bar\psi\,\psi\:\eta^\mu\partial_\mu\Theta_V \;, \end{equation} where $\Theta_V=\Theta(R-r)$, $\delta_S=\delta(R-r)$, $\eta^\mu=(0,\vec{e}_r)$, $\vec{e}_r = \vec{x}/r$, $r=|\vec{x}\,|$ in the bag rest frame. The indices $V$ and $S$ denote respectively the volume and the surface of the bag. 
The boundary condition for the fields is equivalent to the statement that there is no energy-momentum flow out of the bag, i.e.\ $\eta_\mu T^{\mu\nu}(t,\vec{r})=0$ for $\vec{r}\in S$. The starting point is as follows. If no bag boundary condition is present, i.e.\ in the limit $R\to\infty$ in Eq.~(\ref{Eq:Lagrangian-bag}), we deal with the free Lagrangian (\ref{Eq:Lagrangian-free-case}) with an additive constant $B$ which is irrelevant and can be discarded. In such a free theory the $D$-term is zero, as we have shown in Sec.~\ref{Sec-3:free-case}. Next let us discuss what happens if we solve the theory with the bag radius $R$ kept finite. This means we effectively introduce an interaction acting on the otherwise free fermion. We will see that now a non-zero $D$-term emerges. Below we quote only the main steps needed in our context. The details of this calculation will be reported elsewhere \cite{Neubelt-et-al}. The equations of motion of the theory (\ref{Eq:Lagrangian-bag}) are $(i\fslash{\partial}-m)\psi = 0$ for $r<R$, while at the surface $\vec{x}\in S$ the linear boundary condition $i\fslash{\eta}\,\psi = \psi$ and the non-linear boundary condition $\frac12\,\eta_\mu\partial^\mu(\bar\psi\psi) =- B$ hold. The ground state solution has positive parity and is given by the wave-function \begin{equation}\label{Eq:bag-wave-function} \psi(t,\vec{x}) = e^{-i\omega t/R} \;\frac{A}{\sqrt{4\pi}}\, \left(\begin{array}{l} \alpha_+j_0(\omega \, r/R)\,\chi_s \phantom{\displaystyle\frac11}\\ \alpha_-j_1(\omega \, r/R)\,i\vec{\sigma}\vec{e}_r\chi_s \end{array}\right) \, , \;\;\; \end{equation} where $\alpha_\pm=\sqrt{1\pm mR/\Omega}$ and $\Omega=\sqrt{\omega ^2+m^2R^2}$, $\omega $ denotes the lowest solution of the equation $\omega = (1-mR-\Omega)\,\tan\omega$, $\sigma^i$ are Pauli matrices, $\chi_s$ are two-component spinors. The normalization $\int\!{\rm d}^3x\;\psi^\dag(\vec{x},t)\,\psi(\vec{x},t) = 1$ fixes the constant $A$. 
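The lowest bag frequency can be found numerically. The sketch below is our own illustration for the massless case $mR=0$, where $\Omega=\omega$ and the quoted condition $\omega=(1-mR-\Omega)\tan\omega$ is equivalent to $\omega\cos\omega=(1-\omega)\sin\omega$, a form free of the $\tan$ singularity at $\pi/2$:

```python
import math

def bag_condition(omega, mR=0.0):
    """omega*cos(omega) - (1 - mR - Omega)*sin(omega), Omega = sqrt(omega^2 + mR^2).

    A zero of this function is equivalent to omega = (1 - mR - Omega)*tan(omega).
    """
    Omega = math.sqrt(omega**2 + mR**2)
    return omega * math.cos(omega) - (1.0 - mR - Omega) * math.sin(omega)

def lowest_omega(mR=0.0, lo=1.6, hi=3.0, tol=1e-12):
    # Plain bisection; for mR = 0 the lowest root lies in (pi/2, pi).
    flo = bag_condition(lo, mR)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        fmid = bag_condition(mid, mR)
        if flo * fmid <= 0.0:
            hi = mid
        else:
            lo, flo = mid, fmid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

omega0 = lowest_omega()
print(omega0)   # ~ 2.0428, the familiar lowest massless bag frequency
```

The value $\omega\approx 2.04$ obtained here is the one quoted below for massless quarks.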
If $N_c$ fermions are placed in the bag the $D$-term is given by \begin{equation}\label{Eq:D-term-bag} D= \frac13\,M\,N_c\,\frac{A^2 R^4\!}{\omega^4}\;\alpha_+\alpha_- \biggl( -\frac{\omega ^3}{3} +\frac{5}{4}\,(\omega-\sin\omega \,\cos\omega) -\frac{\omega }{2}\,\sin^2\omega \biggr)\,, \end{equation} where $M=N_c\Omega/R+\frac43\,\pi\,B\,R^3$ is the mass of the system. One can show that $D<0$ always holds in this model \cite{Neubelt-et-al}. For $N_c=3$ colors and assuming the fermions to be massless quarks (in which case $\omega=2.04\dots$) one obtains $D=-1.145$ in agreement with the numerical bag model calculation of nucleon GPDs and EMT form factors from Ref.~\cite{Ji:1997gm}. As an application of Eq.~(\ref{Eq:D-term-bag}) it is insightful to consider the limit $mR\to\infty$ where $\omega\to\pi$, and the $D$-term becomes \begin{equation}\label{Eq:D-term-bag-non-rel} D = N_c^2\,\frac{(-4\,\pi^2+15)}{45} \,. \end{equation} This result can be interpreted in two ways. For the first interpretation we may assume that $m$ is fixed and $R$ becomes much larger \cite{Footnote:limit-R} than the Compton wavelength of the particle, $R\gg 1/m$. This means the ``interaction'' decreases, as the confined particle(s) can occupy an increasing volume with the boundary being ``moved'' further and further away. However, no matter how far away we move the boundary \cite{Footnote:limit-R}: {\it some} interaction remains, and generates a non-zero $D$-term. For the second interpretation we may assume a fixed $R$ and $m\to\infty$. This is known as the non-relativistic limit, in which $\alpha_-\to 0$ and the lower component of the spinor in (\ref{Eq:bag-wave-function}) vanishes. The $D$-term in Eq.~(\ref{Eq:D-term-bag}) is proportional to $\alpha_-$ which vanishes, and to the mass of the system which behaves as $M\to N_cm$ for $m\to\infty$. The product $M\alpha_-$ is finite in the limit $m\to\infty$. 
As a result the $D$-term assumes a finite value as quoted in Eq.~(\ref{Eq:D-term-bag-non-rel}). This result demonstrates that also non-relativistic systems have a $D$-term, i.e.\ this property is not a relativistic effect. For a detailed discussion of the $D$-term in the bag model we refer to \cite{Neubelt-et-al}. One virtue of the bag model is its transparency, which we explored here to learn insightful lessons about the $D$-term. One caveat is that it does not comply with chiral symmetry which is incorporated in the model discussed next. \newpage \section{Chiral interactions and the $\bm{D}$-term of nucleon} \label{Sec-6:interaction-CQSM} The spontaneous breaking of chiral symmetry is the dominant feature of strong interactions in the non-perturbative low-energy regime. A theoretically consistent and phenomenologically successful model of baryons based on chiral symmetry breaking is the chiral quark-soliton model \cite{Diakonov:1987ty} defined in the SU(2) flavor-sector by \cite{Diakonov:1984tw,Dhar:1985gh} \begin{equation}\label{Eq:CQSM} {\cal L}_{\rm eff}=\bar{\Psi}\,(i\fslash{\partial\,}-M\,U^{\gamma_5})\Psi\,, \;\;\; U^{\gamma_5}=\exp(i\gamma_5\tau^a\pi^a/F_\pi) \end{equation} where $F_\pi=93\,{\rm MeV}$ denotes the pion decay constant. Besides the emergence of Goldstone bosons, another consequence of chiral symmetry breaking is the dynamically generated ``constituent'' quark mass $M\simeq 350\,{\rm MeV}$. The effective theory (\ref{Eq:CQSM}) was derived from the instanton model of the QCD vacuum \cite{Diakonov:1983hh,Diakonov:1985eg} which provides a microscopic picture of the dynamical breaking of chiral symmetry, see \cite{Diakonov:2000pa} for reviews. In order to solve the strongly coupled theory (\ref{Eq:CQSM}) (the coupling constant of quark and pion field is $M/F_\pi\sim 3.8$) a non-perturbative method based on the limit of a large number of colors $N_c$ is used. 
In this limit the functional integration over $U$-fields in Eq.~(\ref{Eq:CQSM}) is solved in the saddle-point approximation by evaluating the model expressions at the static solitonic field $U(\vec{x})$ and integrating over the zero-modes of the soliton solution. The spectrum of the Hamiltonian of the effective theory (\ref{Eq:CQSM}), $H=-i\gamma^0\gamma^k\partial_k+M\gamma^0U^{\gamma_5}(\vec{x})$, contains continua of positive energies $E>M$ and negative energies $E<-M$, and a discrete level with an energy $-M < E_{\rm lev} < M$. The nucleon state is obtained by occupying the discrete level and the states of negative continuum and subtracting the free negative continuum (``vacuum subtraction''). The solitonic field $U(\vec{x})$ is determined from a self-consistent variational procedure which minimizes the soliton energy. In the physical situation the soliton size is $R_{\rm sol} \sim M^{-1}$ \cite{Diakonov:1987ty}. GPDs and EMT form factors including the $D$-term were studied in this model \cite{Petrov:1998kf,Schweitzer:2002nm, Ossmann:2004bp,Wakamatsu:2007uc,Goeke:2007fp,Goeke:2007fq}. As a demonstration of the consistency of this effective chiral theory let us mention that in this model the GPDs satisfy polynomiality \cite{Schweitzer:2002nm}, the Ji sum rule is valid \cite{Ossmann:2004bp}, the von Laue condition holds, the model correctly reproduces the leading non-analytic terms of the EMT form factors \cite{Goeke:2007fp}, and agrees with available lattice QCD data \cite{Goeke:2007fq}. We will now show that the $D$-term vanishes when one ``switches off'' the chiral interactions in this model. This can be formally done by replacing $U\to 1$ in Eq.~(\ref{Eq:CQSM}) which yields the free theory. One way to practically implement this limit is to consider the formal limit $M\,R_{\rm sol}\to\infty$. As the soliton size increases the discrete level energy decreases and approaches the negative continuum \cite{Diakonov:1987ty}. 
Since in this limit the spatial extension of the solitonic field $U(\vec{x})$ grows, its gradients $\nabla U(\vec{x})$ decrease. This allows one to expand model expressions in terms of gradients of the chiral field. The expression for the $D$-term valid in such a large soliton expansion was derived in \cite{Schweitzer:2002nm} and is given by \begin{equation}\label{Eq:d1-gradient-expansion} D = -\,F_\pi^2 M_N\int\!{\rm d}^3x\,P_2(\cos\vartheta)\, \vec{x}^{\,2}\,{\rm tr}_F[\nabla^3 U][\nabla^3 U^\dag]+\dots \end{equation} where ${\rm tr}_F$ is the trace over flavor indices, $M_N$ denotes the nucleon mass, and the dots indicate higher-order derivatives. Notice that the expression (\ref{Eq:d1-gradient-expansion}) is quasi model-independent: it is the leading contribution in the chiral expansion of the $D$-term from which one can derive the leading non-analytic terms \cite{Goeke:2007fq}. The second Legendre polynomial reflects that the $D$-term is related to the traceless part of the stress tensor \cite{Polyakov:2002yz}. After these preparations we can now discuss what happens in the formal limit when we ``switch off'' the chiral interactions and $U\to 1$. In this limit all gradients vanish in Eq.~(\ref{Eq:d1-gradient-expansion}) and we recover $D=0$, which is the free field theory prediction obtained in Eq.~(\ref{Eq:ff-of-EMT-free-3}). This shows that the $D$-term in the chiral quark soliton model is due to the chiral interactions which define and characterize this model. Let us stress that the above discussion applies only to the formal limit $U\to 1$ which we implemented by means of the large soliton expansion. Only in this limit is it justified to expand model expressions in powers of the derivatives of the chiral field. In the physical situation the soliton size is such that $M\,R_{\rm sol}\sim 1$ and no such expansion is allowed (though it can be used to derive chiral leading non-analytic contributions, and it may give useful rough estimates). 
In order to obtain in the physical situation reliable model predictions for the $D$-term, and a pressure satisfying the von Laue condition (\ref{Eq:stability}), it is necessary to evaluate numerically the full model expression \cite{Goeke:2007fp}. \newpage \section{Conclusions} \label{Sec-7:conclusions} The $D$-term of a free non-interacting fermion vanishes. This is a simple prediction of the free Dirac equation which is, in principle, analogous to the prediction $g=2$ for the gyromagnetic ratio of a charged point-like fermion. This result is remarkable for several reasons and has interesting implications. The prediction of a vanishing $D$-term from the free Dirac equation should be contrasted with the bosonic case. The free Klein-Gordon equation predicts an intrinsic non-zero $D$-term already for free and non-interacting bosons. When interactions are introduced in bosonic theories, the value of $D$ is in general affected and, depending on the theory, the effect can be sizable \cite{Hudson-PS-I}. However, in the fermionic case interactions do not modify the $D$-term, but {\it generate} it. In other words, the $D$-term of a spin-$\frac12$ particle is entirely of dynamical origin. We have provided a heuristic consistency argument which makes it plausible why the $D$-term of a free point-like spin $\frac12$ particle should vanish. While not a rigorous derivation, this argument was already successfully applied to explain why a free point-like boson must have $D=-1$ \cite{Hudson-PS-I}. We have explored two dynamical models of the nucleon to illustrate how the $D$-term is generated in interacting systems. In the bag model we have shown how a non-zero $D$-term emerges when we ``switch on'' interactions which in this model are formulated in terms of boundary conditions which confine otherwise free fermions. 
We used also the chiral quark soliton model where we have shown how the $D$-term vanishes when the strongly coupled chiral interactions in that model are ``switched off.'' These are simple models of the nucleon, but these results solidify our conclusions: in a fermionic system the $D$-term is generated by dynamics, it arises entirely from interactions. The calculations of the nucleon $D$-term in models, lattice QCD, or dispersion relations \cite{Ji:1997gm,Petrov:1998kf,Schweitzer:2002nm,Ossmann:2004bp, Wakamatsu:2007uc,Goeke:2007fp,Goeke:2007fq,Cebulla:2007ei,Kim:2012ts, Hagler:2003jd,Gockeler:2003jf,Negele:2004iu,Bratt:2010jn,Pasquini:2014vua} give therefore insights which are completely due to the underlying (effective, model, chiral, QCD) dynamics. With its relation to the internal forces and the stress tensor \cite{Polyakov:2002yz} the $D$-term emerges therefore as a valuable window to gain new insights on the structure of composite particles, and especially the QCD dynamics inside the nucleon. In any case, all presently known matter is composed of what we consider elementary fermions, which underlines the importance of studying this interesting particle property in more detail. Knowledge of EMT form factors can be applied to the spectroscopy of the hidden-charm pentaquarks observed at LHCb \cite{Eides:2015dtr,Perevalova:2016dln}. Also the EMT form factors of mesons can be inferred from data, and this information may help to discriminate usual from exotic mesons \cite{Polyakov:1998ze,Kawamura:2013wfa}. It will be very exciting to learn about the $D$-term from lattice QCD calculations and experiment, and the perspectives are good. After the first, vague and model-dependent glimpses of the nucleon $D$-term from the HERMES experiment \cite{Ellinghaus:2002bq} one may expect more quantitative insights from experiments at Jefferson Lab \cite{JLab,Jo:2015ema}, COMPASS at CERN \cite{Joerg:2016hhs}, and the envisioned future Electron-Ion-Collider \cite{Accardi:2012qut}. 
\ \ \noindent{\bf Acknowledgments.} We would like to thank C\'edric Lorc\'e and Maxim Polyakov for helpful discussions. This work was supported in part by the National Science Foundation (Contract No.\ 1406298).
\section{Introduction} An \emph{intersection graph} of geometric objects has one vertex per object and an edge between every pair of vertices corresponding to intersecting objects. Intersection graphs for many different families of geometric objects have been studied due to their practical applications and rich structural properties~\cite{McKee1999, Brandstadt1999}. Among the most studied ones are \emph{disk graphs}, which are intersection graphs of closed disks in the plane, and their special case, \emph{unit disk graphs}, where all the radii are the same. Their applications range from sensor networks to map labeling~\cite{DBLP:conf/waoa/Fishkin03}, and many standard optimization problems have been studied on disk graphs, see for example~\cite{EJvL2009} and references therein. In this paper, we study \textsc{Maximum Clique} on general disk graphs. \paragraph*{Known results.} Recognizing unit disk graphs is NP-hard \cite{Breu98}, and even $\exists \mathbb{R}$-complete~\cite{Kang12}. Clark et al.~\cite{Clark90} gave a polynomial-time algorithm for \textsc{Maximum Clique}\xspace on unit disk graphs with a geometric representation. The core idea of their algorithm can actually be adapted so that the geometric representation is no longer needed \cite{Raghavan03}. The complexity of the problem on general disk graphs is unfortunately still unknown. Using the fact that the transversal number for disks is $4$, Amb\"uhl and Wagner~\cite{Ambuhl05} gave a simple $2$-approximation algorithm for \textsc{Maximum Clique}\xspace on general disk graphs. They also showed the problem to be APX-hard on intersection graphs of ellipses and gave a $9\rho^2$-approximation algorithm for filled ellipses of aspect ratio at most $\rho$. Since then, the problem has proved to be elusive with no new positive or negative results. 
The question on the complexity and further approximability of \textsc{Maximum Clique}\xspace on general disk graphs is considered as folklore~\cite{bang2006}, but was also explicitly mentioned as an open problem by Fishkin~\cite{DBLP:conf/waoa/Fishkin03}, Amb\"uhl and Wagner~\cite{Ambuhl05} and Cabello~\cite{CabelloOpen,Cabello2015}. A closely related problem is \textsc{Maximum Independent Set}, which is known to be W[1]-hard (even on unit disk graphs \cite{Marx08}) and to admit a subexponential exact algorithm~\cite{AlberF04} and PTAS~\cite{Erlebach2005,Chan2003} on disk graphs. \paragraph*{Results and organization.} In Section~\ref{sec:structural}, we mainly prove that the disjoint union of two odd cycles is not the complement of a disk graph. To the best of our knowledge, this is the first structural property that general disk graphs do not inherit from strings or from convex objects. We provide an infinite family of forbidden induced subgraphs, an analogue to the recent work of Atminas and Zamaraev on unit disk graphs~\cite{Atminas16}. In Section~\ref{sec:algorithms}, we show how to use this structural result to approximate and solve \textsc{Maximum Independent Set}\xspace on complements of disk graphs, hence \textsc{Maximum Clique}\xspace on disk graphs. More precisely, we present the first quasi-polynomial-time approximation scheme (QPTAS) and subexponential-time algorithm for \textsc{Maximum Clique}\xspace on disk graphs, even without the geometric representation of the graph. In Section~\ref{sec:gen&lim}, we highlight how those algorithms contrast with the situation for ellipses or triangles, where there is a constant $\alpha>1$ for which an $\alpha$-approximation running in subexponential time is highly unlikely (in particular, ruling out at once QPTAS \emph{and} subexponential-time algorithm). We conclude in Section~\ref{sec:perspectives} with a few open questions. 
\paragraph*{Definitions and notations.} For two integers $i \leqslant j$, we denote by $[i,j]$ the set of integers $\{i,i+1,\ldots, j-1, j\}$. For a positive integer $i$, we denote by $[i]$ the set of integers $[1,i]$. If $S$ is a subset of vertices of a graph, we denote by $N(S)$ the open neighborhood of $S$ and by $N[S]$ the set $N(S) \cup S$. The \emph{2-subdivision} of a graph $G$ is the graph $H$ obtained by subdividing each edge of $G$ exactly twice. If $G$ has $n$ vertices and $m$ edges, then $H$ has $n+2m$ vertices and $3m$ edges. The \emph{co-2-subdivision} of $G$ is the complement of $H$. Hence it has $n+2m$ vertices and ${n+2m \choose 2} - 3m$ edges. The \emph{co-degree} of a graph is the maximum degree of its complement. A \emph{co-disk} is a graph that is the complement of a disk graph. For two distinct points $x$ and $y$ in the plane, we denote by $\ell(x,y)$ the unique line going through $x$ and $y$, and by $\text{seg}(x,y)$ the closed straight-line segment whose endpoints are $x$ and $y$. If $s$ is a segment with positive length, then we denote by $\ell(s)$ the unique line containing $s$. We denote by $d(x,y)$ the Euclidean distance between points $x$ and $y$. We will often define disks and elliptical disks by their boundary, i.e., circles and ellipses, and also use the following basic facts. There are exactly two circles that pass through a given point with a given tangent at this point and a given radius; one if we further specify on which side of the tangent the circle is. There is exactly one circle which passes through two points with a given tangent at one of the two points, provided the other point is \emph{not} on this tangent. Finally, there exists one (not necessarily unique) ellipse which passes through two given points with two given tangents at those points. The \emph{Exponential Time Hypothesis} (ETH) is a conjecture by Impagliazzo et al. 
asserting that there is no $2^{o(n)}$-time algorithm for \textsc{3-SAT} on instances with $n$ variables \cite{ImpagliazzoETH}. The ETH, together with the sparsification lemma \cite{ImpagliazzoETH}, even implies that there is no $2^{o(n+m)}$-time algorithm solving \textsc{3-SAT}. \section{Disk graphs with co-degree 2}\label{sec:structural} In this section, we fully characterize the degree-2 complements of disk graphs. We show the following: \begin{theorem}\label{thm:main-structural} A disjoint union of paths and cycles is the complement of a disk graph if and only if the number of odd cycles is at most one. \end{theorem} We split this theorem into two parts. In the first one, Section~\ref{subsec:notco-disk}, we show that the union of two disjoint odd cycles is not the complement of a disk graph. This is the part that will be algorithmically useful. As disk graphs are closed under taking induced subgraphs, it implies that in the complement of a disk graph two vertex-disjoint odd cycles have to be linked by at least one edge. This will turn out useful when solving \textsc{Maximum Independent Set}\xspace on the complement of the graph (to solve \textsc{Maximum Clique}\xspace on the original graph). In the second part, Section~\ref{subsec:co-disk}, we show how to represent the complement of the disjoint union of even cycles and exactly one odd cycle. Although this result is not needed for the forthcoming algorithmic section, it nicely highlights the singular role that parity plays and exposes the complete set of disk graphs of co-degree 2. \subsection{The disjoint union of two odd cycles is not co-disk} \label{subsec:notco-disk} We call \emph{positive distance} between two non-intersecting disks the minimum of $d(x,y)$ where $x$ is in one disk and $y$ is in the other. If the disks are centered at $c_1$ and $c_2$ with radius $r_1$ and $r_2$, respectively, then this value is $d(c_1,c_2)-r_1-r_2$. 
We call \emph{negative distance} between two intersecting disks the length of the straight-line segment defined as the intersection of three objects: the two disks and the line joining their center. This value is $r_1+r_2-d(c_1,c_2)$, which is positive. We call \emph{proper representation} a disk representation where every edge is witnessed by a proper intersection of the two corresponding disks, i.e., the interiors of the two disks intersect. It is easy to transform a disk representation into a proper representation (of the same graph). \begin{lemma}\label{lem:proper} If a graph has a disk representation, then it has a proper representation. \end{lemma} \begin{proof} If two disks intersect non-properly, we increase the radius of one of them by $\varepsilon/2$ where $\varepsilon$ is the smallest positive distance between two disks. \end{proof} In order to avoid discussing the corner case of three aligned centers in a disk representation, we show that such a configuration is never needed to represent a disk graph. \begin{lemma}\label{lem:generalPosition} If a graph has a disk representation, it has a proper representation where no three centers are aligned. \end{lemma} \begin{proof} By Lemma~\ref{lem:proper}, we have or obtain a proper representation. Let $\varepsilon$ be the minimum between the smallest positive distance and the smallest negative distance. As the representation is proper, $\varepsilon > 0$. If three centers are aligned, we move one of them, within the ball of radius $\varepsilon/2$ centered at it, to any point not lying on a line defined by two other centers. This decreases the number of triples of aligned centers by at least one, and can be repeated until no three centers are aligned. \end{proof} From now on, we assume that every disk representation is proper and without three aligned centers. 
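These notions are easy to make concrete. The following sketch is our own illustration (the three disks are arbitrary assumptions): it implements the positive distance and the intersection predicates, and applies the perturbation from the proof of Lemma~\ref{lem:proper} to a tangent (non-proper) intersection, checking that the intersection graph is unchanged while the edge becomes proper:

```python
import math

def dist(c1, c2):
    return math.hypot(c1[0] - c2[0], c1[1] - c2[1])

def positive_distance(d1, d2):
    """d(c1,c2) - r1 - r2 for two non-intersecting (centre, radius) disks."""
    (c1, r1), (c2, r2) = d1, d2
    return dist(c1, c2) - r1 - r2

def properly_intersect(d1, d2):
    (c1, r1), (c2, r2) = d1, d2
    return dist(c1, c2) < r1 + r2          # interiors overlap

def intersect(d1, d2):
    (c1, r1), (c2, r2) = d1, d2
    return dist(c1, c2) <= r1 + r2         # tangency counts as an edge

# Assumed toy representation: disks A and B are tangent, C is disjoint from both.
A, B, C = ((0.0, 0.0), 1.0), ((2.0, 0.0), 1.0), ((5.0, 0.0), 1.0)
assert intersect(A, B) and not properly_intersect(A, B)

# Perturbation of the proof: eps = smallest positive distance among disjoint pairs.
eps = min(positive_distance(A, C), positive_distance(B, C))
A2 = (A[0], A[1] + eps / 2)                # grow one radius by eps/2

# Same intersection graph, but now every edge is a proper intersection.
assert properly_intersect(A2, B)
assert not intersect(A2, C) and not intersect(B, C)
```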
We show the folklore result that in a representation of a $K_{2,2}$ that places the four centers in convex position, both non-edges have to be \emph{diagonal}. \begin{lemma}\label{lem:cotwoKtwo} In a disk representation of $K_{2,2}$ with the four centers in convex position, the non-edges are between vertices corresponding to opposite centers in the quadrangle. \end{lemma} \begin{proof} Let $c_1$ and $c_2$ be the centers of one non-edge, and $c_3$ and $c_4$ the centers of the other non-edge. Let $r_i$ be the radius associated to center $c_i$ for $i \in [4]$. Since these pairs are non-edges, $d(c_1,c_2)>r_1+r_2$ and $d(c_3,c_4)>r_3+r_4$ (see Figure~\ref{fig:k22}). Assume, for the sake of contradiction, that $c_1$ and $c_2$ are consecutive on the convex hull formed by $\{c_1, c_2, c_3, c_4\}$, and say, without loss of generality, that the order is $c_1, c_2, c_3, c_4$. Let $c$ be the intersection of $\text{seg}(c_1,c_3)$ and $\text{seg}(c_2,c_4)$. It holds that $d(c_1,c_3) + d(c_2,c_4) = d(c_1,c) + d(c,c_3) + d(c_2,c) + d(c,c_4) = (d(c_1,c)+d(c,c_2)) + (d(c_3,c)+d(c,c_4)) > d(c_1,c_2) + d(c_3,c_4) > r_1 + r_2 + r_3 + r_4 = (r_1+r_3)+(r_2+r_4)$. This implies that $d(c_1,c_3) > r_1+r_3$ or $d(c_2,c_4) > r_2+r_4$, contradicting the fact that $\{c_1,c_3\}$ and $\{c_2,c_4\}$ correspond to edges.
\begin{figure} \centering \begin{tikzpicture}[scale=0.25, dot/.style={fill,circle,inner sep=-0.02cm} ] \draw[very thin,fill=red, fill opacity=0.2] (11.8405399558, 12.6637388353) circle (3.82299906735); \node[dot] at (11.8405399558, 12.6637388353) {}; \node at (11.8405399558, 12.6637388353 - 1.2) {$c_1$}; \draw[very thin,fill=red, fill opacity=0.2] (18.7026380772, 18.0821427855) circle (4.22302853751); \node[dot] at (18.7026380772, 18.0821427855) {}; \node at (18.7026380772, 18.0821427855 + 1.2) {$c_2$}; \draw[very thin,fill=red, fill opacity=0.2] (11.5839455246, 16.9641945052) circle (3.96909254758); \node[dot] at (11.5839455246, 16.9641945052) {}; \node at (11.5839455246, 16.9641945052 + 1.2) {$c_3$}; \draw[very thin,fill=red, fill opacity=0.2] (18.4743856348, 12.97903103) circle (3.46657454294); \node[dot] at (18.4743856348, 12.97903103) {}; \node at (18.4743856348, 12.97903103 - 1.2) {$c_4$}; \end{tikzpicture} \caption{Disk realization of a $K_{2,2}$. As the centers are positioned, it is impossible that the two non-edges are between the disks 2 and 3, and between the disks 1 and 4 (or between the disks 1 and 3, and between the disks 2 and 4).}\label{fig:k22} \end{figure} \end{proof} We derive a useful consequence of the previous lemma, phrased in terms of intersections of lines and segments. \begin{corollary}\label{cor:intersect} In any disk representation of $K_{2,2}$ with centers $c_1, c_2, c_3, c_4$ with the two non-edges between the vertices corresponding to $c_1$ and $c_2$, and between $c_3$ and $c_4$, it should be that $\ell(c_1,c_2)$ intersects $\text{seg}(c_3,c_4)$ or $\ell(c_3,c_4)$ intersects $\text{seg}(c_1,c_2)$. \end{corollary} \begin{proof} Either the disk representation has the four centers in convex position. Then, by Lemma~\ref{lem:cotwoKtwo}, $\text{seg}(c_1,c_2)$ and $\text{seg}(c_3,c_4)$ are the diagonals of a convex quadrangle. 
Hence they intersect, and \emph{a fortiori}, $\ell(c_1,c_2)$ intersects $\text{seg}(c_3,c_4)$ ($\ell(c_3,c_4)$ intersects $\text{seg}(c_1,c_2)$, too). Or the disk representation has one center, say without loss of generality, $c_1$, in the interior of the triangle formed by the other three centers. In this case, $\ell(c_1,c_2)$ intersects $\text{seg}(c_3,c_4)$. If instead a center in $\{c_3,c_4\}$ is in the interior of the triangle formed by the other centers, then $\ell(c_3,c_4)$ intersects $\text{seg}(c_1,c_2)$. \end{proof} We can now prove the main result of this section thanks to the previous corollary, parity arguments, and \emph{some elementary properties of closed plane curves}, namely Property I and Property III of the eponymous paper \cite{Tait1877}. \begin{theorem}\label{thm:main-structural-non-disk} The complement of the disjoint union of two odd cycles is not a disk graph. \end{theorem} \begin{proof} Let $s$ and $t$ be two positive integers and $G=\overline{C_{2s+1} + C_{2t+1}}$ the complement of the disjoint union of a cycle of length $2s+1$ and a cycle of length $2t+1$. Assume that $G$ is a disk graph. Let $\mathcal C_1$ (resp. $\mathcal C_2$) be the cycle embedded in the plane formed by $2s+1$ (resp. $2t+1$) straight-line segments joining the consecutive centers of disks along the first (resp. second) cycle. Observe that the segments of those two cycles correspond to the non-edges of $G$. We number the segments of $\mathcal C_1$ from $S_1$ to $S_{2s+1}$, and the segments of $\mathcal C_2$, from $S'_1$ to $S'_{2t+1}$. For the $i$-th segment $S_i$ of $\mathcal C_1$, let $a_i$ be the number of segments of $\mathcal C_2$ intersected by the line $\ell(S_i)$ prolonging $S_i$, let $b_i$ be the number of segments $S'_j$ of $\mathcal C_2$ such that the prolonging line $\ell(S'_j)$ intersects $S_i$, and let $c_i$ be the number of segments of $\mathcal C_2$ intersecting $S_i$. For the second cycle, we define similarly $a'_j$, $b'_j$, $c'_j$. 
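The quantities $a_i$, $b_i$, $c_i$ can be computed with standard orientation tests, and the parity facts used in the proof (each $a_i$ is even, $\sum_i c_i$ is even, and $\sum_i b_i = \sum_j a'_j$) can be checked numerically. A sketch in Python, assuming the cycles are given as lists of vertices in general position (function names are ours):

```python
def orient(p, q, r):
    # Sign of the cross product (q - p) x (r - p):
    # +1 if r is left of the oriented line pq, -1 if right.
    v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (v > 0) - (v < 0)

def line_cuts_segment(a, b, c, d):
    # Does the full line ell(a, b) separate c from d (general position)?
    return orient(a, b, c) * orient(a, b, d) < 0

def segments_cross(a, b, c, d):
    # Proper crossing of seg(a, b) and seg(c, d): each supporting line
    # separates the endpoints of the other segment.
    return line_cuts_segment(a, b, c, d) and line_cuts_segment(c, d, a, b)

def abc_counts(cycle1, cycle2):
    # For the i-th segment S_i of cycle1: a_i = segments of cycle2 cut by
    # the line prolonging S_i, b_i = segments S'_j of cycle2 whose
    # prolonging line cuts S_i, c_i = segments of cycle2 crossing S_i.
    segs1 = list(zip(cycle1, cycle1[1:] + cycle1[:1]))
    segs2 = list(zip(cycle2, cycle2[1:] + cycle2[:1]))
    a = [sum(line_cuts_segment(*s, *t) for t in segs2) for s in segs1]
    b = [sum(line_cuts_segment(*t, *s) for t in segs2) for s in segs1]
    c = [sum(segments_cross(*s, *t) for t in segs2) for s in segs1]
    return a, b, c
```

Each $a_i$ is even because a line in general position crosses a closed polygonal curve an even number of times, and $\sum_i b_i = \sum_j a'_j$ because both sums count the same pairs $(i,j)$.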
The quantity $a_i+b_i-c_i$ counts the number of segments of $\mathcal C_2$ which can possibly represent a $K_{2,2}$ with $S_i$ according to Corollary~\ref{cor:intersect}. As we assumed that $G$ is a disk graph, $a_i+b_i-c_i = 2t+1$ for every $i \in [2s+1]$: otherwise there would be at least one segment $S'_j$ of $\mathcal C_2$ such that $\ell(S_i)$ does not intersect $S'_j$ \emph{and} $\ell(S'_j)$ does not intersect $S_i$. Observe that $a_i$ is an even integer since $\mathcal C_2$ is a closed curve, which a line in general position crosses an even number of times. Also, $\sum_{i=1}^{2s+1}(a_i+b_i-c_i)=(2t+1)(2s+1)$ is an odd number, as the product of two odd numbers. This implies that $\sum_{i=1}^{2s+1}(b_i-c_i)$ must be odd. The sum $\sum_{i=1}^{2s+1}c_i$ counts the number of intersections of the two closed curves $\mathcal C_1$ and $\mathcal C_2$, and is therefore even. Hence, $\sum_{i=1}^{2s+1}b_i$ must be odd. Observe that $\sum_{i=1}^{2s+1}b_i=\sum_{j=1}^{2t+1}a'_j$, by reordering and reinterpreting the sum from the point of view of the segments of $\mathcal C_2$. Since the $a'_j$ are all even, $\sum_{i=1}^{2s+1}b_i$ is also even; a contradiction. \end{proof} \subsection{The disjoint union of cycles with at most one odd is co-disk} \label{subsec:co-disk} We only show the following part of Theorem~\ref{thm:main-structural} to emphasize that, rather unexpectedly, parity plays a crucial role in disk graphs of co-degree 2. It is also amusing that the complement of any odd cycle is a \emph{unit} disk graph while the complement of any even cycle of length at least 8 is not \cite{Atminas16}. Here, the situation is somewhat reversed: complements of even cycles are \emph{easier} to represent than complements of odd cycles. \begin{theorem}\label{thm:coEvenCycles} The complement of the disjoint union of even cycles and one odd cycle is a disk graph. \end{theorem} \begin{proof} We start with a disk representation of the complement of one even cycle $C_{2s}$.
Again, this construction is not possible with \emph{unit} disks for even cycles of length at least 8. We assume that the vertices of the cycle $C_{2s}$ are $1, 2, \ldots, 2s$ in this order. For each $i \in [2s]$, the disk $\mathcal D_i$ encodes the vertex $i$. We start by fixing the disks $\mathcal D_1$, $\mathcal D_2$, and $\mathcal D_{2s}$. Those three disks have the same radius. We place $\mathcal D_2$ and $\mathcal D_{2s}$ side by side: their centers have the same $y$-coordinate. They intersect, and the distance between their centers is $\varepsilon > 0$. We define $\mathcal D_1$ as the disk above $\mathcal D_2$ and $\mathcal D_{2s}$, tangent to those two disks, and sharing the same radius. We denote by $p_1$ its tangency point with $\mathcal D_2$ and by $p_s$ its tangency point with $\mathcal D_{2s}$. We then slightly shift $\mathcal D_1$ upward so that it does not touch (nor does it intersect) $\mathcal D_2$ and $\mathcal D_{2s}$ anymore. While we do this translation, we imagine that the points $p_1$ and $p_s$ remain fixed on the boundary of $\mathcal D_2$ and $\mathcal D_{2s}$, respectively (see Figure~\ref{fig:complement-one-even-cycle1}). Let $p_2, p_3, \ldots, p_{s-1}$ be points in the interior of $\mathcal D_1$, below the line $\ell(p_1,p_s)$, such that $p_1, p_2, \ldots, p_{s-1}, p_s$ form an $x$-monotone convex chain (see Figure~\ref{fig:complement-one-even-cycle2}). \begin{figure}[h!]
\centering \begin{minipage}{0.3\textwidth} \centering \begin{tikzpicture}[ dot/.style={fill,circle,inner sep=-0.01cm}, vert/.style={draw, fill=red, opacity=0.2}, verta/.style={draw, fill=blue, opacity=0.2}, ] \def0.5{1} \coordinate (c2) at (0,0) ; \draw[vert] (c2) circle (1) ; \node at (c2) {$\mathcal D_2$} ; \coordinate (c2s) at (0.5,0) ; \draw[vert] (c2s) circle (1) ; \node at (c2s) {$\mathcal D_{2s}$} ; \coordinate (c1) at (0.5 / 2,2 - 0.5 / 17) ; \draw[verta] (c1) circle (1) ; \node at (c1) {$\mathcal D_1$} ; \node[dot] at (0.25,0.96) {} ; \node[dot] at (0.5 - 0.25,0.96) {} ; \node at (0.05,0.96) {$p_1$} ; \node at (0.5 - 0.05,0.96) {$p_s$} ; \end{tikzpicture} \subcaption{Three important disks with the same size $\mathcal D_1$, $\mathcal D_2$, $\mathcal D_{2s}$.} \label{fig:complement-one-even-cycle1} \end{minipage} \qquad \begin{minipage}{0.6\textwidth} \centering \begin{tikzpicture} [ dot/.style={fill,circle,inner sep=-0.01cm} ] \centerarc[draw, fill=blue, fill opacity=0.2](0,0)(-50:-130:5) ; \def-3.2{-3.2} \def-4{-4} \def3.2{3.2} \node[dot] at (-3.2,-4) {} ; \node at (-3.2,-4 - 0.2) {$p_1$} ; \node[dot] at (3.2,-4) {} ; \node at (3.2,-4 - 0.2) {$p_s$} ; \foreach \k/\i/\j in {2/0.5/0.2,3/1/0.36,4/1.5/0.48}{ \node[dot] at (-3.2+\i,-4-\j) {} ; \node at (-3.2+\i,-4-\j- 0.2) {$p_\k$} ; } \foreach \k/\i/\j in {1/0.5/0.2,2/1/0.36,3/1.5/0.48}{ \node[dot] at (3.2-\i,-4-\j) {} ; \node at (3.2-\i,-4-\j- 0.2) {$p_{s\text{-}\k}$} ; } \node at (0,-4.7) {$\ldots$} ; \node at (0,-4.2) {$\mathcal D_1$} ; \end{tikzpicture} \subcaption{Zoom where $\mathcal D_1$ almost touches $\mathcal D_2$ and $\mathcal D_{2s}$.} \label{fig:complement-one-even-cycle2} \end{minipage} \caption{The disks $\mathcal D_1$, $\mathcal D_2$, $\mathcal D_{2s}$ and the convex chain $p_1, p_2, \ldots, p_s$. 
The curvature of the boundary of $\mathcal D_1$ is exaggerated in the zoom for the sake of clarity.} \label{fig:complement-one-even-cycle} \end{figure} Now, we define the disks $\mathcal D_4, \mathcal D_6, \ldots, \mathcal D_{2s-2}$. For each $i \in \{4,6,\ldots,2s-2\}$, let $\mathcal D_i$ be the unique disk with the same radius as $\mathcal D_2$ whose boundary passes through $p_{i/2}$ and lies below the tangent $\tau_{i/2}$ at this point, which has the direction of $\ell(p_{i/2-1},p_{i/2+1})$. It should be observed that the only disk with even index containing $p_{i/2}$ is $\mathcal D_i$. We can further choose the convex chain $\{p_i\}_{i \in [s]}$ such that, for each $i \in [s-1]$, some co-tangent $\tau_{i,i+1}$ to $\mathcal D_{2i}$ and $\mathcal D_{2i+2}$ has a slope between the slopes of $\tau_i$ and $\tau_{i+1}$. Finally, we define the disks $\mathcal D_3, \mathcal D_5, \ldots, \mathcal D_{2s-1}$. For each odd $i \in \{3,5,\ldots,2s-1\}$, let $\mathcal D_i$ be tangent to $\tau_{\frac{i-1}{2},\frac{i+1}{2}}$ at the point whose $x$-coordinate is the mean of the $x$-coordinates of $p_{\frac{i-1}{2}}$ and $p_{\frac{i+1}{2}}$. Moreover, $\mathcal D_i$ is above $\tau_{\frac{i-1}{2},\frac{i+1}{2}}$ and has a radius sufficiently large to intersect every disk with even index other than $\mathcal D_{i-1}$ and $\mathcal D_{i+1}$. It is easy to see that the disks $\mathcal D_i$ with even index (resp. odd index) form a clique. By construction, each disk $\mathcal D_i$ with odd index at least $3$ intersects every disk with even index except $\mathcal D_{i-1}$ and $\mathcal D_{i+1}$, since $\mathcal D_i$ is on the other side of $\tau_{\frac{i-1}{2},\frac{i+1}{2}}$ than those two disks. As the line $\tau_{\frac{i-1}{2},\frac{i+1}{2}}$ intersects every other disk with even index, there is a sufficiently large radius so that $\mathcal D_i$ does so, too. The particular case of $\mathcal D_1$ has been settled at the beginning of the construction.
This disk avoids $\mathcal D_2$ and $\mathcal D_{2s}$ and contains $p_2, p_3, \ldots, p_{s-1}$, so intersects all the other disks with even index. We now explain how to \emph{stack} even cycles. We make the distance $\varepsilon$ between the center of $\mathcal D_2$ and $\mathcal D_{2s}$ a thousandth of their common radius. Note that this distance does not depend on the value of $s$. We identify the small region (point) where the disk $\mathcal D_1$ intersects with the disks of even index, between two different complements of cycles. We then rotate from this point one representation by a small angle (see Figure~\ref{fig:even-cycles-complement} for multiple complements of even cycles stacked). \begin{figure}[h!] \centering \begin{tikzpicture}[ dot/.style={fill,circle,inner sep=-0.01cm}, vert/.style={draw, very thin, fill=red, fill opacity=0.2}, verta/.style={draw, very thin, fill=blue, fill opacity=0.2}, extended line/.style={shorten >=-#1,shorten <=-#1}, extended line/.default=1cm, one end extended/.style={shorten >=-#1}, one end extended/.default=1cm, ] \def0.5{1} \def-4{-4} \def4{4} \def3{3} \coordinate (c1) at (0,1) ; \draw[verta] (c1) circle (1) ; \coordinate (c2) at (0,-1) ; \draw[vert] (c2) circle (1) ; \draw[very thin] (-4,0) -- (4,0) ; \fill[blue,fill opacity=0.2] (-4,0) -- (4,0) -- (4,3) -- (-4,3) -- cycle ; \foreach \i in {-20,-10,10,20}{ \begin{scope}[rotate=\i] \coordinate (c1) at (0,1) ; \draw[verta] (c1) circle (1) ; \coordinate (c2) at (0,-1) ; \draw[vert] (c2) circle (1) ; \draw[very thin] (-4,0) -- (4,0) ; \fill[blue,opacity=0.2] (-4,0) -- (4,0) -- (4,3) -- (-4,3) -- cycle ; \end{scope} } \node at (0,1) {$\mathcal D_1$} ; \node at (0,2.4) {$\mathcal D_{2i+1}$} ; \node at (0,-1) {$\mathcal D_{2i}$} ; \end{tikzpicture} \caption{A disk realization of the complement of the disjoint union of an arbitrary number of even cycles.} \label{fig:even-cycles-complement} \end{figure} The reason why there are indeed all the edges between two complements of cycles 
is intuitive and depicted in Figure~\ref{fig:clique-minus-matching} and more specifically Figure~\ref{fig:cmm-lines}. We superimpose all the complements of even cycles in a way that the maximum rotation angle between two complements of cycles is small (see for instance Figure~\ref{fig:even-cycles-odd-cycle-complement}). \begin{figure}[h!] \centering \begin{minipage}{0.55\textwidth} \centering \begin{tikzpicture} [ scale = 1.5, dot/.style={fill,circle,inner sep=-0.01cm}, vert/.style={draw, very thin, fill=red, fill opacity=0.2}, verta/.style={draw, very thin, fill=blue, fill opacity=0.2}, vertb/.style={draw, very thin, fill=green, fill opacity=0.2}, ] \def1.2{1.2} \def5{5} \foreach \h in {1,...,5}{ \pgfmathsetmacro{\i}{15 * (\h - 5 / 2 - 0.5)} \pgfmathsetmacro{\j}{25 * (\h-1)} \draw[rotate=\i,verta] (0,0) arc (-90:270:1.2) ; \draw[rotate=\i,vert] (0,-0.01) arc (90:-270:1.2) ; } \node at (0,1) {$\mathcal D_1$} ; \node at (0,-1) {$\mathcal D_{2i}$} ; \end{tikzpicture} \subcaption{The only potential non-edges are between two disks represented almost tangent.} \label{fig:cmm-circles} \end{minipage} \qquad \begin{minipage}{0.35\textwidth} \centering \begin{tikzpicture} [ scale = 1.4, dot/.style={fill,circle,inner sep=-0.01cm}, vert/.style={draw, very thin, fill=red, fill opacity=0.2}, verta/.style={draw, very thin, fill=blue, fill opacity=0.2}, vertb/.style={draw, very thin, fill=green, fill opacity=0.2}, ] \def5{5} \foreach \h in {1,...,5}{ \pgfmathsetmacro{\i}{15 * (\h - 5 / 2 - 0.5)} \pgfmathsetmacro{\j}{25 * (\h-1)} \draw[rotate=\i,blue] (0,0) -- (-2,0) ; \draw[rotate=\i,blue] (0,0) -- (2,0) ; \draw[rotate=\i,red] (0,-0.1) -- (-2,-0.1) ; \draw[rotate=\i,red] (0,-0.1) -- (2,-0.1) ; } \end{tikzpicture} \subcaption{Zoom in where the boundary of the disks intersect.} \label{fig:cmm-lines} \end{minipage} \caption{Zoom in where the disk $\mathcal D_1$ of the several complements of even cycles intersects all the $\mathcal D_{2i}$ of the other cycles.} 
\label{fig:clique-minus-matching} \end{figure} Finally, we need to add one disjoint odd cycle to the complement. There is a nice representation of the complement of an odd cycle by unit disks in the paper of Atminas and Zamaraev \cite{Atminas16} (see Figure~\ref{fig:atminas-zamaraev}). \begin{figure}[h!] \centering \begin{tikzpicture}[ dot/.style={fill,circle,inner sep=-0.01cm}, vert/.style={draw, very thin, fill=red, fill opacity=0.2}, verta/.style={draw, very thin, fill=blue, fill opacity=0.2}, vertb/.style={draw, very thin, fill=green, fill opacity=0.2}, ] \foreach \i in {1,...,13}{ \begin{scope}[rotate=360 * \i / 13] \coordinate (d\i) at (0,1) ; \draw[vertb] (d\i) circle (0.95) ; \end{scope} } \end{tikzpicture} \caption{A disk realization of the complement of an odd cycle with unit disks as described by Atminas and Zamaraev \cite{Atminas16}. Unfortunately, we cannot use this representation.} \label{fig:atminas-zamaraev} \end{figure} We will use a different, non-unit, representation in order for the next step to work. Let $2s+1$ be the length of the cycle. We use a construction similar to the one for the complement of an even cycle. We denote the disks $\mathcal D'_1, \mathcal D'_2, \ldots, \mathcal D'_{2s+1}$. The difference is that we separate $\mathcal D'_1$ away from $\mathcal D'_2$ but not from $\mathcal D'_{2s}$. Then, we represent all the disks with odd index except $\mathcal D'_{2s+1}$ as before. The disk $\mathcal D'_{2s+1}$ is chosen as being cotangent to $\mathcal D'_1$ and $\mathcal D'_{2s}$ and to the left of them. Then we very slightly move $\mathcal D'_{2s+1}$ to the left so that it does not intersect those two disks anymore. The disk $\mathcal D'_{2s}$ has the rightmost center among the disks with even index. Therefore $\mathcal D'_{2s+1}$ still intersects all the other disks of even index. Moreover, the disks with even index form a clique and the disks with odd index form a clique minus an edge between the vertex $1$ and the vertex $2s+1$.
Hence, the intersection graph of those disks is indeed the complement of $C_{2s+1}$ (see Figure~\ref{fig:odd-cycle-complement}). \begin{figure}[h!] \centering \begin{tikzpicture}[ dot/.style={fill,circle,inner sep=-0.01cm}, vert/.style={draw, very thin, fill=red, fill opacity=0.2}, verta/.style={draw, very thin, fill=blue, fill opacity=0.2}, vertb/.style={draw, very thin, fill=green, fill opacity=0.2}, ] \def0.5{1} \def-4{-4} \def4{4} \def3{3} \def-2{-2} \coordinate (c1) at (0,1) ; \draw[verta] (c1) circle (1) ; \coordinate (c2) at (0,-1) ; \draw[vert] (c2) circle (1) ; \draw[very thin] (-4,0) -- (4,0) ; \fill[green,fill opacity=0.2] (-4,0) -- (4,0) -- (4,3) -- (-4,3) -- cycle ; \draw[very thin] (-1,-2) -- (-1,3) ; \fill[green,fill opacity=0.2] (-1,-2) -- (-1,3) -- (-4,3) -- (-4,-2) -- cycle ; \node at (0,2.4) {$\mathcal D'_{2i+1}$} ; \node at (0,-1) {$\mathcal D'_{2i}$} ; \node at (-2.5,-0.3) {$\mathcal D'_{2s+1}$} ; \node at (0,1) {$\mathcal D'_1$} ; \end{tikzpicture} \caption{A disk realization of the complement of an odd cycle of length $2s+1$.} \label{fig:odd-cycle-complement} \end{figure} This representation of $\overline{C_{2s+1}}$ can now be put on top of complements of even cycles. We identify the small region (point) where the disk $\mathcal D_1$ intersects the disks of even index (in complements of even cycles) with the small region (point) where the disk $\mathcal D'_1$ intersects the disks of even index (in the one complement of odd cycle). We make the disk $\mathcal D'_1$ significantly smaller than $\mathcal D_1$ and rotate the representation of $\overline{C_{2s+1}}$ by a sizable angle, say 60 degrees (see Figure~\ref{fig:even-cycles-odd-cycle-complement}). \begin{figure}[h!] 
\centering \begin{tikzpicture}[ dot/.style={fill,circle,inner sep=-0.01cm}, vert/.style={draw, very thin, fill=red, fill opacity=0.2}, verta/.style={draw, very thin, fill=blue, fill opacity=0.2}, vertb/.style={draw, very thin, fill=green, fill opacity=0.2}, ] \def0.5{1} \def-4{-4} \def4{4} \def3{3} \coordinate (c1) at (0,1) ; \draw[verta] (c1) circle (1) ; \coordinate (c2) at (0,-1) ; \draw[vert] (c2) circle (1) ; \draw[very thin] (-4,0) -- (4,0) ; \fill[blue,fill opacity=0.2] (-4,0) -- (4,0) -- (4,3) -- (-4,3) -- cycle ; \foreach \i in {-1,-0.5,0.5,1}{ \begin{scope}[rotate=\i] \coordinate (c1) at (0,1) ; \draw[verta] (c1) circle (1) ; \coordinate (c2) at (0,-1) ; \draw[vert] (c2) circle (1) ; \draw[very thin] (-4,0) -- (4,0) ; \fill[blue,opacity=0.2] (-4,0) -- (4,0) -- (4,3) -- (-4,3) -- cycle ; \end{scope} } \node at (0,1) {$\mathcal D_1$} ; \node at (0,2.4) {$\mathcal D_{2i+1}$} ; \node at (0,-1) {$\mathcal D_{2i}$} ; \def0.045{0.1} \def2{2} \def-2{-2} \def2.8{2.8} \begin{scope}[rotate=55] \coordinate (d1) at (0,0.045) ; \draw[verta] (d1) circle (0.045) ; \coordinate (d2) at (0,-0.045) ; \draw[vert] (d2) circle (0.045) ; \draw[very thin] (-2,0) -- (2.8,0) ; \fill[green,fill opacity=0.2] (-2,0) -- (2.8,0) -- (2.8,2) -- (-2,2) -- cycle ; \draw[very thin] (-0.045,-2) -- (-0.045,2) ; \fill[green,fill opacity=0.2] (-0.045,-2) -- (-0.045,2) -- (-2,2) -- (-2,-2) -- cycle ; \end{scope} \node at (-1.5,0.6) {$\mathcal D'_{2i+1}$} ; \node at (-1.5,-0.6) {$\mathcal D'_{2s+1}$} ; \end{tikzpicture} \caption{Placing the complement of odd cycle on top of the complements of even cycles.} \label{fig:even-cycles-odd-cycle-complement} \end{figure} It is easy to see that the disks of the complement of the odd cycle intersect all the disks of the complements of even cycles. A good sanity check is to observe why we cannot stack representations of complements of odd cycles, with the same rotation scheme. 
In Figure~\ref{fig:sanity-check}, the rotation of two representations of the complement of an odd cycle leaves the disks $\mathcal D'_1$ and $\mathcal D''_{2s'+1}$ far apart when they should intersect. \begin{figure}[h!] \centering \begin{tikzpicture}[ dot/.style={fill,circle,inner sep=-0.01cm}, vert/.style={draw, very thin, fill=red, fill opacity=0.2}, verta/.style={draw, very thin, fill=blue, fill opacity=0.2}, vertb/.style={draw, very thin, fill=green, fill opacity=0.2}, ] \foreach \i in {0,20}{ \begin{scope}[rotate=\i] \coordinate (c1) at (0,1) ; \draw[verta] (c1) circle (1) ; \coordinate (c2) at (0,-1) ; \draw[vert] (c2) circle (1) ; \draw[very thin] (-4,0) -- (4,0) ; \fill[green,fill opacity=0.2] (-4,0) -- (4,0) -- (4,3) -- (-4,3) -- cycle ; \draw[very thin] (-1,-2) -- (-1,3) ; \fill[green,fill opacity=0.2] (-1,-2) -- (-1,3) -- (-4,3) -- (-4,-2) -- cycle ; \end{scope} } \node at (0,2.4) {$\mathcal D'_{2i+1}$} ; \node at (0,-1) {$\mathcal D'_{2i}$} ; \node at (-2.5,-0.3) {$\mathcal D'_{2s+1}$} ; \node at (0,1) {$\mathcal D'_1$} ; \node at (-2.5,-1.3) {$\mathcal D''_{2s'+1}$} ; \end{tikzpicture} \caption{Sanity check: trying to stack the complements of two odd cycles fails. The disks $\mathcal D'_1$ and $\mathcal D''_{2s'+1}$ do not intersect.} \label{fig:sanity-check} \end{figure} \end{proof} Theorem~\ref{thm:main-structural-non-disk} and Theorem~\ref{thm:coEvenCycles}, together with the fact that disk graphs are closed under taking induced subgraphs, prove Theorem~\ref{thm:main-structural}. \section{Algorithmic consequences}\label{sec:algorithms} Now we show how to use the structural results from Section \ref{sec:structural} to obtain algorithms for \textsc{Maximum Clique}\xspace in disk graphs. A clique in a graph $G$ is an independent set in $\overline{G}$.
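This complementation step is the only translation needed between the two problems. As a toy illustration (brute force, exponential time, suitable for tiny graphs only; all names are ours):

```python
from itertools import combinations

def complement(n, edges):
    # Edge set of the complement of a graph on vertices 0..n-1.
    all_pairs = {frozenset(p) for p in combinations(range(n), 2)}
    return all_pairs - {frozenset(e) for e in edges}

def max_independent_set(n, edges):
    # Brute force over vertex subsets, largest first (tiny graphs only).
    es = {frozenset(e) for e in edges}
    for k in range(n, -1, -1):
        for s in combinations(range(n), k):
            if all(frozenset(p) not in es for p in combinations(s, 2)):
                return set(s)

def max_clique(n, edges):
    # A maximum clique of G is a maximum independent set of its complement.
    return max_independent_set(n, complement(n, edges))
```

The algorithms below replace the brute-force search by structure-aware routines on $\overline{G}$, but the reduction itself is exactly this complementation.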
So, leveraging the result from Theorem \ref{thm:main-structural}, we will focus on solving \textsc{Maximum Independent Set}\xspace in graphs without two vertex-disjoint odd cycles as an induced subgraph. \subsection{QPTAS} The odd cycle packing number $\ocp(H)$ of a graph $H$ is the maximum number of vertex-disjoint odd cycles in $H$. Unfortunately, the condition that $\overline{G}$ does not contain two vertex-disjoint odd cycles as an induced subgraph is not quite the same as saying that the odd cycle packing number of $\overline{G}$ is 1. Otherwise, we would immediately get a PTAS by the following result of Bock et al.~\cite{Bock14}. \begin{theorem}[Bock et al.~\cite{Bock14}]\label{thm:bock-ptas} For every fixed $\varepsilon >0$ there is a polynomial $(1+\varepsilon)$-approximation algorithm for \textsc{Maximum Independent Set}\xspace for graphs $H$ with $n$ vertices and $\ocp(H) = o(n / \log n)$. \end{theorem} The algorithm by Bock et al. works in polynomial time if $\ocp(H) = o(n / \log n)$, but it does not need the odd cycle packing to be given explicitly as an input. This is important, since finding a maximum odd cycle packing is NP-hard \cite{DBLP:conf/stoc/KawarabayashiR10}. We start by proving a structural lemma, which spares us from having to determine the odd cycle packing number. \begin{lemma}\label{lem:bigdeg} Let $H$ be a graph with $n$ vertices, whose complement is a disk graph. If $\ocp(H) > n/ \log^2 n$, then $H$ has a vertex of degree at least $n / \log ^4 n$. \end{lemma} \begin{proof} Consider a maximum odd cycle packing $\mathcal{C}$. By assumption, it contains more than $n/\log^2 n$ vertex-disjoint cycles. By the pigeonhole principle, there must be a cycle $C \in \mathcal{C}$ of size at most $\log^2 n$. Now, by Theorem \ref{thm:main-structural-non-disk}, $H$ has no two vertex-disjoint odd cycles with no edges between them. Therefore there must be an edge from $C$ to every other cycle of $\mathcal{C}$; hence there are at least $n / \log^2 n$ such edges.
Let $v$ be a vertex of $C$ with the maximum number of edges to other cycles in $\mathcal{C}$. Since $C$ has at most $\log^2 n$ vertices, by the pigeonhole principle the degree of $v$ is at least $n / \log^4 n$. \end{proof} Now we are ready to construct a QPTAS for \textsc{Maximum Clique}\xspace in disk graphs. \begin{theorem}\label{thm:qptas} For any $\varepsilon > 0$, \textsc{Maximum Clique}\xspace can be $(1+\varepsilon)$-approximated in time $2^{O(\log^5 n)}$, when the input is a disk graph with $n$ vertices. \end{theorem} \begin{proof} Let $G$ be the input disk graph and let $\overline{G}$ be its complement; we want to find a $(1+\varepsilon)$-approximation for \textsc{Maximum Independent Set}\xspace in $\overline{G}$. We consider two cases. If $\overline{G}$ has no vertex of degree at least $n / \log ^4 n$, then, by Lemma \ref{lem:bigdeg}, we know that $\ocp(\overline{G}) \leqslant n / \log^2 n = o(n / \log n)$. In this case we run the PTAS of Bock et al. and we are done. In the other case, $\overline{G}$ has a vertex $v$ of degree at least $n / \log ^4 n$ (note that it may still be the case that $\ocp(\overline{G}) = o(n / \log n)$). We branch on $v$: either we include $v$ in our solution and remove it and all its neighbors, or we discard $v$. The complexity of this step is described by the recursion $F(n) \leqslant F(n-1) + F(n- n / \log^4 n)$, and solving it gives the desired running time $2^{O(\log^5 n)}$. Note that this step is exact, i.e., we do not lose any solutions. \end{proof} \subsection{Subexponential algorithm} Now we will show how our structural result can be used to construct a subexponential algorithm for \textsc{Maximum Clique}\xspace in disk graphs. The \emph{odd girth} of a graph is the length of a shortest odd cycle. An \emph{odd cycle cover} is a subset of vertices whose deletion makes the graph bipartite. We will use a result by Györi et al. \cite{Gyori97}, which says that graphs with large odd girth have a small odd cycle cover.
In that sense, it can be seen as relativizing the fact that odd cycles do not have the Erd\H{o}s-P\'osa property. Bock et al. \cite{Bock14} turned the non-constructive proof into a polynomial-time algorithm. \begin{theorem}[Györi et al. \cite{Gyori97}, Bock et al. \cite{Bock14}]\label{thm:occ} Let $H$ be a graph with $n$ vertices and no odd cycle shorter than $\delta n$ ($\delta$ may be a function of $n$). Then there is an odd cycle cover $X$ of size at most $(48/\delta) \ln (5/\delta)$. Moreover, $X$ can be found in polynomial time. \end{theorem} We start by showing three variants of an algorithm. \begin{theorem} \label{thm:subexp} Let $G$ be a disk graph with $n$ vertices. Let $\Delta$ be the maximum degree of $\overline{G}$ and $c$ the odd girth of $\overline{G}$ (they may be functions of $n$). \textsc{Maximum Clique}\xspace admits a branching step or can be solved, up to a polynomial factor, in time:\\ \begin{enumerate*}[label=(\roman*),itemjoin={\quad}] \item $2^{\tilde{O}(n/\Delta)}$ (branching), \label{case:subexp-delta} \item $2^{\tilde{O}(n/c)}$ (solved), \label{case:subexp-oddgirth} \item $2^{{O}(c \Delta)}$ (solved). \label{case:subexp-both} \end{enumerate*} \end{theorem} \begin{proof} Let $G$ be the input disk graph and let $\overline{G}$ be its complement; we look for a maximum independent set in $\overline{G}$. To prove \ref{case:subexp-delta}, consider a vertex $v$ of degree $\Delta$ in $\overline{G}$. We branch on $v$: either we include $v$ in our solution and remove $N[v]$, or we discard $v$. The complexity is described by the recursion $F(n) \leqslant F(n-1) + F(n- (\Delta+1))$, and solving it gives \ref{case:subexp-delta}. Observe that this alone does not give an algorithm running in time $2^{\tilde{O}(n/\Delta)}$, since the maximum degree might drop. Therefore, we will do this branching as long as it is \emph{good enough} and then finish with the algorithms corresponding to \ref{case:subexp-oddgirth} and \ref{case:subexp-both}.
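Both remaining cases finish by computing a maximum independent set in a bipartite graph, which is polynomial-time solvable: by K\H{o}nig's theorem, a minimum vertex cover can be read off a maximum matching via alternating paths, and its complement is a maximum independent set. A self-contained sketch (in Python; the adjacency-list format and function names are ours):

```python
def max_matching(adj, n_left, n_right):
    # Augmenting-path (Hungarian) maximum matching in a bipartite graph;
    # adj[u] lists the right-vertices adjacent to left-vertex u.
    match_r = [-1] * n_right              # right vertex -> matched left vertex

    def try_augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match_r[v] == -1 or try_augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    for u in range(n_left):
        try_augment(u, set())
    return match_r

def bipartite_mis(adj, n_left, n_right):
    # Koenig's construction: Z = vertices reachable from unmatched left
    # vertices by alternating paths; (L \ Z) + (R in Z) is a minimum
    # vertex cover, so (L in Z) + (R \ Z) is a maximum independent set.
    match_r = max_matching(adj, n_left, n_right)
    match_l = [-1] * n_left
    for v, u in enumerate(match_r):
        if u != -1:
            match_l[u] = v
    z_left = {u for u in range(n_left) if match_l[u] == -1}
    frontier, z_right = list(z_left), set()
    while frontier:
        u = frontier.pop()
        for v in adj[u]:                  # non-matching edges out of u
            if v not in z_right:
                z_right.add(v)
                w = match_r[v]            # matching edge back to the left side
                if w != -1 and w not in z_left:
                    z_left.add(w)
                    frontier.append(w)
    return z_left, set(range(n_right)) - z_right
```

Guessing the intersection with the odd cycle cover (resp. with $N[C]$) and running this subroutine on the remaining bipartite graph yields the bounds of cases \ref{case:subexp-oddgirth} and \ref{case:subexp-both}.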
For \ref{case:subexp-oddgirth} and \ref{case:subexp-both}, let $C$ be a shortest odd cycle, of length $c$; it can clearly be found in polynomial time. By applying Theorem \ref{thm:occ} with $\delta = c/n$, we find an odd cycle cover $X$ in $\overline{G}$ of size $\tilde{O}(n/c)$ in polynomial time (see for instance \cite{AlonYZ97}). Next we exhaustively guess in time $2^{\tilde{O}(n/c)}$ the intersection $I$ of an optimum solution with $X$ and finish by finding a maximum independent set in the bipartite graph $\overline{G}-(X \cup N(I))$, which can be done in polynomial time. The total complexity of this case is $2^{\tilde{O}(n/c)}$, which shows \ref{case:subexp-oddgirth}. Finally, observe that the graph $\overline{G} - N[C]$ is bipartite, since otherwise $\overline{G}$ would contain two vertex-disjoint odd cycles with no edges between them. Moreover, since every vertex in $\overline{G}$ has degree at most $\Delta$, it holds that $|N[C]| \leqslant c (\Delta-1) \leqslant c \Delta$. Indeed, each vertex of $C$ has at most $\Delta-2$ neighbors outside $C$, so $|N[C]| \leqslant c + c(\Delta-2) = c(\Delta-1)$. We can proceed as in the previous step: we exhaustively guess the intersection of the optimal solution with $N[C]$ and finish by finding the maximum independent set in a bipartite graph (a subgraph of $\overline{G}-N[C]$), which can be done in total time $2^{O(c \Delta)}$, which shows \ref{case:subexp-both}. \end{proof} Now we show how the structure of $G$ affects the bounds in Theorem \ref{thm:subexp}. \begin{corollary}\label{cor:subexp} Let $G$ be a disk graph with $n$ vertices. \textsc{Maximum Clique}\xspace can be solved in time: \begin{compactenum}[(a)] \item $2^{\tilde{O}(n^{2/3})}$, \item $2^{\tilde{O}(\sqrt{n})}$ if the maximum degree of $\overline{G}$ is constant, \item polynomial, if both the maximum degree and the odd girth of $\overline{G}$ are constant. \end{compactenum} \end{corollary} \begin{proof} $\Delta$ and $c$ can be computed in polynomial time.
Therefore, knowing what is faster among cases \ref{case:subexp-delta}, \ref{case:subexp-oddgirth}, and \ref{case:subexp-both} is tractable. For case (a), while there is a vertex of degree at least $n^{1/3}$, we branch on it. When this process stops, we do what is more advantageous between cases \ref{case:subexp-oddgirth} and \ref{case:subexp-both}. Note that $\min (n/\Delta, n/c, c \Delta) \leqslant n^{2/3}$ (the equality is met for $\Delta = c = n^{1/3}$). For case (b), we do what is best between cases \ref{case:subexp-oddgirth} and \ref{case:subexp-both}. Note that $\min (n/c, c) \leqslant \sqrt{n}$ (the equality is met for $c = \sqrt{n}$). Finally, case (c) follows directly from case \ref{case:subexp-both} in Theorem \ref{thm:subexp}. \end{proof} Observe that case (b) is typically the hardest one for \textsc{Maximum Clique}\xspace. Moreover, the win-win strategy of Corollary \ref{cor:subexp} can be directly applied to solve \textsc{Maximum Weighted Clique}\xspace, as finding a maximum weighted independent set in a bipartite graph is still polynomial-time solvable. On the other hand, this approach cannot be easily adapted to obtain a subexponential algorithm for \textsc{Clique Partition} (even \textsc{Clique $p$-Partition} with constant $p$), since \textsc{List Coloring} (even \textsc{List $3$-Coloring}) has no subexponential algorithm for bipartite graphs, unless the ETH fails (see~\cite{precoloring}; the bound can be obtained if we start the reduction from a sparse instance of \textsc{1-in-3-Sat} instead of \textsc{Planar 1-in-3-Sat}). \section{Other intersection graphs and limits}\label{sec:gen&lim} In this section, we discuss the impossibility of generalizing our results to related classes of intersection graphs. \subsection{Filled ellipses and filled triangles} A natural generalization of a disk is an \emph{elliptical disk}, also called \emph{filled ellipse}, i.e., an ellipse plus its interior.
The simplest convex set with non-empty interior is a filled triangle (a triangle plus its interior). We show that the approach developed in the two previous sections, and in fact every approach, is bound to fail for filled ellipses and filled triangles. APX-hardness was shown for \textsc{Maximum Clique}\xspace in the intersection graphs of (\emph{non-filled}) ellipses and triangles by Amb\"uhl and Wagner~\cite{Ambuhl05}. Their reduction also implies that there is no subexponential algorithm for this problem, unless the ETH fails. Moreover, they claim that their hardness result extends to filled ellipses since \emph{``intersection graphs of ellipses without interior are also intersection graphs of filled ellipses''}. Unfortunately, as we show below, this claim is incorrect. \begin{theorem}\label{thm:counterexample} There is a graph $G$ which has an intersection representation with ellipses without their interior, but has no intersection representation with convex sets. \end{theorem} \begin{proof} The argument is similar to the one used by Brimkov et al. \cite{DBLP:journals/corr/BrimkovJKKPRST14}, which was in turn inspired by the construction of Kratochv\'il and Matou\v{s}ek \cite{DBLP:journals/jct/KratochvilM94}. Consider the graph $G$ in Figure~\ref{fig:counterexample2} (containing what we will henceforth call \textit{black}, \textit{gray}, and \textit{white} vertices), and observe that $c$ and $d$ are two non-adjacent vertices with the same neighborhoods.
\begin{figure}[ht] \centering \begin{center} \tiny \begin{tikzpicture}[scale=0.72] \tikzstyle{every node}=[draw, shape = circle] \node (o1) at (0,3) {}; \node (o2) at (1,2.6) {}; \node (o3) at (2,2) {}; \node (o4) at (2.6,1) {}; \node (o5) at (3,0) {}; \node (o6) at (2.6,-1) {}; \node (o7) at (2,-2) {}; \node (o8) at (1,-2.6) {}; \node (o9) at (0,-3) {}; \node (o10) at (-1,-2.6) {}; \node (o11) at (-2,-2) {}; \node (o12) at (-2.6,-1) {}; \node (o13) at (-3,0) {}; \node (o14) at (-2.6,1) {}; \node (o15) at (-2,2) {}; \node (o16) at (-1,2.6) {}; \node[fill = black, inner sep=-0.19cm] (a) at (0,0) {\color{white}{\large $c$}}; \node[fill = black, inner sep=-0.24cm] (ap) at (1,-0.3) {\color{white}{\large $d$}}; \node[fill = black, inner sep=-0.23cm] (b) at (-1,-0.5) {\color{white}{\large $b$}}; \node[fill = black, inner sep=-0.20cm] (c) at (0,1) {\color{white}{\large $a$}}; \node[fill = gray] (a3) at (1.2,1.2) { }; \node[fill = gray] (a7) at (1.2,-1.2) { }; \node[fill = gray] (a11) at (-1.2,-1.2) { }; \node[fill = gray] (a15) at (-1.2,1.2) { }; \node[fill = gray] (b1) at (-0.5,1.5) { }; \node[fill = gray] (b9) at (-0.5,-1.5) { }; \node[fill = gray] (c5) at (1.5,0.5) { }; \node[fill = gray] (c13) at (-1.5,0.5) { }; \draw (o1)--(o2)--(o3)--(o4)--(o5)--(o6)--(o7)--(o8)--(o9)--(o10)--(o11)--(o12)--(o13)--(o14)--(o15)--(o16)--(o1); \draw (a) -- (a3) -- (o3); \draw (a) -- (a7) -- (o7); \draw (a) -- (a11) -- (o11); \draw (a) -- (a15) -- (o15); \draw (ap) -- (a3) -- (o3); \draw (ap) -- (a7) -- (o7); \draw (ap) -- (a11) -- (o11); \draw (ap) -- (a15) -- (o15); \draw (b) -- (b1) -- (o1); \draw (b) -- (b9) -- (o9); \draw (c) -- (c5) -- (o5); \draw (c) -- (c13) -- (o13); \draw (a) -- (b) -- (c) -- (a); \draw (ap) -- (b) -- (c) -- (ap); \end{tikzpicture} \hskip 14 pt \begin{tikzpicture}[scale=0.45] \draw (0,0) ellipse (2.5 and 2.5); \draw (0,0) ellipse (2.2 and 2.2); \draw (0,0) ellipse (3.1 and 0.9); \draw (0,0) ellipse (0.9 and 3.1); \node at (0.7,3) {\large $a$}; \node at 
(-1.87,0.0) {\large $c$}; \node at (2.5,1.6) {\large $d$}; \node at (3,0.8) {\large $b$}; \draw (-3.75,4.5) ellipse (2.5 and 0.75); \draw (3.75,4.5) ellipse (2.5 and 0.75); \draw (3.75,-4.5) ellipse (2.5 and 0.75); \draw (-3.75,-4.5) ellipse (2.5 and 0.75); \draw (0,6.5) ellipse (1.75 and 0.75); \draw (0,-6.5) ellipse (1.75 and 0.75); \draw (-6.5,0) ellipse (0.75 and 2.25); \draw (6.5,0) ellipse (0.75 and 2.25); \draw (1.5, 5.5) ellipse (0.5 and 1); \draw (1.5, -5.5) ellipse (0.5 and 1); \draw (-1.5, 5.5) ellipse (0.5 and 1); \draw (-1.5, -5.5) ellipse (0.5 and 1); \draw (6, 3) ellipse (0.5 and 1.5); \draw (6, -3) ellipse (0.5 and 1.5); \draw (-6, 3) ellipse (0.5 and 1.5); \draw (-6, -3) ellipse (0.5 and 1.5); \draw (0, 4.5) ellipse (0.25 and 1.75); \draw (0, -4.5) ellipse (0.25 and 1.75); \draw (4.5, 0) ellipse (1.75 and 0.25); \draw (-4.5, 0) ellipse (1.75 and 0.25); \draw (1.75, 2.7) ellipse (0.15 and 1.7); \draw (-1.75, 2.7) ellipse (0.15 and 1.7); \draw (1.75, -2.7) ellipse (0.15 and 1.7); \draw (-1.75, -2.7) ellipse (0.15 and 1.7); \end{tikzpicture} \end{center} \caption{A graph (left), which has a representation with empty ellipses (right) but no representation with convex sets.} \label{fig:counterexample2} \end{figure} Suppose $G$ can be represented by intersecting convex sets. For a vertex $v$, let $R_v$ be the convex set representing $v$. The union of representatives of the white vertices contains a closed Jordan curve, that we will call the outer circle. Let us choose the outer circle in such a way that it intersects the representatives of all gray vertices. It divides the plane into two faces -- an interior and an exterior. The outer circle cannot be crossed by the representative of any black vertex. Moreover, as black vertices form a connected subgraph, they have to be represented in the same face $F$ (with respect to the outer circle). 
Thus, along this circle the representatives of the gray vertices appear in a prescribed ordering (note that they form an independent set). This in turn prescribes the ordering in which some parts of the representatives of the black vertices occur. First, observe that the representatives of the gray neighbors of $a$, $b$, and $c$ intersect the outer circle in the following ordering: $a_1,c_1,b_1,c_2,a_2,c_3,b_2,c_4$ (where each $z_i$ for $z \in \{a,b,c\}$ is a distinct gray neighbor of $z$). Clearly, each gray neighbor of $a$ must intersect $R_a$ outside $R_a \cap (R_b \cup R_c)$, each gray neighbor of $b$ must intersect $R_b$ outside $R_b \cap (R_a \cup R_c)$, and each gray neighbor of $c$ must intersect $R_c$ outside $R_c \cap (R_a \cup R_b)$. Thus, some parts of $R_a$, $R_b$, and $R_c$ are exposed (i.e., outside the intersection with the union of the representatives of the remaining two vertices) in the ordering $a,c,b,c,a,c,b$, as we move along the boundary of $R_a \cup R_b \cup R_c$. Note that this implies that $R_a \cap R_b \cap R_c \neq \emptyset$, since all the sets are convex. For any $z \in \{a,b,c\}$ and any $i$, the set $R_{z_i}$ contains a segment $s(z_i)$ with one end on the boundary of $R_z$ and the other end on the outer circle (recall that all representatives are convex). For $z \in \{a,b\}$ and $i \in \{1,2\}$, by $s'(z_i)$ we denote the segment joining the endpoint of $s(z_i)$ on the boundary of $R_z$ to the closest point in $R_z \cap R_c$. Now we observe that the set $\bigcup_{z \in \{a,b\}, i \in \{1,2\}} s(z_i) \cup s'(z_i)$ partitions $F \setminus R_c$ into four disjoint regions $Q_1,Q_2,Q_3,Q_4$. Let $Q_1$ be the region adjacent to $s(a_1)$ and $s(b_1)$, $Q_2$ be the region adjacent to $s(b_1)$ and $s(a_2)$, $Q_3$ be the region adjacent to $s(a_2)$ and $s(b_2)$, and $Q_4$ be the region adjacent to $s(b_2)$ and $s(a_1)$. Note that one of these regions may be unbounded, if $F$ is the unbounded face of the outer circle.
For every $i \in \{1,2,3,4\}$, the set $R_{c_i} \setminus R_c$ is contained in $Q_i$. For each $i \in \{1,2,3,4\}$, let $p_i$ be a point in $R_d \cap R_{c_i}$; such a point exists, since $d$ is adjacent to $c_i$. By convexity of $R_d$, the segment $p_1p_2$ is contained in $R_d$. On the other hand, it crosses the curve $s(b_1) \cup s'(b_1)$; let $q_1$ be the intersection point. Since $R_d$ is disjoint from $R_{b_1}$, clearly $q_1 \in s'(b_1) \subseteq R_b$. Analogously, we define $q_2$ to be the crossing point of $p_2p_3$ and $s(a_2) \cup s'(a_2)$, $q_3$ to be the crossing point of $p_3p_4$ and $s(b_2) \cup s'(b_2)$, and $q_4$ to be the crossing point of $p_4p_1$ and $s(a_1) \cup s'(a_1)$. We observe that $q_2 \in s'(a_2) \subseteq R_a$, $q_3 \in s'(b_2) \subseteq R_b$, and $q_4 \in s'(a_1) \subseteq R_a$. Let us consider the segment $q_1q_3$. It must intersect either $s(c_2) \cup R_c$ or $s(c_4) \cup R_c$. Without loss of generality, we assume that it intersects $s(c_2) \cup R_c$. Let $q'$ be this intersection point. By convexity, $q' \in R_d$ and $q' \in R_b$. If $q' \in s(c_2)$, we get a contradiction with the fact that $b$ and $c_2$ are non-adjacent. On the other hand, if $q' \in R_c$, we get a contradiction with the fact that $d$ and $c$ are non-adjacent. Finally, it is easy to represent $G$ with empty ellipses (see Figure~\ref{fig:counterexample2}, right). \end{proof} This error, and the confusion between filled ellipses and ellipses without their interior, has propagated to other more recent papers \cite{Keller17}. Fortunately, we show that the hardness result does hold for filled ellipses (and filled triangles) with a different reduction. Our construction can be seen as streamlining the ideas of Amb\"uhl and Wagner \cite{Ambuhl05}. It is simpler and, in the case of (filled) ellipses, yields a somewhat stronger statement.
\begin{theorem}\label{thm:hardness-filled-ellipses} There is a constant $\alpha > 1$ such that for every $\varepsilon > 0$, \textsc{Maximum Clique}\xspace on the intersection graphs of filled ellipses has no $\alpha$-approximation algorithm running in subexponential time $2^{n^{1-\varepsilon}}$, unless the ETH fails, even when the ellipses have arbitrarily small eccentricity and major axes of arbitrarily close lengths. \end{theorem} This is in sharp contrast with our subexponential algorithm and with our QPTAS when the eccentricity is 0 (the case of disks). For any $\varepsilon > 0$, if the eccentricity is only allowed to be at most $\varepsilon$, a subexponential algorithm or a QPTAS is very unlikely. This result subsumes \cite{ceroi} (where NP-hardness is shown for connected shapes contained in a disk of radius 1 and containing a concentric disk of radius $1-\varepsilon$ for arbitrarily small $\varepsilon > 0$) and corrects \cite{Ambuhl05}. We show the same hardness for the intersection graphs of filled triangles. \begin{theorem}\label{thm:hardness-filled-triangles} There is a constant $\alpha > 1$ such that for every $\varepsilon > 0$, \textsc{Maximum Clique}\xspace on the intersection graphs of filled triangles has no $\alpha$-approximation algorithm running in subexponential time $2^{n^{1-\varepsilon}}$, unless the ETH fails. \end{theorem} We first show this lower bound for \textsc{Maximum Weighted Independent Set} on the class of all the 2-subdivisions, hence the same hardness for \textsc{Maximum Weighted Clique} on all the co-2-subdivisions. It is folklore that from the PCP of Moshkovitz and Raz \cite{Moshkovitz10}, which roughly implies that \textsc{Max 3-SAT} cannot be $(7/8+\varepsilon)$-approximated in subexponential time under the ETH, one can derive such inapproximability in subexponential time for many hard graph and hypergraph problems; see for instance \cite{Bonnet15}.
The following inapproximability result for \textsc{Maximum Independent Set}\xspace on bounded-degree graphs was shown by Chleb\'ik and Chleb\'ikov\'a \cite{Chlebik06}. As their reduction is almost linear, the PCP of Moshkovitz and Raz boosts this hardness result from ruling out polynomial time up to ruling out subexponential time $2^{n^{1-\varepsilon}}$ for any $\varepsilon > 0$. \begin{theorem}[\cite{Chlebik06,Moshkovitz10}]\label{thm:inapprox-mis} There is a constant $\beta > 0$ such that \textsc{Maximum Independent Set}\xspace on graphs with $n$ vertices and maximum degree $\Delta$ cannot be $(1+\beta)$-approximated in time $2^{n^{1-\varepsilon}}$ for any $\varepsilon > 0$, unless the ETH fails. \end{theorem} We could actually make a slightly stronger statement about the running time, but we settle for this one for the sake of clarity. \begin{theorem}\label{thm:hardness-2subd} There is a constant $\alpha > 1$ such that for any $\varepsilon > 0$, \textsc{Maximum Independent Set} on the class of all the 2-subdivisions has no $\alpha$-approximation algorithm running in subexponential time $2^{n^{1-\varepsilon}}$, unless the ETH fails. \end{theorem} \begin{proof} Let $G$ be a graph of constant maximum degree $\Delta$, with $n$ vertices $v_1, \ldots, v_n$ and $m$ edges $e_1, \ldots, e_m$, and let $H$ be its 2-subdivision. Recall that to form $H$, we subdivide every edge of $G$ exactly twice. These $2m$ vertices in $V(H) \setminus V(G)$, representing edges, are called \emph{edge vertices} and are denoted by $v^+(e_1), v^-(e_1), \ldots, v^+(e_m), v^-(e_m)$, as opposed to the other vertices of $H$, which we call \emph{original vertices}. If $e_k=v_iv_j$ is an edge of $G$, then $v^+(e_k)$ (resp. $v^-(e_k)$) has two neighbors: $v^-(e_k)$ and $v_i$ (resp. $v^+(e_k)$ and $v_j$). Observe that there is a maximum independent set $S$ which contains exactly one of $v^+(e_k), v^-(e_k)$ for every $k \in [m]$.
Indeed, $S$ cannot contain both $v^+(e_k)$ and $v^-(e_k)$ since they are adjacent. On the other hand, if $S$ contains neither $v^+(e_k)$ nor $v^-(e_k)$, then adding $v^+(e_k)$ to $S$ and potentially removing the other neighbor of $v^+(e_k)$, which is $v_i$ (with $e_k=v_iv_j$), can only increase the size of the independent set. Hence $S$ contains $m$ edge vertices and $s \leqslant n$ original vertices, and there is no larger independent set in $H$. We observe that the $s$ original vertices in $S$ form an independent set in $G$. Indeed, if $v_iv_j=e_k \in E(G)$ and $v_i,v_j \in S$, then neither $v^+(e_k)$ nor $v^-(e_k)$ could be in $S$. Now, assume there is an approximation algorithm with ratio $\alpha := 1+\frac{2\beta}{(\Delta+1)^2}$ for \textsc{Maximum Independent Set} on 2-subdivisions running in subexponential time, where $1+\beta > 1$ is a ratio that is not attainable for \textsc{Maximum Independent Set} on graphs of maximum degree $\Delta$ according to Theorem~\ref{thm:inapprox-mis}. On instance $H$, this algorithm would output a solution with $m'$ edge vertices and $s'$ original vertices. As we already observed, this solution can easily be transformed (in polynomial time) into an at-least-as-good solution with $m$ edge vertices and $s''$ original vertices forming an independent set in $G$. Further, we may assume that $s'' \geqslant n / (\Delta+1)$ since for any independent set of $G$, we can obtain an independent set of $H$ consisting of the same set of original vertices and $m$ edge vertices. Since $m \leqslant n \Delta / 2$ and $s'' \geqslant n / (\Delta+1)$, we obtain $m \leqslant s'' \Delta(\Delta+1)/2$ and $2m/(\Delta+1)^2 \leqslant s''\Delta /(\Delta+1)$.
From $\frac{m+s}{m+s''} \leqslant \alpha$ and $\Delta \geqslant 3$, we have \[ s \leqslant m\cdot \frac{2\beta}{(\Delta+1)^2} + s''\cdot \left(1+ \frac{2\beta}{(\Delta+1)^2}\right) \leqslant s'' \left(\frac{\Delta\beta}{\Delta+1} + 1 +\frac{2\beta}{(\Delta+1)^2} \right) \leqslant s'' (1+\beta), \] where the last inequality holds since $\frac{\Delta}{\Delta+1} + \frac{2}{(\Delta+1)^2} \leqslant 1$, or equivalently $\Delta(\Delta+1)+2 \leqslant (\Delta+1)^2$. This contradicts the inapproximability of Theorem~\ref{thm:inapprox-mis}. Indeed, note that the number of vertices of $H$ is only a constant times the number of vertices of $G$ (recall that $G$ has bounded maximum degree, hence $m=O(n)$). \end{proof} Recalling that an independent set in a graph is a clique in its complement, we get the following. \begin{corollary}\label{cor:hardness-co2subd} There is a constant $\alpha > 1$ such that for any $\varepsilon > 0$, \textsc{Maximum Clique}\xspace on the class of all the co-2-subdivisions has no $\alpha$-approximation algorithm running in subexponential time $2^{n^{1-\varepsilon}}$, unless the ETH fails. \end{corollary} For exact algorithms, the subexponential time that we rule out under the ETH is not only $2^{n^{1-\varepsilon}}$ but actually any $2^{o(n)}$. Now, to prove Theorem~\ref{thm:hardness-filled-ellipses} and Theorem~\ref{thm:hardness-filled-triangles}, it is sufficient to show that the intersection graphs of (filled) ellipses or of (filled) triangles contain all co-2-subdivisions. We start with (filled) triangles since the construction is straightforward. \begin{lemma}\label{lem:triangles-co2subd} The class of intersection graphs of filled triangles contains all co-2-subdivisions. \end{lemma} \begin{proof} Let $G$ be any graph with $n$ vertices $v_1, \ldots, v_n$ and $m$ edges $e_1,\ldots,e_m$, and $H$ be its co-2-subdivision. We start with $n+2$ points $p_0, p_1, p_2, \ldots, p_n, p_{n+1}$ forming a convex monotone chain. Those points can be chosen as $p_i := (i,p(i))$, where $p$ is an upward parabola attaining its minimum at $(0,0)$. For each $i \in [0,n+1]$, let $q_i$ be the reflection of $p_i$ across the line of equation $y = 0$. Let $x := (n+1,0)$.
For each vertex $v_i \in V(G)$, the filled triangle $\delta_i := p_iq_ix$ encodes $v_i$. Observe that the points $p_0=q_0$, $p_{n+1}$, and $q_{n+1}$ will only be used to define the filled triangles encoding edges. To encode (the two new vertices of) a subdivided edge $e_k=v_iv_j$, we use two filled triangles $\Delta^+_k$ and $\Delta^-_k$. The triangle $\Delta^+_k$ (resp. $\Delta^-_k$) has an edge which is supported by $\ell(p_{i-1},p_{i+1})$ (resp. $\ell(q_{j-1},q_{j+1})$) and is prolonged so that it crosses the boundary of each $\delta_{i'}$ but $\delta_i$ (resp. but $\delta_j$). The second edges of $\Delta^+_k$ and $\Delta^-_k$ are parallel and form with the horizontal a small angle $\varepsilon k$, where $\varepsilon> 0$ is chosen so that $\varepsilon m$ is smaller than the angle formed by $\ell(p_0,p_1)$ with the horizontal line. For each pair $\Delta^+_{k'}$ and $\Delta^-_{k''}$ with $k' \neq k''$, those almost horizontal edges intersect close to the same point. The filled triangles $\Delta^+_k$ and $\Delta^-_k$ do not intersect. See Figure~\ref{fig:triangles-co2subd} for the complete picture. It is easy to check that the intersection graph of $\{\delta_i\}_{i \in [n]} \cup \{\Delta^+_k,\Delta^-_k\}_{k \in [m]}$ is $H$. The family $\{\delta_i\}_{i \in [n]}$ forms a clique since its members all contain, for instance, the point $x$. The filled triangle $\Delta^+_k$ (resp. $\Delta^-_k$) intersects every other filled triangle except $\Delta^-_k$ (resp. $\Delta^+_k$) and $\delta_i$ (resp. $\delta_j$), with $e_k=v_iv_j$. One may observe that no triangle is fully included in another triangle. So the construction works both as the intersection graph of filled triangles \emph{and} of triangles without their interior. The edge of a $\Delta^+_k$ or a $\Delta^-_k$ crossing the boundary of all but one $\delta_i$, and the almost horizontal edge, can be arbitrarily prolonged to the right and to the left respectively. Thus, the triangles can all be made isosceles.
\end{proof} \begin{figure}[h!] \centering \begin{tikzpicture}[ inv/.style={opacity=0}, dot/.style={fill,circle,inner sep=-0.01cm}, vert/.style={draw, fill=red, opacity=0.2}, verta/.style={draw, fill=blue, opacity=0.2}, vertb/.style={draw, fill=green, opacity=0.2}, extended line/.style={shorten >=-#1,shorten <=-#1}, extended line/.default=1cm, one end extended/.style={shorten >=-#1}, one end extended/.default=1cm, ] \def5{5} \def0.08{0.08} \def0.2{0.2} \def0.02{0.02} \def-5{-5} \def5{5} \def0.05{0.05} \coordinate (su) at (-5,0.02) ; \coordinate (sd) at (-5,-0.02) ; \coordinate (eu) at (5,0.02) ; \coordinate (ed) at (5,-0.02) ; \coordinate (su2) at (-5,-0.1) ; \coordinate (eu2) at (5,0.4) ; \coordinate (sd2) at (-5,-0.1 - 2 * 0.02) ; \coordinate (ed2) at (5,0.4 - 2 * 0.02) ; \coordinate (x) at (5+1,0) ; \node[dot] at (x) {} ; \foreach \i in {1,...,5}{ \coordinate (p\i) at (\i,\i * \i * 0.08 + \i * 0.2) ; \coordinate (q\i) at (\i,- \i * \i * 0.08 - \i * 0.2) ; \coordinate (pd\i) at (\i,\i * \i * 0.08 + \i * 0.2 - 0.05) ; \coordinate (qu\i) at (\i,- \i * \i * 0.08 - \i * 0.2 + 0.05) ; } \foreach \i in {1,...,5}{ \node[dot] at (p\i) {} ; \node[dot] at (q\i) {} ; \draw[vert] (p\i) -- (q\i) -- (x) -- cycle ; } \path[name path=J1,overlay] (su) -- (eu)--([turn]0:5cm); \path[name path=K1,overlay] (pd2) -- (0,0)--([turn]0:5cm); \path[name path=J2,overlay] (sd) -- (ed)--([turn]0:5cm); \path[name path=K2,overlay] (qu5) -- (qu3)--([turn]0:5cm); \path [name intersections={of=J1 and K1,by={I1}}]; \path [name intersections={of=J2 and K2,by={I2}}]; \coordinate (e1) at (I1) ; \coordinate (e2) at (I2) ; \coordinate (c1) at ( $ (pd2)!-2!(e1) $ ) ; \coordinate (c2) at ( $ (qu5)!-0.1!(e2) $ ) ; \path[name path=J3,overlay] (su2) -- (eu2)--([turn]0:5cm); \path[name path=K3,overlay] (pd3) -- (pd1)--([turn]0:5cm); \path[name path=J4,overlay] (sd2) -- (ed2)--([turn]0:5cm); \path[name path=K4,overlay] (qu4) -- (qu2)--([turn]0:5cm); \path [name intersections={of=J3 and K3,by={I3}}]; \path [name 
intersections={of=J4 and K4,by={I4}}]; \coordinate (e3) at (I3) ; \coordinate (e4) at (I4) ; \coordinate (c3) at ( $ (pd3)!-1.4!(e3) $ ) ; \coordinate (c4) at ( $ (qu4)!-0.35!(e4) $ ) ; \foreach \i/\j/\k in {1/su/vertb,2/sd/vertb,3/su2/verta,4/sd2/verta}{ \draw[\k] (\j) -- (c\i) -- (e\i) -- cycle ; } \end{tikzpicture} \caption{A co-2-subdivision of a graph with $5$ vertices (in red) represented with triangles. Only two edges are shown: one between vertices $1$ and $4$ (green) and one between vertices $2$ and $3$ (blue).} \label{fig:triangles-co2subd} \end{figure} We use the same ideas for the construction with filled ellipses. The two important sides of a triangle encoding an edge of the initial graph $G$ become two tangents of the ellipse. \begin{lemma}\label{lem:ellipses-co2subd} The class of intersection graphs of filled ellipses contains all co-2-subdivisions. \end{lemma} \begin{proof} Let $G$ be any graph with $n$ vertices $v_1, \ldots, v_n$ and $m$ edges $e_1,\ldots,e_m$, and $H$ be its co-2-subdivision. We start with the convex monotone chain $p_0, p_1, p_2, \ldots, p_{n-1}, p_n, p_{n+1}$, only the gap between $p_i$ and $p_{i+1}$ is chosen very small compared to the positive $y$-coordinate of $p_0$. The disks $\mathcal D_i$ encoding the vertices $v_i \in G$ must form a clique. We also take $p_0$ with a large $x$-coordinate. For $i \in [0,n+1]$, $q_i$ is the symmetric of $p_i$ with respect to the $x$-axis. For each $i \in [n]$, we define $\mathcal D_i$ as the disk whose boundary is the unique circle which goes through $p_i$ and $q_i$, and whose tangent at $p_i$ has the direction of $\ell(p_{i-1},p_{i+1})$. It can be observed that, by symmetry, the tangent of $\mathcal D_i$ at $q_i$ has the direction of $\ell(q_{i-1},q_{i+1})$. Let us call $\tau^+_i$ (resp. $\tau^-_i$) the tangent of $\mathcal D_i$ at $p_i$ (resp. at $q_i$) very slightly translated upward (resp. downward). The tangent $\tau^+_i$ (resp. 
$\tau^-_i$) intersects every disk $\mathcal D_{i'}$ but $\mathcal D_i$ (see Figure~\ref{fig:disks-all-but-one}). Let us denote by $p'_i$ (resp. $q'_i$) the projection of $p_i$ (resp. $q_i$) onto $\tau^+_i$ (resp. onto $\tau^-_i$). \begin{figure} \centering \begin{tikzpicture}[scale=1.7, xscale=-1, vert/.style={draw, fill=red, opacity=0.2}, dot/.style={fill,circle,inner sep=-0.03cm}, ] \def0.5{0.5} \def0.045{0.045} \foreach \i in {0,...,3}{ \coordinate (c\i) at (- \i * 0.5,0) ; \pgfmathsetmacro1.2{1+\i * \i * 0.045} ; \draw[vert] (c\i) circle (1.2) ; \pgfmathsetmacro0.08{90.1 - \i * 10} \pgfmathsetmacro\x{- \i * 0.5 + 1.2 * cos(0.08)} \pgfmathsetmacro\y{ 1.2 * sin(0.08)} \pgfmathsetmacro\ym{- 1.2 * sin(0.08)} \pgfmathsetmacro\j{1.1 + \i * 0.4} \pgfmathsetmacro\jj{2.6 - \i * 0.2} \pgfmathtruncatemacro\ipo{\i+1} \coordinate (p\i) at (\x, \y) ; \node at (\x, \y+0.1) {$p_\ipo$} ; \coordinate (e\i) at (\x, \ym) ; \node at (\x, \ym-0.1) {$q_\ipo$}; \path[overlay] (c\i) -- (p\i) -- ([turn]-90:\j cm) node (q\i) {} ; \path[overlay] (c\i) -- (p\i) -- ([turn]90:\jj cm) node (qq\i) {} ; \path[overlay] (c\i) -- (e\i) -- ([turn]90:\j cm) node (f\i) {} ; \path[overlay] (c\i) -- (e\i) -- ([turn]-90:\jj cm) node (ff\i) {} ; } \node at (c2) {$\mathcal D_3$} ; \draw[blue,very thick] ($ (q2) + (0,-0.02) $) -- ($ (qq2) + (0,0.05) $) ; \draw[red,very thick] (c2) circle (1+4*0.045) ; \node at (-2,1.85) {$\tau^+_3$}; \foreach \i in {0,...,3}{ \node[dot] at (p\i) {} ; \node[dot] at (e\i) {} ; } \end{tikzpicture} \caption{The blue line intersects every red disk but the third one.} \label{fig:disks-all-but-one} \end{figure} For each $k \in [m]$, let $\ell_k$ be the line crossing the origin $O=(0,0)$ and forming with the horizontal an angle $\varepsilon k$, where $\varepsilon k$ is smaller than the angle formed by $\ell(p_0,p_1)$ with the horizontal. Let $\ell^+_k$ (resp. $\ell^-_k$) be $\ell_k$ very slightly translated upward (resp. downward).
To encode an edge $e_k=v_iv_j$, we have two filled ellipses $\mathcal E^+_k$ and $\mathcal E^-_k$. The ellipse $\mathcal E^+_k$ (resp. $\mathcal E^-_k$) is defined as being tangent to $\tau^+_i$ at $p'_i$ (resp. to $\tau^-_j$ at $q'_j$) and tangent to $\ell^+_k$ (resp. $\ell^-_k$) at the point of $x$-coordinate $0$ (thus very close to $O$). The proof that the intersection graph of $\{\mathcal D_i\}_{i \in [n]} \cup \{\mathcal E^+_k,\mathcal E^-_k\}_{k \in [m]}$ is $H$ is similar to the case of filled triangles. As no ellipse is fully contained in another ellipse, this construction works for both filled ellipses \emph{and} ellipses without their interior. We place $p_0$ at $P:=(\sqrt 3/2,1/2)$ and make the distance between $p_i$ and $p_{i+1}$ very small compared to 1. All points $p_i$ are very close to $P$ and all points $q_i$ are very close to $Q:=(\sqrt 3/2,-1/2)$. This makes the radii of all disks $\mathcal D_i$ arbitrarily close to 1. We choose the convex monotone chain $p_0, \ldots, p_{n+1}$ so that $\ell(p_0,p_1)$ forms a 60-degree angle with the horizontal. As the chain is strictly convex but very close to a straight line, $\ell(p_0,p_1) \approx \ell(p_n,p_{n+1}) \approx \ell(p_i,p_{i+1}) \approx \ell(p_i,p_{i+2})$. Thus, all those lines almost pass through $P$ and form an angle of roughly 60 degrees with the horizontal. The same holds for the points $q_i$. For the choice of an elliptical disk tangent to the $x$-axis at $O$ and to a line with a 60-degree slope at $P$ (resp. at $Q$), we take a disk of radius 1 centered at $(0,1)$ (resp. at $(0,-1)$); see Figure~\ref{fig:almost-disks}. \begin{figure}[h!]
\centering \begin{tikzpicture}[ scale=1.4, vert/.style={draw, fill=red, opacity=0.2}, verta/.style={draw, fill=blue, opacity=0.2}, vertb/.style={draw, fill=blue, opacity=0.2}, dot/.style={fill,circle,inner sep=-0.03cm}] \draw[vertb] (0,0) circle (1) ; \node at (0,0) {$\mathcal E^-_k$} ; \coordinate (cm) at (0,0) ; \draw[verta] (0,2) circle (1) ; \node at (0,2) {$\mathcal E^+_k$} ; \coordinate (cp) at (0,2) ; \draw[vert] (1.73,1) circle (1) ; \node at (1.73,1) {$\mathcal D_i$} ; \coordinate (cv) at (1.73,1) ; \node[dot] at (1.73/2,1.5) {} ; \node[dot] at (1.73/2,0.5) {} ; \node[dot] at (0,1) {} ; \node at (1.73/2,1.75) {$P$} ; \node at (1.73/2,0.25) {$Q$} ; \node at (-0.25,1) {$O$} ; \coordinate (P) at (1.73/2,1.5) ; \coordinate (Q) at (1.73/2,0.5) ; \coordinate (O) at (0,1) ; \draw[opacity=0.3] (O) -- (P) -- (Q) -- cycle ; \draw[opacity=0.3] (cm) -- (O) -- (cp) -- (P) -- (cv) -- (Q) -- cycle ; \end{tikzpicture} \caption{The layout of the disks $\mathcal D_i$, and the elliptical disks $\mathcal E^+_k$ and $\mathcal E^-_k$.} \label{fig:almost-disks} \end{figure} The acute angle formed by $\ell_1$ and $\ell_m$ (incident in $O$) is made arbitrarily small so that, by continuity of the elliptical disk defined by two tangents at two points, the filled ellipses $\mathcal E^+_k$ and $\mathcal E^-_k$ have eccentricity arbitrarily close to 0 and major axis arbitrarily close to 1. \end{proof} In the construction, we made \emph{both} the eccentricity of the (filled) ellipses arbitrarily close to 0 and the ratio between the largest and the smallest major axis arbitrarily close to 1. We know that this construction is very unlikely to work for the extreme case of unit disks, since a polynomial algorithm is known for \textsc{Max Clique}. Note that even with disks of arbitrary radii, Theorem~\ref{thm:main-structural-non-disk} unconditionally proves that the construction does fail. Indeed the co-2-subdivision of $C_3+C_3$ is the complement of $C_9+C_9$, hence not a disk graph. 
\subsection{Homothets of a convex polygon} Another natural direction of generalizing a result on disk intersection graphs is to consider {\em pseudodisk intersection graphs}, i.e., intersection graphs of collections of closed subsets of the plane (regions bounded by simple Jordan curves) that are pairwise in a {\em pseudodisk} relationship (see Kratochv\'il \cite{DBLP:conf/gd/Kratochvil96}). Two regions $A$ and $B$ are in a pseudodisk relationship if both differences $A\setminus B$ and $B\setminus A$ are arc-connected. It is known that $P_{hom}$ graphs, i.e., intersection graphs of homothetic copies of a fixed polygon $P$, are pseudodisk intersection graphs~\cite{agarwal453state}. As shown by Brimkov {\em et al.}, for every convex $k$-gon $P$, a $P_{hom}$ graph with $n$ vertices has at most $n^k$ maximal cliques~\cite{DBLP:journals/corr/BrimkovJKKPRST14}. This clearly implies that \textsc{Maximum Clique}\xspace, but also \textsc{Clique $p$-Partition} for fixed $p$, is polynomial-time solvable in $P_{hom}$ graphs. Actually, the bound on the maximum number of maximal cliques from \cite{DBLP:journals/corr/BrimkovJKKPRST14} holds for a more general class of graphs, called $k_{DIR}$-CONV, which admit an intersection representation by convex polygons, every side of which is parallel to one of $k$ directions. Moreover, we observe that Theorem \ref{thm:coEvenCycles} cannot be generalized to $P_{hom}$ graphs or $k_{DIR}$-CONV graphs. Indeed, consider the complement $\overline{P_n}$ of an $n$-vertex path $P_n$. The number of maximal cliques in $\overline{P_n}$, or, equivalently, of maximal independent sets in $P_n$, is $\Theta(c^n)$ for $c \approx 1.32$, i.e., exponential in $n$ \cite{DBLP:journals/jgt/Furedi87}. Therefore, for every fixed polygon $P$ (or for every fixed $k$) there is an $n$ such that $\overline{P_n}$ is not a $P_{hom}$ ($k_{DIR}$-CONV) graph.
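The growth rate $c$ can be made explicit by a standard counting argument, sketched here for convenience. Writing $i(n)$ for the number of maximal independent sets of $P_n$, one can check that $i(1)=1$, $i(2)=i(3)=2$, and
\[
i(n) = i(n-2) + i(n-3) \qquad (n \geqslant 4),
\]
so $c$ is the unique real root of $x^3 = x + 1$, that is, $c \approx 1.3247$.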
\section{Perspectives}\label{sec:perspectives} We presented the first QPTAS and subexponential algorithm for \textsc{Maximum Clique}\xspace on disk graphs. Our subexponential algorithm extends to the weighted case and yields a polynomial algorithm if both the maximum degree $\Delta$ and the odd girth $c$ of the complement graph are constant. Indeed, our full characterization of disk graphs with co-degree 2 implies a backdoor to bipartiteness of size $c\Delta$ in the complement. We have also paved the way for a potential NP-hardness construction. We showed why the versatile approach of representing complements of even subdivisions of graphs forming a class on which \textsc{Maximum Independent Set}\xspace is NP-hard fails if the class is \emph{general graphs}, \emph{planar graphs}, or even any class containing the disjoint union of two odd cycles. This approach was used by Middendorf for some string graphs \cite{Middendorf92} (with the class of all graphs), by Cabello et al. \cite{CabelloCL13} to settle the then long-standing open question of the complexity of \textsc{Maximum Clique}\xspace for segments (with the class of planar graphs), and in Section~\ref{sec:gen&lim} of this paper for ellipses and triangles (with the class of all graphs). Determining the complexity of \textsc{Maximum Independent Set}\xspace on graphs without two vertex-disjoint odd cycles as an induced subgraph is a valuable first step towards settling the complexity of \textsc{Maximum Clique}\xspace on disks. Another direction is to try to strengthen our QPTAS in one of two ways: either to obtain a PTAS for \textsc{Maximum Clique}\xspace on disk graphs, or to obtain a QPTAS (or PTAS) for \textsc{Maximum Weighted Clique}\xspace on disk graphs. It is interesting to note that Bock et al. \cite{Bock14} showed a PTAS for \textsc{Maximum Weighted Independent Set}\xspace for graphs $G$ with $\ocp(G) = O(\log n / \log \log n)$.
However, this bound is too weak to use a win-win approach similar to that of Theorem~\ref{thm:qptas}. \bibliographystyle{abbrv}
\section{Introduction} Automatic music generation using neural networks has attracted much attention. There are two classes of music generation approaches: symbolic music generation~\cite{hadjeres2016deepbach}\cite{magenta2016}\cite{yang2017midinet} and audio music generation~\cite{oord2016wavenet}\cite{mehri2016samplernn}. In this study, we focus on symbolic melody generation, which requires learning from sheet music. Many music genres such as pop music consist of melody and harmony. Since pleasing harmonies can usually be ensured by using legitimate chord progressions that have been catalogued by musicians, we only focus on melody generation, similar to some recent studies~\cite{magenta2016}\cite{yang2017midinet}\cite{colombo2017deep}\cite{pmlr-v80-roberts18a}. This greatly simplifies the melody generation problem. Melody is a linear succession of musical notes along time. It has both short time scales, such as notes, and long time scales, such as phrases and movements, which makes melody generation a challenging task. Existing methods generate pitches and rhythm simultaneously \cite{magenta2016} or sequentially \cite{chu2016song} using Recurrent Neural Networks~(RNNs), but they usually work on the note scale without explicitly modeling larger time-scale components such as rhythmic patterns. It is therefore difficult for them to learn long-term dependencies or structure in melody. In theory, an RNN can learn temporal structure of any length in the input sequence, but in practice this becomes very hard as the sequence gets longer. Different RNNs have different learning capabilities; e.g., LSTM~\cite{hochreiter1997long} performs much better than the simple Elman network. But any model has a limit on the length of learnable structure, and this limit depends on the complexity of the sequence to be learned. To enhance the learning capability of an RNN, one approach is to invent a new structure.
In this work we take another approach: increasing the granularity of the input. Since each symbol in the coarser sequence corresponds to a longer segment than in the original representation, the same model can learn longer temporal structure. To implement this idea, we propose a Hierarchical Recurrent Neural Network (HRNN) for learning melody. It consists of three LSTM-based sequence generators --- the Bar Layer, Beat Layer and Note Layer. The Bar Layer and Beat Layer are trained to generate bar profiles and beat profiles, which are designed to represent the high-level temporal features of melody. The Note Layer is trained to generate melody conditioned on the bar profile sequence and beat profile sequence output by the Bar Layer and Beat Layer. By learning on different time scales, the HRNN can grasp the general regular patterns of human-composed melodies at different granularities, and generate melody with realistic long-term structure. This method follows the general idea of granular computing~\cite{bargiela2012granular}, in which different resolutions of knowledge or information are extracted and represented for problem solving. With the shorter profile sequences guiding the generation of the note sequence, the difficulty of generating note sequences with well-organized structure is alleviated. \section{Related Work} \subsection{Melody Generation with Neural Networks} There is a long history of generating melody with RNNs. A recurrent autopredictive connectionist network called CONCERT was used to compose music~\cite{mozer1994neural}. With a set of composition rules as constraints to evaluate melodies, an evolving neural network was employed to create melodies~\cite{chen2001creating}. As an important form of RNN, LSTM~\cite{hochreiter1997long} was used to capture global music structure and improve the quality of the generated music~\cite{eck2002first}.
Boulanger-Lewandowski, Bengio, and Vincent explored complex polyphonic music generation with an RNN-RBM model~\cite{boulanger2012modeling}. The Lookback RNN and Attention RNN were proposed to tackle the problem of creating melodies' long-term structure~\cite{magenta2016}. The Lookback RNN introduces a handcrafted lookback feature that makes it easier for the model to repeat sequences, while the Attention RNN leverages an attention mechanism to learn longer-term structure. Inspired by convolution, two variants of RNN have been employed to attain transposition invariance~\cite{johnson2017generating}. To model the relation between rhythm and melody flow, a melody can be divided into a pitch sequence and a duration sequence that are processed in parallel~\cite{colombo2016algorithmic}; this approach was further extended in \cite{colombo2017deep}. A hierarchical VAE was employed to learn the distribution of melody pieces in~\cite{pmlr-v80-roberts18a}, the decoder of which is similar to our model. The major difference is that the higher layer of its decoder uses an automatically learned representation of bars, while our higher layers use predefined representations of bars and beats, which makes the learning problem easier. Generative Adversarial Networks~(GANs) have also been used to generate melodies, for example an RNN-based GAN~\cite{mogren2016c} and a CNN-based GAN~\cite{yang2017midinet}. However, the generated melodies still lack realistic long-term structure. Some models have been proposed to generate multi-track music. A 4-layer LSTM was employed to produce the key, press, chord and drum of pop music separately~\cite{chu2016song}. With pseudo-Gibbs sampling, a model can generate highly convincing chorales in the style of Bach~\cite{hadjeres2016deepbach}. Three GANs for symbolic-domain multi-track music generation were proposed in~\cite{DBLP:conf/aaai/DongHYY18}.
An end-to-end melody and arrangement generation framework, XiaoIce Band, was proposed to generate a melody track together with accompanying tracks using RNNs~\cite{Zhu:2018:XBM:3219819.3220105}. \subsection{Hierarchical and Multiple Time Scales Networks} The idea of hierarchical or multiple time scales has been used in neural network design, especially in the area of natural language processing. The Multiple Timescale Recurrent Neural Network~(MTRNN) realizes the self-organization of a functional hierarchy with two types of neurons, ``fast'' units and ``slow'' units~\cite{yamashita2008emergence}. It was later shown that the MTRNN can acquire the capability to recognize, generate, and correct sentences in a hierarchical way: characters are grouped into words, and words into sentences~\cite{hinoshita2011emergence}. An LSTM auto-encoder has been trained to preserve and reconstruct paragraphs by hierarchically building embeddings of words, sentences and paragraphs~\cite{li2015hierarchical}. To process inputs at multiple time scales, the Clockwork RNN was proposed, which partitions the hidden layers of an RNN into separate modules~\cite{koutnik2014clockwork}. Different from the Clockwork RNN, we integrate prior knowledge of music in constructing the hierarchical model and feed features at multiple time scales to different layers. \section{Music Concepts and Representation} \label{Sec:BMT} We first briefly introduce some basic music concepts and their properties for readers who do not have a music background, then explain how these concepts are represented in the model. \subsection{Basic Music Concepts} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{Figs/melody.pdf} \caption{A typical form of melody. The time signature of this musical piece is 4/4.
The numerator means a bar contains 4 beats, and the denominator means the time length of 1 beat is a quarter note.} \label{Fig:Melody} \end{figure} As shown in Fig.~\ref{Fig:Melody}, melody, often known as tune, voice, or line, is a linear succession of musical notes, and each note represents the pitch and duration of a sound. Several combined notes form a beat, which determines the rhythm to which listeners would tap their fingers when listening to music. A bar contains a certain number of beats in each musical piece. The time signature~(e.g., 3/4) specifies, via the denominator, which note value constitutes a beat, and, via the numerator, the number of beats in each bar. Each musical piece has a key chosen from the 12 notes in an octave. The key signature, such as C$\sharp$ or B$\flat$, designates which key the current musical piece is in. A musical piece can be transposed to a different key while maintaining its general tone structure. Therefore we can transpose all of the musical pieces to the key of C while maintaining the relative relationships between notes. Shifting all musical pieces to the same key makes it easier for the model to learn the relative relationships between notes, and the generated pieces can afterwards be transposed to any key. \begin{figure*} \centering \includegraphics[width=\textwidth]{Figs/profiles_aug_v2.pdf} \caption{ Samples of beat profiles and bar profiles. Here we use notes with the same pitch to illustrate the rhythm in a beat and a bar. The rhythms represented by beat profiles 5 and 6 depend on the rhythm of the previous beat, so they are shown with two beats, where the first beat consists of quarter notes. } \label{Fig:Profiles} \end{figure*} \begin{figure} \centering \includegraphics[width=0.8\linewidth]{Figs/Representation.pdf} \caption{An example of melody representation. \textbf{Top}: A melody with the length of one bar. \textbf{Bottom}: Representation of the melody, in which N denotes a no-event and O denotes a note-off event.
Since the fourth note is not immediately followed by any note, a note-off event is necessary here.} \label{Fig:Representation} \end{figure} \begin{figure*}[htpb] \centering \includegraphics[width=\textwidth]{Figs/Arch_v3.pdf} \caption{Architecture of HRNN. From top to bottom are Bar Layer, Beat Layer and Note Layer respectively. Inner layer connections along time are shown with black lines. Connections between layers are shown with green lines, blue lines and red lines. } \label{Fig:Arch} \end{figure*} \subsection{Melody Representation} \label{Sec:MelodyRepresentation} To simplify the problem, we only chose musical pieces with the time signature 4/4. This is a widely-used time signature. According to the statistics on the Wikifonia dataset described in Section~\ref{Sec:Exp}, about 99.83\% of notes have pitches between C2 and C5. Thus, all notes are octave-shifted to this range. Then there are 36 options for a pitch of a note (3 octaves and each octave has 12 notes). To represent duration, we use event messages in the Midi standard. When a note is pressed, a note-on event with the corresponding pitch happens; and when the note is released, a note-off event happens. For a monophonic melody, if two notes are adjacent, the note-on event of the latter indicates the note-off event of the former, and the note-off event of the former is therefore not needed. In this study, every bar was discretized into 16 time steps. At every time step, there are 38 kinds of events (36 note-on events, one note-off event and one no-event), which are exclusive. One example is shown in Fig.~\ref{Fig:Representation}. In this way, note-on events mainly determine the pitches in the melody and no-events mainly determine the rhythm as they determine the duration of the notes. So a 38-dimensional one-hot vector is used to represent the melody at every time step. 
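As a concrete illustration, the 38-symbol event encoding described above can be sketched as follows. This is a minimal sketch: the index layout (note-on events first, then note-off, then no-event) and the helper names are our own illustrative choices, not taken from the paper's code.

```python
import numpy as np

# Event vocabulary for the melody representation: indices 0..35 are note-on
# events (3 octaves from C2, 12 pitches each), 36 is note-off, 37 is no-event.
NUM_PITCHES = 36
NOTE_OFF = NUM_PITCHES        # index 36
NO_EVENT = NUM_PITCHES + 1    # index 37
VOCAB_SIZE = NUM_PITCHES + 2  # 38 mutually exclusive events per time step

STEPS_PER_BAR = 16            # a 4/4 bar is discretized into 16 time steps

def encode_step(event_index):
    """One-hot encode a single time step as a 38-dimensional vector."""
    v = np.zeros(VOCAB_SIZE, dtype=np.float32)
    v[event_index] = 1.0
    return v

def encode_bar(events):
    """Encode one bar (a list of 16 event indices) as a (16, 38) matrix."""
    assert len(events) == STEPS_PER_BAR
    return np.stack([encode_step(e) for e in events])
```

For instance, a whole note held for an entire bar would be one note-on event followed by fifteen no-events, since the next note-on implicitly releases it.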
\subsection{Rhythmic Patterns and Profiles} Rhythmic patterns are successions of note durations which occur periodically in a musical piece. They are a concept on a larger time scale than the note scale and are important for melodies' long-term structure. Notice that in this model we do not encode the melody flow, because it is hard to find an appropriate high-level representation of it. Two features, named \textit{beat profile} and \textit{bar profile}, are designed as high-level representations of a whole beat and a whole bar, respectively. Compared with individual notes, the two profiles provide coarser representations of the melody. To construct the beat profile set, all melodies are cut into melody clips with a width of one beat and binarized at each time step, with $1$ for an event~(note-on events and the note-off event) and $0$ for a no-event at this step. Then we cluster all these melody clips into several clusters via the K-Means algorithm and use the cluster centers as our beat profiles. Given a one-beat melody piece, we can binarize it in the same manner and choose the closest beat profile as its representation. The computation of bar profiles is similar, except that the width of the melody clips is changed to one bar. Based on the well-known elbow method, the numbers of clusters for beat profiles and bar profiles are set to 8 and 16, respectively. In Fig.~\ref{Fig:Profiles}, some frequently appearing profiles are shown with notes. \section{Hierarchical RNN for Melody Generation} \label{Sec:Methods} \subsection{Model Architecture}\label{Method:HMG} HRNN consists of three event sequence generators: the Bar Layer, Beat Layer and Note Layer, as illustrated in Fig.~\ref{Fig:Arch}. These layers generate the bar profile sequence, beat profile sequence and note sequence, respectively. The lower-level generators generate sequences conditioned on the sequences output by the higher-level generators.
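The profile construction described in the previous section (binarize fixed-width clips, cluster them, keep the cluster centers) can be sketched as follows. This is a sketch under our own assumptions: the tiny deterministic Lloyd's-iteration K-Means and all function names are illustrative, not the paper's implementation.

```python
import numpy as np

NO_EVENT = 37          # index of the no-event symbol in the 38-event vocabulary
STEPS_PER_BEAT = 4     # a 4/4 bar has 16 steps, so one beat spans 4 steps

def binarize(clip):
    """Mark each step with 1 for any event (note-on/note-off), 0 for no-event."""
    return np.array([0.0 if e == NO_EVENT else 1.0 for e in clip])

def fit_profiles(clips, k, iters=50):
    """Cluster binarized clips with plain Lloyd's K-Means; centers = profiles."""
    X = np.stack([binarize(c) for c in clips])
    # initialize deterministically with the first k distinct binarized clips
    uniq = np.unique(X, axis=0)
    assert len(uniq) >= k, "need at least k distinct clips"
    centers = uniq[:k].copy()
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def nearest_profile(centers, clip):
    """Represent a clip by the index of its closest profile."""
    b = binarize(clip)
    return int(((centers - b) ** 2).sum(axis=1).argmin())
```

The same code serves both profile types: beat profiles use clips of width `STEPS_PER_BEAT` with $k=8$, bar profiles use clips of width 16 with $k=16$.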
So to generate a melody, one needs to first generate a bar profile sequence and a beat profile sequence in turn. Suppose that we want to generate a melody piece with the length of one bar, represented as $n_t, \ldots, n_{t+15}$ (see Fig.~\ref{Fig:Arch}). First, the Bar Layer generates a bar profile $B_{t}$ with the last bar profile $B_{t-16}$ as input. Then the Beat Layer generates 4 beat profiles $b_t, b_{t+4}, b_{t+8}, b_{t+12}$ with $b_{t-4}$ as input, conditioned on the bar profile $B_t$. To generate the notes $n_t, n_{t+1}, \ldots, n_{t+3}$, the Note Layer is conditioned on both $B_t$ and $b_t$; to generate the notes $n_{t+4}, \ldots, n_{t+7}$, the Note Layer is conditioned on both $B_t$ and $b_{t+4}$; and so on. In this way, each bar profile is a condition for 16 generated notes and each beat profile is a condition for 4 generated notes. All three layers use LSTMs, but the time scales of their inputs are different. Theoretically, the Beat Layer and Bar Layer can learn 4 and 16 times longer temporal structure than the Note Layer, respectively. Note that it is difficult to quantify the length of temporal structure learned by a model, since ``temporal structure'' is an abstract concept and its characterization is still an open problem. We can only probe the difference in length produced by different models indirectly, by measuring the quality of the generated sequences with behavioral experiments (see Section 5). To explicitly help an RNN memorize recent events and potentially repeat them, a lookback feature was proposed for \textit{the Lookback RNN}~\cite{magenta2016}. A user study suggested that the RNN with the lookback feature outperforms a basic RNN~\cite{yang2017midinet}, so we also use this feature in our model\footnote{For fair comparison in experiments, all models were equipped with this feature.}. The lookback distance is 2 and 4 for the Bar Layer, 4 and 8 for the Beat Layer, and 4 and 8 for the Note Layer.
Therefore, the Note Layer without the conditioning of the Beat Layer and Bar Layer is equivalent to \textit{the Lookback RNN}. \subsection{LSTM-Based Event Sequence Generator} Bar profiles, beat profiles and notes can all be abstracted as events, which can be generated by an RNN. It might be better to use different models for generating different types of events, but for simplicity we use the same LSTM-based event sequence generator for the Bar Layer, Beat Layer and Note Layer. The event sequence generator $G_\theta$ is trained by solving the following optimization problem: \begin{equation}\label{equ:condition} \max_\theta \sum_{y \in \mathcal{Y}} \sum_{t=1}^{\text{len}(y)} \log p(y_t|y_0,...,y_{t-1}, c_t) \end{equation} where $\theta$ are the parameters of the generator and $y$ is a sequence sampled from the event sequence dataset $\mathcal{Y}$. Here $y_t$ denotes the $t$-th event in $y$ and $c_t$ denotes the condition for $y_t$. An LSTM is used to predict the conditional probability in Eq.~\eqref{equ:condition}; it is characterized by input gates $i_t$, output gates $o_t$ and forget gates $f_t$~\cite{hochreiter1997long}: \begin{equation} \begin{aligned} i_t & = \sigma(W_{ix}x_t + W_{im}m_{t-1}) \\ f_t & = \sigma(W_{fx}x_t + W_{fm}m_{t-1}) \\ o_t & = \sigma(W_{ox}x_t + W_{om}m_{t-1}) \\ c_t & = f_t \odot c_{t-1} + i_t \odot \tanh(W_{cx} x_t + W_{cm} m_{t-1}) \\ m_t & = o_t \odot c_t \\ p_t & = \text{Softmax}(m_t) \end{aligned} \end{equation} where $W_{ix}$, $W_{im}$, $W_{fx}$, $W_{fm}$, $W_{ox}$, $W_{om}$, $W_{cx}$ and $W_{cm}$ are trainable parameters, $\odot$ denotes element-wise multiplication and $\sigma(\cdot)$ denotes the sigmoid function (with a slight abuse of notation, $c_t$ in these equations denotes the LSTM cell state, not the condition in Eq.~\eqref{equ:condition}). The previous event $y_{t-1}$ is used as the input $x_t$.
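The update equations above can be written as a single step function. The following is a bias-free sketch that mirrors the equations as stated, including applying the softmax directly to the hidden state $m_t$; the dictionary-of-weights layout and function names are our own assumptions, not the paper's code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()

def lstm_step(x_t, m_prev, cell_prev, W):
    """One step of the bias-free LSTM from the equations above.

    x_t: input vector (previous event plus any conditioning features);
    m_prev, cell_prev: previous hidden state and cell state;
    W: dict of weight matrices keyed 'ix','im','fx','fm','ox','om','cx','cm'.
    """
    i = sigmoid(W["ix"] @ x_t + W["im"] @ m_prev)            # input gate
    f = sigmoid(W["fx"] @ x_t + W["fm"] @ m_prev)            # forget gate
    o = sigmoid(W["ox"] @ x_t + W["om"] @ m_prev)            # output gate
    cell = f * cell_prev + i * np.tanh(W["cx"] @ x_t + W["cm"] @ m_prev)
    m = o * cell                                             # hidden state
    p = softmax(m)                                           # event distribution
    return m, cell, p
```

In a full implementation the softmax would act on a projection of $m_t$ to the 38-event vocabulary; here, as in the equations above, it is applied to $m_t$ directly.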
During generation, given a primer sequence as the initial input, the LSTM network generates a distribution $p_0$ over all candidate events. The next event is chosen by sampling from $p_0$, and successive events are generated according to $p(y_t|y_0,...,y_{t-1}, c_t)$. \section{Experiments} \label{Sec:Exp} Evaluating the performance of melody generation models is difficult. The main reason is that measuring the quality of the generated melodies is subjective, and it is hard to find an objective metric. We evaluated three generative models, HRNN-1L, HRNN-2L and HRNN-3L, mainly based on behavioral experiments. HRNN-3L is the model described in the previous section. HRNN-2L is HRNN-3L without the Bar Layer, while HRNN-1L is HRNN-3L without both the Bar Layer and the Beat Layer. Note that HRNN-1L is actually the \textit{Lookback RNN} developed by Google Magenta \cite{magenta2016}. The music pieces generated by the models were not post-processed. All melodies used in the experiments are publicly available\footnote{\url{https://www.dropbox.com/s/vnd6hoq9olrpb5g/SM.zip?dl=0}}. \subsection{Implementation Details} All LSTM networks used in the experiments had two hidden layers, each with 256 hidden neurons. They were trained with the Adam algorithm~\cite{kingma2014adam} with an initial learning rate of 0.001 and a minibatch size of 64. The $\beta_1$, $\beta_2$ and $\epsilon$ of the Adam optimizer were set to 0.9, 0.999 and $10^{-8}$, respectively. To avoid over-fitting, dropout with ratio 0.5 was adopted for every hidden layer of the LSTM, and validation-based early stopping~(see Fig.~\ref{Fig:Accuracy}) was employed: training was stopped as soon as the loss on the validation set increased 5 times in a row~(the model was evaluated on the validation set every 20 training iterations). In each generation trial, primer sequences~(both profiles and melodies) were randomly picked from the validation dataset.
For the Bar Layer and Beat Layer, one profile is given as the primer. For the Note Layer, the length of the primer sequence is 1 beat. Beam search with a beam of size 3 was used in all experiments. \subsection{Dataset}\label{SubSec:Dataset} We collected 3,859 lead sheets with the time signature of 4/4 in MusicXML format from \url{http://www.wikifonia.org}. We have made these lead sheets publicly available\footnote{\url{https://www.dropbox.com/s/x5yc5cwjcx2zuvf/Wikifonia.zip?dl=0}}. 90\% of the lead sheets were used as the training set and the other 10\% as the validation set. The speed of most music pieces in the dataset is 120 beats per minute. To guarantee the correct segmentation of melodies, all melodies starting with weak beats were removed, so that we can take the bar as a basic unit. \begin{figure} \centering \includegraphics[width=\linewidth]{Figs/Figure_5.pdf} \caption{The accuracy curves of the Note Layer for the training and validation datasets. \textbf{Left}: no-event accuracy curves. \textbf{Right}: event~(both note-on event and note-off event) accuracy curves. Arrows indicate iterations at which training stopped to prevent over-fitting.} \label{Fig:Accuracy} \end{figure} \subsection{Guiding Effect of Profiles} To verify whether the beat and bar profiles can guide the generation of melody, we plotted the Note Layer's accuracy curves in Fig.~\ref{Fig:Accuracy}. Both event accuracy~(accuracy of the note-on and note-off events; chance level is $1/37$) and no-event accuracy~(accuracy of the no-event; chance level is $1/2$) are plotted. With beat and bar profiles, the Note Layer learned the pattern of no-events quickly and easily. For models with profiles, the accuracy of no-events increased to nearly 100\% at about 200 iterations, while the model without profiles converged slowly and over-fitting started after about 2000 iterations.
Since rhythm is encoded by no-events~(see Section~\ref{Sec:MelodyRepresentation}), this shows that the Note Layer successfully utilized the rhythm provided by the beat and bar profiles. The accuracy of note-on and note-off events also improved, which means that models with profiles not only did a good job in predicting rhythm, but also in predicting pitch. \begin{figure} \centering Beat profiles: "1, 1, 1, 2, ..., 1, 1, 1, 2"{ \includegraphics[width=0.48\textwidth]{Figs/44488.pdf} \label{Fig:beat_profile_44488} } \\ Beat profiles: "2, 1, 2, 1, ..., 2, 1, 2, 1"{ \includegraphics[width=0.48\textwidth]{Figs/884884.pdf} \label{Fig:beat_profile_2121} } \\ Beat profiles: "2, 5, 2, 5, ..., 2, 5, 2, 5"{ \includegraphics[width=0.48\textwidth]{Figs/2525.pdf} \label{Fig:beat_profile_2525} } \caption{Melodies generated with given beat profile sequences.} \label{Fig:beat_profile} \end{figure} \begin{figure} \centering Bar profiles: "1, 2, 1, 2, 1, 2, 1, 2"{ \includegraphics[width=0.48\textwidth]{Figs/bar-12.pdf} } \\ Bar profiles: "2, 3, 2, 3, 2, 3, 2, 3"{ \includegraphics[width=0.48\textwidth]{Figs/bar-23.pdf} } \\ Bar profiles: "3, 1, 3, 1, 3, 1, 3, 1"{ \includegraphics[width=0.48\textwidth]{Figs/bar-31.pdf} } \caption{Melodies generated with given bar profile sequences.} \label{Fig:bar_profile} \end{figure} Given a profile sequence, the Note Layer generates melodies with the rhythm represented by that sequence. To show this, we used handcrafted profile sequences to guide the generation of the Note Layer. Fig.~\ref{Fig:beat_profile} and Fig.~\ref{Fig:bar_profile} show melodies generated from given beat profile sequences with HRNN-2L and from given bar profile sequences with HRNN-3L~(profile indices as in Fig.~\ref{Fig:Profiles}). The results verify that the generated melodies are strongly constrained by the given profile sequence patterns. The same conclusion can be obtained using fixed beat profiles and bar profiles extracted from existing melodies.
We extracted the beat profiles and bar profiles of the children's rhyme \textit{Twinkle, Twinkle Little Star} and generated a melody conditioned on these profiles. The result is shown in Fig.~\ref{Fig:Twinkle}. The audio files can be found in the \textbf{Supplementary Materials}. The rhythm of the generated melody is in unison with the original melody, which suggests that the beat profiles and bar profiles effectively guided the generation of the melody. \begin{figure} \centering Original melody{ \includegraphics[width=0.48\textwidth]{Figs/twinkle_origin.pdf} } \\ Generated melody{ \includegraphics[width=0.48\textwidth]{Figs/twinkle_hrnn.pdf} } \caption{Original melody of Twinkle, Twinkle Little Star and the melody generated by the Note Layer of HRNN-3L, given the profiles of the original melody.} \label{Fig:Twinkle} \end{figure} \subsection{Qualitative Comparison} \begin{figure*} \centering \includegraphics[width=\textwidth]{Figs/123l_v3.pdf} \caption{Melodies generated by HRNN-1L, HRNN-2L and HRNN-3L. } \label{Fig:123l} \end{figure*} The strong guiding effect of the profiles implies that the Note Layer can output good melodies if the higher layers generate good profile sequences. Since note sequences are much longer than their profile sequences, learning the latter should be easier than learning the former using the same type of model. Thus, compared to HRNN-1L, melodies generated by the HRNN-2L and HRNN-3L models should be better organized and keep better long-term structure. A qualitative comparison verified this point. Three typical music pieces generated by HRNN-1L, HRNN-2L and HRNN-3L with the same primer note are shown in Fig.~\ref{Fig:123l}. The melody generated by HRNN-1L has a basic rhythm, but also irregular rhythmic patterns, while the melodies generated by HRNN-2L and HRNN-3L contain fewer irregular rhythmic patterns. \subsection{Comparison of Different Numbers of Layers} Three human behavior experiments were conducted to evaluate the melodies generated by the models.
For this purpose, we built an online website where people could listen to melodies and give their feedback. To model a real piano playing scenario, a sustain pedal effect was added to all model-generated and human-composed musical pieces evaluated in these experiments. This was achieved by extending all notes' durations so that they ended at the end of the corresponding bars. \subsubsection{Two-Alternative Forced Choice Experiment} \label{subsec:AFC} We randomly provided subjects with pairs of melodies with the length of 16 bars (about 32 seconds) and asked them to vote (by pressing one of two buttons in the experiment interface) for the melody that sounded better in every pair. This is the two-alternative forced choice~(2AFC) setting. Subjects had unlimited time to press the buttons after they heard the melodies, and pressing a button started a new trial. Three types of pairs were compared: HRNN-3L versus HRNN-1L, HRNN-2L versus HRNN-1L and HRNN-3L versus HRNN-2L. Each model generated a set of 15 melodies, and in every trial two melodies were randomly sampled from the two corresponding sets. Different types of pairs were mixed and randomized in the experiment. A call-for-participants advertisement was spread on social media. 1637 trials were collected from 103 IP addresses~(note that one IP address may not necessarily correspond to one subject). The results are shown in Fig.~\ref{Fig:Exp}. In nearly two-thirds of the trials, melodies generated by the hierarchical models were favored (Pearson's chi-squared test, $p=3.96\times10^{-10}$ for HRNN-3L versus HRNN-1L and $p=2.70\times10^{-8}$ for HRNN-2L versus HRNN-1L).
In addition, subjects voted more for melodies generated by HRNN-3L than by HRNN-2L~($p=3.38\times10^{-6}$). \begin{figure} \centering \includegraphics[width=\linewidth]{Figs/SCORE_v7.pdf} \caption{Results of the 2AFC experiment~(\textbf{left}) and the melody score experiment~(\textbf{right}).} \label{Fig:Exp} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{Figs/Figure_11.pdf} \caption{Results of the music Turing test.} \label{Fig:turing} \end{figure} \subsubsection{Melody Score Experiment} To quantitatively measure the quality of the melodies generated by different models and to verify the conclusion obtained in the online experiment, we invited 18 subjects between the ages of 18 and 25 to score these melodies. Every subject was asked to score every melody used in the 2AFC experiment on 5 levels: 5 the best and 1 the worst. It took each subject about 24 minutes to finish the experiment. We report the average score of every melody in Fig.~\ref{Fig:Exp}. The results verified that the two additional layers improved the quality of the melodies generated by the single-layer model~(two-tailed test, $p = 0.00576$ for HRNN-3L versus HRNN-1L). \subsubsection{Controlling the Number of Parameters} In the above experiments, the number of parameters of HRNN-3L (2.84M) was three times that of HRNN-1L (0.94M). That might be the reason why HRNN-3L performed better. So we trained an HRNN-1L model with 450 hidden neurons~(2.79M parameters) and conducted another 2AFC experiment~(as described in Section 5.5.1) to compare their performance. A total of 203 trials were collected from 21 IP addresses. In 127 trials (62.6\%, $p = 3.44\times 10^{-4}$), melodies generated by HRNN-3L were favored, which is similar to the result (63.3\%) of the comparison with the HRNN-1L with fewer parameters~(Fig.~\ref{Fig:Exp} left). This indicates that the better performance of the hierarchical structure was not mainly due to the increased number of parameters.
\subsubsection{Music Turing Test} To compare the quality of the melodies generated by the models with that of melodies composed by humans, a music ``Turing test'' was carried out. Only two models, HRNN-3L and HRNN-1L, were tested. We found that without chords, it was difficult for the models to generate melodies that could fool humans. So chords were added as a condition of the Beat Layer and Note Layer in training and generation. The chords and primer sequences used in the generation of a melody were extracted from the same music piece in the validation set. A total of 50 musical pieces containing 8 bars were randomly chosen from the validation set as human-composed music. Then HRNN-1L and HRNN-3L each generated 25 melodies with the length of 8 bars. We provided subjects with music pieces from these 100 examples and asked them to judge whether they were composed by a human by pressing one of two buttons in the experiment interface. Subjects had unlimited time to press the buttons after they heard the melodies, and pressing a button started a new trial. Feedback about the correctness of the choice was provided immediately after the subjects made their choice in every trial. The subjects thus had a chance to learn to distinguish human-composed from machine-composed melodies, which made it hard for the models to pass the Turing test. In this experiment, we collected 4185 trials from 659 IP addresses, among which 1018 music pieces were generated by HRNN-1L, 1003 by HRNN-3L and 2164 by humans. As shown in Fig.~\ref{Fig:turing}, 33.69\% of the music pieces generated by HRNN-3L were thought to be human-composed (or real), which is higher than the result for HRNN-1L, 28.68\%. Notably, not all music pieces sampled from the original dataset were thought to be composed by humans (only 74.95\% were correctly classified).
This implies that some music pieces generated by the models sounded better than human-composed pieces, and that the quality of the dataset is not very high. \subsection{Comparison with Other Models} \label{subsec:com_midinet} Though many models have been proposed for melody generation, to the best of our knowledge, only the Lookback RNN~\cite{magenta2016}, Attention RNN~\cite{magenta2016}, MidiNet~\cite{yang2017midinet} and MusicVAE~\cite{pmlr-v80-roberts18a} have publicly available source code. These models represent the state of the art in the area of melody generation. It was reported in~\cite{yang2017midinet} that the Attention RNN had similar performance to the Lookback RNN. Our previous experiments have shown that HRNN-3L performs better than the Lookback RNN, i.e., HRNN-1L. We therefore compared MidiNet and MusicVAE with HRNN-3L based on human evaluations. \subsubsection{MusicVAE} MusicVAE is a variational autoencoder that can generate melodies with the length of 16 bars. We compared the HRNN-3L model with a MusicVAE trained on our dataset~(with the same training settings as in the original paper) and with the pretrained MusicVAE, using the 2AFC setting in separate experiments. Each model generated 15 melodies with the length of 16 bars. In each experiment, we randomly provided subjects with 20 pairs of melodies, and subjects were asked to vote for the melody that sounded better. In the comparison between HRNN-3L and the MusicVAE trained on our dataset, 435 trials were collected from 17 IP addresses. In 317 trials~(72.6\%, $p =1.41 \times 10^{-21}$), melodies generated by HRNN-3L were favored. We found that the rhythm of the melodies generated by this MusicVAE was chaotic. One reason might be that our dataset is too small compared with the dataset used in~\cite{pmlr-v80-roberts18a}. We then compared HRNN-3L with the pretrained MusicVAE. 461 trials were collected from 21 IP addresses. In 293 trials~(63.5\%, $p =5.82 \times 10^{-9}$), melodies generated by HRNN-3L were favored.
We generated 200 melodies with the Pretrained-MusicVAE; statistics on these 200 melodies show that about 40\% of the notes generated by Pretrained-MusicVAE had pitches lower than C2, which made some melodies sound strange. \subsubsection{MidiNet} Another 2AFC experiment was used to compare HRNN-3L with MidiNet~\cite{yang2017midinet}. The MidiNet was trained on our dataset with the same training settings as in the original paper. Since MidiNet requires chords as an input, chords were used as a condition for both MidiNet and HRNN-3L. MidiNet generated 25 melodies with a length of 8 bars conditioned on chords. The 25 melodies of HRNN-3L used in the Music Turing Test were reused here for comparison. In this 2AFC experiment, we randomly provided subjects with pairs of melodies (HRNN-3L versus MidiNet) and asked them to vote for the better-sounding melody in every pair. 290 trials were collected from 28 IP addresses. In 226 trials (77.9\%, $p =1.85 \times 10^{-21}$), melodies generated by HRNN-3L were favored. \section{Discussions} \label{Sec:Con} In this paper, we present a hierarchical RNN model to generate melodies. Two high-level rhythmic features, the beat profile and the bar profile, are designed to represent rhythm at two different time scales. The human behavior experiment results show that the proposed HRNN can generate better-organized melodies than the single-layer model. In addition, the proposed HRNN, though very simple, can generate better melodies than the well-known models MusicVAE and MidiNet. In the Music Turing Test, only 33.69\% of the pieces generated by the proposed model were thought to be composed by humans. This proportion is still far below our expectations, and there is still a long way to go to develop a perfect automatic melody generator. However, given the current state of technology, the HRNN has achieved reasonably good results. On one hand, one should notice that automatic generation of other forms of data is at a similar stage.
For example, many state-of-the-art machine learning models trained on natural images~\cite{zhu2017unpaired}\cite{isola2016image} generate no more than 30\% of images that can fool humans. On the other hand, the dataset used in this study is not good enough (only 74.95\% of the human-composed pieces were thought to be composed by humans), which has hurt the performance of the model. If the model were trained on a dataset in which nearly all human-composed pieces could be correctly classified, one may expect that about 44.9\% (=33.69/74.95) of the pieces generated by the model would fool human subjects. The proposed approach of course has many limitations, which should be addressed in future work. First, since we quantized a bar into 16 time steps, the encoding cannot represent triplets or other types of rhythm. Second, we only selected musical pieces with a 4/4 time signature from the original dataset for training. More time signatures should be taken into consideration to improve the capability of the model. Third, we only considered beats and bars as larger units than notes, and did not consider phrases, which are often present in pop music, since they are not labeled in the dataset. With larger time-scale units, the model may output pieces with longer temporal structure. \section*{Acknowledgements} This work was supported in part by the National Natural Science Foundation of China under Grant Nos. 61332007, 61621136008 and 61620106010.
\section{Introduction} Maritime transportation has significant benefits in terms of cost and capacity for carrying large quantities of cargo. Indeed, sea trade statistics indicate that 90\% of global trade is performed by maritime transportation. This has led to new investments in container terminals and a variety of initiatives to improve the operational efficiency of existing terminals. Operations at a container terminal can be classified as quay side and yard side. Terminals handle materials using quay cranes (QCs), yard cranes (YCs), and transportation vehicles such as yard trucks (YTs). QCs load and unload containers at the quay side, while YCs load and discharge containers at the yard side. YTs provide transshipment of the containers between the quay and the yard sides. In a typical container terminal, it is important to minimize the vessel berthing time, i.e., the period between the arrival and the departure of a vessel. When a vessel arrives at the terminal, the berth allocation problem selects when and where in the port the vessel should berth. Once a vessel is berthed, its stowage plan determines the containers to be loaded/discharged onto/from the vessel. This provides an input to the {\em QC assignment and scheduling}, which determines the sequence of the containers to be loaded or discharged from different parts of the vessel by the QCs. In addition, the containers discharged by the QCs are placed onto YTs and transported to the storage area, which corresponds to a {\em vehicle dispatching problem}. Each discharged container is assigned a storage location, giving rise to a {\em yard location assignment problem}. Finally, the containers are taken from YTs by YCs and placed onto stacks in storage blocks, specifying a {\em YC assignment and scheduling problem}.
A container terminal aims at completing the operations of each berthed vessel as quickly as possible to minimize vessel waiting times at the port and thus to maximize the turnover, i.e., the number of handled containers. Optimizing the integrated operations within a container terminal is computationally challenging \cite{Vis2003}. Therefore, the optimization problems identified earlier are generally considered separately in the literature, and the number of studies considering integrated operations is rather limited. However, although the optimization of individual problems brings some operational improvements, the main opportunity lies in optimizing terminal operations holistically. This is especially important since the optimization sub-problems have conflicting objectives that can adversely affect the overall performance of the system. This paper considers the integrated optimization of container terminal operations and proposes mixed integer programming (MIP) and constraint programming (CP) formulations under some realistic assumptions. To the best of our knowledge, the resulting optimization problem has not been considered in the literature so far. Experimental results show that the MIP formulation is not capable of solving instances of practical relevance, while the CP model finds optimal solutions in reasonable times for realistic instances derived from real container terminal operations. The rest of the paper is organized as follows. Section \ref{section-problem} specifies the problem and the assumptions considered in this work. Section \ref{section-related-work} provides a detailed literature review for the integrated optimization of container terminal operations. Section \ref{section-MIP} presents the MIP model, while Section \ref{section-CP} presents the constraint programming model for the same problem. Section \ref{section-experiments} presents the data generation procedure, the experimental results, and the comparison of the different models.
Finally, Section \ref{section-conclusion} presents concluding remarks and future research directions. \section{Problem Definition} \label{section-problem} This section specifies the Integrated Port Container Terminal Problem (IPCTP) and its underlying assumptions. The IPCTP is motivated by the operations of actual container terminals in Turkey. In container terminals, berth allocation assigns a berth and a time interval to each vessel. Literature surveys and interviews with port management officials reveal that significant factors in berth allocation include priorities between customers, berthing privileges of certain vessels in specific ports, vessel sizes, and the depth of the water. Because of all these restrictions, the number of alternative berth assignments is quite low, especially in small Turkish ports. As a result, the berthing plan is often determined without the need for intelligent systems and the berthing decisions can be considered as input data to the scheduling of the material-handling equipment. The vessel stowage plan decides how to place the outbound containers on the vessel and is prepared by the shipping company. These two problems are thus separated from the IPCTP. In other words, the paper assumes that vessels are already berthed and ready to be served. The IPCTP is formulated by considering container groups, called {\em shipments}. A single shipment represents a group of containers that travel together and belong to the same customer. Therefore, the containers in a single shipment must be stored in the same yard block and in the same vessel bay. In addition, each shipment is handled as a single batch by QCs and YCs. The IPCTP determines the storage location in the yard for inbound containers. The yard is assumed to be divided into a number of areas containing the storage blocks. Each inbound shipment has a number of possible location points in each area. Each YC is assumed to be dedicated to a specific area of the yard.
Note that outbound shipments are at specified yard locations at the beginning of the planning period and hence their YCs are known in advance. In contrast, for inbound shipments, the YC assignment is derived from the storage block decisions. The IPCTP assumes that each yard location can store at most one shipment but there is no difficulty in relaxing that assumption. The inbound and outbound shipments and their vessel bays are specified in the vessel stowage plan. The IPCTP assigns each shipment to a QC and schedules each QC to process a sequence of shipments. The QC scheduling is constrained by movement restrictions and safety distances between QCs. Two adjacent QCs must be separated by a safety distance, so that they can perform their tasks simultaneously without interference as described in \cite{NAV:NAV20121}. The IPCTP assumes the existence of a sufficient number of YTs so that cranes never wait. This assumption is motivated by observations in real terminals where many YTs are dedicated to each QC in order to ensure a smooth operation. This organization is justified by the fact that QCs are the most critical handling equipment in the terminal. As a result, QCs are very rarely blocked while discharging and almost never starve while loading. Therefore, the assumption of having a sufficient number of YTs is realistic and simplifies the IPCTP. The IPCTP also assumes that the handling equipment (QC, YC, YT) is homogeneous, and their processing times are deterministic and known. Since the QCs cannot travel beyond the berthed vessel bays and must obey a safety distance \cite{Sammarra2007}, each shipment can only be assigned to a QC from an eligible set that respects the safety-distance and non-crossing constraints. These constraints are illustrated in Figure \ref{figure-1}, where berthed vessel bays and QCs are indexed in increasing order and the safety distance is assumed to be 1 bay. For instance, only QC-1 is eligible to service bays 1--2.
Similarly, only QC-3 can operate on vessel bays 8--9. In contrast, bays 3--4 can be served by QC-1 and QC-2. \begin{figure}[!h] \centering \includegraphics[width=0.8\linewidth]{Figure1} \caption{The Vessel Bays and Their Available QCs.} \label{figure-1} \end{figure} The main objective of a container terminal is to maximize total profit by increasing productivity. Terminal operators try to lower vessel turn times and decrease dwell times. To lower vessel turn times, the crane operations must be well-coordinated and the storage locations of the inbound shipments must be chosen carefully, since they impact the distance traveled by the YTs. Therefore, the IPCTP jointly considers the storage location assignment for the inbound shipments from multiple berthed vessels and the crane assignment and scheduling for both outbound and inbound containers. The objective of the problem is to minimize the sum of weighted completion times of the vessels. \begin{figure}[!t] \centering \includegraphics[width=0.8\linewidth]{Figure2} \caption{An Example of Interference for Shipments $i, j$ and Quay Cranes $v, w$.} \label{figure-2} \end{figure} The input parameters of the IPCTP are given in Table \ref{table-parameter}. Most are self-explanatory but some necessitate additional explanation. The smallest distance $\delta_{v,w}$ between quay cranes $v$ and $w$ is given by $\delta_{v,w}=\left(\delta+1\right)\left|v-w\right|$ where $\delta$ is the safety distance.
The minimum time between the starting times of shipments $i$ and $j$ when processed by cranes $v$ and $w$ is given by \[ \Delta_{i,j}^{v,w}= \begin{cases} \left(b_{i}-b_{j}+\delta_{v,w}\right)s_{QC}& \mbox{if } v < w \mbox{ and } i \ne j \mbox{ and } b_{i}>b_{j}-\delta_{v,w}\\ \left(b_{j}-b_{i}+\delta_{v,w}\right)s_{QC}& \mbox{if } v > w \mbox{ and } i \ne j \mbox{ and } b_{i}<b_{j}+\delta_{v,w}\\ 0 & \mbox{otherwise.} \end{cases} \] Note that the second case is the mirror image of the first under the relabeling $(i,v) \leftrightarrow (j,w)$. This captures the time needed for a quay crane to travel to a safe distance in case of potential interference. This is illustrated in Figure \ref{figure-2}. If shipments $i$ and $j$ are processed by cranes $v$ and $w$, then their starting times must be separated by $s_{QC}$ time units in order to respect the safety constraints (assuming that $w=v+1$). For instance, if shipment $i$ is processed first, crane $v$ must move to bay $1$ before shipment $j$ can be processed. Finally, the set of interferences can be defined by \[ \Theta= \{(i,j,v,w) \in C^2 \times QC^2 \mid i<j \mbox{ and } \Delta_{i,j}^{v,w}>0 \}. \] The dummy (initial and last) shipments are only used in the MIP model.
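These quantities are straightforward to compute. The sketch below reproduces the spacing rule $\delta_{v,w}$, the interference gap $\Delta_{i,j}^{v,w}$, and the eligibility sets of Figure \ref{figure-1} (3 QCs, 9 berthed bays, $\delta = 1$). The closed-form eligibility rule is our reading of the non-crossing and safety-distance constraints, and all function names are ours; in the second case of $\Delta$ the interference condition is $b_i < b_j + \delta_{v,w}$, the mirror image of the first case:

```python
S_QC, DELTA = 3, 1        # per-bay QC travel time; safety distance in bays
N_BAYS, N_QCS = 9, 3      # berthed bays 1..9 and QCs 1..3, as in Figure 1

def delta_vw(v, w, delta=DELTA):
    """Smallest allowed difference between the bay positions of QCs v and w."""
    return (delta + 1) * abs(v - w)

def eligible_qcs(bay):
    """QCs that can reach `bay` while leaving room for the q - 1 cranes on
    their left and the N_QCS - q cranes on their right (our closed form)."""
    return {q for q in range(1, N_QCS + 1)
            if (q - 1) * (DELTA + 1) < bay <= N_BAYS - (N_QCS - q) * (DELTA + 1)}

def Delta(b_i, b_j, v, w, s_qc=S_QC):
    """Minimum gap between the start times of shipments in bays b_i and b_j
    when handled by QCs v and w; 0 when no interference can occur."""
    d = delta_vw(v, w)
    if v < w and b_i > b_j - d:      # crane v must clear to bay b_j - d
        return (b_i - b_j + d) * s_qc
    if v > w and b_i < b_j + d:      # symmetric case: v clears to b_j + d
        return (b_j - b_i + d) * s_qc
    return 0

print(eligible_qcs(1), eligible_qcs(3), eligible_qcs(8))
```

With these parameters the script recovers exactly the eligibility pattern described in the text: bays 1--2 only for QC-1, bays 3--4 for QC-1 and QC-2, and bays 8--9 only for QC-3.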
\begin{table}[!t] \caption{The Parameters of the IPCTP.} \label{table-parameter} \vspace{0.2cm} \begin{tabular}{ll} \hline $S$ & Set of berthed vessels\\ $C_{u}^{s}$ & Set of inbound shipments that belong to vessel $s \in S$\\ $C_{l}^{s}$ & Set of outbound shipments that belong to vessel $s \in S$\\ $C$ & Set of all shipments\\ $C_{u}$ & Set of inbound shipments\\ $C_{l}$ & Set of outbound shipments\\ $L_{u}$ & Set of available yard locations for inbound shipments\\ $L_{l}$ & Set of yard locations for outbound shipments\\ $l_{i}$ & Yard location of outbound shipment $i \in C_{l}$ \\ $L$ & Set of all yard locations\\ $QC$ & Set of QCs\\ $YC$ & Set of YCs\\ $B$ & Set of vessel bays \\ $B_{T}$ & Total number of vessel bays\\ $QC_{T}$ & Total number of QCs\\ $b_{i}$ & Vessel bay position of shipment $i \in C$\\ $QC(i)$ & Set of eligible QCs for shipment $i \in C$\\ $YC(k)$ & The YC responsible for yard location $k \in L$\\ $w_{s}$ & Weight (priority) of vessel $s \in S$\\ $Q_{i}$ & QC handling time of shipment $i \in C$\\ $Y_{i}$ & YC handling time of shipment $i \in C$\\ $tyt_{i}$ & YT handling time of outbound shipment $i \in C_{l}$\\ $tt_{k}$ & YT transfer time of inbound shipment to yard location $k \in L_{u}$\\ $tyc_{k,l}$ & YC travel time between yard locations $k$ and $l$\\ $eqc_{i,j}$ & QC travel time from shipment $i \in C$ to shipment $j \in C$\\ $eyc_{i,j}$ & YC travel time from yard location $i \in L$ to yard location $j \in L$\\ $s_{QC}$ & Travel time for unit distance of equipment $QC$\\ $\delta$ & Safety distance between two $QC$s\\ $\delta_{v,w}$ & Smallest allowed difference between bay positions of quay cranes $v$ and $w$ \\ $\Delta_{i,j}^{v,w}$ & Minimum time between the starting times of shipments $i$ and $j$ \\ & when processed by cranes $v$ and $w$ \\ $\Theta$ & Set of all combinations of shipments and QCs with potential interferences \\ $0$ & Dummy initial shipment\\ $N$ & Dummy last shipment\\ $C^0$ & Set of all shipments including dummy initial
shipment $C \cup \{0\}$ \\ $C^N$ & Set of all shipments including dummy last shipment $C \cup \{N\}$ \\ $M$ & A sufficiently large constant integer \\ \hline \end{tabular} \end{table} \section{Literature Review} \label{section-related-work} Port container terminal operations have received significant attention and many studies are dedicated to the sub-problems described earlier: See \cite{RePEc:eee:ejores:v:244:y:2015:i:3:p:675-689} for a classification of these sub-problems. Recent work often considers the integration of two or three problems but very few papers propose formulations covering all the sub-problems jointly. Some papers give mathematical formulations of integrated problems but only use heuristic approaches given the computational complexity of solving the models. This section reviews recent publications addressing the integrated problems and highlights their contributions. Chen et al. \cite{Chen2007} propose a hybrid flowshop scheduling problem (HFSP) to schedule QCs, YTs, and YCs jointly. Both outbound and inbound operations are considered, but outbound operations only start after all inbound operations are complete. In the proposed mathematical model, each stage of the flowshop has unrelated multiple parallel machines and a tabu-search algorithm is used to address the computational complexity. Zheng et al. \cite{5421359} study the scheduling of QCs and YCs, together with the yard storage and vessel stowage plans. The authors consider an automated container handling system, in which twin 40' QCs and a railed container handling system are used. A rough yard allocation plan is maintained to indicate which blocks are available for storing the outbound containers from each bay. No mathematical model is provided, and the yard allocation, vessel stowage, and equipment scheduling are performed using a rule-based heuristic. Xue et al.
\cite{Xue2013} propose a mixed integer programming (MIP) model for integrating the yard location assignment for inbound containers, quay crane scheduling, and yard truck scheduling. Non-crossing constraints and safety distances between QCs are ignored, and the assignments of QCs and YTs are predetermined. The yard location assignment considers block assignments instead of container slots. The resulting model cannot be solved for even medium-sized problems. Instead, a two-stage heuristic algorithm is employed, combining an ant colony optimization algorithm, a greedy algorithm, and local search. Chen et al. \cite{CHEN2013142} consider the integration of quay crane scheduling, yard crane scheduling, and yard truck transportation. The problem is formulated as a constraint-programming model that includes both equipment assignment and scheduling. However, non-crossing constraints and safety margins are ignored. The authors state that large-scale instances are computationally intractable for constraint programming and that even small-scale instances are too time-consuming. A three-stage heuristic algorithm is applied iteratively to obtain solutions for large-scale problems with up to 500 containers. Wu et al. \cite{WU201313} study the scheduling of different types of equipment together with the storage strategy in order to optimize yard operations. Only loading operations for outbound containers are considered, and the tasks assigned to each QC and their processing sequence are assumed to be known. The authors formulate models to schedule the YCs and automated guided vehicles (AGVs), and use a genetic algorithm to solve large-scale problems. Homayouni et al. \cite{Homayouni2012} study the integrated scheduling of cranes, vehicles, and storage platforms at automated container terminals. They consider a split-platform automated storage/retrieval system (SP-AS/RS), which includes AGVs and handling platforms for storing containers efficiently and providing quick access.
A mathematical model of the same problem is proposed in \cite{HOMAYOUNI2014545}. In these studies, both outbound and inbound operations are considered. The origin and destination points of the containers are assumed to be predetermined and, in addition, empty for inbound containers. The former study proposes a simulated annealing (SA) algorithm to solve the problem, whereas the latter proposes a genetic algorithm (GA) outperforming SA under the same assumptions. Lu and Le \cite{LU2014209} propose an integrated optimization of container terminal scheduling, including YTs and YCs. The authors consider uncertainty factors such as YT travel speed, YC speed, and unit time of the YC operations. The assignment of YTs and YCs is not considered, and pre-assignments are assumed. The objective is to minimize the operation time of YCs in coordination with the YTs and QCs. The authors use a simulation of the real terminal operation environment to capture uncertainties. The authors also formulate a mathematical model and propose a particle swarm optimization (PSO) algorithm. As a future study, they indicate that the scheduling for simultaneous outbound and inbound operations should be considered for terminals adopting parallel operations. Finally, a few additional studies \cite{CHEN200740,Homayouni2013,KAVESHGAR2015168,LAU2008665,5223935,NIU2016284,TANG2014978,XIN2015377,XIN2014214} integrate different sub-problems, highlighting the increasing attention given to integrated solutions. They propose a wide range of heuristic or meta-heuristic algorithms; e.g., genetic algorithm, tabu search, particle swarm optimization, and rule-based heuristic methods. Although most papers in container port operations focus on individual problems, recent developments have emphasized the need and potential for coordinating these interdependent operations.
This paper pushes the state-of-the-art further by optimizing all operations holistically and demonstrating that constraint programming is a strong vehicle to address this integrated problem. \section{The MIP Model} \label{section-MIP} \begin{model}[!t] \caption{The MIP Model for the IPCTP: Decision Variables} \label{model:MIP-DV} \begin{subequations} \vspace{-0.2cm} \begin{align} & \mbox{\bf Variables} \nonumber \\ & x_{i,k} \in \{0,1\}: \mbox{inbound shipment } i \mbox{ is assigned to yard location } k \nonumber \\ & z_{i,j}^{q} \in \{0,1\}: \mbox{shipment }j \mbox{ is handled immediately after shipment }i \mbox{ by QC }q \nonumber \\ & qz_{i,j} \in \{0,1\}: \mbox{shipment }j \mbox{ is handled after shipment }i \nonumber \\ & v_{i,j}^{c} \in \{0,1\}: \mbox{ shipment }j \mbox{ is handled immediately after shipment }i \mbox{ by YC }c \nonumber \\ & sqc_{i} \geq 0: \mbox{start time of shipment }i \mbox{ by its QC} \nonumber \\ & syc_{i} \geq 0 : \mbox{start time of shipment }i \mbox{ by its YC} \nonumber \\ & t_{i} \geq 0: \mbox{travel time of YT for inbound shipment } i \mbox{ to its assigned yard location} \nonumber \\ & sy_{i,j} \geq 0: \mbox{empty travel time of the YC between the yard locations of shipments } i \mbox{ and } j \nonumber \\ & Cmax_{s}: \mbox{weighted completion time of vessel }s \nonumber \end{align} \end{subequations} \end{model} \begin{model}[!t] \caption{The MIP Model for the IPCTP: Objective and Constraints} \label{model:MIP} \begin{subequations} \vspace{-0.2cm} \begin{align} & \mbox{\bf Objective} \nonumber \\ & \mbox{minimize } \textstyle\sum_{s \in S} Cmax_{s} \\ & \mbox{\bf Constraints} \nonumber \\ & Cmax_{s} \geq w_{s}\left(sqc_{i}+Q_{i}\right)\quad \forall s \in S, \forall i \in C_{l}^{s}\enspace \\ & Cmax_{s} \geq w_{s}\left(syc_{i}+Y_{i}\right)\quad \forall s \in S, \forall i \in C_{u}^{s} \\ & \textstyle\sum_{i \in C_{u}} x_{i,k} \le 1\quad \forall k \in L_{u} \\ & \textstyle\sum_{k \in L_{u}} x_{i,k} = 1\quad
\forall i \in C_{u} \\ & \textstyle\sum_{j \in C^{N}} z_{0,j}^{q} = 1\quad \forall q \in QC \\ & \textstyle\sum_{j \in C^{N}} v_{0,j}^{c} = 1\quad \forall c \in YC \\ & \textstyle\sum_{i \in C^{0}} z_{i,N}^{q} = 1\quad \forall q \in QC \\ & \textstyle\sum_{i \in C^{0}} v_{i,N}^{c} = 1\quad \forall c \in YC \\ & \textstyle\sum_{q \in QC(i)} \sum_{j \in C^{N}} z_{i,j}^{q} = 1\quad \forall i \in C, i \ne j \\ & \textstyle\sum_{j \in C^{N}} v_{i,j}^{YC(k)} = x_{i,k}\quad \forall k \in L_{u},\forall i \in C_{u}, i \ne j \\ & \textstyle\sum_{j \in C^{N}} v_{i,j}^{YC(k)} =1\quad \forall k \in L_{l},\forall i \in C_{l}, i \ne j \\ & \textstyle\sum_{j \in C^{0}} z_{j,i}^{q}-\sum_{j \in C^{N}} z_{i,j}^{q} =0\quad \forall i \in C,\forall q \in QC \\ & \textstyle\sum_{j \in C^{0}} v_{j,i}^{c}-\sum_{j \in C^{N}} v_{i,j}^{c} =0\quad \forall i \in C,\forall c \in YC \\ & \textstyle t_{i}=\sum_{k \in L_{u}} \left(tt_{k}*x_{i,k}\right)\quad \forall i \in C_{u} \\ & \textstyle sy_{i,j}=\sum_{m \in L_{u}} \left(tyc_{m,l_j}*x_{i,m}\right)\quad \forall i \in C_{u}, \forall j \in C_{l} \\ & \textstyle sy_{i,j}=\sum_{m \in L_{u}}\sum_{l \in L_{u}} \left(tyc_{m,l}*x_{i,m}*x_{j,l}\right)\quad \forall i,j \in C_{u} \\ & \textstyle sy_{i,j}=\sum_{m \in L_{u}} \left(tyc_{l_i,m}*x_{j,m}\right)\quad \forall i \in C_{l}, \forall j \in C_{u} \\ & \textstyle sqc_{j}+M\left(1-z_{i,j}^{q}\right)\geq sqc_{i}+Q_{i}+eqc_{i,j}\quad \forall i,j \in C, \forall q \in QC \\ & \textstyle syc_{j}+M\left(1-v_{i,j}^{c}\right)\geq syc_{i}+Y_{i}+sy_{i,j}\quad \forall i \in C_{u},\forall j \in C, \forall c \in YC \\ & \textstyle syc_{j}+M\left(1-v_{i,j}^{c}\right)\geq syc_{i}+Y_{i}+sy_{i,j}\quad \forall i \in C_{l},\forall j \in C_{u}, \forall c \in YC \\ & \textstyle syc_{j}+M\left(1-v_{i,j}^{c}\right)\geq syc_{i}+Y_{i}+eyc_{i,j}\quad \forall i,j \in C_{l}, \forall c \in YC \\ & \textstyle sqc_{i}\geq syc_{i}+Y_{i}+tyt_{i}\quad \forall i \in C_{l} \\ & \textstyle syc_{i}\geq sqc_{i}+Q_{i}+t_{i}\quad \forall i \in
C_{u} \\ & \textstyle sqc_{i}+Q_{i}-sqc_{j}\le M\left(1-qz_{i,j}\right)\quad \forall i,j \in C \\ & \textstyle \sum_{u \in C^{0}} z_{u,i}^{v}+\sum_{u \in C^{0}} z_{u,j}^{w}\le 1+qz_{i,j}+qz_{j,i}\quad \forall \left(i,j,v,w\right) \in \Theta \\ & \textstyle sqc_{i}+Q_{i}+\Delta_{i,j}^{v,w}-sqc_{j}\le M\left(3-qz_{i,j}-\sum_{u \in C^{0}}z_{u,i}^{v}-\sum_{u \in C^{0}}z_{u,j}^{w}\right)\quad \forall \left(i,j,v,w\right) \in \Theta \end{align} \end{subequations} \end{model} The MIP decision variables are presented in Model \ref{model:MIP-DV}, while the objective function and the constraints are given in Model \ref{model:MIP}. They use the formulation of QC interference constraints from \cite{Bierwirth2009}. The first set of MIP variables are binary: They determine the yard location of every inbound shipment and the (immediate) successor relationships on the cranes. The remaining variables are essentially devoted to the start times and the travel times of the shipments. The objective function (2-01) minimizes the sum of the weighted completion times of the vessels. Constraints (2-02--2-03) compute the weighted completion time of each vessel. Inbound shipments start their operations at a QC and finish at a YC, whereas outbound shipments follow the reverse order. Constraint (2-04) expresses that each available storage block stores at most one inbound shipment. Constraint (2-05) ensures that each inbound shipment is assigned an available storage block. All the containers of a shipment are assigned to the same block. Constraints (2-06--2-07) assign the first (dummy) shipments to each QC and YC and Constraints (2-08--2-09) do the same for the last (dummy) shipments. Constraint (2-10) states that every shipment is handled by exactly one eligible QC. Constraints (2-11--2-12) ensure that each shipment is handled by a single YC.
In Constraint (2-12), yard blocks are known at the beginning of the planning horizon for outbound shipments: They are thus directly assigned to the dedicated YCs. Constraints (2-13--2-14) guarantee that the shipments are handled in well-defined sequences by each handling equipment (QC and YC). Constraint (2-15) defines the YT transportation times for the inbound shipments. Constraints (2-16--2-18) specify the empty travel times of the YCs according to yard block assignments of the shipments. Constraints (2-19--2-22) specify the relationship between the start times of two consecutive shipments processed by the same handling equipment. Constraints (2-23--2-24) are the precedence constraints for each shipment, which again differ for inbound and outbound shipments. Constraint (2-25) ensures that, if shipment $i$ precedes shipment $j$ on a QC, shipment $j$ cannot start its operation on that QC until shipment $i$ finishes. Constraint (2-26) guarantees that shipments that potentially interfere are not allowed to be processed at the same time on any QC. Constraint (2-27) imposes a minimum temporal distance between the processing of such shipments, which corresponds to the time taken by the QC to move to a safe location. There are nonlinear terms in constraint (2-17), which computes the empty travel time of a YC, i.e., when it travels between the destinations of two inbound shipments. These terms can be linearized by introducing new binary variables of the form $\theta_{i,k,j,l}$ to denote whether inbound shipments $i,j\in C_{u}$ are assigned to yard locations $k,l\in L_{u}$.
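For binary variables, the standard pair of inequalities $\theta_{i,k,j,l} \ge x_{i,k}+x_{j,l}-1$ and $2-\left(x_{i,k}+x_{j,l}\right) \le 2\left(1-\theta_{i,k,j,l}\right)$ forces $\theta_{i,k,j,l}=x_{i,k}\,x_{j,l}$, which a brute-force check over all $0/1$ assignments confirms (a quick sanity sketch, not part of the model; variable names are ours):

```python
from itertools import product

def feasible_thetas(x, y):
    """Binary values of theta satisfying the two linearization inequalities
    for a given assignment of the binaries x = x_{i,k} and y = x_{j,l}."""
    return [t for t in (0, 1)
            if t >= x + y - 1 and 2 - (x + y) <= 2 * (1 - t)]

# Over every 0/1 assignment, the only feasible theta is the product x * y.
for x, y in product((0, 1), repeat=2):
    assert feasible_thetas(x, y) == [x * y]
print("linearization forces theta = x * y")
```

The first inequality pushes $\theta$ up to 1 when both binaries are 1; the second pushes it down to 0 whenever either binary is 0.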
The constraints then become: \begin{align} & \textstyle sy_{i,j}=\sum_{k \in L_{u}}\sum_{l \in L_{u}}tyc_{k,l}\theta_{i,k,j,l}\quad \forall i,j \in C_{u} \nonumber \\ & \textstyle \theta_{i,k,j,l}\geq x_{i,k}+x_{j,l}-1, \forall i,j \in C_{u}\quad \forall k,l \in L_{u},i\ne j,k\ne l \nonumber \\ & \textstyle 2-\left(x_{i,k}+x_{j,l}\right)\le 2\left(1-\theta_{i,k,j,l}\right)\quad \forall i,j \in C_{u},\forall k,l \in L_{u},i\ne j,k\ne l \nonumber \end{align} \section{The Constraint Programming Model} \label{section-CP} \begin{model}[!t] \caption{The CP Model for the IPCTP} \label{model:CP-DV} \begin{subequations} \vspace{-0.2cm} \begin{align} & \mbox{\bf Variables} \nonumber \\ & qc_{i}: \mbox{Interval variable for the QC handling of shipment }i \nonumber \\ & yt_{i}: \mbox{Interval variable for the YT handling of inbound shipment }i \nonumber \\ & aqc_{i,j}: \mbox{Optional interval variable for shipment }i \mbox{ on QC }j \mbox{ with duration }Q_{i} \nonumber \\ & ayc_{i,k}: \mbox{Optional interval variable for shipment }i \mbox{ on YC at yard location }k \mbox{ with duration }Y_{i} \nonumber \\ & qcs_{j}: \mbox{Sequence variable for QC } j \mbox{ over } \{ aqc_{i,j} \mid i \in C \} \nonumber \\ & ycs_{j}: \mbox{Sequence variable for YC }j \mbox{ over } \{ ayc_{i,k} \mid i \in C \wedge YC(k) = j\} \nonumber \\ & interfere_{i,v,j,w}: \mbox{Sequence variable over } \{aqc_{i,v},aqc_{j,w}\} \nonumber \\ & \mbox{\bf Objective} \nonumber \\ & \mbox{minimize } \textstyle \sum_{s \in S} w_{s} \left(max\left(\max_{i\in C_{u}^{s}}\textsc{endOf}\left(yt_{i}\right),\max_{j\in C_{l}^{s}}\textsc{endOf}\left(qc_{j}\right)\right)\right) \\ & \mbox{\bf Constraints} \nonumber \\ & \textstyle \textsc{alternative}\left(qc_{i}, \mbox {all}\left(\mbox{$j$ in } QC\left(i\right)\right)aqc_{i,j}\right)\quad \forall i\in C\enspace \\ & \textstyle \textsc{alternative}\left(yt_{i}, \mbox {all}\left(\mbox{$k$ in } L_{u}\right)ayc_{i,k}\right)\quad \forall i\in C_{u} \\ & \textstyle \sum_{i \in 
C_{u}}\textsc{presenceOf}\left(ayc_{i,k}\right)\le 1\quad \forall k\in L_{u} \\ & \textstyle \textsc{presenceOf}\left(ayc_{i,l_{i}}\right)=1\quad \forall i\in C_{l} \\ & \textstyle \textsc{noOverlap}\left(ycs_{m},eyc_{i,j}\right)\quad \forall m\in YC \\ & \textstyle \textsc{noOverlap}\left(qcs_{m},eqc_{i,j}\right)\quad \forall m\in QC \\ & \textstyle \textsc{endBeforeStart}\left(aqc_{i,n},ayc_{i,k},tt_{k}\right)\quad \forall i\in C_{u}, k\in L_{u}, n\in QC_{i} \\ & \textstyle \textsc{endBeforeStart}\left(ayc_{i,l_{i}},aqc_{i,n},tyt_{i}\right)\quad \forall i\in C_{l}, n\in QC_{i} \\ & \textstyle \textsc{noOverlap}\left(interfere_{i,v,j,w},\Delta_{i,j}^{v,w}\right)\ \forall i,j\in C, v\in QC_{i}, w\in QC_{j}:\Delta_{i,j}^{v,w}>0 \end{align} \end{subequations} \end{model} The CP model is presented in Model \ref{model:CP-DV} using the OPL API of CP Optimizer. It uses interval variables for representing the QC handling of all shipments and the YC handling of inbound shipments. In addition, a range of optional interval variables are used to represent the handling of shipment $i$ on QC $j$, and the handling of shipment $i$ at yard location $k$. The model also declares a number of sequence variables associated with each QC and YC: Each sequence constraint collects all the optional interval variables associated with a specific crane. Finally, the model declares a number of sequences for optional interval variables that may interfere.\footnote{Not all such sequences are useful but we declare them for simplicity.} The CP model minimizes the weighted completion time of each vessel by computing the maximum end date of the yard cranes (inbound shipments) and for the quay crane (outbound shipments). Alternative constraints (3-02) ensure that the QC processing of a shipment is performed by exactly one QC. Alternative constraints (3-03) enforce that each inbound shipment is allocated to exactly one yard location, and hence one yard crane. 
Constraints (3-04) state that at most one shipment can be allocated to each yard location. Constraints (3-05) fix the yard location (and hence the yard crane) of outbound shipments. Cranes are disjunctive resources and can execute only one task at a time, which is expressed by the {\sc noOverlap} constraints (3-06--3-07) over the sequence variables associated with the cranes. These constraints also enforce the transition times between successive operations, capturing the empty travel times between yard locations (constraints 3-06) and bay locations (constraints 3-07). Constraints (3-08) impose the precedence constraints between the QC and YC tasks of inbound shipments, while adding the travel time to move the shipment from its bay to its chosen yard location. Constraints (3-09) impose the precedence constraints between the YC and QC operations of outbound shipments, adding the travel time from the fixed yard location to the fixed bay of the shipment. Interference constraints for the QCs are imposed by constraints (3-10). These constraints state that, if there is a conflict between two shipments and their QCs, then the two shipments cannot overlap in time, and their executions must be separated by a minimum time. This is expressed by {\sc noOverlap} constraints over sequences consisting of the pairs of optional variables associated with these tasks. \section{Experimental Results} \label{section-experiments} The MIP and CP models were written in OPL and run on the IBM ILOG CPLEX 12.7.1 software suite. The results were obtained on an Intel Core i7-5500U CPU 2.40 GHz computer. \subsection{Data Generation} The test cases were generated in accordance with earlier work, while capturing the operations of an actual container terminal. These are the first instances of this type, since the IPCTP has not been considered before in the literature. Travel and processing times in the test cases model those in the actual terminal. 
Different instances have slightly different times, as will become clear. Figure \ref{figure-3} depicts the layout of the yard side considered in the experiments, which also models the actual terminal. The yard side is divided into 3 separate fields denoted by A, B, and C. Each field has two location areas and a single YC is responsible for each location area, giving a total of 6 yard cranes. In each location area, there are 2 yard block groups, shown as the dotted region, and traveling between them takes one unit of time. Field C is the nearest to the quay side, and there is a hill from field C to field A. The transportation times for YTs are generated according to these distances. YTs can enter and exit each field from the entrance shown in the figure, so transfers between the vessels and the yard blocks close to the entrance take less time. YT transfer times are generated in [5,10], taking into account the position of the yard blocks. At the quay side, the travel of a QC between consecutive vessel bay locations takes 3 units of time. \begin{figure}[!t] \small \centering \includegraphics[width=0.5\linewidth]{Figure3} \caption{Layout of the Yard Side.} \label{figure-3} \end{figure} The processing times of the cranes for a single container are generated uniformly in [2,5] for YCs and [2,4] for QCs. The safety margin for the QCs is set to 1 vessel bay. The IPCTP is expressed in terms of shipments and the number of containers in each shipment is uniformly distributed in [4,40]. The experiments evaluate the impact of the number of shipments, the number of vessel bays, the inbound-outbound shipment ratio, and the number of available yard locations for inbound shipments. The number of shipments varies between 5 and 25, by increments of 5. The instances can thus contain up to 1,000 containers. The number of vessel bays is taken from $\{4,6,8\}$.
The number of QCs depends on the vessel bays due to the QC restrictions: There are half as many QCs as there are vessel bays. The inbound-outbound shipment ratios are 20\% and 50\%, i.e., the fraction of inbound shipments relative to outbound shipments. Finally, the number of available yard locations (U-L ratio) is computed from the number of inbound shipments: There are 2 to 3 times more yard locations than inbound shipments. For each configuration of the parameters, 5 random instances were generated. \subsection{Computational Results and Analysis} \paragraph{The Results} The results are given in Tables \ref{Table-20} and \ref{Table-50} for each configuration, for a total of 300 instances. Table \ref{Table-20} reports the results for the 20\% inbound-outbound ratio, and Table \ref{Table-50} for 50\%. In the tables, each configuration is specified in terms of the U-L ratio, the number of bays and the number of shipments (Shp.). The average number of containers (Cnt.) in each shipment is also presented. For each such configuration, the tables report the average objective value and the average CPU time over its five instances. The CPU time is limited to an hour (3,600 seconds). For some configurations, the MIP solver did not find feasible solutions within an hour for some or all of the five instances. Note that this may result in an average objective value that is lower for the MIP model than for the CP model, even when CP solves all instances optimally, since the MIP model may not find a feasible solution to an instance with a high optimal value. These cases are flagged by superscripts of the form $x/y$, where $x$ is the number of instances for which no feasible solution was found and $y$ is the number of suboptimal solutions in that average. The superscripts for CP indicate the number of suboptimal solutions. An entry ``NA'' in the table means that the MIP cannot find a feasible solution to any of the five instances.
For the MIP, the tables also report the optimality gap on termination, i.e., the percentage gap between the best lower and upper bounds. For CP, the experiments were also run with a CPU limit of 600 seconds. The relative percentage deviations (RPD\%) from the 1-hour runs are listed to assess CP's ability to find high-quality solutions quickly. The RPD is computed as follows: \[ RPD\% = \dfrac{\left(\mbox{Obj. in 600 sec.} - \mbox{Obj. in 3600 sec.}\right)\times 100}{\mbox{Obj. in 3600 sec.}}. \] \begin{table}[!t] \small \centering \caption{Results for Import-Export Rate 20\%} \label{Table-20} \vspace{0.5cm} \begin{tabular}{ccrr|r|r|r|r|r|r|} \cline{5-10} \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l|}{} & \multicolumn{3}{c|}{\textbf{MIP}} & \multicolumn{3}{c|}{\textbf{CP}} \\ \hline \multicolumn{1}{|c|}{\begin{tabular}[c]{@{}c@{}}U-L\\ Ratio\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}\# of\\ Bays\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}\# of\\ Shp.\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Avg.
\#\\ of Cnt.\end{tabular}} & \multicolumn{1}{c|}{Obj.} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}CPU\\ (sec.)\end{tabular}} & \multicolumn{1}{c|}{GAP\%} & \multicolumn{1}{c|}{Obj.} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}CPU\\ (sec.)\end{tabular}} & \multicolumn{1}{c|}{RPD\%} \\ \hline \multicolumn{1}{|c|}{\multirow{15}{*}{2}} & \multicolumn{1}{c|}{\multirow{5}{*}{4}} & \multicolumn{1}{r|}{5} & 67.8 & 301.80 & 0.06 & 0.00 & 301.80 & 0.27 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{10} & 88.8 & $548.20^{0/3}$ & 3032.78 & 0.17 & 548.20 & 5.45 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{15} & 99.8 & $751.20^{0/5}$ & 3604.88 & 0.56 & 748.80 & 32.78 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{20} & 117 & $829.40^{0/5}$ & 3600.38 & 0.62 & $820.00^{1}$ & 775.74 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{25} & 139.4 & $1032.75^{1/4}$ & 3600.25 & 0.66 & $999.00^{2}$ & 1700.54 & 0.06 \\ \cline{2-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{\multirow{5}{*}{6}} & \multicolumn{1}{r|}{5} & 157.2 & 242.00 & 0.06 & 0.00 & 242.00 & 0.34 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{10} & 168.2 & 389.60 & 68.51 & 0.00 & 389.60 & 6.60 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{15} & 193.4 & $560.20^{0/5}$ & 3607.59 & 0.42 & 553.60 & 47.90 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{20} & 207.8 & $634.67^{2/3}$ & 3600.99 & 0.50 & 647.40 & 222.27 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{25} & 233.8 & $916.00^{4/1}$ & 3608.07 & 0.62 & 754.20 & 755.66 & 0.15 \\ \cline{2-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{\multirow{5}{*}{8}} & \multicolumn{1}{r|}{5} & 246.8 & 584.00 & 0.13 & 
0.00 & 583.90 & 1.18 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{10} & 254.8 & 640.60 & 23.88 & 0.00 & 640.60 & 23.88 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{15} & 279.4 & $994.00^{0/5}$ & 3607.15 & 0.32 & 952.10 & 160.57 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{20} & 306.2 & $1261.00^{0/5}$ & 3600.63 & 0.45 & 1161.90 & 835.41 & 0.40 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{25} & 325 & NA & NA & NA & 1343.10 & 1855.45 & 3385.58 \\ \hline \multicolumn{1}{|c|}{\multirow{15}{*}{3}} & \multicolumn{1}{c|}{\multirow{5}{*}{4}} & \multicolumn{1}{r|}{5} & 338 & 244.40 & 0.09 & 0.00 & 244.40 & 0.33 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{10} & 358.6 & $434.00^{0/2}$ & 1488.63 & 0.10 & 434.00 & 5.10 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{15} & 378.2 & $642.40^{0/5}$ & 3604.06 & 0.49 & 641.20 & 38.15 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{20} & 390.4 & $863.80^{0/5}$ & 3603.24 & 0.64 & $860.20^{1}$ & 840.29 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{25} & 402.6 & $920.00^{3/2}$ & 3625.08 & 0.65 & $844.80^{2}$ & 2086.27 & 0.08 \\ \cline{2-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{\multirow{5}{*}{6}} & \multicolumn{1}{r|}{5} & 418 & 310.20 & 0.22 & 0.00 & 310.20 & 0.45 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{10} & 439 & 421.60 & 127.63 & 0.00 & 421.60 & 6.96 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{15} & 461 & $513.80^{0/5}$ & 3600.55 & 0.35 & 513.20 & 323.83 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{20} & 484 & $699.50^{3/2}$ 
& 3600.68 & 0.54 & 631.80 & 140.84 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{25} & 509.4 & $659.00^{4/1}$ & 3600.57 & 0.51 & $856.80^{5}$ & 3601.78 & 0.14 \\ \cline{2-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{\multirow{5}{*}{8}} & \multicolumn{1}{r|}{5} & 519.2 & 597.00 & 0.13 & 0.00 & 596.90 & 1.42 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{10} & 532.8 & 815.40 & 161.25 & 0.00 & 815.20 & 25.67 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{15} & 571.4 & $980.40^{0/5}$ & 3600.70 & 0.30 & 972.40 & 232.30 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{20} & 591.2 & $1219.50^{3/2}$ & 3600.16 & 0.46 & 1214.90 & 1475.40 & 3.34 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{25} & 611.2 & $1471.00^{4/1}$ & 3600.27 & 0.48 & $1449.50^{1}$ & 2498.82 & 4390.42 \\ \hline \end{tabular} \end{table} \begin{table}[!t] \small \centering \caption{Results for Import-Export Rate 50\%} \label{Table-50} \vspace{0.5cm} \begin{tabular}{ccrr|r|r|r|r|r|r|} \cline{5-10} \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l|}{} & \multicolumn{3}{c|}{\textbf{MIP}} & \multicolumn{3}{c|}{\textbf{CP}} \\ \hline \multicolumn{1}{|c|}{\begin{tabular}[c]{@{}c@{}}U-L\\ Ratio\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}\# of\\ Bays\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}\# of\\ Shp.\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Avg. 
\#\\ of Cnt.\end{tabular}} & \multicolumn{1}{c|}{Obj.} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}CPU\\ (sec.)\end{tabular}} & \multicolumn{1}{c|}{GAP\%} & \multicolumn{1}{c|}{Obj.} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}CPU\\ (sec.)\end{tabular}} & \multicolumn{1}{c|}{RPD\%} \\ \hline \multicolumn{1}{|c|}{\multirow{15}{*}{2}} & \multicolumn{1}{c|}{\multirow{5}{*}{4}} & \multicolumn{1}{r|}{5} & 67.8 & 320.40 & 0.21 & 0.00 & 320.40 & 0.47 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{10} & 88.8 & $541.60^{0/5}$ & 3600.08 & 0.37 & 541.60 & 15.93 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{15} & 99.8 & $738.40^{0/5}$ & 3600.97 & 0.58 & 738.40 & 113.76 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{20} & 117 & $755.67^{2/3}$ & 3600.31 & 0.60 & $820.00^{1}$ & 1110.29 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{25} & 139.4 & $1570.00^{4/1}$ & 3600.63 & 0.77 & $998.40^{2}$ & 1989.51 & 0.06 \\ \cline{2-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{\multirow{5}{*}{6}} & \multicolumn{1}{r|}{5} & 157.2 & 199.00 & 0.11 & 0.00 & 199.00 & 0.83 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{10} & 168.2 & $377.00^{0/1}$ & 881.29 & 0.05 & 377.00 & 10.71 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{15} & 193.4 & $583.80^{0/5}$ & 3603.98 & 0.48 & 542.80 & 94.11 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{20} & 207.8 & $829.50^{3/2}$ & 3600.21 & 0.63 & 648.60 & 900.15 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{25} & 233.8 & $1811.00^{4/1}$ & 3600.52 & 0.81 & $752.20^{1}$ & 1312.59 & 0.23 \\ \cline{2-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{\multirow{5}{*}{8}} & \multicolumn{1}{r|}{5} & 
246.8 & 580.40 & 0.22 & 0.00 & 580.20 & 2.36 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{10} & 254.8 & 600.00 & 548.25 & 0.00 & 599.80 & 47.07 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{15} & 279.4 & $990.00^{1/4}$ & 3601.38 & 0.31 & 898.10 & 247.72 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{20} & 306.2 & $1149.33^{2/3}$ & 3600.35 & 0.38 & 1052.90 & 1323.69 & 3.64 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{25} & 325 & $3846.00^{3/2}$ & 3600.47 & 0.81 & $1333.50^{4}$ & 3391.34 & 4559.78 \\ \hline \multicolumn{1}{|c|}{\multirow{15}{*}{3}} & \multicolumn{1}{c|}{\multirow{5}{*}{4}} & \multicolumn{1}{r|}{5} & 338 & 265.00 & 0.22 & 0.00 & 265.00 & 0.95 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{10} & 358.6 & $428.40^{0/4}$ & 2893.00 & 0.23 & 428.40 & 18.07 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{15} & 378.2 & $653.50^{1/4}$ & 3601.66 & 0.49 & $641.20^{1}$ & 813.17 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{20} & 390.4 & $1201.25^{1/4}$ & 3600.46 & 0.72 & $859.80^{1}$ & 1230.81 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{25} & 402.6 & NA & NA & NA & $841.60^{5}$ & 3601.76 & 0.08 \\ \cline{2-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{\multirow{5}{*}{6}} & \multicolumn{1}{r|}{5} & 418 & 302.40 & 0.61 & 0.00 & 302.40 & 1.21 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{10} & 439 & $407.80^{0/2}$ & 1777.30 & 0.13 & 407.80 & 16.12 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{15} & 461 & $488.00^{1/4}$ & 3607.27 & 0.33 & $490.60^{1}$ & 855.09 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & 
\multicolumn{1}{c|}{} & \multicolumn{1}{r|}{20} & 484 & $971.33^{2/3}$ & 3600.57 & 0.65 & $611.60^{1}$ & 959.72 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{25} & 509.4 & NA & NA & NA & $856.20^{5}$ & 3602.76 & 0.44 \\ \cline{2-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{\multirow{5}{*}{8}} & \multicolumn{1}{r|}{5} & 519.2 & 550.80 & 0.18 & 0.00 & 550.50 & 2.57 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{10} & 532.8 & 756.60 & 609.18 & 0.00 & 756.50 & 66.82 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{15} & 571.4 & $1006.20^{0/5}$ & 3600.69 & 0.33 & 968.80 & 427.87 & 0.00 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{20} & 591.2 & $2428.50^{3/2}$ & 3600.54 & 0.73 & $1146.50^{1}$ & 2187.52 & 772.78 \\ \cline{3-10} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{25} & 611.2 & NA & NA & NA & $1393.30^{5}$ & 3613.52 & 6479.76 \\ \hline \end{tabular} \end{table} \paragraph{MIP versus CP} The experimental results indicate that CP is orders of magnitude more efficient than MIP on the IPCTP. This is especially remarkable since this paper compares two black-box solvers. Overall, within the time limit, the MIP model does not find feasible solutions for 71 out of 300 instances and cannot prove optimality for 103 instances. On all but the smallest instances, the MIP solver cannot prove optimality for all five instances of the same configuration. In almost all configurations with 20 or more shipments, the MIP solver fails to find feasible solutions on at least one of the instances. In contrast, the CP model always finds feasible solutions and proves optimality for 260 of the 300 instances. CP proves optimality on all but 12 instances in Table \ref{Table-50}, and all but 28 in Table \ref{Table-20}.
On instances where both models find optimal solutions, the CP model is almost always 1--3 orders of magnitude faster (except for the smallest instances). Finally, the CP model always dominates the MIP model, in the sense that it proves optimality every time the MIP does. \paragraph{Short Runs} On all but the largest instances, the CP model finds optimal, or near-optimal, solutions within 10 minutes. On the largest instances, longer CPU times are necessary to find optimal or near-optimal solutions. \paragraph{Sensitivity Analysis} The sensitivity analysis is restricted to the CP model, given the poor performance of the MIP model. The sensitivity of each factor is analyzed by comparing the respective run times and objective values. In general, the effect of the number of bays on the solution values and on CPU times tends to be small. In contrast, increasing the U-L ratio from 2 to 3 gives inbound shipments more alternatives for yard locations, which typically increases CPU times. Increasing the ratio of inbound to outbound containers also increases problem difficulty. This is not a surprise, since inbound shipments are more challenging: they require a yard location assignment, while outbound shipments have both their yard locations and vessel bays fixed. Nevertheless, the CP model scales reasonably well when this ratio is increased. These analyses indicate that the number of shipments/containers is by far the most important element in determining the computing times for the IPCTP: The other factors have a significantly smaller impact, which is an interesting result in its own right. \section{Conclusion} \label{section-conclusion} This paper introduced the Integrated Port Container Terminal Problem (IPCTP) which, to the best of our knowledge, integrates for the first time a wealth of port operations, including the yard location assignment, the assignment of quay and yard cranes, and the scheduling of these cranes under realistic constraints.
In particular, the IPCTP considers the empty travel times of the equipment and interference constraints between the quay cranes. The paper proposed both an MIP and a CP model for the IPCTP, based on the configuration of an actual container terminal, which were evaluated on a variety of configurations regarding the number of vessel bays, the number of yard locations, the ratio of inbound to outbound shipments, and the number of shipments/containers. Experimental results indicate that the MIP model can only be solved optimally for small instances and often cannot find feasible solutions. The CP model finds optimal solutions for 87\% of the instances and, on instances where both models can be solved optimally, the CP model is typically 1--3 orders of magnitude faster and proves optimality each time the MIP does. The CP model scales reasonably well with the number of vessel bays and yard locations, and with the ratio of inbound to outbound shipments. It also solves large realistic instances with hundreds of containers. These results contrast with the existing literature, which typically resorts to heuristic or meta-heuristic algorithms with no guarantee of optimality. Future work will be devoted to capturing a number of additional features, including operator-based processing times, the stacking of inbound containers using re-shuffling operations, and the scheduling of the yard trucks. \newpage
\section{Introduction} Following the discovery of a Higgs boson with a mass of 125 GeV \cite{atlas,cms}, henceforth labeled by $H$, with characteristics similar to those of the predicted state of the Standard Model (SM), experiments at the LHC have effectively begun to probe Electro-Weak Symmetry Breaking (EWSB) dynamics. The search channel that chiefly enabled the discovery was $gg\to H$ production followed by the $H\to \gamma\gamma$ decay, thanks to its cleanliness in the hadronic environment of the LHC and the sharp di-photon invariant mass resolution achievable by the LHC detectors, despite this decay mode actually being very subleading. Other Higgs signals were eventually established and studied in detail in order to measure the fundamental parameters of the $H$ state, i.e., its mass, width and couplings, all broadly consistent with the SM picture. Furthermore, comprehensive analyses investigating the spin and parity of the discovered particle have confirmed its most likely spin-0 and Charge/Parity (CP)-even nature, again well in line with SM predictions. The EWSB dynamics implemented within the SM is minimal in nature, allowing for the existence of only one Higgs boson. However, this need not be the realisation chosen by Nature. Just as the gauge and Yukawa sectors are not minimal, i.e., there are multiple spin-1 and spin-1/2 states, there is a case for conceiving the possibility of an extended spin-0 sector too. As the Higgs boson discovered so far emerges from a doublet representation, a natural approach to going beyond the SM is exemplified by 2HDMs \cite{2hdms,hunters,lee}, wherein a second (complex) Higgs doublet is added to the fundamental field representations of the SM. Upon EWSB, this yields five physical Higgs boson states: two neutral CP-even ones ($h$ and $H$ with, conventionally, $m_{H}>m_h$), one neutral CP-odd one ($A$) and two charged ones ($H^\pm$) \cite{gunion}.
In the light of the established nature of the Higgs boson signals at the LHC, as mentioned above, in terms of its mass, width, couplings, spin and CP state, there therefore exists the possibility that, in a 2HDM, the observed SM-like Higgs state is either the $h$ \cite{carena,bernon} or the $H$ \cite{ferreira,bernon2} one. An intriguing possibility is that Nature made the second choice, i.e., the heavy 2HDM CP-even state, so that a pair of the light ones could appear among its decay products, as the $Hhh$ vertex is indeed allowed by the most general 2HDM scalar potential and underlying symmetries, which in fact coincide with those of the SM, except (possibly) for an additional $Z_2$ one introduced to prevent (large) Flavour Changing Neutral Currents (FCNCs) \cite{Glashow:1976nt,Branco:2011iw} that may otherwise emerge in the presence of a 2HDM (pseudo)scalar sector. Such a mass hierarchy, i.e., $m_H>2m_h$, can easily be realised over the parameter space of one particular realisation of a 2HDM, the so-called type-I (2HDM-I; see the next section for details), which in fact allows for $h$ masses down to even 20--30 GeV, well compatible with both theoretical and experimental constraints. However, the requirement that one of $h$ or $H$ has physical properties consistent with the observed Higgs boson state puts rather stringent bounds on the 2HDM parameter space. For example, it is well known that, in a 2HDM, there exists a `decoupling limit', where $m_{H,A,H^\pm}\gg m_Z$, $\cos(\beta-\alpha)\approx 0$ \cite{gunion} and the couplings of the $h$ state to the SM particles are identical to those of the SM Higgs boson. Alternatively, a 2HDM also possesses an `alignment limit', in which either one of $h$ \cite{carena,bernon} or $H$ \cite{ferreira,bernon2} can mimic the SM Higgs boson.
This is a welcome feature, as we will be working in a configuration of the 2HDM-I \cite{type1} parameter space close to the alignment limit realised through the $H$ state. However, we will specifically concentrate on those parameters which enable the $h$ state to be (nearly) fermiophobic, so that the $h\to \gamma\gamma$ decay (mediated by $W^\pm$ and $H^\pm$ boson loops) can be dominant. Hence, all this opens up the possibility of a rather spectacular 2HDM-I signal, in the form of the following production and (cascade) decay process, $gg\to H\to hh\to\gamma\gamma\gamma\gamma$, indeed relying upon the aforementioned characteristics of photonic signals in the LHC detectors. Clearly, the presence of two Higgs bosons as intermediate states in such a signature induces a phase space suppression (with respect to the production of a single Higgs state); however, this can be well compensated by the fact that the $H\to hh$ transition is resonant and, as mentioned, di-photon decays of the $h$ state can be dominant in the 2HDM-I \cite{type1} near its fermiophobic limit. Furthermore, the knowledge of the $H$ mass (125 GeV in our scenario), combined with the ability to reconstruct in each event photon pairs with similar masses, allows us to exploit two powerful kinematic handles in suppressing the background: the former enables one to enforce the $m_{\gamma\gamma\gamma\gamma}\approx 125$ GeV requirement and the latter the $m_{\gamma\gamma}\approx m'_{\gamma\gamma}$ one, again bearing in mind the high mass resolutions achievable in photon reconstruction. In the present study, in essence, we explore the discovery potential of a light scalar Higgs boson $h$ of mass less than $m_H/2$ at the LHC Run 2 (hence with Center-of-Mass (CM) energy $\sqrt{s}=13$ TeV and standard luminosity conditions), where -- as explained -- $H$ represents the SM-like Higgs state \cite{ferreira,bernon2}.
Chiefly, we consider the production of a light $h$ pair indirectly via the decay \begin{equation} H\to hh, \end{equation} with the production of $H$ via the standard mechanisms, which are dominated by gluon-gluon fusion. In this connection, it is to be noted that the total Branching Ratio (BR) of the SM-like Higgs boson to undetected Beyond the SM (BSM) decay modes (BR$_{\rm BSM}$) is restricted by current Higgs data and constrained to be \cite{brbsm} \begin{equation} {\rm BR}_{\rm BSM}\leq 0.34~\text{at~95\%~Confidence~Level~(CL).} \end{equation} That is, the presence of non-SM decay modes of the SM-like Higgs boson is not completely ruled out, which acts as a further motivation for our study. In carrying out the latter, we borrow from existing experimental results. The ATLAS collaboration carried out searches for new phenomena in events with at least three photons at a CM energy of 8 TeV and with an integrated luminosity of $20.3$ fb$^{-1}$. From the non-observation of any excess, limits are set at 95\% CL on the rate of the relevant signal events in terms of the cross section multiplied by a suitable BR combination \cite{beta} \begin{equation} \sigma_{\rm BSM}\times\beta^\prime\leq 10^{-3}\sigma_{\rm SM}, \end{equation} where $\beta^\prime={\rm BR}(H\to AA)\times {\rm BR}(A\to\gamma\gamma)^2$, $\sigma_{\rm BSM}$ is the Higgs production cross section in a possible BSM scenario and $\sigma_{\rm SM}$ is the same, but for the SM Higgs. The above constraint sets an upper limit on $\beta^\prime$, \begin{equation} \beta^\prime\leq 10^{-3}, \end{equation} provided the Higgs state $H$ in the context of new physics phenomena is the SM-like Higgs boson of mass 125 GeV. In particular, we will validate a numerical toolbox that we have created to carry out a Monte Carlo (MC) analysis against the results published therein for the case of $H\to AA$ decays and extrapolate them to the case of $H\to hh$ ones, which constitute the dominant four-photon signal in our case.
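For orientation, the $\beta^\prime$ combination and its comparison against the $10^{-3}$ limit are straightforward to evaluate numerically. The branching ratios below are purely hypothetical placeholders, not fitted values:

```python
def beta_prime(br_H_hh: float, br_h_gg: float) -> float:
    """beta' = BR(H -> hh) * BR(h -> gamma gamma)^2, extrapolating the
    published H -> AA combination to the H -> hh case of this paper."""
    return br_H_hh * br_h_gg ** 2

ATLAS_LIMIT = 1e-3  # 95% CL, assuming sigma_BSM = sigma_SM

# Hypothetical benchmark: BR(H -> hh) = 0.01 with a fermiophobic
# BR(h -> gamma gamma) = 0.3 gives beta' = 9e-4, just below the limit.
bp = beta_prime(0.01, 0.3)
assert abs(bp - 9e-4) < 1e-12
assert bp <= ATLAS_LIMIT
```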
The plan of the paper is as follows. In Section 2, we introduce 2HDMs \cite{hunters,2hdms,lee} in general and describe in particular our construct (the 2HDM-I \cite{type1}), including dwelling on the theoretical (see \cite{th20,th21,th23,th24,th27,th25}) and experimental (see later on) constraints placed upon its parameter space. (Herein, we also comment on the fermiophobic limit of the 2HDM-I and its experimental status.) Section 3 is devoted to presenting our numerical results for the (inclusive) four-photon cross section and to motivating the selection of our Benchmark Points (BPs). We then describe the numerical tools we have used and the MC analysis carried out, including illustrating our results for the exclusive cross section, in Section 4. We conclude in Section 5. Some technical details of our calculations are presented in Appendix A. \section{The 2HDM-I and its fermiophobic limit} \subsection*{The 2HDM scalar potential} The most general 2HDM scalar potential which is $SU(2)_L\otimes U(1)_Y$ invariant with a softly broken $Z_2$ symmetry can be written as \begin{align} V(\phi_1,\phi_2)&=m^2_{11}\phi^+_1\phi_1+m^2_{22}\phi^+_2\phi_2-[m^2_{12}\phi^+_1\phi_2+{\rm h.c.}]\nonumber\\ &+\frac{1}{2}\lambda_1(\phi^+_1\phi_1)^2+\frac{1}{2}\lambda_2(\phi^+_2\phi_2)^2+\lambda_3(\phi^+_1\phi_1)(\phi_2^+\phi_2)\\ &+\lambda_4(\phi_1^+\phi_2)(\phi_2^+\phi_1)+[\frac{1}{2}\lambda_5(\phi_1^+\phi_2)^2+{\rm h.c.}],\nonumber \end{align} where $\phi_1$ and $\phi_2$ have weak hypercharge $Y=+1$ while $v_1$ and $v_2$ are their respective Vacuum Expectation Values (VEVs).
Through the minimisation conditions of the potential, $m_{11}^2$ and $m_{22}^2$ can be traded for $v_1$ and $v_2$, and the tree-level mass relations allow the quartic couplings $\lambda_{1-5}$ to be substituted by the four physical Higgs boson masses and the neutral sector mixing term $\sin(\beta-\alpha)$, where $\beta$ is defined through $\tan\beta=v_2/v_1$ and $\alpha$ is the mixing angle between the CP-even interaction states. Thus, in total, the Higgs sector of the 2HDM has 7 independent parameters, namely $\tan\beta$, $\sin(\beta-\alpha)$ (or $\alpha$), $m_{12}^2$ and the four physical Higgs boson masses. As explained in the Introduction, the 2HDM possesses two alignment limits: one with $h$ SM-like \cite{carena,bernon} and another with $H$ SM-like \cite{ferreira,bernon2}. In the present study, we are interested in the alignment limit where $H$ is the SM-like Higgs boson discovered at CERN, which implies that $\cos(\beta-\alpha)\approx 1$. Then, we take $m_h<m_H/2\approx 62.5$ GeV, so that the decay channel $H\to hh$ is always open. From the above scalar potential one can derive the following triple scalar couplings needed for our study: \begin{eqnarray} Hhh &=& -\frac{1}{2}\frac{g c_{\beta-\alpha}}{m_W s^2_{2\beta}}\bigg[ (2 m^2_{h} + m^2_{H}) s_{2\alpha} s_{2\beta} -2 (3 s_{2\alpha}-s_{2\beta}) m^2_{12}\bigg], \nonumber \\ HAA &=& -\frac{g}{2m_W s^2_{2\beta}}\bigg[ (2 m^2_{A} - m^2_{H}) s_{2\beta}^2 c_{\beta-\alpha} +2 m^2_{H} s_{2\beta} s_{\beta+\alpha} -4 m_{12}^2 s_{\beta+\alpha} \bigg], \nonumber \\ hH^\pm H^\mp &=& \frac{1}{2}\frac{g}{m_W s^2_{2\beta}} \bigg[ (m^2_{h} - 2 m^2_{H^\pm})s_{\beta-\alpha} s^2_{2\beta} - 2c_{\beta + \alpha}(m^2_{h} s_{2\beta}- 2 m^2_{12})\bigg], \label{triple-hhh} \end{eqnarray} where $g$ is the $SU(2)$ gauge coupling constant. We have used the notation $s_x$ and $c_x$ as short-hand for $\sin(x)$ and $\cos(x)$, respectively.
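As a quick numerical cross-check of the first expression above, a direct transcription of the $Hhh$ vertex (in hypothetical units, with the overall factor $g/m_W$ set to 1) confirms that it vanishes when $\cos(\beta-\alpha)=0$:

```python
import math

def hhh_coupling(mh, mH, m12sq, alpha, beta, g_over_mw=1.0):
    """Direct transcription of the Hhh vertex given in the text;
    g_over_mw stands in for the dimensionful prefactor g/m_W."""
    s2a, s2b = math.sin(2 * alpha), math.sin(2 * beta)
    cba = math.cos(beta - alpha)
    bracket = (2 * mh**2 + mH**2) * s2a * s2b - 2 * (3 * s2a - s2b) * m12sq
    return -0.5 * g_over_mw * cba / s2b**2 * bracket

beta = math.atan(3.0)        # tan(beta) = 3, an arbitrary choice
alpha = beta - math.pi / 2   # enforces cos(beta - alpha) = 0

# The coupling is proportional to c_{beta-alpha}, so it must vanish here.
assert abs(hhh_coupling(60.0, 125.0, 1000.0, alpha, beta)) < 1e-9
```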
It is clear from the above couplings that $Hhh$ is proportional to $c_{\beta-\alpha}$, which is close to unity in our case, and hence the BR$(H\to hh)$ would not be suppressed. Moreover, in the exact fermiophobic limit $\alpha\approx \pm \pi/2$, the $Hhh$ coupling becomes proportional to $m_{12}^2$. The vertex $HAA$ has two terms, one proportional to $c_{\beta-\alpha}$ and the other proportional to $s_{\beta+\alpha}$, which is close to $c_\beta$ in the fermiophobic limit $\alpha\approx \pi/2$. Finally, the coupling $hH^\pm H^\mp$ can be large enough to contribute sizably to the $h\to \gamma\gamma$ decay rate. \subsection*{Fermiophobic limit of the 2HDM-I} In general, in the 2HDM, both Higgs doublets can couple to quarks and leptons exactly as in the SM. However, in such a case one has tree level FCNCs, which would lead to large contributions to $B$-physics observables in conflict with data. In order to avoid this, the 2HDM needs to satisfy a discrete $Z_2$ symmetry \cite{Glashow:1976nt,Branco:2011iw} which guarantees the absence of this phenomenon. Several types of 2HDM exist, depending on the $Z_2$ charge assignment of the Higgs doublets~\cite{Branco:2011iw}. In our study, we will focus on the 2HDM-I. In this model, only the doublet $\phi_2$ couples to all the fermions as in the SM while $\phi_1$ does not couple to any of the fermions. The Yukawa interactions in terms of the neutral and charged Higgs mass eigenstates in a general 2HDM can be written as: \begin{eqnarray} -\mathcal{L}^{2HDM}_{Yukawa}&&=\Sigma_{f=u,d,l}\frac{m_f}{v}(\xi^h_f\bar{f}fh+\xi^H_f\bar{f}fH-i\xi^A_f \bar{f}\gamma_5 fA)\nonumber\\ &&+\{ \frac{\sqrt{2}V_{ud}}{v}\bar{u}(m_u\xi^A_u P_L+m_d \xi^A_d P_R)dH^+ +\frac{\sqrt{2}m_l\xi_l^A}{v}\bar{\nu}_L l_RH^++{\rm h.c.}\}, \label{l-yuk} \end{eqnarray} where $v^2=v_1^2+v_2^2=(2\sqrt{2}G_F)^{-1}$, $V_{ud}$ is the relevant entry of the Cabibbo-Kobayashi-Maskawa (CKM) matrix and $P_L$ and $P_R$ are the left- and right-handed projection operators, respectively.
In the 2HDM-I, we have \begin{eqnarray} && \xi^h_f=\cos\alpha/\sin\beta \quad {\rm and} \quad \xi^H_f=\sin\alpha/\sin\beta, \quad {\rm for} \quad f=u,d,l, \nonumber\\ && \xi^A_d=-\cot\beta, \quad \xi^A_u=\cot\beta \quad {\rm and} \quad \xi^A_l=-\cot\beta. \label{HhA-coup} \end{eqnarray} From the above Lagrangian~(\ref{l-yuk}) and Eq.~(\ref{HhA-coup}), it is clear that for $\alpha\approx \pm\frac{\pi}{2}$ the tree level couplings of the light CP-even Higgs $h$ to quarks and leptons are very suppressed. Hence $h$ is fermiophobic in this limit~\cite{Akeroyd:1995hg}. Note that, in the 2HDM-I, the CP-odd Higgs coupling to fermions is proportional to $\cot\beta$ and hence would not be fermiophobic for any choice of $\tan\beta$. Since we are interested in the case where $H$ is SM-like ($\cos(\beta-\alpha)\approx 1$) and $m_h\leq m_H/2 \approx 62.5$ GeV such that the decay $H\to hh$ is open, the main decays of the lightest Higgs state $h$ are into the tree level channels $h\to V^*V^*$ and $h\to Z^* A$ when $m_A<m_h$; otherwise, the one loop $h\to Z^*\gamma$ and $h\to \gamma \gamma$ ones dominate. The $1\to 4$ decays $h\to V^*V^*(\to f\bar f f'\bar f')$ have two sources of suppression, the phase space one and the fact that $hVV\propto \sin(\beta-\alpha)\approx 0$, while the $1\to 3$ decay $h\to Z^*\gamma(\to f\bar f\gamma)$ is both loop and phase space suppressed. Therefore, the decays $h\to \gamma \gamma$ and $h\to Z^*A$ ($m_A<m_h$) are expected to compete with each other and dominate in the fermiophobic limit. In fact, it is well known that $h\to \gamma\gamma$ is dominated by the $W^\pm$ loops, which interfere destructively with the top and charged Higgs loops. In the limit where $\sin(\beta-\alpha)\to 0$, the $W^\pm$ loops vanish and only the top and charged Higgs ones contribute. When $\cos\alpha$ vanishes, the $h$ state, with mass $\leq 62$ GeV, becomes fermiophobic and consequently the ${\rm BR}(h\to \gamma\gamma)$ can become 100\% if $h\to Z^*A$ is not open.
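The fermiophobic behaviour of the modifiers in Eq.~(\ref{HhA-coup}) can be checked numerically; this is a minimal Python sketch of our own (function and variable names are ours), showing that at $\alpha=\pi/2$ the CP-even modifier $\xi^h_f$ vanishes while the CP-odd couplings retain their $\cot\beta$ value.

```python
import math

def xi_2hdm1(alpha, beta):
    """2HDM-I coupling modifiers of Eq. (4): all fermions couple only via phi_2."""
    xi_h = math.cos(alpha) / math.sin(beta)   # h-f-fbar; vanishes for alpha = +-pi/2
    xi_H = math.sin(alpha) / math.sin(beta)   # H-f-fbar; stays SM-like in that limit
    xi_A_u = 1.0 / math.tan(beta)             # A-u-ubar
    xi_A_dl = -1.0 / math.tan(beta)           # A-d-dbar and A-l-lbar
    return xi_h, xi_H, xi_A_u, xi_A_dl

# exact fermiophobic limit at an illustrative tan(beta) = 10
xi_h, xi_H, xi_A_u, xi_A_dl = xi_2hdm1(math.pi / 2, math.atan(10.0))
```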
In contrast, the coupling $hZA$ is proportional to $\cos(\beta-\alpha)$, which is close to unity in our scenario; therefore, when $h\to Z^*A$ is open for $m_A<m_h$, it dominates over $h\to \gamma\gamma$. Fermiophobic Higgs bosons have been searched for at LEP and the Tevatron. The LEP collaborations used $e^+e^-\to Z^* \to Zh$ followed by the decay $h\to \gamma\gamma$ and set a lower limit on $m_h$ of the order of $100$ GeV \cite{Abbiendi:2002yc, Abreu:2001ib,Heister:2002ub,Achard:2002jh}. At the Tevatron, both Higgs-strahlung ($p\bar p\to Vh$, $V=W^\pm,Z$) and vector boson fusion ($qq\to q'q'h$) have been used to search for fermiophobic Higgs decays of the type $h\to \gamma\gamma$ \cite{Abazov:2008ac}, with similar results to those obtained at LEP. Note that both LEP and the Tevatron assumed a full SM coupling for $hVV$ ($V=Z,W$), which would not be the case for the CP-even Higgs $h$ in the 2HDM-I, where $hVV\propto \sin(\beta-\alpha)$ can be very suppressed, as explained. Therefore one could imagine a scenario with a very light $h$ state ($m_{h}\ll 60$ GeV) which has escaped LEP and Tevatron limits due to the suppression of the coupling $hVV$. In addition, the OPAL and DELPHI collaborations at LEP have searched for fermiophobic Higgs decays through $e^+e^-\to Z^*\to Ah$ with $h\to \gamma\gamma$ and $A$ decaying mainly into fermions and set a limit on $\sigma(e^+e^-\to Ah)\times {\rm BR}(h\to \gamma\gamma)\times {\rm BR}(A\to f\bar{f})$ for $m_h\in [20,180]$ GeV \cite{Abbiendi:2002yc,Abreu:2001ib}. Note that this limit depends on the coupling $ZhA\propto \cos(\beta-\alpha)$ and hence becomes weaker for $\cos(\beta-\alpha)\ll 1$. However, a very light $h$ with $m_h\leq 60$ GeV is still allowed if the CP-odd state is rather heavy. We refer to \cite{Arhrib:2017wmo} for more details on these aspects.
Finally, following phenomenological studies in \cite{Akeroyd:2005pr}, CDF at the Tevatron also studied $qq'\to H^\pm h$, which can lead to four-photon final states for $H^\pm \to W^\pm h$ and $h\to \gamma\gamma$ \cite{Aaltonen:2016fnw}. However, the CDF limits are presented only for the exactly fermiophobic scenario and are not readily extendable to our more general setup. \section{Numerical results} As said previously, we are interested in the 2HDM-I, for which we perform a systematic numerical scan of its parameter space. We have fixed $m_H$ to 125 GeV and assumed that $2 m_h<m_H$, such that the decay $H\to hh$ is open. The other 2HDM independent parameters are varied as indicated in Tab.~1. We use the 2HDMC (v1.7.0)~\cite{Eriksson:2009ws} public program to calculate the 2HDM spectrum as well as various decay rates and BRs of the Higgs particles. The 2HDMC program also allows us to check several theoretical constraints, such as perturbative unitarity and boundedness from below of the scalar potential, as well as EW Precision Observables (EWPOs), which are all turned on during the scan. In fact, it is well known that EWPOs constrain the splitting between Higgs masses. In our scenario, since we require $m_H=125$ GeV and assume $2m_h<m_H$, if we want to keep the CP-odd state also light, it turns out that the charged Higgs boson would also be rather light, $m_{H^\pm}\leq 170-200$ GeV \cite{Enberg:2016ygw}, as can be seen from Tab.~1. Moreover, the code is also linked to HiggsBounds \cite{Bechtle:2013wla} and HiggsSignals \cite{Bechtle:2013xfa}, which allow us to check various LEP, Tevatron and recent LHC searches. Once the decay channels $H\to hh$ and/or $H\to AA$ are open in the 2HDM-I, the subsequent decays of $h$ and/or $A$ into fermions, photons or gluons will lead to non-standard $H$ decays that can be constrained by present ATLAS and CMS data on the Higgs couplings.
In our study, we will use the fact that the total BR of the SM-like Higgs boson into undetected BSM decay modes is constrained, as mentioned, by BR$_{\rm BSM}< 0.34$ \cite{brbsm}, where BR$_{\rm BSM}$ designates in our case the sum of BR$(H\to hh)$ and BR$(H\to AA)$. In what follows, we will show our numerical results via three different scans (see Tab.~1). These results mainly concern the BR$_{\rm BSM}$, BR$(H\to hh)$ and the ensuing total cross section for four-photon final states, which is given by \begin{eqnarray} \sigma_{4\gamma}&=& \sigma(gg\to H)\times {\rm BR}(H\to hh)\times {\rm BR}^2(h\to \gamma\gamma). \label{eq:4gamma} \end{eqnarray} Note that, in writing the above cross section, we have used the narrow width approximation for the SM-like Higgs state $H$, which is justified since the total width of $H$ is of the order of a few MeV (see Tab.~3). Furthermore, we are interested in multi-photon signatures coming from $H\to hh\to 4\gamma$ and not from $H\to AA\to 4\gamma$ decays. In the former case, in fact, because $h$ can become totally fermiophobic, its BR$(h\to \gamma\gamma)$ can in turn become maximal when $h\to Z^*A$ is not open. In the latter case, the couplings of the CP-odd Higgs state $A$ to fermions are proportional to $1/\tan\beta$, which does not vanish; therefore, the one loop decay $A\to \gamma\gamma$ will be suppressed compared to the tree level ones $A\to f\bar f$ and $A\to Z^*h$. We first show our results for $\sigma_{4\gamma}$ without imposing constraints from ATLAS searches in events with at least three photons in the final state \cite{beta}. The results of scan-1 are shown in Fig.~\ref{scan1-fig2}. In this scan we only allow $H\to hh$ to be open and deviate from the exact fermiophobic limit by taking $\alpha=\pm\pi/2 \mp \delta$ where $\delta\in [0,0.05]$. It is clear that for $\delta \approx 0$ one can have an exactly fermiophobic Higgs with maximal BR$(h\to \gamma\gamma)$.
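The narrow-width arithmetic of Eq.~(\ref{eq:4gamma}) is simple enough to illustrate directly; the following Python sketch is ours, with an 8 TeV $gg\to H$ cross section of 19.3 pb (the value quoted in Tab.~\ref{sen14}) and the maximal 17\% BR$(H\to hh)$ found in scan-1 used as illustrative inputs.

```python
def sigma_4gamma(sigma_ggH_pb, br_H_hh, br_h_gamgam):
    """Four-photon rate of Eq. (6) in the narrow-width approximation (in pb)."""
    return sigma_ggH_pb * br_H_hh * br_h_gamgam**2

# illustrative inputs: sigma(gg -> H) = 19.3 pb at 8 TeV, BR(H -> hh) = 17%
# and a maximal (fully fermiophobic) BR(h -> gamma gamma) = 1
example_pb = sigma_4gamma(19.3, 0.17, 1.0)  # about 3.3 pb, i.e. "a few pb"
```

The quadratic dependence on BR$(h\to\gamma\gamma)$ is what makes the fermiophobic limit so favourable for this signature.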
In this scenario, the BR$(H\to hh)$ can reach 17\% in some cases. Thus, the four-photon cross section can become of the order of a few pb when ${\rm BR}(h\to \gamma\gamma)$ is close to maximal and BR$(H\to hh)$ large. Here, the maximum cross section is reached for $\sin(\beta-\alpha)\approx -0.06$. The output of scan-2, which is for the exact fermiophobic limit, $\alpha=\pi/2$, is shown in Fig.~\ref{scan2-fig3}. Here, we illustrate $\sigma_{4\gamma}$ as a function of $\sin(\beta-\alpha)$ in the left panel with $m_h$ coded with different colours on the vertical axis. The BR$(h\to \gamma\gamma)$ as a function of $\sin(\beta-\alpha)$ is depicted on the right panel of Fig.~\ref{scan2-fig3} with the BR$(H\to hh)$ on the vertical axis. The maximal value reached by BR$(H\to hh)$ in this scenario is again around 17\%. Note that, in this case of the exact fermiophobic limit, only $W^\pm$ and $H^\pm$ loops contribute to the $h\to \gamma\gamma$ decay. In fact, in most cases, $W^\pm$ loop contributions to $h\to \gamma\gamma$ dominate over the $H^\pm$ ones, except for small $\sin(\beta-\alpha)$, where the $W^\pm$ and $H^\pm$ terms could become comparable and interfere destructively. In such a case, BR($h\to \gamma\gamma$) can be suppressed and BR($h\to W^*W^*$) slightly enhanced. This explains the drop of the BR($h\to \gamma\gamma$) down to $5.5\times 10^{-1}$. Note that, for large $\sin(\beta-\alpha)\approx -0.14$, the off-shell decay $h\to V^*V^*$ can reach $3.5\times 10^{-1}$ and $1\times 10^{-1}$ for $V = W$ and $V = Z$, respectively. It is interesting to see that, in this scenario, $\sigma_{4\gamma}$ can be larger than 1 pb for a light $m_h\in [10,50]$ GeV with significant BR$(h\to \gamma\gamma)$. {{ In scan-3, we allow $\sin(\beta-\alpha)\in [-0.35,0]$ and the CP-odd Higgs state to be as light as 10 GeV. In this case, we specifically take into account constraints from the LEP measurement of the $Z$ width.
The results are illustrated in Fig.~\ref{scan3-fig4}, where we show both $\sigma_{4\gamma}$ and ${\rm BR}(h\to \gamma\gamma)$ as a function of $\sin(\beta-\alpha)$. Note that, for any choice of $\tan\beta$, one can tune $\sin(\beta-\alpha)$ such that $\alpha$ becomes $\pm\pi/2$ and then $h$ is fermiophobic (in which case the previous discussion of scan-2 applies again). Away from this fermiophobic limit, BR$(h\to b\bar{b})$ becomes sizeable and suppresses ${\rm BR}(h\to \gamma\gamma)$. }} {{ After quantifying the maximal size of the four-photon cross section in the previous plots, we proceed to apply the ATLAS limits coming from searches in events with at least three photons in the final state \cite{beta}. The results are shown in Fig.~\ref{fig4}. The solid black line is the expected upper limit at 95$\%$ CL from ATLAS at 8 TeV centre-of-mass energy with 20.3 fb$^{-1}$ of luminosity. The green and yellow bands correspond, respectively, to the $\pm 1\sigma$ and $\pm 2\sigma$ uncertainties from the resonance search assumption. As can be seen from this plot, for $m_h$ in the $[10,62]$ GeV range, the ATLAS upper limit on $\sigma_H\times {\rm BR}(H\to hh)\times {\rm BR}^2(h\to \gamma\gamma)$ is $1\times 10^{-3}\sigma_{SM}$. We also illustrate in this figure our projection for 14 TeV using 300 fb$^{-1}$ of luminosity (based on an MC simulation that we will describe below). The dots represent our surviving points from scan-1, scan-2 and scan-3 after passing all theoretical and experimental constraints. Most of the points with significant four-photon cross section and large ${\rm BR}(H\to hh)$ and/or ${\rm BR}(h\to \gamma\gamma)$ shown in the previous plots turn out to be ruled out by the aforementioned ATLAS upper limit \cite{beta}.
It is clear from Fig.~\ref{fig4} (top-left and -right plots) that scenarios from scan-1 and scan-2 would be completely ruled out (or, conversely, be discovered) by our projection for the 14 TeV LHC run with 300 fb$^{-1}$ of luminosity, while scenarios from scan-3 (bottom plot) would survive undetected. The maximal four-photon cross section we obtain is of the order of 37 fb. It is interesting to note that, for scan-1 and scan-3, the remaining points still enjoy a sizable BR$(H\to hh)$, while for the exact fermiophobic limit of scan-2 one can see from Fig.~\ref{fig4} (top-right) that the BR$(H\to hh)$ is less than $5\times 10^{-3}$. This limit is much stronger than the one from invisible SM-like Higgs decays discussed previously. This can be seen in Fig.~\ref{fig5}, where we illustrate the correlation between ${\rm BR}(h\to \gamma\gamma)$ and ${\rm BR}(H\to hh)$ for the three scans. Herein, one can verify that, for scan-1 and scan-3, ${\rm BR}(h\to \gamma\gamma)$ and ${\rm BR}(H\to hh)$ are anti-correlated. }} Based on the results of these three scans, we have selected a few Benchmark Points (BPs), which are given in Tab.~2. These BPs can be seen in Fig.~\ref{fig4} as black stars. Note that, in BP1, both $H\to hh$ and $H\to AA$ decays are open, while for the other BPs only $H\to hh$ is. For these BPs, we give in Tabs.~2 and 3 various observables such as the total widths of $h$ and $H$, $\Gamma_{h}$ and $\Gamma_H$, respectively, ${\rm BR}(H\to hh)$, ${\rm BR}(h\to \gamma\gamma)$, ${\rm BR}(h\to Z^*A)$, ${\rm BR}(A\to \gamma\gamma)$ and the four-photon cross section $\sigma_{4\gamma}$ in fb. In fact, for these BPs, we take into account all theoretical constraints as well as the LEP and LHC constraints implemented in the HiggsBounds code, plus the limits from ATLAS on multi-photon final states \cite{beta}, as explained in the introduction, see Eqs.~(1) and (2).
It is also interesting to see from Tab.~3 that the ${\rm BR}(A\to \gamma\gamma)$ is always suppressed and cannot be used to generate multi-photon final states. Finally, it can also be seen from this table that, even for a small ${\rm BR}(H\to hh)\approx 10^{-3}$ but with maximal ${\rm BR}(h\to \gamma\gamma)$, one can still get a large $\sigma_{4\gamma}\approx 38$ fb. Before ending this section, we would like to comment on charged Higgs and CP-odd Higgs boson searches. As mentioned previously, the charged Higgs and CP-odd Higgs states are rather light in our scenarios. LHC limits on light charged Higgs states produced from top decay and decaying via $H^\pm \to \tau\nu, cb$ in the 2HDM-I can be evaded by advocating the dominance of the $H^\pm \to W^\pm A$ or $H^\pm \to W^\pm h$ BRs (see \cite{Arhrib:2017wmo} for more details). On the one hand, the LHC has searched for a CP-odd Higgs state decaying via $A\to ZH$ \cite{Aad:2015wra,Khachatryan:2015tha,Khachatryan:2015lba} and $A\to\tau^+\tau^-$. In our scenario, the BR$(A\to ZH)$ suffers two suppressions: one coming from the coupling $AZH$, which is proportional to $\sin(\beta-\alpha)\approx 0$, and the other one coming from the fact that $A\to Zh$ dominates over $A\to ZH$, since $h$ is lighter than 125 GeV and the coupling $ZAh$ is proportional to $\cos(\beta-\alpha)\approx 1$. On the other hand, the ATLAS and CMS searches for a CP-odd Higgs state decaying to a pair of $\tau$ leptons \cite{Aad:2014vgg,Khachatryan:2014wca}, when applied to the 2HDM-I, only exclude small $\tan\beta\leq 1.5$ for $m_A\in [110,350]$ GeV \cite{Arhrib:2015gra}. This can be understood easily from the fact that the $A$ couplings to a pair of fermions in the 2HDM-I are proportional to $1/\tan\beta$, hence both the production $gg\to A$ and the decay $A\to \tau^+\tau^-$ are suppressed for large $\tan\beta$ values. Moreover, in our scenario, BR$(A\to \tau^+\tau^-)$ would receive another suppression from the opening of the $A\to Z^* h$ channel.
Note also that LEP limits on a light $h$ and a light $A$ are implemented in the HiggsBounds code through limits on the processes $e^+e^- \to Zh$ and $e^+e^- \to hA$. \begin{table}[hpbt] \begin{ruledtabular} \begin{tabular}{c c c c } \hline parameters & scan-1 & scan-2 & scan-3\\ \hline $m_H$ (SM-like) & 125& 125 & 125\\ $m_h$ & $[10,62.5]$ & $[10,62.5]$ & $[10,62.5]$\\ $m_A$ & $[62.5,200]$& $[62.5,200]$ & $[10,200]$\\ $m_{H^\pm}$ & $[100,170]$&$[100,170]$& $[100,170]$\\ $\tan\beta$ & $[2 , 50]$ & $[2 , 50]$ & $[2,50]$\\ $\alpha$ & $\alpha$=$\pm\frac{\pi}{2} \mp\delta $& $\alpha$=$\frac{\pi}{2}$ & $s_{\beta-\alpha}=[-0.35,0.0]$\\ $m_{12}^2$ & $[0,100]$ &$[0,100]$& $[0,100]$ \\ $\lambda_6=\lambda_7$ & 0 & 0 & 0\\ \hline \end{tabular} \caption{2HDM parameters scans: all masses are in GeV.} \end{ruledtabular} \end{table} \begin{figure}[h] \centering \includegraphics[width=0.45\textwidth,height=0.4\textwidth]{fig21.pdf} \includegraphics[width=0.45\textwidth,height=0.4\textwidth]{fig22.pdf} \caption{(Left) The $\sigma_{4\gamma}$ rate as a function of $\sin(\beta-\alpha)$ with $m_h$ indicated on the right vertical axis. (Right) The ${\rm BR}(h\to \gamma\gamma)$ as a function of $\sin(\beta-\alpha)$ with ${\rm BR}(H\to hh)$ indicated on the right vertical axis. Both plots are for scan-1.} \label{scan1-fig2} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.45\textwidth,height=0.4\textwidth]{fig31.pdf} \includegraphics[width=0.45\textwidth,height=0.4\textwidth]{fig32.pdf} \caption{(Left) The $\sigma_{4\gamma}$ rate as a function of $\sin(\beta-\alpha)$ with $m_h$ indicated on the right vertical axis. (Right) The ${\rm BR}(h\to \gamma\gamma)$ as a function of $\sin(\beta-\alpha)$ with ${\rm BR}(H\to hh)$ indicated on the right vertical axis. 
Both plots are for scan-2.} \label{scan2-fig3} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.45\textwidth,height=0.4\textwidth]{fig41.pdf} \includegraphics[width=0.45\textwidth,height=0.4\textwidth]{fig42.pdf} \caption{(Left) The $\sigma_{4\gamma}$ rate as a function of $\sin(\beta-\alpha)$ with $m_h$ indicated on the right vertical axis. (Right) The ${\rm BR}(h\to \gamma\gamma)$ as a function of $\sin(\beta-\alpha)$ with ${\rm BR}(H\to hh)$ indicated on the right vertical axis. Both plots are for scan-3.} \label{scan3-fig4} \end{figure} \begin{table}[h] \begin{ruledtabular} \begin{tabular}{c | c c c c c c c c} \hline & $m_h$ & $m_A$& $m_{H^\pm}$ & $\sin(\beta-\alpha)$ & $\tan\beta$ & $m^2_{12}$ & $\Gamma_{h}$ & $\Gamma_{H}$\\ \hline BP1 & 10.744652 & 78.567614 & 104.864345 & -0.208633 & 4.584063 & 15.802484 & $2.21\times 10^{-9}$ & $4.679\times 10^{-3}$\\ \hline BP2 & 57.440184 & 141.121784 & 116.073489 & -0.114739 & 8.650594 & 9.735405& $5.303\times 10^{-9}$ & $4.376\times 10^{-3}$\\ \hline BP3 & 40.663472 & 121.812799 & 161.021149 & -0.091551 & 11.262490 & 22.299875& $1.369\times 10^{-8}$ & $4.507\times 10^{-3}$\\ \hline \end{tabular} \caption{Input parameters and widths corresponding to the selected BPs. 
All masses and widths are in GeV and for all points $m_H=125$ GeV.} \end{ruledtabular} \end{table} \begin{table}[h] \begin{ruledtabular} \begin{tabular}{c | c c c c c c c c} \hline & ${\rm BR}(h\to \gamma\gamma)$ & ${\rm BR}(H\to hh)$ & ${\rm BR}(A\to \gamma\gamma)$& ${\rm BR}(h\to ZA)$ & ${\rm BR}(A\to Zh)$& $\Gamma_{Z\to A h} [{\rm MeV}]$ &$\beta$ & $\sigma_{4\gamma}$ [fb]\\ \hline BP1 & 0.1215 & 0.0469 & $7.613\times 10^{-5}$ & 0.0000 & 0.5696 & 0.188995& 0.000694 & 36.92100\\ \hline BP2 & 0.7435 & 0.001257 & $2.07\times 10^{-5}$ & 0.0000 & 0.9488 & 0.0000& 0.000696 & 35.85200\\ \hline BP3 & 0.1427 & 0.03348 & $1.23\times 10^{-5}$ & 0.0000 & 0.9709 & 0.0000 & 0.000682 & 34.93500\\ \hline \end{tabular} \caption{Input parameters, BRs of CP-even and CP-odd Higgs bosons, $Z$ boson width and four-photon cross section corresponding to the selected BPs. All widths are in MeV and for all points $m_H=125$ GeV.} \end{ruledtabular} \end{table} \begin{table}[h] \begin{ruledtabular} \begin{tabular}{c | c c c c c} \hline & Allowed by & Allowed by theoretical& Allowed by & Allowed by & Allowed \\ & HiggsBounds & constraints & HiggsSignals & ATLAS band & by all constraints\\ \hline scan-1 & 49.2\% & 20.6\% & 4.5\% & 16.5\% & 0.07\% \\ \hline scan-2 & 81.6\% & 12.5\% & 61.6\% & 5\% & 0.3\% \\ \hline scan-3 & 30.6\% & 2.9\% & 27\% & 63\% & 0.15\% \\ \end{tabular} \caption{Parameters as in Tab.~III with $10^6$ points as inputs for all scans.} \end{ruledtabular} \end{table} \section{Signal and Background} {As previously discussed, in Figs.~\ref{fig4}--\ref{fig5}, we have taken into account the constraints from the ATLAS collaboration reported in \cite{atlas} from 8 TeV data. However, in order to project the sensitivity of the future LHC run at $\sqrt{s}=14$ TeV, we have to rescale these results. To determine the `boost factors', for both signal and background processes, needed to achieve this, we resort to MC tools.
Specifically, we generate parton level events of both signal and background processes by using MadGraph 5 \cite{Alwall:2014hca} and then pass them to PYTHIA 6 \cite{Sjostrand:2006za} to simulate showering, hadronisation and decays. We finally use PGS \cite{pgs} to perform the fast detector simulation. } In order to pick out the relevant events, for our Run 1 analysis, we adopt the same selection cuts as the ATLAS collaboration in \cite{atlas}, which read as follows. \begin{itemize} \item We require $n_\gamma \geq 3$, i.e., we consider inclusive three photon events. \item The two leading photons should have $P_t(\gamma) > 22$ GeV and the third one should have $P_t(\gamma)> 17$ GeV. \item The photons should be resolved in the range $|\eta| < 2.37$ and must not fall in the endcap transition region $1.37 < |\eta| < 1.52$. \item The cone separation parameter $\Delta R(\gamma \gamma)$ between any pair of photons should be larger than $0.4$. \end{itemize} \begin{figure}[h] \centering \includegraphics[width=0.45\textwidth,height=0.4\textwidth]{scan-1.pdf} \includegraphics[width=0.45\textwidth,height=0.4\textwidth]{scan-2.pdf} \includegraphics[width=0.45\textwidth,height=0.4\textwidth]{scan-3.pdf} \caption{Upper limit at 95$\%$ CL on $\sigma_{4\gamma}$ in fb as a function of $m_h$ and the $\pm 1\sigma$ and $\pm 2\sigma$ uncertainty bands resulting from ATLAS searches at 8 TeV (upper band) and our projection for 14 TeV (lower band) for (top-left) scan-1, (top-right) scan-2 and (bottom) scan-3. The dots are points that are allowed by all constraints and the black stars represent the BPs given in Tab.~II.
} \label{fig4} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.45\textwidth,height=0.4\textwidth]{fig52.pdf} \caption{{The correlation between BR$(h\to \gamma\gamma)$ and BR$(H\to hh)$ in the three scans.}} \label{fig5} \end{figure} One interesting observation is that the kinematics of photons from the process $gg \to H \to h h \to 4 \gamma$ and that of $gg \to H \to A A \to 4 \gamma$ are similar when $m_h = m_A$, which can be attributed to the fact that, although $h$ and $A$ have different parity, the differential cross sections of these two processes are both proportional to $(k_1 \cdot k_2)^2 ( k_3 \cdot k_4)^2$ (plus permutations) after the sum over photon polarisations (the $k_i$'s, for $i=1,2,3,4$, are the photon four-momenta). The details are demonstrated in the Appendix. \begin{figure}[htbp] \centering \subfigure{ \label{Fig1.sub.1}\thesubfigure \includegraphics[width=0.44\textwidth]{m3h.pdf}} \subfigure{ \label{Fig1.sub.2}\thesubfigure \includegraphics[width=0.44\textwidth]{m3a.pdf}} \subfigure{ \label{Fig1.sub.3}\thesubfigure \includegraphics[width=0.44\textwidth]{m23h.pdf}} \subfigure{ \label{Fig1.sub.4}\thesubfigure \includegraphics[width=0.44\textwidth]{m23a.pdf}} \caption{Distributions at detector level: (a) $m_{3\gamma}$ for $gg\to H\to hh \to 4\gamma$, (b) $m_{3\gamma}$ for $gg\to H\to AA \to 4\gamma$, (c) $m_{23}$ for $gg\to H\to hh \to 4\gamma$ and (d) $m_{23}$ for $gg\to H\to AA \to 4\gamma$.}\label{kin1} \end{figure} In Fig.~\ref{kin1}, we expose the similarity between the two processes by showing some kinematic spectra of $gg \to H \to h h \to 4 \gamma$ and $gg \to H \to A A \to 4 \gamma$. In particular, we present the $m_{3 \gamma}$ spectrum (the invariant mass of the three leading $P_t$-ordered photons) as well as the $m_{23}$ spectrum (the invariant mass of the 2nd and 3rd $P_t$-ordered photons).
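The ATLAS selection listed earlier ($n_\gamma\geq 3$, the 22/22/17 GeV $P_t$ thresholds, the $|\eta|$ acceptance with the endcap-crack veto and the $\Delta R>0.4$ separation) can be expressed as a simple event filter. The following Python sketch is our own illustration (the tuple-based photon representation and function names are ours, not the actual analysis code):

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Cone separation between two photons, with phi wrapped into [0, pi]."""
    dphi = abs(phi1 - phi2)
    if dphi > math.pi:
        dphi = 2 * math.pi - dphi
    return math.hypot(eta1 - eta2, dphi)

def passes_selection(photons):
    """Apply the three-photon selection cuts to a list of (pt, eta, phi) tuples."""
    # acceptance: |eta| < 2.37, excluding the endcap crack 1.37 < |eta| < 1.52
    acc = [p for p in photons
           if abs(p[1]) < 2.37 and not (1.37 < abs(p[1]) < 1.52)]
    acc.sort(key=lambda p: p[0], reverse=True)  # pt-ordered
    if len(acc) < 3:
        return False
    # pt thresholds: two leading photons above 22 GeV, third above 17 GeV
    if not (acc[0][0] > 22 and acc[1][0] > 22 and acc[2][0] > 17):
        return False
    # pairwise isolation: Delta R > 0.4 for every photon pair
    for i in range(len(acc)):
        for j in range(i + 1, len(acc)):
            if delta_r(acc[i][1], acc[i][2], acc[j][1], acc[j][2]) <= 0.4:
                return False
    return True
```

An event with a leading photon in the $1.37<|\eta|<1.52$ crack, for instance, drops below three accepted photons and fails the filter.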
Obviously, these spectra show no significant difference between the two processes $gg \to H \to h h \to 4 \gamma$ and $gg \to H \to A A \to 4 \gamma$, except for statistical fluctuations of the numerical simulation. Therefore, the experimental methods and results of multi-photon data from $gg \to H \to AA \to 4 \gamma$ can also be applied to $gg \to H \to h h \to 4 \gamma$. In order to establish the LHC sensitivity to our signal process, we determine the scaling factors, for both signal and backgrounds, necessary to map our own MC simulations onto the real data results of ATLAS. In doing so, we use the Leading Order (LO) cross section to determine such factors. Therefore the latter should encode the $K$-factors due to higher order corrections, the difference between the real detector response and the fast detector simulation, the mistagging rates of a jet as a photon and of an electron as a photon, etc. In fact, when the rejection rate of a jet faking a photon is considered \cite{atlas1}, the fake rate is around $10^{-3}$, so for the process $\gamma \gamma +j$ the scaling factor is dominantly determined by this fake rate, while for the process $\gamma + jj$ we expect a fake rate of around $10^{-6}$. The scaling factors thus demonstrate that a significant part of the background contribution is indeed due to fake rates. \begin{figure}[htbp] \centering \subfigure{ \label{Fig3.sub.1}\thesubfigure \includegraphics[width=0.44\textwidth]{m23simexp.pdf}} \subfigure{ \label{Fig3.sub.2}\thesubfigure \includegraphics[width=0.44\textwidth]{ma3simexp.pdf}} \caption{Comparison of the simulated (a) $m_{23}$ and (b) $m_{3 \gamma}$ spectra with the experimental ones.}\label{simexp} \end{figure} The scaling factors for each process are listed in Tab. \ref{scalingfactor}, as mentioned, all being determined from the aforementioned ATLAS results at 8 TeV. We also compare the experimental line-shapes with those from our MC events, which are plotted in Fig. \ref{simexp}.
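The normalisation arithmetic behind Tabs.~\ref{scalingfactor} and \ref{sen14} can be reproduced directly from the quoted entries. This is a minimal Python sketch of our own (small differences with respect to the quoted values reflect only the rounding of the table entries):

```python
def scaling_factor(n_experimental, n_expected):
    """Ratio mapping the fast-simulation event yield onto the ATLAS one."""
    return n_experimental / n_expected

def boost_factor(sigma_14, lumi_14, sigma_8, lumi_8):
    """Sensitivity gain from 8 to 14 TeV, as defined in the caption of Tab. V."""
    return (sigma_14 * lumi_14) / (sigma_8 * lumi_8)

# entries quoted in the tables (3-gamma background, signal and background rates)
sf_3gamma = scaling_factor(340, 2.65e2)             # ~1.28
bf_signal = boost_factor(42.0, 300, 19.3, 20.3)     # ~32.2
bf_background = boost_factor(117, 300, 67.5, 20.3)  # ~25.6 (quoted as 25.7)
```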
Although the spectra from MC are slightly harder and noticeable differences appear in the bins with $m_{3\gamma}<50$ GeV and $m_{23}<50$ GeV, the total number of predicted events is close to the experimental one. By assuming the same scaling factors, we examine the boost in LHC sensitivity for an increased collision energy of $\sqrt{s}=14$ TeV. The ensuing cross sections of the signal and background processes are given in Tab. \ref{sen14}. Since the signal production process $gg\to H$ has a larger boost factor when the collision energy increases from 8 TeV to 14 TeV, due to the larger enhancement of the gluon flux as compared to the more varied background composition, it is natural to expect a better sensitivity for the future LHC runs, as readily seen in the table. For example, when the integrated luminosity of the LHC is assumed to be 300/fb, the boost factor in cross section (which is defined in the caption) is found to be 32.2 for the signal and 25.7 for the background. This effect is then reflected in the projected sensitivities shown in Fig. \ref{fig4} for the LHC with $\sqrt{s}=14$ TeV and 300/fb of luminosity (blue lines). \begin{center} \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline process & $\sigma$ with $\sqrt{s}= 8$ TeV & N.o.E (Theory) & Acc. Eff.
& N.o.E (Expected) & N.o.E (Experimental) & Scaling factor \\ \hline $3\gamma$ & $72.5$ fb & $1.47 \times 10^3$ & $18\%$ & $2.65 \times 10^2$ & $340\pm110$ & $1.28$ \\ \hline $2\gamma$ & $109$ pb & $2.21 \times 10^6$ & $0.4\%$ & $8.8 \times 10^3$ & $330\pm50$ & $3.7 \times 10^{-2}$ \\ \hline $2\gamma+j$ & $58.3$ pb & $1.18 \times 10^6$ & $19\%$ & $2.24 \times 10^5$ & $350\pm50$ & $1.56 \times 10^{-3}$ \\ \hline $\gamma+2j$ & $4.39 \times 10^4$ pb & $8.91 \times 10^8$ & $15\%$ & $1.33 \times 10^8$ & $110\pm40$ & $8.31 \times 10^{-7}$ \\ \hline $\gamma e^+e^-$ & $5.91$ pb & $1.2 \times 10^5$ & $23.5\%$ & $2.8 \times 10^4$ & $89\pm11$ & $3.2 \times 10^{-3}$ \\ \hline $2\gamma e^+e^-$ & $30$ fb & $6.1 \times 10^2$ & $34\%$ & $2.07 \times 10^2$ & $85\pm22$ & $0.41$ \\ \hline $\gamma W +X$ & $24.4$ pb & $4.95 \times 10^5$ & $2.9\%$ & $1.43 \times 10^4$ & $11.4\pm1.5$ & $0.8 \times 10^{-3}$ \\ \hline \end{tabular} \end{center} \caption{\label{scalingfactor} The scaling factors for each SM background process are shown, where N.o.E denotes the ``Number of Events'' at 8 TeV when the integrated luminosity is taken as 20.3/fb and where Acc. Eff.
denotes the ``Acceptance Efficiency'', which is determined by the selection cuts of the mentioned ATLAS analysis.} \end{table} \end{center} \begin{center} \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|} \hline & $\sigma$ with $\sqrt{s}= 8$ TeV & $\sigma$ with $\sqrt{s}= 14$ TeV & Boost factor \\ \hline Signal & 19.3 $\times \beta$ pb & 42.0 $\times \beta$ pb & 32.2 \\ \hline Background & 67.5 fb & 117 fb & 25.7 \\ \hline \end{tabular} \end{center} \caption{\label{sen14} The projected LHC sensitivity at 14 TeV is given, where $\beta={\rm BR}(H\to hh ) {\rm BR}^2(h\to \gamma \gamma)$, expressed in terms of the boost factor, defined as $\frac{\sigma(\sqrt{s}=\textrm{14 TeV}) \times L_{\textrm{14 TeV}}}{\sigma(\sqrt{s}=\textrm{8 TeV}) \times L_{\textrm{8 TeV}}}$, where $L_{\textrm{14 TeV}}$ is assumed to be 300/fb and $L_{\textrm{8 TeV}}$ is taken as 20.3/fb. } \end{table} \end{center} \section*{Conclusions} In this paper, we built upon previous results obtained by some of us, which had extracted a region of parameter space of the 2HDM-I where very light $h$ and $A$ states, down to 15--20 GeV or so, can be found, when the $H$ state is assumed to be the SM-like one discovered at the LHC in 2012. This spectrum is well compatible with all standard theoretical constraints (unitarity, vacuum stability, etc.) and all available experimental data (including flavour as well as Higgs data) and thus offers the possibility of testing Higgs cascade decays of the type $H\to hh$ and $H\to AA$, compatibly with the total $H$ width extracted by global fits to the 125 GeV Higgs data. Amongst the possible decays of the $h$ and $A$ states, we concentrated here upon those yielding di-photons, the overall signature then being a $4\gamma$ one, primarily induced by $gg\to H$ production. We do so as the 2HDM-I can develop, over the aforementioned region of parameter space, a (nearly) fermiophobic limit, so that $h$ and $A$ decays into fermions (chiefly, $b\bar b$ and $\tau^+\tau^-$) are negligible.
In fact, the availability of an ATLAS analysis performed on Run 1 samples of the LHC looking for these specific multi-photon signals allowed us, on the one hand, to validate our MC tools against the full detector environment of a multi-purpose LHC experiment and, on the other hand, to project our findings into the future by extrapolating our results to a collider energy of 14 TeV and luminosity of 300/fb. This exercise revealed that the portion of 2HDM-I parameter space where the above phenomenology is realised, while being just below the current LHC sensitivity, is readily accessible at future stages of the LHC. To confirm or disprove its existence is of paramount importance as this would almost unambiguously point to a specific realisation of a generic 2HDM construct, as such light and fermiophobic $h$ and $A$ states cannot be realised in alternative formulations of it. \section*{Acknowledgements} AA, RB and SM are supported by the grant H2020-MSCA-RISE-2014 no. 645722 (NonMinimalHiggs). This work is also supported by the Moroccan Ministry of Higher Education and Scientific Research MESRSFC and CNRST: Projet PPR/2015/6. SM is supported in part through the NExT Institute. RB is supported in part by the Chinese Academy of Sciences (CAS) President's International Fellowship Initiative (PIFI) program (Grant No. 2017VMB0021). Q.S. Yan and X.H. Zhang are supported by the Natural Science Foundation of China under the grant no. 11575005.
\section{Introduction} The nitrogen-vacancy (NV) center in diamond is currently one of the most studied and anticipated platforms for high spatial-resolution sensing of magnetic fields \cite{Balasubramanian2008,Rondin2014,Schirhagl2014}, electric fields \cite{Dolde2011} and temperature \cite{Kucsko2013,Neumann2013} at ambient conditions. Several novel applications using diamond sensors are currently being developed in the fields of neuroscience \cite{Hall2012,Barry2016}, cellular biology \cite{Steinert2013,Glenn2015}, nanoscale magnetic resonance microscopy \cite{Lovchinsky2016}, paleomagnetism \cite{Fu2014}, and microelectronics \cite{Kolkowitz2015,Jakobi2016}. Many magnetometer schemes using NV centers are based on recording the change in the detected fluorescence level upon a shift of the electron spin precession frequency due to a change of an external magnetic field \cite{Clevenson2015a,Glenn2015,Barry2016,Sepehr2017}. The fluorescence contribution of an ensemble of NV centers to the signal increases the optically detected magnetic resonance (ODMR) amplitude and hence boosts the sensitivity by $\sqrt{N}$, where $N$ is the number of NV centers \cite{Taylor2008}. However, the high refractive index of diamond (\mbox{$\sim$ 2.4}) together with the near-uniform emission of NV center ensembles traps most of the generated fluorescence due to total internal reflection. This limits the collection efficiency and, thus, the smallest detectable magnetic field change. To increase the fluorescence collection from a diamond, several techniques have been demonstrated such as fabricating a solid immersion lens \cite{Hadden2010}, side-collection detection \cite{LeSage2012}, employing a silver mirror \cite{Israelsen2014}, using a dielectric optical antenna \cite{Riedel2014}, emission into fabricated nanopillar waveguides \cite{Momenzadeh2015} and employing a parabolic lens \cite{Wolf2015}.
Alternatively, magnetic fields can also be sensed by observing the change in the shelving-state infrared absorption \cite{Jensen2014}, or the change in fluorescence when transitioning through the ground state level anti-crossing of the NV center \cite{Wickenbrock2016}. In this article, we report on a new measurement technique for NV ensemble magnetometry, which is based on monitoring the spin-dependent absorption of the pump field. Using the absorption detected magnetic resonance (ADMR) measurement technique in conjunction with a cavity resonant with the pump field, we fully circumvent challenges associated with inefficient collection of fluorescence, by detecting the absorption through the transmitted cavity mode. We demonstrate a NV ensemble magnetometer for low-frequency magnetic field sensing with a measured noise floor of \mbox{$\sim$ 100 nT/$\sqrt{\textrm{Hz}}$} spanning a bandwidth up to \mbox{125 Hz}. Intriguingly, using the reflection of an impedance-matched cavity and a diamond crystal with an optimized NV concentration, we project an estimated sensitivity of \mbox{$\sim$ 1 pT/$\sqrt{\textrm{Hz}}$}. \section{Absorption detected magnetic resonance} The electronic level structure of the NV defect is summarized in \mbox{Fig. \ref{Cavity}(a)}. It consists of a $^3$A$_2$ spin-triplet ground state, a $^3$E spin-triplet excited state, and a \mbox{$^1$A$_1$ $\leftrightarrow$ $^1$E} shelving state. Pumping with a \mbox{532 nm} laser results in an excitation above the zero phonon line, which decays on a picosecond timescale \cite{Huxter2013} to the $^3$E excited states by non-radiative transitions. Moreover, there exists a non-radiative decay path through the shelving state which is more probable for \mbox{$m_s=\pm 1$} of the excited state $|$4$\rangle$. Continuous optical pumping depopulates the \mbox{$m_s=\pm 1$} spin sublevel and accumulates the population in \mbox{$m_s=0$}. 
The zero-field splitting of the ground state levels $|$1$\rangle$ and $|$2$\rangle$ is \mbox{$\sim$ 2.87 GHz} at room temperature, making the transition between these levels accessible using microwave (MW) fields. The presence of a local magnetic field lifts the degeneracy of \mbox{$m_s=\pm$1} with a splitting proportional to 2$\gamma_e B_{\text{NV}}$, where \mbox{$\gamma_e$ = 2.8 GHz/T} is the gyromagnetic ratio of the electron spin and $B_{\text{NV}}$ corresponds to the magnetic field projection along the NV symmetry axis. A change in the external magnetic field hence results in a detectable shift in the electron spin resonance frequency of the ODMR or the ADMR spectrum, respectively. The continuous-wave sensitivity of the spin resonances to small changes of an external magnetic field is proportional to $\max[\frac{d}{d\omega}S]^{-1}$, where $\frac{d}{d\omega}$ is the derivative with respect to the MW frequency $\omega/2\pi$ of the ADMR signal $S$. Using a cavity around the diamond host crystal, a change in $S$ can be detected by a measurement of the remaining pump light either transmitted through or reflected off the cavity. Intriguingly, by appropriately tailoring the impedance of the cavity it is possible to obtain a unity contrast in the reflected light power, which in turn may lead to a sensitivity in the \mbox{pT/$\sqrt{\textrm{Hz}}$} range. \begin{figure} \includegraphics[scale = 0.31]{Fig1} \caption{(a) Summary of the NV center energy levels and transitions between them. Green laser light excites the NV center with a rate $\Gamma_p$ to a quasi-continuous vibronic state which decays quickly to the optical excited states. The decay between two states is shown by $k_{ab}$, and $\Omega$ corresponds to the Rabi frequency of the MW drive. Presence of a magnetic field lifts the degeneracy of \mbox{$m_s=\pm 1$} proportional to 2$\gamma_e B_{\text{NV}}$. Non-radiative transitions are shown by dashed arrows.
(b) Schematic of our experimental setup to perform ADMR measurements through the cavity transmission (see the main text and the supplemental material for further experimental details).} \label{Cavity} \end{figure} \section{Experiment} We use the native $^{14}$NV$^-$ concentration of an off-the-shelf single-crystal diamond grown by chemical vapor deposition. A schematic of the experimental setup is shown in \mbox{Fig. \ref{Cavity}(b)}. The optical cavity consists of two concave mirrors with a \mbox{10 cm} radius of curvature set in a confocal configuration, resulting in a minimum beam waist of \mbox{92 $\mu$m} with a Rayleigh length of \mbox{$\sim$ 50 mm}. The mirrors have measured reflectivities of \mbox{$R_1$ = 94.8$\%$ $\pm$ 0.1$\%$} and \mbox{$R_2$ = 99.8$\%$ $\pm$ 0.1$\%$} at the pump wavelength of \mbox{532 nm}. With the diamond rotated to its Brewster angle \mbox{($\theta\simeq$ 67$^{\circ}$)}, the round-trip beam path in the diamond is \mbox{$l=$ 2 $\times$ 1.3 mm} and the estimated excitation volume is \mbox{$\sim$ 3.5 $\times$ 10$^{-2}$ mm$^3$}, accounting for the standing wave and the transverse beam profile. The finesse of a cavity is defined by \mbox{$F=\pi\sqrt{\rho}/(1-\rho)$}, where \mbox{$\rho=\sqrt{R_1R_2e^{-\alpha}}$} corresponds to the cumulative round-trip loss product and $\alpha$ is the propagation loss coefficient. In the absence of the diamond, the finesse solely depends on the product of the mirror reflectivities $R_1R_2$ and is calculated as \mbox{$F = 113.4 \pm 4.4$}, which is confirmed by the measured finesse of \mbox{$F = 114 \pm 0.1$}. Incorporating the diamond into the cavity reduces the finesse to \mbox{$F = 45.1 \pm 0.1$}, which indicates that all the effective loss in the loaded cavity can be attributed solely to losses occurring through the diamond. The corresponding cumulative round-trip loss of the loaded cavity shows that the cavity is slightly under-coupled.
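The finesse values quoted above follow directly from the stated formula; a minimal numerical check, using only the quoted reflectivities and loss coefficients:

```python
import math

def finesse(rho):
    # F = pi * sqrt(rho) / (1 - rho), with rho = sqrt(R1 * R2 * exp(-alpha))
    return math.pi * math.sqrt(rho) / (1.0 - rho)

R1, R2 = 0.948, 0.998
F_empty = finesse(math.sqrt(R1 * R2))              # no diamond: alpha = 0
alpha = 0.0301 * 2 * 1.3 + 0.006                   # alpha_abs * l + alpha_r (quoted values)
F_loaded = finesse(math.sqrt(R1 * R2 * math.exp(-alpha)))
# F_empty ~ 113.4 and F_loaded ~ 45, matching the values in the text
```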
The propagation loss can be decomposed as \mbox{$\alpha=\alpha_{abs}l+\alpha_r$}, in which $\alpha_{abs}$ is the absorption loss coefficient and $\alpha_{r}$ is attributed to all other loss channels such as surface-based absorption, scattering losses, and birefringence losses. The fraction of intra-cavity power reflected from the diamond was measured as \mbox{$\sim$ 0.006}, of which approximately 80$\%$ was $s$-polarized light. This translates to an absorption loss coefficient of \mbox{$\alpha_{abs}\sim$ 0.0301 mm$^{-1}$,} taking \mbox{$\alpha_{r}\sim$ 0.006}. With an independent measurement using a confocal microscope, we determined the NV$^-$ concentration, [NV$^-$], to be \mbox{$\sim$ 2.9 $\times$ 10$^{10}$ mm$^{-3}$} (\mbox{$\sim$ 0.16 ppb}) corresponding to \mbox{$\sim$ 10$^{9}$ NV$^-$} centers within the excitation volume. Considering the absorption cross section of a single $^{14}$NV$^-$ at \mbox{532 nm} (\mbox{$\sigma_{\text{NV}}=$ 3.1 $\times$ 10$^{-15}$ mm$^2$} \cite{Wee2007}), an NV-related absorption loss coefficient of \mbox{$\alpha^{\text{NV}}_{abs}\sim$ 9 $\times$ 10$^{-5}$ mm$^{-1}$} is obtained. Hence, in our diamond sample most of the propagation loss is attributed to non-NV loss channels. Using the NV absorption loss coefficient, we estimate the ratio between the excitation rate and the intra-cavity power \mbox{$\epsilon=\Gamma_p /P_{cav}\sim$ 75 kHz/W}, where the intra-cavity and incident powers are linked through \mbox{$ P_{cav} = P_{in}(1-R_1)/|1-\sqrt{R_1R_2e^{-\alpha}}|^2$}. \begin{figure} \includegraphics[scale = 0.31]{Fig2} \caption{Measured frequency-modulated ADMR spectrum using (a) single-frequency excitation and (b) three-frequency excitation. SR1 and SR4 correspond to the electron spin resonances of a single crystallographic orientation of NVs, while SR2 and SR3 correspond to the electron spin resonances of the other three crystallographic orientations.
The purple dot in (b) indicates the point that is most sensitive to small changes in the magnetic field.} \label{ODMR} \end{figure} \subsection{Spectrum} We performed ADMR measurements by recording the remaining pump-light transmitted through the diamond-loaded cavity while sweeping the MW drive frequency across the spin resonance. To reduce the technical noise level in our measurement, we tapped off some laser light before the cavity, recorded it with a second photodetector and subtracted the two photocurrents, as indicated in \mbox{Fig. \ref{Cavity}(b)}. In order to remove low-frequency technical noise, we applied lock-in detection with a frequency-modulated MW drive, directly yielding $S^{P_t}_{LI}$ at the output, where $P_t$ indicates the transmitted power through the cavity and $LI$ refers to lock-in (further experimental details can be found in the supplemental material). A typical frequency-modulated ADMR spectrum is presented in \mbox{Fig. \ref{ODMR}(a)}. In these measurements, a static magnetic field was aligned along the [111] axis, resulting in the outermost electron spin resonances (SR1,SR4), while the inner peaks (SR2,SR3) correspond to the electron spin resonances of the other three crystallographic orientations. The three-peak feature of the ADMR spectrum in \mbox{Fig. \ref{ODMR}(a)} is a consequence of the hyperfine interaction between the NV electron spin and the intrinsic $^{14}$N nuclear spin with a coupling constant of \mbox{$A_{||}=2.16$ MHz} \cite{Smeltzer2009}. To enhance $\max[\frac{d}{d\omega}S^{P_t}_{LI}]$, we excited all three $^{14}$N hyperfine transitions simultaneously by mixing the modulation frequency $f_c$ with a \mbox{$f_m$ = $A_{||}$} signal. The three-frequency excitation results in five peaks for each electron spin resonance, as shown by the measured spectrum in \mbox{Fig. \ref{ODMR}(b)}.
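The five-peak structure follows from simple counting of the tone/hyperfine offsets; a short sketch (frequencies in MHz):

```python
from collections import Counter

A_par = 2.16  # MHz, 14N hyperfine coupling A_parallel
# m_l: the three hyperfine components; m_x: the three simultaneous drive tones
offsets = Counter(round((m_l + m_x) * A_par, 2)
                  for m_l in (-1, 0, 1) for m_x in (-1, 0, 1))
# Five distinct offsets {-2, -1, 0, 1, 2} x A_par; the central transition is
# addressed by three tone/component combinations, hence the tallest peak.
```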
\begin{figure} \includegraphics[scale = 0.3]{Fig3} \caption{(a) Simulated and measured frequency-modulated ADMR spectra using three-frequency excitation. The used parameters are \mbox{$P_{in}$ = 0.4 W}, \mbox{$\Omega$ = 0.3 MHz}, \mbox{$R_1$ = 94.8 $\%$}, \mbox{$R_2$ = 99.8 $\%$}, \mbox{$\alpha^0_{abs}$ = 0.0781}, \mbox{$\alpha_r$ = 0.006}, \mbox{[NV$^-$] = 0.16 ppb}, \mbox{$l$ = 2$\times$1.3 mm}, \mbox{$\epsilon$ = 75 kHz/W}, \mbox{$\gamma^*_2$ = 1/3 MHz}, \mbox{$\gamma_1$ = 0.182 kHz}, and \mbox{$GV_0$ = 65 $\times$ 10$^6$ V}. (b) Measured and (c) simulated slopes of three-excitation, frequency-modulated ADMR spectra at \mbox{$\Delta=0$} as a function of $P_{in}$ and $\Omega$. The maximum measured slope in (b) is obtained for \mbox{$P_{in}$ = 0.4 W} and \mbox{$\Omega\sim$ 0.3 MHz}.} \label{map} \end{figure} \begin{figure*} \includegraphics[scale = 0.3]{Fig4} \caption{(a) Measurements of the magnetic noise spectral density: when the MW drive is set on the maximum slope of the frequency modulated ADMR (corresponding to the purple dot in \mbox{Fig. \ref{ODMR}(b)} - magnetically sensitive), when the MW drive is far from any spin resonance (magnetically insensitive), when we do not cancel out the correlated laser noise (blocked the reference detector), and the noise floor of the lock-in and blocked detectors for the same gain setting. (b) Measurements of the Allan deviation of magnetic noise of the traces in (a). The drop with the slope of -1/2 identifies the white noise in the system. For the magnetically sensitive trace, there is a minimum at \mbox{$\sim$ 3.3 s} which increases at higher averaging time due to thermal or mechanical drift in the system. 
The Allan deviation was calculated using the overlapping method.} \label{Pulsed} \end{figure*} \subsection{Model} An ADMR spectrum $S_{LI}$ may be obtained either by recording the pump beam reflected from the cavity, $S^{P_r}_{LI}$, or transmitted through the cavity, $S^{P_t}_{LI}$, as a function of the applied MW frequency, and may be modeled using a set of optical Bloch equations considering the five electronic levels and the transitions summarized in \mbox{Fig. \ref{Cavity}(a)} \cite{Haitham2017}. The steady-state level populations $\rho^{ss}$ are then obtained as a function of Rabi frequency $\Omega$, optical excitation rate $\Gamma_p$, and MW detuning $\Delta$ from the spin \mbox{$m_s=0 \leftrightarrow m_s=\pm 1$} transition. The cavity reflection or transmission itself is a function of loss inside the cavity which is dominated by the absorption in diamond, while the NV absorption in diamond depends on the NV ensemble ground state spin population. Applying a resonant MW field (\mbox{$\Delta=0$}) increases the population in the shelving state $|5\rangle$, which possesses a longer lifetime (\mbox{$ > 150$ ns} \cite{Robledo2011a, Acosta2010a}) than the $^3$E excited states, and hence, a lower average population remains in the ground states $|1\rangle$ and $|2\rangle$ to absorb the pump photons. Ultimately, the resonant MW field decreases the optical loss inside the cavity which can be monitored through the light transmitted or reflected from the cavity. The steady-state population of the optical ground state can be written as: \begin{equation} \rho^{ss}_g(\Omega,\Gamma_p,\Delta) = \rho^{ss}_{11} + \rho^{ss}_{22}, \end{equation} where $\rho^{ss}_{11}$ and $\rho^{ss}_{22}$ are the steady-state population of $|$1$\rangle$ and $|$2$\rangle$, respectively. 
As the absorption of a NV ensemble directly depends on $\rho^{ss}_g$, a change in the propagation loss as a function of [NV$^-$] can be described as: \begin{equation} \alpha(\Omega,\Gamma_p,\Delta,[\text{NV}^-])=\alpha^0_{abs}+[\text{NV}^-]\sigma_{\text{NV}}l\rho^{ss}_g+\alpha_r, \end{equation} where $\alpha^0_{abs}$ is the loss coefficient attributed to non-NV absorption. As pump absorption in our sample is dominated by non-NV related processes, the absorption-based spin contrast $C_{\text{ADMR}}$ related to the fraction $\alpha^{\text{NV}}_{abs}/\alpha$ is on the order of $10^{-6}$ when monitoring the absorption through the cavity transmission. The steady-state cavity outputs as a function of MW detuning are then reformulated in terms of transmitted and reflected powers: \begin{equation} \frac{P_t}{P_{in}} = \frac{T_1T_2e^{-\alpha(\Omega,\Gamma_p,\Delta,[\text{NV}^-])}}{|1-\sqrt{R_1R_2e^{-\alpha(\Omega,\Gamma_p,\Delta,[\text{NV}^-])}}|^2}, \label{Ptra} \end{equation} \begin{equation} \frac{P_r}{P_{in}} = \frac{(R_1-\sqrt{R_1R_2e^{-\alpha(\Omega,\Gamma_p,\Delta,[\text{NV}^-])}})^2}{R_1|1-\sqrt{R_1R_2e^{-\alpha(\Omega,\Gamma_p,\Delta,[\text{NV}^-])}}|^2}, \label{Pref} \end{equation} where $P_{in}$ is the laser input power to the cavity, $T_1$ and $T_2$ are the transmissions of the first and the second mirror, respectively, and we assume \mbox{$R_i+T_i=1$}. For the sake of simplicity, the intra-cavity excitation rate (\mbox{$\Gamma_p=\epsilon P_{cav}$}) is calculated in terms of the input power and the propagation loss when no MW field is applied (\mbox{$\Omega = 0$}, \mbox{$\Delta\rho^{ss}_g$ = 1}). 
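Eqs.~(\ref{Ptra}) and~(\ref{Pref}) can be evaluated directly; the sketch below (with illustrative parameter values) also verifies that the cavity reflection vanishes at the impedance-matched point $R_1=R_2e^{-\alpha}$:

```python
import math

def cavity_outputs(alpha, R1, R2):
    """Transmitted and reflected cavity power fractions for a given
    round-trip propagation loss alpha and mirror reflectivities R1, R2."""
    rho = math.sqrt(R1 * R2 * math.exp(-alpha))
    T1, T2 = 1.0 - R1, 1.0 - R2
    Pt = T1 * T2 * math.exp(-alpha) / (1.0 - rho) ** 2
    Pr = (R1 - rho) ** 2 / (R1 * (1.0 - rho) ** 2)
    return Pt, Pr

alpha, R2 = 0.084, 0.999        # illustrative loss and back-mirror values
R1 = R2 * math.exp(-alpha)      # impedance-matched incoupling mirror
Pt, Pr = cavity_outputs(alpha, R1, R2)
# Pr -> 0 at impedance matching; the rest is absorbed or transmitted
```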
The lock-in signal $S^{P_i}_{LI}$ can be described as a function of detuning between the carrier frequency $f_c$ and the resonance frequency $f_0$ (\mbox{$\Delta=f_c - f_0$}), and the modulation depth $\delta$ through: \begin{equation} \begin{split} S^{P_i}_{LI}(\Delta)=\frac{GV_0}{2} & \sum\limits_{m_x}\sum\limits_{m_l}[P_i(\Delta+\delta+(m_l+m_x)A_{||}) \\ & -P_i(\Delta-\delta+(m_l+m_x)A_{||})], \end{split} \label{Si} \end{equation} where $G$ is the lock-in gain factor, $V_0$ is the off-resonant detected voltage, and $P_i$ is either the reflected or transmitted cavity power. The expression is summed over the $^{14}$N nuclear spin quantum number \mbox{$m_l$ = $\{$-1,0,1$\}$}, and the three frequencies $m_x$ separated by $A_{||}$ in order to account for the simultaneous drive of all three hyperfine transitions. Using \mbox{Eq. (\ref{Si})}, $S^{P_t}_{LI}$ is plotted as a solid line in \mbox{Fig. \ref{map}(a)}, taking \mbox{$P_{in}$ = 0.4 W}, \mbox{$\Omega$ = 0.3 MHz}, a pure dephasing rate of \mbox{$\gamma^*_2$ = 1/3 MHz}, a longitudinal relaxation rate of \mbox{$\gamma_1$ = 0.182 kHz}, and level decay rates $k_{ab}$ extracted from \cite{Robledo2011a}. We also plot the measured ADMR spectrum as a dashed line in \mbox{Fig. \ref{map}(a)}. The ADMR spectrum was recorded with the same $P_{in}$ and $\Omega$ as the simulated spectrum. The match between the simulated and measured traces is very good, with just a small mismatch due to the uncertainty in the estimation of the parameters $\epsilon$, $\gamma^*_2$, and $\alpha^0_{abs}$ in the simulation. 
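Eq.~(\ref{Si}) is straightforward to implement; the sketch below, with a hypothetical Lorentzian standing in for the cavity lineshape $P_i$, reproduces the odd-in-$\Delta$ dispersive character of the lock-in output:

```python
def lockin_signal(P, delta, d, A_par, G=1.0, V0=1.0):
    """Three-frequency, frequency-modulated lock-in output for an
    arbitrary lineshape P(detuning); d is the modulation depth."""
    s = 0.0
    for m_l in (-1, 0, 1):        # 14N nuclear spin projections
        for m_x in (-1, 0, 1):    # the three drive tones separated by A_par
            k = (m_l + m_x) * A_par
            s += P(delta + d + k) - P(delta - d + k)
    return 0.5 * G * V0 * s

# Hypothetical symmetric lineshape, for illustration only
lorentzian = lambda x, w=0.5: 1.0 / (1.0 + (x / w) ** 2)
```

For any even lineshape the signal vanishes at $\Delta=0$ and is antisymmetric around it, which is the dispersive shape seen in Fig.~\ref{map}(a).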
\begin{figure*} \includegraphics[scale = 0.3]{Fig5} \caption{Simulated plots of the shot-noise-limited sensitivity as a function of [NV$^-$], $P_{in}$, and $\Omega$ using the following parameters \mbox{$R_1=R_2e^{-\alpha}$}, \mbox{$R_2$ = 99.9 $\%$}, \mbox{$\alpha^0_{abs}=\alpha^{\text{NV}}_{abs}$}, \mbox{$\alpha_r$ = 0.006}, \mbox{$l$ = 2$\times$1.3 mm}, \mbox{$\epsilon$ = 75 kHz/W}, \mbox{$\gamma^*_2$ = 1/3 MHz}, \mbox{$\gamma_1$ = 0.182 kHz}. (a) and (b) are calculated from transmission through the cavity for \mbox{$\Omega$ = 0.5 MHz} and \mbox{$P_{in}$ = 0.5 W}, respectively. (c) and (d) are calculated from reflection of the cavity for \mbox{$\Omega$ = 0.5 MHz} and \mbox{$P_{in}$ = 0.5 W}, respectively.} \label{Density} \end{figure*} \subsection{Sensitivity} To optimize the magnetic field sensitivity, we measure the dependence of $\frac{d}{d\omega}S^{P_t}_{LI}$ of three-frequency excitation spectra on the pump power and Rabi frequency, $P_{in}$ and $\Omega$, at \mbox{$\Delta=0$}. The results of these measurements are presented in \mbox{Fig. \ref{map}(b)}. The maximum slope is achieved at \mbox{$P_{in}$ = 0.4 W} and \mbox{$\Omega\sim$ 0.3 MHz}, where the optical excitation rate by virtue of the cavity enhancement overcomes the MW power-induced broadening, allowing for a narrowing regime to be reached \cite{Jensen2013}. The simulated slopes are presented in \mbox{Fig. \ref{map}(c)} and obtained using the same parameters as in \mbox{Fig. \ref{map}(a)}. We observe a very good agreement with respect to the overall trend, the slope magnitude, and the location of the slope maximum. For deducing the sensitivity of the magnetometer, we independently measured four time traces of the lock-in signal for \mbox{$P_{in}$ = 0.4 W}. The first trace was measured in the optimal magnetically sensitive configuration, with the MW drive on resonance with a spin transition (\mbox{$\Delta=0$}) corresponding to the purple dot in \mbox{Fig. \ref{ODMR}(b)}.
The second trace was measured in the magnetically insensitive configuration, with the MW drive frequency far-detuned from any spin resonance (\mbox{$\Delta \rightarrow \infty$}). The third trace was measured by blocking the reference detector which monitored the laser output. The last trace was measured with all detectors blocked, which shows the sum of electronic noise from the lock-in detector and photodetectors. The Fourier transforms of these time traces with a frequency resolution of \mbox{0.24 Hz} are presented in \mbox{Fig. \ref{Pulsed}(a)} where the y-axis is displayed in units of sensitivity. It shows a \mbox{125 Hz} bandwidth and a \mbox{12 dB/octave} roll-off that is generated by the low-pass filter of the lock-in detector. The choice of this bandwidth is a consequence of the low ADMR contrast (\mbox{$C_{\text{ADMR}} \sim 10^{-6}$}) measured through the cavity transmission. When the MW drive is off resonance, a noise floor of \mbox{$\sim$ 100 nT/$\sqrt{\textrm{Hz}}$} is achieved. The increased noise floor when we blocked the reference detector and only monitored the transmission through the cavity shows the impact of substantial technical noise at the \mbox{35 kHz} modulation frequency. Next, we calculated the Allan deviation of both magnetically insensitive and magnetically sensitive traces, which allows us to investigate the intrinsic noise in the system. The results are presented in \mbox{Fig \ref{Pulsed}(b)}. The drop of the Allan deviation with a slope of -1/2 in both traces is a signature of white noise. For the magnetically sensitive measurements, the white noise reaches a minimum at \mbox{$\sim$ 3.3 s}. The increase of the Allan deviation at higher averaging time is a sign of thermal or mechanical drift in the system. \section{Outlook} To better understand the context and magnitude of the measured sensitivity, we estimate the shot-noise-limited sensitivity for a single-peak ADMR as a function of [NV$^-$], $P_{in}$, and $\Omega$. 
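The overlapping Allan deviation used for Fig.~\ref{Pulsed}(b) can be sketched as follows; for synthetic white noise it reproduces the expected $\tau^{-1/2}$ drop (illustration only):

```python
import math, random

def oadev(y, m):
    """Overlapping Allan deviation of fractional-frequency samples y
    at averaging factor m (tau = m * sampling interval)."""
    n = len(y)
    avg = [sum(y[i:i + m]) / m for i in range(n - m + 1)]       # overlapping means
    d2 = [(avg[i + m] - avg[i]) ** 2 for i in range(n - 2 * m + 1)]
    return math.sqrt(0.5 * sum(d2) / len(d2))

random.seed(1)
y = [random.gauss(0.0, 1.0) for _ in range(100_000)]            # white noise
slope_ratio = oadev(y, 4) / oadev(y, 64)   # white noise: ~sqrt(64 / 4) = 4
```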
Using the same physical dimensions as in our setup (cavity length and diamond thickness), we assume a diamond host where \mbox{$\alpha^0_{abs}=\alpha^{\text{NV}}_{abs}$} for any [NV$^-$]. In addition, we consider that the reflectivity of the incoupling mirror is such that \mbox{$R_1=R_2e^{-\alpha}$} when \mbox{$\Omega=0$} for a given optical input power, ensuring that the cavity is impedance-matched. The intra-cavity power is thereby always maximized and there is no cavity reflection when no MW field is applied. The shot-noise-limited sensitivity was estimated from the ratio of the shot-noise level to $\max[\frac{d}{d\omega}S]$. The results of this calculation are presented in \mbox{Fig. \ref{Density}} for both transmitted (a,b) and reflected (c,d) powers. We have fixed \mbox{$\Omega=0.5$ MHz} for (a,c) and \mbox{$P_{in}=0.5$ W} for (b,d). By monitoring the transmitted power $P_t$ and optimizing [NV$^-$], $P_{in}$, and $\Omega$, a shot-noise-limited sensitivity in the \mbox{sub-100-pT/$\sqrt{\textrm{Hz}}$} range can be expected. In comparison, by monitoring the reflected power $P_r$, a sensitivity in the \mbox{pT/$\sqrt{\textrm{Hz}}$} range is projected. As the cavity is impedance-matched, applying no MW field results in \mbox{$P_r(\Omega=0)\sim0$}. However, applying $\Omega$ on resonance with a spin transition reduces the loss in the cavity and pushes the cavity into the over-coupled regime. For the case presented in \mbox{Fig. \ref{Density}(d)} with a fixed input power \mbox{$P_{in}=0.5$ W}, the optimal sensitivity of \mbox{$\sim$ 1 pT/$\sqrt{\textrm{Hz}}$} is obtained for \mbox{$\Omega = 0.21$ MHz} and \mbox{[NV$^-$] $\sim$ 70.8 ppb}. At these settings, the cavity finesse is 13.7, the intra-cavity power reaches \mbox{$P_{cav}=5.35$ W} and the maximum reflected power \mbox{$P_r(\Delta=0)$ = 0.15 $\mu$W }. The total reflected power of such an over-coupled cavity contributes to the ADMR signal.
\section{Conclusion} In this article, we report on magnetic field sensing using an ensemble of NV centers based on the variation of a cavity's transmitted pump power due to electron-spin absorption. Frequency-modulated ADMR spectra were measured, which were used to measure the local magnetic noise spectral density with a noise floor of \mbox{100 nT/$\sqrt{\textrm{Hz}}$} spanning a bandwidth up to \mbox{125 Hz}. Our simulations show that a photon shot-noise-limited sensitivity of \mbox{$\sim$ 1 pT/$\sqrt{\textrm{Hz}}$} can be achieved when measuring a cavity's reflected power near the impedance-matched point and using a diamond with an optimized NV density. Cavity-based ADMR is an alternative to its ODMR counterpart, and offers advantages in terms of both detection contrast and device application. With the appropriate cavity design and sample optimization, it is anticipated that the work and technique presented here will provide a solid foundation for NV-based magnetometers. \section{Acknowledgments} We would like to thank Kristian Hagsted Rasmussen for help with the diamond sample preparation. We are also grateful to Jonas Schou Neergaard-Nielsen for fruitful discussions. This work was supported by the Danish Innovation Foundation through the EXMAD project and the Qubiz center, as well as the Danish Research Council through the Sapere Aude project (DIMS).
\section{INTRODUCTION} \label{sec:intro} The kinetic Sunyaev--Zel'dovich (kSZ) effect \citep{kSZ1980} is a powerful probe of the physics of the epoch of reionisation (EoR), as it is sensitive to the patchiness of ionised bubbles \citep[see][and references therein]{Park2013}. Measurements of the power spectrum of the kSZ, however, face two challenges. First, we can only measure the sum of the kSZ power spectra from the EoR and post-EoR, and the latter is larger than the former by at least a factor of two \citep[e.g.,][]{Shaw2012,Park2016}. Second, the kSZ power spectrum is sub-dominant compared to other components including the primary cosmic microwave background (CMB) temperature anisotropy, foreground sources, and the thermal Sunyaev--Zel'dovich effect \citep{George2015spt}, and thus inaccurate modeling of these components results in inaccurate estimation of the kSZ power spectrum. These issues arise because we have no redshift information for the kSZ. Cross-correlating the kSZ with 21 cm fluctuations from neutral hydrogen atoms would allow us to do ``tomography'' of the kSZ as a function of redshift because the frequencies of measured 21 cm lines can be translated into $z$ \citep{Alvarez2006,Adshead2008,Alvarez2015}. This cross-correlation not only helps measurements of the kSZ from the EoR, but also measurements of 21 cm signals, as the latter are contaminated by the Galactic and extragalactic foreground emission and instrumental systematics arising from, e.g., miscalibration of gains, polarisation-to-intensity leakages, etc \citep{Patil2017}, which are not correlated with the CMB data\footnote{There remains a possibility that unresolved radio sources contribute to both the 21 cm and the CMB data, which can then correlate.}. In this paper we use semi-numerical simulations of the EoR \citep{Mesinger2011} to study the cross-correlation between kSZ and 21 cm signals. In particular, we investigate the cross-correlation between {\it squared} kSZ fluctuations and 21 cm signals.
This was considered in the context of cross-correlation with weak lensing by the large-scale structure \citep{dore2004}, as well as with galaxies \citep{Hill2016ksz,Ferraro2016ksz2} in a low-redshift universe. This approach works better because it avoids line-of-sight cancellation of the kSZ-density correlation. The rest of the paper is organised as follows. We describe our simulations in section~\ref{sec:simu}. In section~\ref{sec:res} we first test the fidelity of our simulated kSZ maps by using the kSZ auto power spectrum, and then show that the kSZ-21 cm correlation suffers from line-of-sight cancellation on small angular scales. We then present our results for the kSZ$^2$-21 cm correlations. In section~\ref{sec:sn} we discuss the detectability of the kSZ$^2$-21 cm correlations from the EoR by calculating signal-to-noise ratios of some representative experimental configurations. We conclude in section~\ref{sec:dis}. Throughout this paper we use the best-fitting cosmological parameters from {\it Planck}+WP+highL+BAO data \citep{Planck2013}: cosmological constant $\Omega_{\Lambda}=0.6914$, matter density $\Omega_{M}=0.3086$, baryon matter density $\Omega_{b}=0.0483$, scalar spectral index $n_{s}=0.9611$, matter fluctuation amplitude $\sigma_8=0.8288$, and Hubble constant $H_0=67.77\,{\rm km/s/Mpc}$ ($h=0.6777$).
\section{SIMULATIONS} \label{sec:simu} Temperature anisotropy due to the kSZ effect is given by \citep{kSZ1980}: \begin{equation} \label{ksz_eq} \delta T_{\rm kSZ} (\hat{\gamma})=-{T_0}\int {\rm d} \tau e^{-\tau} \frac{\hat{\gamma} \cdot \pmb{v}}{c}, \end{equation} where $T_0=2.728\,\rm K$ is the CMB temperature at $z=0$, $c$ the speed of light, $\pmb{v}$ the bulk peculiar velocity of ionised gas, $\hat{\gamma}$ a unit vector of the line-of-sight (LOS), and $\tau$ the optical depth of free electron scattering, i.e., \begin{equation} \label{tau_eq} {\rm d} \tau = \sigma_{T} N_{b,0} (1+z)^2 (1+\delta) x_{e} {\rm d} s, \end{equation} where $\sigma_{T}$ is the Thomson scattering cross-section, $N_{b,0}=0.2\,(\Omega_b\,h^2/0.022)~{\rm m^{-3}}$ the average atomic number density at $z=0$, $\delta$ the matter overdensity, $x_{e}$ the ionisation fraction (assuming the same ionisation fraction for HI and HeI and no HeII ionised during the EoR), ${\rm d} s = c/H(z) {\rm d} z$ the differential comoving distance $s$, and $H(z)=H_{0}\sqrt{\Omega_{M}(1+z)^3+\Omega_{\Lambda}}$ the Hubble expansion rate at $z$. As $\tau =\int_{0}^{z} {\rm d} \tau$ is usually much smaller than unity, we shall ignore fluctuations in $e^{-\tau} \approx 1-\tau$. Then the kSZ is determined by a specific ionised momentum field defined by $\pmb{q}\equiv x_{e} (1+\delta) \pmb{v}$. The offset of the 21 cm brightness temperature from the CMB, $\delta T_{\rm 21cm}$, is expressed as \citep{Mesinger2011}: \begin{equation} \label{eq_21cm} \delta T_{\rm 21cm} = \Psi_{\rm 21cm} (1- x_{e}) (1+\delta) \left(\frac{H}{{\rm d}v_{s}/{\rm d}s+H}\right) \left[ 1-\frac{T_{0}(1+z)} {T_{\rm S}} \right], \end{equation} where $\Psi_{\rm 21cm} \approx 27{\rm mK} [(1+z)/10]^{1/2}$, $dv_{s}/ds$ is a gradient of the comoving velocity along the LOS, and $T_{\rm S}$ is the gas spin temperature. 
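To illustrate the magnitude of the signal, a minimal sketch of the brightness-temperature formula above, with the velocity-gradient factor dropped (as done in the text for modes perpendicular to the LOS):

```python
import math

T0 = 2.728  # K, CMB temperature at z = 0

def dT_21cm(z, x_e, delta, T_S=None):
    """21 cm brightness offset in mK, without the velocity-gradient factor.
    T_S=None corresponds to the T_S >> T_CMB limit."""
    psi = 27.0 * math.sqrt((1.0 + z) / 10.0)                  # mK
    spin = 1.0 if T_S is None else 1.0 - T0 * (1.0 + z) / T_S
    return psi * (1.0 - x_e) * (1.0 + delta) * spin

# A fully neutral, mean-density cell at the half-ionisation snapshot
# z = 10.8 gives an amplitude of roughly 29 mK.
amp = dT_21cm(10.8, 0.0, 0.0)
```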
As in this paper we only consider the 21 cm signals perpendicular to the LOS, we ignore the redshift-space distortion term, $dv_{s}/ds$, in the denominator. We use the semi-numerical simulation code 21cmFAST \citep{Mesinger2011} to calculate the ionisation fraction, matter density, and peculiar velocity fields. The simulations start at $z=30$ with a box of comoving $2000\,\rm Mpc$ per side and a grid of $400^3$ cells. This gives a resolution of $5\,\rm Mpc$ per cell. The volume of the box is large enough to encompass the redshift range from $z=20$ to $z=7.5$, where we save outputs. We integrate Eq.~\ref{ksz_eq} in each resolution element through the box to obtain 2-D kSZ maps (see left panel of Fig.~\ref{image}), by using the plane-parallel approximation instead of tracing the actual LOS. 21cmFAST also provides the 21 cm field, as shown in the middle panel of Fig.~\ref{image}. Regarding the ionisation processes, 21cmFAST keeps track of both UV and X-ray radiation, i.e. a cell is ionised if $\zeta_{\rm UV}f_{\rm coll}(R)\ge 1-x_{e,\rm X}$, where $f_{\rm coll}$ is the collapsed fraction inside a sphere with a radius $R$, $\zeta_{\rm UV}$ is the ionising efficiency factor of UV photons, and $x_{e,\rm X}$ is the fraction ionised by X-ray radiation \citep[for details, see ][]{Mesinger2013}. The X-ray emission is due to stellar remnants, e.g. X-ray binaries, whose ionising efficiency factor is related to star formation: $\zeta_{\rm X}=({N_{\rm X}}/10^{56}~{\rm M}_{\odot})({f_{\ast}}/{0.1})$, where $N_{\rm X}$ is the X-ray ($\ge 0.3\,{\rm keV}$) photon number emitted per solar mass during the whole life of stars (in units of ${\rm M}_{\odot}^{-1}$), and $f_{\ast}$ is the fraction of collapsed baryons converted into stars \citep{Mesinger2013}. The values we adopt for these parameters are $\zeta_{\rm UV}=31.5$ and $\zeta_{\rm X}=1$. We generate 20 realisations to obtain good statistics. 
Each realisation gives three independent 2-D kSZ maps along three axes of the snapshot; thus, we have 60 realisations of kSZ maps and the corresponding 21 cm fields. \begin{figure*} \centering \includegraphics[width=0.95\linewidth]{figures/image.eps} \caption{{\bf Left panel:} One realisation of kSZ map ($\delta T_{\rm kSZ}$ in units $\rm \mu K$). The size of the map is approximately $12.7^\circ\times 12.7^\circ$. {\bf Middle panel:} A slice of 21 cm brightness temperature ($\delta T_{\rm 21cm}$ in units $\rm mK$) at $x_{e}=0.5$ ($z=10.8$). The physical linear size of this map is $2000$ comoving Mpc. {\bf Right panel:} Reionisation history ($x_{e}$) as a function of redshift.} \label{image} \end{figure*} The right panel of Fig.~\ref{image} shows the reionisation history of our simulations. The reionisation completes at $z\approx 8$, with half-ionisation at $z=10.8$. Assuming that hydrogen is fully ionised at $z<8$ and helium is singly ionised ($x_{\rm HeII}=1$) at $z>3$ and doubly ionised ($x_{\rm HeIII}=1$) at $z<3$, we obtain an optical depth of $\tau \approx 0.0996$, which is high compared to the latest determination by Planck \citep{Planck2016}. The left panel of Fig.~\ref{image} shows one realisation of the kSZ map, while the middle panel shows a slice of the 21 cm brightness temperature at $z=10.8$ ($x_{e} = 0.5$). The kSZ signal is primarily generated by a long-wavelength peculiar velocity field modulated by small-scale electron density fluctuations \citep{Hu2000}. This is the reason that the kSZ map is dominated visually by long-wavelength modes. 
\section{Angular Power Spectra} \label{sec:res} In the flat-sky approximation, the angular power spectrum is defined by \begin{equation} \langle\tilde{X}^{\ast}(\pmb{l})\tilde{Z}(\pmb{l'})\rangle = C_{X-Z}(l)\delta_{\rm D}(\pmb{l}-\pmb{l'}), \end{equation} where $\delta_{\rm D}$ is the Dirac delta function, $X$ and $Z$ are maps, and $\tilde{X}$ and $\tilde{Z}$ are their Fourier transforms: \begin{equation} \tilde{X}(\pmb{l})=\frac{1}{2\pi}\int d^2\hat{\gamma} X(\hat{\gamma}) e^{-i\pmb{l}\cdot \hat{\gamma}}. \end{equation} If $X=Z$, $C_{X-X}(l)$ is the auto power spectrum of a 2-D map $X$. \subsection{kSZ auto power spectrum} \begin{figure} \centering \includegraphics[width=0.88\linewidth]{figures/ps_cl_auto.eps} \caption{{\bf Top panel:} kSZ auto power spectrum, $\Delta^{2}(l)=l(l+1)C(l)/2\pi$, averaged over 60 realisations (solid red line). The error bars are the errors on the average (i.e., r.m.s. scatter divided by $\sqrt{60}$). The solid (dotted) black line shows the power spectrum of transverse (longitudinal) momentum derived from the 3-D power spectra integrated over redshifts using Limber's approximation. The dashed line shows the primary CMB power spectrum. {\bf Middle panel:} kSZ-21 cm cross power spectra at $x_{e}=0.2$ ($z=13$; dashed blue), $x_{e}=0.5$ ($z=10.8$; dash-dotted cyan), and $x_{e}=0.9$ ($z=9.3$; dotted green). The solid black lines are the corresponding Limber results. The horizontal dashed line shows zero. {\bf Bottom panel:} Same as the middle panel but for kSZ$^2$--21 cm cross correlations.} \label{pscl_all} \end{figure} In the top panel of Fig.~\ref{pscl_all}, we show the kSZ auto power spectrum with the error bars on the average derived from 60 realisations. To check the fidelity of the maps, we also compute the kSZ power spectrum by directly integrating the 3-D power spectra of specific momentum fields over redshifts. 
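On a periodic square map, the flat-sky spectra above can be estimated with FFTs and binned in $|\pmb{l}|$. The Python sketch below is illustrative only (the binning scheme and overall normalisation convention of our actual pipeline may differ):

```python
import numpy as np

def cross_cl(map_x, map_z, width_rad, nbins=20):
    """Binned flat-sky cross power spectrum C_{X-Z}(l).

    map_x, map_z : square (n, n) maps on a periodic grid.
    width_rad    : angular width of the map in radians.
    Returns bin-centre multipoles and the binned spectrum.
    """
    n = map_x.shape[0]
    pix = width_rad / n
    # Continuum-normalised Fourier transforms (pixel-area factor).
    fx = np.fft.fftn(map_x) * pix ** 2
    fz = np.fft.fftn(map_z) * pix ** 2
    cross2d = (np.conj(fx) * fz).real / width_rad ** 2
    lf = 2.0 * np.pi * np.fft.fftfreq(n, d=pix)
    lmap = np.hypot(*np.meshgrid(lf, lf, indexing="ij"))
    edges = np.linspace(lmap[lmap > 0].min(), lmap.max(), nbins + 1)
    idx = np.digitize(lmap.ravel(), edges) - 1
    cl = np.zeros(nbins)
    for i in range(nbins):
        sel = idx == i
        if sel.any():
            cl[i] = cross2d.ravel()[sel].mean()
    return 0.5 * (edges[:-1] + edges[1:]), cl
```

Averaging such estimates over the map realisations gives curves and scatter of the kind shown in Fig.~\ref{pscl_all}.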
The momentum fields consist of two components: the transverse mode whose direction is perpendicular to the wavenumber, $\pmb{k}$, i.e., $\pmb{k}\perp{\pmb{q}_{\perp}}$, and the longitudinal mode, $\pmb{k}\parallel {\pmb{q}_{\parallel}}$. Using Limber's approximation \citep{Limber1953}, we obtain \citep{Park2013,Alvarez2015} \begin{eqnarray} \label{q_trans} C_{\pmb{q}_{\perp}}(l)&=&\left(\frac{\sigma_{T}N_{b,0}T_{0}}{c}\right)^2\int (1+z)^{4} \frac{{\rm d} s}{s^2}~e^{-2\tau} \frac{P_{\pmb{q}_{\perp}}(\frac{l}{s},z)}{2},\\ \label{q_para} C_{\pmb{q}_{||}}(l) &=& \frac1{l^2}\int {\rm d} s~ \Psi_{||}^2 \frac{P_{\delta \delta}(\frac{l}{s},z)}{(l/s)^2}, \end{eqnarray} where in the last line we have used linear theory to relate the longitudinal velocity with the matter density. Here, $P_{\delta \delta}$ is the power spectrum of matter density, $\Psi_{||}\equiv \frac{T_{0}}{c D} {\rm d} (a \dot{D} \frac{{\rm d} \tau}{{\rm d} s} )/{\rm d}s$, $D$ is the growth factor of linear matter density fluctuations, and $\dot{D}$ is the time derivative of $D(z)$. We evaluate these integrals using $P_{\pmb{q}_{\perp}}$ and $P_{\delta \delta}$ measured from the simulations. The factor of $l^{-2}$ in Eq.~\ref{q_para} is a consequence of LOS cancellation; namely, the kSZ is caused by the LOS component of velocities, and the longitudinal velocities are parallel to $\pmb{k}$. As the longitudinal modes change signs along the LOS, short wavelength modes suffer from cancellations \citep{Vishniac1987}. Therefore, the longitudinal modes dominate at large angular scales. We find that the Limber formula agrees well with the kSZ power spectrum measured from the maps at $l\gtrsim 10^3$. However, at much lower multipoles, the power measured from the maps is substantially larger than the Limber formula for the longitudinal mode. This large-scale mismatch originates from boundary effects as the LOS cancellation does not occur at the near/far boundaries of our lightcone (i.e. at $z\sim7.5$ and $z\sim20$). 
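Numerically, once $P_{\pmb{q}_{\perp}}(k,z)$ has been tabulated from the snapshots, the Limber integral of Eq.~\ref{q_trans} reduces to a one-dimensional quadrature along the lightcone. A minimal sketch (the interpolator \texttt{p\_qperp} and the grids are placeholders for the measured quantities):

```python
import numpy as np

def limber_cl_qperp(ell, s, z, tau, p_qperp, prefac):
    """Evaluate Eq. (6) by direct quadrature along the lightcone.

    s, z, tau : comoving distance, redshift and optical depth
        tabulated on a common grid ordered in s; p_qperp(k, z) is an
        interpolator for the 3-D transverse-momentum power spectrum;
        prefac stands for (sigma_T N_{b,0} T_0 / c)^2.
    """
    f = (1.0 + z) ** 4 * np.exp(-2.0 * tau) \
        * p_qperp(ell / s, z) / (2.0 * s ** 2)
    # Trapezoidal rule in comoving distance s.
    return prefac * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(s))
```

The Limber projection evaluates the 3-D spectrum at $k=l/s$ on each slice, which is what couples a given multipole to different comoving scales across redshift.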
\subsection{kSZ-21 cm correlation} The middle panel of Fig.~\ref{pscl_all} shows the cross power spectra of kSZ with 21 cm at $x_{e}=0.2$, 0.5 and 0.9, together with the results from the Limber formula \citep{Alvarez2006}\footnote{We have added a term involving the spin temperature $T_{\rm S}$; when $T_{\rm S} \gg T_{\rm CMB}$, Eq.~\ref{eq_21cm_ksz} reduces to Eq.~17 of \cite{Alvarez2006}.}: \begin{align} \label{eq_21cm_ksz} C_{\rm kSZ-21cm}(l) &= \frac1{l^2} \Psi_{||} \Psi_{\rm 21cm} \kappa \Big\{- x_{e}P_{\delta \delta_{x}}\left(\frac{l}{s},z\right) \nonumber \\ & \qquad{} + (1-x_{e})\left[P_{\delta \delta}\left(\frac{l}{s},z\right)+P_{\delta \delta_{\kappa}}\left(\frac{l}{s},z\right)\right]\Big\}, \end{align} where $\kappa \equiv 1-T_{0}(1+z)/T_{\rm S}$, $P_{\delta \delta_{x}}$ is the cross power spectrum of matter density and ionisation fraction fluctuations, and $P_{\delta \delta_{\kappa}}$ is the cross power spectrum of matter density and $\kappa$ fluctuations. Here, the factor $l^{-2}$ is again due to LOS cancellation, as the correlation is dominated by the longitudinal modes correlated with density fields. The correlation between the transverse modes and density fields involves a three-point correlation of $(\pmb{v}\delta)_\perp\delta$, which vanishes for Gaussian fluctuations. We evaluate Eq.~\ref{eq_21cm_ksz} using all the 3-D power spectra measured from the simulations. We find excellent agreement between the power spectra measured from maps and the Limber results, which indicates that the contribution from the transverse modes is indeed negligible. In the top panel of Fig.~\ref{ps_ksz_vz} we show the redshift evolution of the kSZ-21 cm cross power spectrum at $l=100$ and 500 together with the Limber results. Because the correlations are overestimated at the lightcone boundaries, we crop the evolution to $z=16.7-8.5$. 
Our results are in agreement with \cite{Alvarez2006}, apart from high redshifts where the spin temperature makes the cross-correlation more negative. This happens because $T_{\rm S}$ strongly correlates with the matter density, i.e., $P_{\delta\delta_{\kappa}} \gg P_{\delta\delta}$, so the magnitude of the kSZ-21 cm correlation becomes larger as $z$ increases. \begin{figure} \centering \includegraphics[width=0.95\linewidth]{figures/ps_ksz_vz.eps} \caption{{\bf Top panel:} Redshift evolution of the kSZ-21 cm cross power spectra at $l=100$ (dash-dotted yellow) and 500 (dashed violet). The black lines are the corresponding Limber results. The horizontal dashed line shows zero. {\bf Bottom panel:} Same as the top panel but for kSZ$^2$--21 cm cross correlations of $l=500$, 1000, and 3000.} \label{ps_ksz_vz} \end{figure} \subsection{kSZ$^2$-21 cm correlation} In order to detect the cross-correlation between kSZ and 21 cm signals, we need to overcome the LOS cancellation. One way to achieve this is to cross-correlate {\it squared} kSZ fields with density fields \citep{dore2004}. Then the correlation between kSZ$^2$ and 21 cm signals would persist at small angular scales. To avoid the contamination of boundary effects at large scales, we remove the kSZ signals at $l<100$ before squaring, and only focus on the correlations at $z=16.7-8.5$ and $l>100$. In the bottom panel of Fig.~\ref{pscl_all} we show the kSZ$^2$-21 cm cross power spectra at $x_{e}=0.2$, 0.5 and 0.9 as a function of multipoles with the error bars on the average of 60 realisations, while in the bottom panel of Fig.~\ref{ps_ksz_vz} we show the redshift evolution of the spectra at $l=500$, 1000, and 3000. These spectra evolve rapidly with ionisation fraction. 
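The map-level operation described above (removing the $l<100$ modes, then squaring) can be sketched as follows; this is an illustrative implementation, not necessarily identical to the one used for our measurements:

```python
import numpy as np

def squared_ksz(ksz, width_rad, l_cut=100.0):
    """High-pass filter a kSZ map at l > l_cut and square it.

    Removing the l < l_cut modes before squaring suppresses the
    boundary effects discussed above; the returned map can then be
    cross-correlated with the 21 cm field.
    """
    n = ksz.shape[0]
    lf = 2.0 * np.pi * np.fft.fftfreq(n, d=width_rad / n)
    lmap = np.hypot(*np.meshgrid(lf, lf, indexing="ij"))
    fk = np.fft.fftn(ksz).astype(complex)
    fk[lmap < l_cut] = 0.0       # drop the large-scale modes
    filtered = np.fft.ifftn(fk).real
    return filtered ** 2
```

Squaring converts the product of a long-wavelength velocity mode and small-scale electron fluctuations into power that no longer cancels along the LOS, which is why the kSZ$^2$-21 cm correlation survives at high $l$.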
For example, at $x_{e}=0.2$ ($z=13$) kSZ$^2$ correlates negatively with the 21 cm signal at small scales ($l>10^3$), but no significant correlations are visible at large scales ($l<10^3$), whereas at $x_{e}=0.5$ ($z=10.8$) the kSZ$^2$-21 cm correlation is negative in the entire multipole range. At a later stage of reionisation, when $x_{e}=0.9$ ($z=9.3$), the correlation turns slightly positive. The evolution of kSZ$^2$-21 cm cross power spectra can be understood in terms of the highly non-Gaussian nature of reionisation. When ionisation is low, e.g. at $x_{e}<0.7$ ($z>10$), the kSZ$^2$-21 cm correlation is dominated by rare ionised bubbles (see the left part of the illustration in Fig.~\ref{ksz_car}) that anti-correlate with the 21 cm signal arising from the neutral medium, resulting in negative cross spectra. \begin{figure} \centering \includegraphics[width=0.95\linewidth]{figures/ksz.eps} \caption{Simplified diagram to illustrate the dependence of kSZ$^2$ fluctuations on the reionisation phase, assuming the velocities along the LOS appear in pairs. During the early stages of reionisation (left), when the Universe is only weakly ionised, the ionised bubbles (gray) are more likely to be paired with neutral regions (white) than with other ionised ones, and thus remain pure kSZ signals (marked by red dashed lines). In this case, the kSZ$^2$ fluctuations are dominated by the ionised bubbles. At a later stage of reionisation (right), when the Universe is highly ionised, the situation is reversed and the neutral regions are more likely than the ionised bubbles to remain pure kSZ signals. The kSZ$^2$ fluctuations are thus dominated by the neutral regions.} \label{ksz_car} \end{figure} On the other hand, kSZ$^2$ shows positive correlations with 21 cm at $x_{e}<0.15$ ($z>14$), when the spin temperature dominates the 21 cm signal. This is because $T_{\rm S}$ correlates strongly with the matter density at these epochs. When the Universe is highly ionised, e.g. 
at $x_{e}>0.8$ ($z<9.7$), kSZ$^2$ fluctuations are dominated by the small remains of the neutral medium (see the right part of illustration in Fig.~\ref{ksz_car}), resulting in positive cross spectra with 21 cm signals. \section{Signal-to-noise Ratio} \label{sec:sn} We calculate the expected signal-to-noise ratios (S/N) of the kSZ$^2$-21 cm correlations. We adopt specifications similar to those of LOFAR \citep{Vrbanec2016} and SKA \citep{Koopmans2015} for 21 cm observations, and those of the current generation of ground-based CMB observatories such as SPT-3G \citep{SPT3G} and Advanced ACT \citep{AdvACT}. We estimate the S/N per multipole bin as \citep{dore2004} \begin{equation} \left( \frac{S}{N} \right)^2 = \frac{f_{\rm sky}(2l+1) l_{\rm bin} C_{\rm kSZ^2, 21cm}^{2}} {C_{\rm CMB^2} (C_{\rm 21cm} + N_{\rm 21cm})+C_{\rm kSZ^2,21cm}^{2}}, \label{eq:sn} \end{equation} where $l_{\rm bin}\approx 0.46l$ ($({\rm log_{10}} l)_{\rm bin}=0.2$) is the bin width at a given $l$, $f_{\rm sky}$ the fraction of sky observed by both CMB and 21 cm experiments, $C_{\rm CMB^2}$ the power spectrum of CMB-squared, and $C_{\rm 21cm}$ and $N_{\rm 21cm}$ are the auto spectrum of the 21 cm signal and its noise, respectively. In the CMB maps we include primary CMB and the thermal SZ effect \citep{Dolag2015}. To this we also add the Poisson and clustered power from dusty star-forming galaxies and the Poisson power from radio galaxies, for which we use the values estimated by the SPT Collaboration \citep{George2015spt}. Finally we add Gaussian, white instrumental noise of $3.4~\mu{\rm K}$~arcmin which corresponds to a noise per pixel of $\sigma_{\rm pix}=2 \,\mu{\rm K}$ with our pixel size of 1.7 arcmin. We then use these maps to calculate $C_{\rm CMB^2}$ in Eq.~\ref{eq:sn}. The auto spectra of 21 cm signals, $C_{\rm 21cm}$, come from our simulations, and the noise power is given by $N_{\rm 21cm}=[(1+z)/9.5]^2\sigma_{\rm pix}^2\theta_{\rm FWHM}^2$ \citep{dore2004}. 
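For reference, Eq.~\ref{eq:sn} translates directly into a short routine (a minimal sketch; all input spectra are assumed to be tabulated at the same multipoles):

```python
import numpy as np

def sn_per_bin(ell, cl_cross, cl_cmb2, cl_21, n_21, f_sky):
    """S/N per multipole bin from Eq. (10).

    The bin width l_bin ~ 0.46 l corresponds to a logarithmic
    binning with Delta log10(l) = 0.2.
    """
    l_bin = 0.46 * ell
    num = f_sky * (2.0 * ell + 1.0) * l_bin * cl_cross ** 2
    den = cl_cmb2 * (cl_21 + n_21) + cl_cross ** 2
    return np.sqrt(num / den)
```

The denominator makes explicit why the raw S/N is small: $C_{\rm CMB^2}$, which includes the primary CMB, foregrounds and instrumental noise, dominates the variance unless it is filtered.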
We adopt $\theta_{\rm FWHM}=3.5\,\rm arcmin$ and $\sigma_{\rm pix}=76\,\rm mK$ at 150~MHz ($z\approx8.5$) for LOFAR \citep{Vrbanec2016}, assuming 600 hours of integration and a bandwidth of 0.5~MHz. SKA will have a superior angular resolution of $\theta_{\rm FWHM}=1\,\rm arcmin$ \citep{Koopmans2015}, and a lower noise level. We adopt $\sigma_{\rm pix}=10\,\rm mK$ at 150~MHz, which corresponds to $\sim$10 hours of integration and 1~MHz of bandwidth. Finally, we assume that both CMB and 21 cm experiments will have an overlapping region of 100~deg$^2$ ($f_{\rm sky}=0.0024$) for SKA, and 25~deg$^2$ for LOFAR. These parameters are listed in Table~\ref{tab1}. Note that both squared fields and 21~cm signals are non-Gaussian, but Eq.~\ref{eq:sn} is valid only for Gaussian fields. Thus, the S/N estimate given here is only approximate. \begin{table} \centering \begin{tabular}{c c c c} \hline\hline Experiment & $\theta_{\rm FWHM}$ (arcmin) & $\sigma_{\rm pix}$ ($\mu\rm K$) & $f_{\rm sky}$\\ \hline CMB & 1.7 & 2 & 0.0024 \\ LOFAR & 3.5 & $76\times10^3$ & 0.0006 \\ SKA & 1.0 & $10\times10^3$ & 0.0024 \\ \hline \end{tabular} \caption{Characteristics of CMB and 21 cm experiments.} \label{tab1} \end{table} We find that the S/N is small ($\ll 1$) for any combination of CMB and 21 cm experiments, mainly because of the large noise from the primary CMB signal. To mitigate this problem, in the next section we apply the commonly adopted Wiener filtering \citep{dore2004}. \subsection{Wiener filtering} To suppress primary CMB ``noise'', we apply the following filter \citep{dore2004, Hill2016ksz, Ferraro2016ksz2} \begin{equation} \label{eq:wf} F(l)=\frac{C_{\rm kSZ}(l)} {C_{\rm kSZ}(l) + C_{\rm pCMB}(l) + C_{\rm fore}(l)}, \end{equation} where $C_{\rm pCMB}$ is the primary CMB power spectrum, and $C_{\rm fore}$ is the sum of the foreground terms including the thermal SZ, dusty star-forming galaxies and radio galaxies. 
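Applying Eq.~\ref{eq:wf} to a map is a multiplication in Fourier space. A minimal Python sketch (the tabulated spectra and grid parameters are placeholders, not our actual inputs):

```python
import numpy as np

def wiener_filter_map(cmb_map, width_rad, cl_ksz, cl_pcmb, cl_fore, ells):
    """Apply the filter of Eq. (11) to a CMB map.

    cl_ksz, cl_pcmb, cl_fore : 1-D spectra tabulated at `ells`.
    The filter F(l) = C_kSZ / (C_kSZ + C_pCMB + C_fore) is
    interpolated onto the 2-D multipole grid of the map.
    """
    n = cmb_map.shape[0]
    lf = 2.0 * np.pi * np.fft.fftfreq(n, d=width_rad / n)
    lmap = np.hypot(*np.meshgrid(lf, lf, indexing="ij"))
    f_l = cl_ksz / (cl_ksz + cl_pcmb + cl_fore)
    f2d = np.interp(lmap, ells, f_l, left=0.0, right=0.0)
    return np.fft.ifftn(np.fft.fftn(cmb_map) * f2d).real
```

By construction $F(l)\to 0$ wherever the primary CMB or the foregrounds dominate, so the filter down-weights exactly the multipoles that were swamping the S/N.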
As Wiener filtering will automatically suppress low $l$ power, we do not crop the CMB fluctuations at $l<100$ before applying the filtering. \begin{figure} \centering \includegraphics[width=0.95\linewidth]{figures/s_n_ratio_vl.eps} \caption{Predicted S/N of the kSZ$^2$-21 cm correlations per multipole bin after Wiener filtering for $x_{e}=0.2$ ($z=13$; dashed blue), $x_{e}=0.5$ ($z=10.8$; dash-dotted cyan), and $x_{e}=0.9$ ($z=9.3$; dotted green). The thick and thin lines refer to the SKA and LOFAR case, respectively. The horizontal line marks S/N $=10$.} \label{sn_ratio_vl} \end{figure} \begin{figure} \centering \includegraphics[width=0.95\linewidth]{figures/s_n_ratio_vz.eps} \caption{Predicted S/N of the kSZ$^2$-21 cm correlations after Wiener filtering as a function of redshift for multipole bins of $l=500$ (dashed violet), 1000 (dotted turquoise), and 3000 (solid purple). The thick and thin lines refer to the SKA and LOFAR case, respectively. The horizontal line shows S/N $=10$.} \label{sn_ratio_vz} \end{figure} In Fig.~\ref{sn_ratio_vl} we show the predicted S/N as a function of multipoles at $x_{e}=0.2$, 0.5 and 0.9 after filtering. We find that the S/N for LOFAR is always below 5. SKA, though, is much more promising, with an S/N per multipole bin $>10$ at $x_e=0.9$ in the range $l=200-1000$, at $x_{e}=0.5$ for $l>200$, and at $x_{e}=0.2$ for $l\sim400$ and $l>2000$. In Fig.~\ref{sn_ratio_vz} we show the S/N as a function of redshift for $l=500$, 1000 and 3000. SKA is expected to have an S/N $>10$ over a wide redshift range, except for two narrow gaps at $z\sim 10.5$ ($x_{e}\sim 0.6$) and $z\sim 13$ ($x_{e}\sim 0.2$), where the correlations are very weak (see the bottom panel of Fig.~\ref{ps_ksz_vz}). \begin{figure} \centering \includegraphics[width=0.95\linewidth]{figures/cum_s_n_ratio_vz.eps} \caption{Cumulative S/N as a function of the maximum multipole. 
The meaning of the lines is the same as that in Fig.~\ref{sn_ratio_vl}.} \label{cum_sn_ratio_vl} \end{figure} To obtain the total S/N rather than the S/N per multipole bin, we compute the cumulative S/N as \begin{equation} \left(\frac{S}{N} \right)^2_{\rm cum} (<l')= \displaystyle\sum_{i} \frac{f_{\rm sky} (2l_{i}+1) \Delta l_{i} C_{\rm kSZ^2, 21cm}^{2}} {C_{\rm CMB^2} (C_{\rm 21cm} + N_{\rm 21cm})+C_{\rm kSZ^2,21cm}^{2}}, \end{equation} where $i$ denotes the $i$th bin, and $l_i$ is the central multipole of the $i$th bin with bin width $\Delta l_{i}$. We display it in Fig.~\ref{cum_sn_ratio_vl} as a function of the maximum multipole $l'$. The cumulative S/N for LOFAR is $<10$, while SKA would have an S/N as large as 51.4, 60.2, and 36.8 at $x_{e}=0.2$, 0.5 and 0.9, respectively. \section{Conclusions and Discussion} \label{sec:dis} We have used the semi-numerical 21cmFAST simulations \citep{Mesinger2011} to study cross-correlations of the kSZ and 21 cm signals from the EoR. As the kSZ-21 cm correlation suffers from line-of-sight cancellation at small angular scales, we have focused on the cross-correlation between {\it squared} kSZ fields and 21 cm signals. We find that the line-of-sight cancellation is mitigated and the cross-correlation persists at small angular scales. The goal of this work is to capture the general behaviour of the kSZ$^2$-21 cm cross-correlation signal. We caution that the simplified scheme of 21cmFAST may not be adequate to quantify the signal in detail, and one may need more sophisticated reionisation simulations \citep[e.g. ][]{Park2013}. It is also important to acknowledge that we have performed our calculations only for one specific set of cosmological parameters and reionisation history; thus, we have not investigated how the predicted signals change when we change the physics of reionisation. 
In particular, the optical depth from the latest Planck data \citep{Planck2016} ($\tau = 0.058\pm 0.012$) is lower than that from our simulation ($\tau \approx 0.1$), and its impact is not obvious. Finally, we neglected any correlation between foregrounds in 21 cm and kSZ maps, which, if present, would contaminate the reionisation signal. The contribution from foregrounds, such as continuum radio sources, would merit further study as well. The kSZ$^2$-21 cm cross-correlation signal exhibits interesting features, such as a sign change according to the phase of reionisation. For example, the correlation is positive for ionisation fraction $x_{e}<0.15$ (when density fluctuations dominate the 21 cm signal) and at $x_{e}>0.8$ (when the Universe is highly ionised), while it is negative for intermediate ionisation fractions. Thus, not only is the cross-correlation powerful for minimising the foreground emission and instrumental systematics of either kSZ or 21 cm data, but it also offers a powerful probe of reionisation. In general, prospects for measuring the kSZ$^2$-21 cm cross-correlation signal are good: SKA cross-correlated with on-going CMB experiments such as SPT-3G and Advanced ACT should yield high signal-to-noise ratio measurements of the cross-correlation over a wide range of multipoles and redshifts. \section*{Acknowledgments} We acknowledge helpful discussions with V. Jelic and M. Alvarez. The tools for bibliographic research are offered by the NASA Astrophysics Data Systems and by the JSTOR archive. QM is supported by the National Natural Science Foundation of China (Grant Nos 11373068 and 11322328), the National Basic Research Program (973 Program) of China (Grant Nos 2014CB845800 and 2013CB834900) and the Strategic Priority Research Program The Emergence of Cosmological Structures (Grant No. XDB09000000) of the Chinese Academy of Sciences. EK is supported in part by JSPS KAKENHI Grant Number JP15H05896. 
KH acknowledges support from the Icelandic Research Fund, Grant Number 173728-051. \bibliographystyle{mnras}
\section{Introduction} In the past four decades, the semilinear wave equations in the following form \begin{equation*} \Box \varphi = Q(\partial \varphi, \partial \varphi), \end{equation*} have been studied intensively and have found many deep applications in geometry and physics. We assume that the field $\varphi(t,x)$ is defined on $\mathbb{R}^{n+1}$, where $x\in \mathbb{R}^n$ and $t\in \mathbb{R}$. The symbol $Q$ denotes a real-valued quadratic form on $\mathbb{R}^{n+1}$. Let $\xi \in \mathbb{R}^{n+1}$ be a vector. Thus, in a local frame, we have \begin{equation*} Q(\xi,\xi)= Q_{\mu\nu}\xi^\mu\xi^\nu. \end{equation*} The form $Q$ is called a \emph{null form} if for all null vectors $\xi$, we have $ Q(\xi,\xi)= 0$. The symbol $Q(\partial \varphi, \partial \varphi)$ in the equation denotes the nonlinearity $Q^{\mu\nu}\partial_\mu\varphi\partial_\nu\varphi$. We will briefly summarize the progress on small data theory for this type of equation. \medskip The approach to understanding the small data problem is based on the decay mechanism of linear waves. For $n\geq 4$, since linear waves decay at the rate $(1+t)^{-\frac{n-1}{2}}$ (which is integrable in $t$), the small-data-global-existence type theorems hold for generic quadratic nonlinearities, see Klainerman \cite{K-80} and \cite{K-84}. However, in $\mathbb{R}^{3+1}$, the slower decay rate $(1+t)^{-1}$ just barely fails to be integrable in time, which may result in finite-time blow-up of the solution even with arbitrarily small data. For example, John \cite{J-79} showed that any $C^3$ solution of the following equation \[ \Box\varphi=|\partial_t\varphi|^2 \] in $\mathbb{R}^{3+1}$ with nontrivial data blows up in finite time. In other words, additional conditions have to be imposed on the nonlinearity in order to construct a global solution. The breakthrough along this direction was made by Klainerman in \cite{K-85} by introducing the celebrated null conditions. 
More precisely, if the quadratic part $Q$ of the nonlinearity is a null form, Klainerman \cite{KL-86} and Christodoulou \cite{Ch-86} have independently provided proofs for the small-data-global-existence results. Although their approaches are different, both proofs rely on the special cancellations of the null form. We remark here that in $\mathbb{R}^{3+1}$ the null condition is a sufficient but not necessary condition to obtain a small-data-global-existence result, see e.g. \cite{lindblad-weak}, \cite{igor-Msta}. For $\mathbb{R}^{2+1}$, the aforementioned classical null condition is not sufficient to guarantee a small-data-global-existence result, as general cubic terms may lead to finite-time blow-up of the solution. Nevertheless, Alinhac \cite{Ali-01} introduced a more restricted type of null conditions for a class of two-dimensional quasilinear wave equations and under those conditions he was able to establish a small-data-global-existence result. \medskip All the above-mentioned results are based on the following idea: the smallness of the initial data implies that the nonlinear equation can be solved for a sufficiently long time. The global solution can then be constructed once the nonlinearity decays sufficiently. The lower decay rate in low dimensions can be compensated by the special structure of the nonlinearity, namely, the null condition mentioned above. This idea may fail in $\mathbb{R}^{1+1}$ as waves in $\mathbb{R}^{1+1}$ do not decay, see e.g. \cite{john:12dwave:nonex}. Similar to $\mathbb{R}^{3+1}$, the special structure of the equation may be of importance in studying the asymptotic behavior of solutions of nonlinear wave equations in $\mathbb{R}^{1+1}$. Gu in \cite{gu:1dwavemap} investigated the wave map problem from $\mathbb{R}^{1+1}$ to a complete Riemannian manifold and showed that the map is regular for all time. 
The proof heavily relies on the geometric structure of the equations, and the global regularity of the solution is a consequence of the conserved length of the tangent vector fields on the target manifold. For general nonlinear equations, Nakamura in \cite{nakamura:1dwave} studied the long-time behavior of the solutions and obtained a lower bound on the lifespan for nonlinearities satisfying the above null condition. Moreover, he was able to obtain a global solution if the nonlinear term is of the form $h(\varphi, \partial \varphi)Q^2(\partial \varphi, \partial \varphi)$, where $h$ is a smooth function and $Q$ is a null form. The proof is based on an integrated local energy estimate adapted to the linear wave equation in $\mathbb{R}^{1+1}$. Such an estimate plays a crucial role in the study of the asymptotic behavior of solutions of linear or nonlinear wave equations in high dimensions, see e.g. \cite{igor:redshif} and references therein. It should be noted that the integrated local energy estimate is a spacetime integral with negative weights and hence contains limited decay information on the solution. This is the reason why Nakamura requires $Q^2$ instead of $Q$ in the nonlinearity in order to obtain a global solution. \medskip The aim of the present paper is to introduce a new type of weighted energy estimates with positive weights for linear waves in $\mathbb{R}^{1+1}$. Among other things, these new estimates allow us to improve the decay estimates on the null form $Q(\partial \varphi, \partial \varphi)$. As a consequence, we strengthen the result of Nakamura in the sense that it is sufficient to require the nonlinearity to be of the form $h(\varphi, \partial \varphi)Q(\partial \varphi, \partial \varphi)$ instead of $h(\varphi, \partial \varphi)Q^2(\partial \varphi, \partial \varphi)$ in order to construct a global solution for the associated nonlinear equations. \bigskip We now elaborate on the main result of this paper. 
Let $\Phi(t,x):\mathbb{R}\times\mathbb{R} \rightarrow \mathbb{R}^n$ be a vector valued function. More explicitly, we can write $\Phi(t,x)=\big(\Phi_1(t,x),\cdots,\Phi_n(t,x)\big)$. We consider the following system of wave equations: \begin{equation}\label{main equation} \begin{split} \Box\Phi &= N(t,x),\\ (\Phi,\partial_t \Phi)\big|_{t=0}&= \varepsilon(F(x),G(x)). \end{split} \end{equation} Here $\varepsilon \geq 0$ is a constant and $F, G$ are smooth real-valued functions; $N(t,x) = \big(N_1(t,x),\cdots,N_n(t,x)\big)$ are quadratic nonlinearities in $\partial \Phi$ with null conditions. More precisely, for $i,k,l=1,2,\cdots, n$, there exist constants $C^{kl}_i$ so that $N_i(t,x)$ can be written as \begin{equation*} \begin{split} N_{i}(t,x) &=\sum_{k,l=1}^n C^{kl}_{i}\cdot (\partial_t+\partial_x)\Phi_k \cdot (\partial_t-\partial_x)\Phi_l\\ &=\sum_{k,l=1}^n C^{kl}_{i}\cdot L\Phi_k \cdot {\underline{L}}\Phi_l, \end{split} \end{equation*} where $L=\partial_t+\partial_x$ and ${\underline{L}}=\partial_t-\partial_x$ are the two principal null vectors. Thus, the quadratic nonlinearities $N_{i}$'s are linear combinations of the quadratic forms of the following type: \begin{equation*} Q(\partial \Phi_k,\partial \Phi_l) = \alpha L \Phi_k {\underline{L}} \Phi_l+\beta {\underline{L}} \Phi_k L \Phi_l, \end{equation*} where $\alpha$, $\beta$ are real numbers. In particular, this means that in the frame $(L,{\underline{L}})$, as a matrix, $Q$ can be written as $ \left( {\begin{array}{cc} 0 & \alpha \\ \beta & 0 \\ \end{array} } \right)$. This means $Q(L,L)=0$ and $Q({\underline{L}},{\underline{L}})=0$. As a conclusion, $Q$ is a null form on $\mathbb{R}^{1+1}$. On the other hand, it is obvious that a null form must be of this form. Therefore, the system \eqref{main equation} represents all semilinear wave equations with quadratic null form nonlinearities. 
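For completeness, we verify directly that any such $Q$ vanishes on null vectors. Writing an arbitrary vector as $\xi = a L + b {\underline{L}}$ and using $g(L,L)=g({\underline{L}},{\underline{L}})=0$ and $g(L,{\underline{L}})=-2$, we compute
\begin{equation*}
g(\xi,\xi) = -4ab, \qquad Q(\xi,\xi)= \alpha\, a b + \beta\, b a = (\alpha+\beta)\,ab.
\end{equation*}
Hence $\xi$ is null exactly when $ab=0$, i.e., when $\xi$ is proportional to $L$ or to ${\underline{L}}$, and on every such vector $Q(\xi,\xi)=0$.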
Schematically, we also write \eqref{main equation} as \begin{equation*} \begin{split} \Box\Phi &= Q(\partial\Phi,\partial \Phi), \end{split} \end{equation*} to emphasize that $Q$ is a null form. \medskip We remark that this system of equations can indeed be viewed as a model problem for incompressible MHD systems placed in a strong magnetic background. Thus, we can use \eqref{main equation} to describe the propagation of Alfv\'en waves, and we refer to \cite{HXY} for more details. \bigskip The main theorem of the paper is as follows. \begin{theorem} In the setting of system \eqref{main equation}, we have the following: For all $0<\delta<1$, there exists a universal small constant $\varepsilon_0 > 0$ such that the following holds. Suppose \begin{equation*} \sum_{k=0,1}\int_{\mathbb{R}}(1+|x|)^{2+2\delta}\big(|\partial_x^k \partial_x F|^2+|\partial_x^k G|^2\big)dx \leq 1. \end{equation*} Then for all positive constants $\varepsilon<\varepsilon_0$, system \eqref{main equation} admits a global solution. \end{theorem} In other words, as long as the functions $F$ and $G$ appearing in the initial data of \eqref{main equation} have suitable decay at infinity as measured by the weighted Sobolev norm, we can construct a global solution for the system \eqref{main equation}. We briefly discuss the key idea behind the proof. As we have mentioned above, in $\mathbb{R}^{1+1}$, there is no decay for linear waves. However, we claim that although the solution $\Phi$ does not decay, the nonlinearity $N(t,x)$ does decay. This is the key observation that allows us to prove the small data global existence result. The geometric interpretation for this is the following: If we think of the solution as behaving like linear waves, then we may regard $(\partial_t+\partial_x)\Phi_k $ as left-traveling waves and $(\partial_t-\partial_x)\Phi_l$ as right-traveling waves. 
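This heuristic can be made explicit for the linear equation: by d'Alembert's formula, every solution of $\Box\varphi=0$ on $\mathbb{R}^{1+1}$ decomposes as
\begin{equation*}
\varphi(t,x)=f(x-t)+g(x+t), \qquad L\varphi = 2g'(x+t), \qquad {\underline{L}}\varphi = -2f'(x-t),
\end{equation*}
so each null derivative sees only one family of traveling waves, and the null form $L\Phi_k\,{\underline{L}}\Phi_l$ always pairs a left-traveling profile with a right-traveling one.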
After a sufficiently long time, whose existence is ensured by the smallness of the initial data, these two families of waves become separated in space. On the other hand, the null condition can be phrased as saying that left-traveling waves couple only with right-traveling waves; since these two families are far away from each other for large times, spatial decay now yields decay in time. This new decay mechanism stands in strong contrast to the higher dimensional cases, where the improved decay comes from the tangential derivatives of the waves along outgoing light cones. \textbf{Acknowledgments} The first author is partially supported by NSF grant DMS-1554733. The second author is partially supported by NSFC-11701017. The third author is grateful to UC Davis for the support through the New Research Initiatives and Collaborative Interdisciplinary Research Grant during his visit. \section{Preliminaries: the geometry of $\mathbb{R}^{1+1}$ and linear estimates} On the two dimensional Minkowski spacetime $\mathbb{R}^{1+1}$, we will use two coordinate systems: the standard Cartesian coordinates $(t,x)$ and the null coordinates $(u,\underline{u})$. The coordinate functions in the null coordinates are the standard optical functions defined as follows: \begin{equation*} u=\frac{1}{2}(t-x), \ \ \underline{u}=\frac{1}{2}(t+x). \end{equation*} We use $g_{\alpha\beta}$ to denote the standard Minkowski metric on $\mathbb{R}^{1+1}$. In other words, the metric can be written down explicitly in the Cartesian coordinates as \begin{equation*} g= -dt^2+dx^2. \end{equation*} In the null coordinates, we have \begin{equation*} g= -2du d\underline{u}. \end{equation*} We have two globally defined null vector fields \begin{equation*} L=\partial_t+\partial_x, \ \ {\underline{L}}=\partial_t-\partial_x. \end{equation*} The metric $g$ can be expressed in the null frame as follows: \begin{equation}\label{null} g(L,L)=g({\underline{L}},{\underline{L}})=0, \ \ g(L,{\underline{L}})=-2.
\end{equation} By definition, we also have \begin{equation}\label{vectorfield} Lu=0, \ \ L \underline{u} =1, \ \ {\underline{L}}\underline{u} = 0, \ \ {\underline{L}} u=1. \end{equation} \medskip We use $\Sigma_{t_0}$ to denote the following time slice in $\mathbb{R}^{1+1}$: \begin{equation*} \Sigma_{t_0} :=\big\{(t,x) \,\big|\, t= t_0\big\}. \end{equation*} We write $\mathcal{D}_{t_0}$ to denote the following spacetime region: \begin{equation*} \mathcal{D}_{t_0} :=\big\{(t,x) \,\big|\, 0\leq t \leq t_0\big\}. \end{equation*} In other words, we have $\displaystyle \mathcal{D}_{t_0} = \bigcup_{0\leq t\leq t_0} \Sigma_t$. The level sets of the optical functions $u$ and $\underline{u}$ define two global null foliations of $ \mathcal{D}_{t_0}$. More precisely, given $t_0 >0$, $u_0$ and $\underline{u}_0$, we define the right-going null curve segment $\mathcal{C}_{u_0}^{t_0}$ as \begin{equation*} \mathcal{C}_{u_0}^{t_0} :=\big\{(t,x) \,\big|\, u=\frac{t-x}{2}=u_0, 0\leq t \leq t_0\big\}, \end{equation*} and the left-going null curve segment $\underline{\mathcal{C}}_{\underline{u}_0}^{t_0}$ as \begin{equation*} \underline{\mathcal{C}}_{\underline{u}_0}^{t_0} :=\big\{(t,x) \,\big|\, \underline{u} =\frac{t+x}{2}=\underline{u}_0, 0\leq t\leq t_0\big\}. \end{equation*} We also define spatial segments \begin{equation*} \Sigma^+_{t_0,u_0} :=\big\{(t,x) \,\big|\, t= t_0, x \geq t_0-2u_0 \big\}, \end{equation*} and \begin{equation*} \Sigma^-_{t_0,\underline{u}_0} :=\big\{(t,x) \,\big|\, t= t_0, x \leq 2\underline{u}_0-t_0 \big\}. \end{equation*} Similarly, we define spacetime regions \begin{equation*} \mathcal{D}^+_{t_0,u_0} :=\big\{(t,x) \,\big|\, 0 \leq t \leq t_0, x \geq t-2u_0 \big\}, \end{equation*} and \begin{equation*} \mathcal{D}^-_{t_0,\underline{u}_0} :=\big\{(t,x) \,\big|\, 0\leq t \leq t_0, x \leq 2\underline{u}_0-t \big\}.
\end{equation*} We depict the above geometric constructions in the following picture: \ \ \ \ \ \ \ \ \ \ \ \ \ \includegraphics[width=5in]{pic1.pdf} \noindent The grey regions are $\mathcal{D}^+_{t,u}$ and $\mathcal{D}^-_{t,\underline{u}}$. The entire region enclosed by $\Sigma_t$ and $\Sigma_0$ is $\mathcal{D}_t$. \bigskip Let $Z$ be a smooth vector field defined on $\mathbb{R}^{1+1}$. We recall that its deformation tensor $\,^{(Z)}\pi_{\mu\nu}$ is defined as $\displaystyle \frac{1}{2}\mathcal{L}_Z g$ (the Lie derivative) where $g$ is the Minkowski metric. In other words, the deformation tensor of $Z$ is a symmetric 2-tensor and its components are given by \begin{equation*} \,^{(Z)}\pi_{\mu\nu}=\frac{1}{2}(\nabla_\mu Z_\nu + \nabla_\nu Z_\mu), \end{equation*} where $\nabla$ is the Levi-Civita connection of $g$. If there exists a function $\Omega$ so that $\,^{(Z)}\pi_{\mu\nu} = \Omega \cdot g_{\mu\nu}$, we say that $Z$ is a conformal Killing vector field. Indeed, for a conformal Killing vector field, its one-parameter group of diffeomorphisms consists of conformal transformations of $(\mathbb{R}^{1+1},g)$. \begin{lemma}\label{conformal vector fields} Let $\Lambda$ and ${\underline{\Lambda}}$ be two smooth $\mathbb{R}$-valued one-variable functions. Then the vector field \begin{equation}\label{zconformal} Z ={\underline{\Lambda}}(\underline{u})L+\Lambda(u){\underline{L}} \end{equation} is a conformal Killing vector field on $\mathbb{R}^{1+1}$. Moreover, we have \begin{equation}\label{zconformald} \,^{(Z)}\pi_{\mu\nu} =\frac{1}{2}\big({\underline{\Lambda}}'(\underline{u})+\Lambda'(u)\big)g_{\mu\nu}. \end{equation} \end{lemma} \begin{proof} It suffices to check \eqref{zconformald}. By linearity and symmetry, it suffices to show that \begin{equation*} \,^{(Z)}\pi_{\mu\nu} =\frac{1}{2}\Lambda'(u)\cdot g_{\mu\nu}, \end{equation*} for $Z =\Lambda(u){\underline{L}}$.
Indeed, by \eqref{null} and \eqref{vectorfield}, we have \begin{align*} \,^{(Z)}\pi_{LL}&=g\big(\nabla_L (\Lambda(u){\underline{L}}),L\big)=0,\ \ \,^{(Z)}\pi_{{\underline{L}}\Lb}=g\big(\nabla_{\underline{L}} (\Lambda(u){\underline{L}}),{\underline{L}}\big)=0,\\ \,^{(Z)}\pi_{L{\underline{L}}}&=\frac{1}{2}g\big(\nabla_L (\Lambda(u){\underline{L}}),{\underline{L}}\big)+\frac{1}{2}g\big(\nabla_{\underline{L}} (\Lambda(u){\underline{L}}),L\big)=-\Lambda'(u). \end{align*} This proves the lemma. \end{proof} We consider a solution $\varphi(t,x)$ to the following scalar linear wave equation on $\mathcal{D}_t$: \begin{equation*} \Box\varphi = \rho. \end{equation*} The energy-momentum tensor associated to $\varphi$ is defined as \begin{equation*} T_{\mu\nu}=\nabla_\mu \varphi \nabla_\nu \varphi -\frac{1}{2}g_{\mu\nu}\nabla^\alpha\varphi \nabla_\alpha \varphi. \end{equation*} It is straightforward to see that $T(L,L)=|L\varphi|^2$, $T({\underline{L}},{\underline{L}})=|{\underline{L}}\varphi|^2$ and $T(L,{\underline{L}})=0$. We can also compute the divergence of $T_{\mu\nu}$: \begin{equation*} \nabla^\nu T_{\mu\nu}= \rho \nabla_\mu \varphi. \end{equation*} \begin{remark}[Conformal property] On $\mathbb{R}^{1+1}$, the theory of linear wave equations is a conformal theory, i.e., the associated energy-momentum tensor $T_{\mu\nu}$ is trace-free: \begin{equation*} g^{\mu\nu}T_{\mu\nu}=0. \end{equation*} \end{remark} Let $Z$ be a smooth (multiplier) vector field, which we assume to be conformal Killing. Then the current \begin{equation*} ^{(Z)}J_\mu= T_{\mu\nu}Z^\nu, \end{equation*} satisfies the following divergence identity: \begin{equation}\label{divergence identity} \nabla^\mu \,^{(Z)}J_\mu=\rho Z(\varphi).
\end{equation} This can be proved by the following computation: \begin{align*} \nabla^\mu \,^{(Z)}J_\mu&=\nabla^\mu \big(T_{\mu\nu}Z^\nu\big)=\rho Z(\varphi)+\underbrace{T_{\mu\nu}\nabla^\mu Z^\nu}_{=T_{\mu\nu}\,^{(Z)}\pi^{\mu\nu}\sim T_{\mu\nu} g^{\mu\nu}}. \end{align*} We used the fact that $Z$ is a conformal Killing vector field in the last step. Since $T_{\mu\nu}$ is trace-free, the last term vanishes and this proves \eqref{divergence identity}. In applications, we will always take $Z ={\underline{\Lambda}}(\underline{u})L+\Lambda(u){\underline{L}}$ as in Lemma \ref{conformal vector fields}. \bigskip We will integrate the divergence identity \eqref{divergence identity} in the domain $\mathcal{D}_{t,\underline{u}}^-$ and the domain is depicted as follows: \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \includegraphics[width=3.2in]{pic2.pdf} \noindent We remark that the domain $\mathcal{D}_{t,\underline{u}}^-$ is foliated by $\underline{\mathcal{C}}^t_{\underline{u}'}$ for $\underline{u}'\leq \underline{u}$. Since the (Lorentzian) normal of $\underline{\mathcal{C}}_{\underline{u}}^t$ is ${\underline{L}}$, the Stokes formula implies that \begin{equation*} \int_{\Sigma_{t,\underline{u}}^-}T(\partial_t,Z)+ \int_{\underline{\mathcal{C}}_{\underline{u}}^t} T({\underline{L}},Z)=\int_{\Sigma_{0,\underline{u}}^-}T(\partial_t,Z)+\int\!\!\!\int_{\mathcal{D}^{-}_{t,\underline{u}}}\rho Z\varphi. \end{equation*} We take $Z=\Lambda(u){\underline{L}}$.
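Since $\partial_t=\frac{1}{2}(L+{\underline{L}})$ and $T(L,{\underline{L}})=0$, the flux of this multiplier through the time slices is \begin{equation*} T(\partial_t,Z)=\frac{1}{2}\Lambda(u)\big(T(L,{\underline{L}})+T({\underline{L}},{\underline{L}})\big)=\frac{1}{2}\Lambda(u)({\underline{L}}\varphi)^2, \end{equation*} which accounts for the factors of $\frac{1}{2}$ appearing in the next identity.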
In view of the fact that $T(L,{\underline{L}})=0$, we obtain \begin{equation*} \frac{1}{2}\int_{\Sigma_{t,\underline{u}}^-}\Lambda(u) ({\underline{L}}\varphi)^2+ \int_{\underline{\mathcal{C}}_{\underline{u}}^t}\Lambda(u) |{\underline{L}}\varphi|^2=\frac{1}{2}\int_{\Sigma_{0,\underline{u}}^-}\Lambda(u) ({\underline{L}}\varphi)^2+\int\!\!\!\int_{\mathcal{D}^{-}_{t,\underline{u}}} \Lambda(u){\underline{L}}\varphi\cdot\rho. \end{equation*} From now on, we will assume that the weight function $\Lambda \geq 0$. By enlarging the domains from $\Sigma^-_{0,\underline{u}}$ to $\Sigma_{0}$ and from $\mathcal{D}^{-}_{t,\underline{u}}$ to $\mathcal{D}_{t}$, we obtain that \begin{equation*} \int_{\Sigma_{t,\underline{u}}^-}\Lambda(u) ({\underline{L}}\varphi)^2+ \int_{\underline{\mathcal{C}}_{\underline{u}}^t}\Lambda(u) |{\underline{L}}\varphi|^2 \lesssim \int_{\Sigma_{0}}\Lambda(u) ({\underline{L}}\varphi)^2+\int\!\!\!\int_{\mathcal{D}_{t}} \Lambda(u)|{\underline{L}}\varphi||\rho|. \end{equation*} Here and in the sequel, the notation $A\lesssim B$ means that there is a universal constant $C$ such that $A\leq CB$. Since this inequality holds for all $\underline{u}$, we obtain that \begin{equation}\label{lambda1} \int_{\Sigma_{t}}\Lambda(u) ({\underline{L}}\varphi)^2+ \sup_{\underline{u}\in \mathbb{R}}\int_{\underline{\mathcal{C}}_{\underline{u}}^t}\Lambda(u) |{\underline{L}}\varphi|^2 \lesssim \int_{\Sigma_{0}}\Lambda(u) ({\underline{L}}\varphi)^2+\int\!\!\!\int_{\mathcal{D}_{t}} \Lambda(u)|{\underline{L}}\varphi||\rho|.
\end{equation} Similarly, we can work on $\mathcal{D}^{+}_{t,u}$ by using the multiplier $Z={\underline{\Lambda}}(\underline{u})L$ and this yields \begin{equation}\label{lambda2} \int_{\Sigma_{t}}{\underline{\Lambda}}(\underline{u}) (L\varphi)^2+ \sup_{u\in \mathbb{R}}\int_{\mathcal{C}_{u}^t}{\underline{\Lambda}}(\underline{u}) |L\varphi|^2 \lesssim \int_{\Sigma_{0}}{\underline{\Lambda}}(\underline{u}) (L\varphi)^2+\int\!\!\!\int_{\mathcal{D}_{t}} {\underline{\Lambda}}(\underline{u})|L\varphi||\rho|. \end{equation} By adding estimates \eqref{lambda1} and \eqref{lambda2} together, we finally obtain the energy estimates for linear equations: For $\Lambda, {\underline{\Lambda}} \geq 0$, we have \begin{equation}\label{linear energy estimates} \begin{split} & \ \ \int_{\Sigma_{t}}\Lambda(u) ({\underline{L}}\varphi)^2+{\underline{\Lambda}}(\underline{u}) (L\varphi)^2+\sup_{\underline{u}\in \mathbb{R}}\int_{\underline{\mathcal{C}}_{\underline{u}}^t}\Lambda(u) |{\underline{L}}\varphi|^2 +\sup_{u\in \mathbb{R}}\int_{\mathcal{C}_{u}^t}{\underline{\Lambda}}(\underline{u}) |L\varphi|^2\\ & \leq C_0\Big(\int_{\Sigma_{0}}\Lambda(u) ({\underline{L}}\varphi)^2+{\underline{\Lambda}}(\underline{u}) (L\varphi)^2+\int\!\!\!\int_{\mathcal{D}_{t}} \big(\Lambda(u)|{\underline{L}}\varphi|+{\underline{\Lambda}}(\underline{u})|L\varphi|\big)|\rho|\Big), \end{split} \end{equation} where $C_0$ is a universal constant which we may assume to be at least $1$. \section{The proof of the main theorem} For the sake of simplicity, we will stick to the schematic form $\Box \Phi = Q(\partial \Phi, \partial \Phi)$ of the main equation \eqref{main equation} by ignoring all the constants and indices in \eqref{main equation}. In this way, we may think of the system as a single equation for scalar functions. On the other hand, we can replace $\Phi$ in the following proof by $\Phi_k$ and then sum over $k$ to complete the proof for the original system.
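We remark that the null derivatives control the full gradient: since $\partial_t=\frac{1}{2}(L+{\underline{L}})$ and $\partial_x=\frac{1}{2}(L-{\underline{L}})$, we have the pointwise bound \begin{equation*} |\partial_t\varphi|^2+|\partial_x\varphi|^2\leq |L\varphi|^2+|{\underline{L}}\varphi|^2, \end{equation*} so it suffices to estimate $L\Phi$, ${\underline{L}}\Phi$ and their $\partial_x$-derivatives.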
\begin{lemma}\label{lemma commute derivatives} For $k=0,1$, we have \begin{equation*} \Box \partial^k_x\Phi = \sum_{i+j=k}Q(\partial \partial_x^i \Phi,\partial \partial_x^j \Phi), \end{equation*} where the $Q$'s are null forms and we have omitted all the irrelevant constants in front of $Q$. \end{lemma} \begin{proof} We simply commute $\partial_x^k$ with $\Box \Phi = Q(\partial \Phi, \partial \Phi)$. The proof is straightforward. \end{proof} In the rest of the paper, we choose $\Lambda(u)$ and ${\underline{\Lambda}}(\underline{u})$ as follows: \begin{equation}\label{lambdafunction} \Lambda(u) = (1+|u|^2)^{1+\delta}, \ \ {\underline{\Lambda}}(\underline{u})=(1+|\underline{u}|^2)^{1+\delta}. \end{equation} In view of \eqref{linear energy estimates}, for $k=0,1$, we define energy norms as follows: \begin{equation*} \begin{split} \mathcal{E}_k(t)& = \int_{\Sigma_{t}}\Lambda(u) ({\underline{L}}\partial^k_x \Phi)^2+{\underline{\Lambda}}(\underline{u}) (L\partial^k_x \Phi)^2,\\ \mathcal{F}_k(t)& = \sup_{\underline{u}\in \mathbb{R}}\int_{\underline{\mathcal{C}}_{\underline{u}}^t}\Lambda(u) |{\underline{L}}\partial^k_x\Phi|^2 +\sup_{u\in \mathbb{R}}\int_{\mathcal{C}_{u}^t}{\underline{\Lambda}}(\underline{u}) |L\partial^k_x\Phi|^2. \end{split} \end{equation*} We also define the total energy norms as follows: \begin{equation*} \mathcal{E}(t) = \sum_{k=0}^1 \mathcal{E}_k(t),\ \ \mathcal{F}(t) = \sum_{k=0}^1 \mathcal{F}_k(t). \end{equation*} We notice that if $t=0$, we have $\mathcal{F}(0)=0$. The data determine $\mathcal{E}(0)$. Indeed, the functions $F$ and $G$ determine a constant $C_1$ so that \begin{equation}\label{initial energy} \mathcal{E}(0) = C_1 \varepsilon^2. \end{equation} Now we use the method of continuity: We assume that the solution $\Phi$ exists for $t\in [0,T^*]$ and satisfies the following bound: \begin{equation}\label{bootstrap assumption} \mathcal{E}(t)+\mathcal{F}(t) \leq 6 C_0 C_1 \varepsilon^2.
\end{equation} Since this bound holds for $t=0$, we can always find such a $T^*$. In the rest of the paper, we will show that, under assumption \eqref{bootstrap assumption}, we can indeed prove a better bound. Namely, we will show that there exists a universal constant $\varepsilon_0$ so that, for all $t\in[0,T^*]$, we have the improved estimate: \begin{equation}\label{bootstrap assumption improved} \mathcal{E}(t)+\mathcal{F}(t) \leq 4 C_0 C_1 \varepsilon^2 \end{equation} for all $\varepsilon <\varepsilon_0$. The constant $\varepsilon_0$ will be independent of $T^*$. Thus assumption \eqref{bootstrap assumption} will never be saturated, so that we can always continue $T^*$ to $+\infty$. This will prove the global existence for small data solutions of \eqref{main equation}. Therefore, the crux of the matter boils down to proving \eqref{bootstrap assumption improved} under \eqref{bootstrap assumption}. \medskip In view of \eqref{bootstrap assumption}, we first have the following pointwise bounds: \begin{lemma}\label{lemma pointwise bound} Under assumption \eqref{bootstrap assumption}, there exists a universal constant $C_2$ so that \begin{equation*} |L\Phi(t,x)|\leq \frac{C_2\varepsilon}{{\underline{\Lambda}}(\underline{u})^\frac{1}{2}}, \ \ |{\underline{L}}\Phi(t,x)|\leq \frac{C_2\varepsilon}{\Lambda(u)^\frac{1}{2}}. \end{equation*} \end{lemma} \begin{proof} It suffices to prove the first inequality. The second can be proved in exactly the same way.
Indeed, according to the Sobolev inequality on $\mathbb{R}$, since $|\partial_x{\underline{\Lambda}}(\underline{u})^\frac{1}{2}|\leq {\underline{\Lambda}}(\underline{u})^\frac{1}{2}$, we have \begin{align*} |{\underline{\Lambda}}(\underline{u})^\frac{1}{2}L \Phi|^2 &\lesssim \|{\underline{\Lambda}}(\underline{u})^\frac{1}{2}L \Phi\|^2_{L^2(\mathbb{R}_x)}+\|\partial_x\big({\underline{\Lambda}}(\underline{u})^\frac{1}{2}L \Phi\big)\|^2_{L^2(\mathbb{R}_x)}\\ &\lesssim \|{\underline{\Lambda}}(\underline{u})^\frac{1}{2}L \Phi\|^2_{L^2(\mathbb{R}_x)}+\|{\underline{\Lambda}}(\underline{u})^\frac{1}{2} L \partial_x \Phi\|^2_{L^2(\mathbb{R}_x)} \\ &\lesssim \mathcal{E}(t) \\ &\leq 6C_0C_1 \varepsilon^2. \end{align*} The desired inequality follows immediately. \end{proof} \begin{remark} The lemma shows $|L\Phi(t,x)|\lesssim \frac{\varepsilon}{(1+|t+x|)^{1+\delta}}$ and $|{\underline{L}}\Phi(t,x)|\lesssim \frac{\varepsilon}{(1+|t-x|)^{1+\delta}}$. Thus, although the waves have no decay in time, they still decay spatially away from their centers.
\end{remark} Similar to the above lemma, we also have \begin{lemma}\label{lemma pointwise bound different weights} Under assumption \eqref{bootstrap assumption}, there exists a universal constant $C_3$ so that \begin{equation*} \|\frac{\Lambda(u)^\frac{1}{2}}{{\underline{\Lambda}}(\underline{u})^\frac{1}{4}}{\underline{L}}\Phi(t,x)\|_{L^\infty(\Sigma_t)}\leq C_3\Big(\|\frac{\Lambda(u)^\frac{1}{2}}{{\underline{\Lambda}}(\underline{u})^\frac{1}{4}}{\underline{L}}\Phi(t,x)\|_{L^2(\Sigma_t)}+\|\frac{\Lambda(u)^\frac{1}{2}}{{\underline{\Lambda}}(\underline{u})^\frac{1}{4}}{\underline{L}}\partial_x\Phi(t,x)\|_{L^2(\Sigma_t)}\Big), \end{equation*} and \begin{equation*} \|\frac{{\underline{\Lambda}}(\underline{u})^\frac{1}{2}}{\Lambda(u)^\frac{1}{4}}L\Phi(t,x)\|_{L^\infty(\Sigma_t)}\leq C_3\Big(\|\frac{{\underline{\Lambda}}(\underline{u})^\frac{1}{2}}{\Lambda(u)^\frac{1}{4}}L\Phi(t,x)\|_{L^2(\Sigma_t)}+\|\frac{{\underline{\Lambda}}(\underline{u})^\frac{1}{2}}{\Lambda(u)^\frac{1}{4}}L\partial_x\Phi(t,x)\|_{L^2(\Sigma_t)}\Big). \end{equation*} \end{lemma} \begin{proof} By using the new weight function $\frac{\Lambda(u)^\frac{1}{2}}{{\underline{\Lambda}}(\underline{u})^\frac{1}{4}}$, these inequalities can be derived in exactly the same manner as in Lemma \ref{lemma pointwise bound}. In fact, it suffices to notice that \begin{align*} \Big|\partial_x\Big(\frac{\Lambda(u)^\frac{1}{2}}{{\underline{\Lambda}}(\underline{u})^\frac{1}{4}}\Big)\Big|\lesssim \frac{\Lambda(u)^\frac{1}{2}}{{\underline{\Lambda}}(\underline{u})^\frac{1}{4}}. \end{align*} This can be checked by a direct computation. \end{proof} \medskip For each $k=0,1$, we take $\varphi = \partial_x^k \Phi$ in the linear energy estimates \eqref{linear energy estimates}.
In view of Lemma \ref{lemma commute derivatives}, we obtain \begin{equation*} \begin{split} \mathcal{E}_k(t)+\mathcal{F}_k(t)\leq C_0\Big(\mathcal{E}_k(0)+\sum_{i+j= k}\int\!\!\!\int_{\mathcal{D}_{t}} \big(\Lambda(u)|{\underline{L}}\partial_x^k\Phi|+{\underline{\Lambda}}(\underline{u})|L\partial_x^k\Phi|\big)\big|Q(\partial \partial_x^i \Phi,\partial \partial_x^j \Phi)\big|\Big). \end{split} \end{equation*} Summing the above estimates over $k$ then yields \begin{equation} \label{energy estimates} \begin{split} \mathcal{E}(t)+\mathcal{F}(t)&\leq 2C_0C_1\varepsilon^2+C_0\sum_{i+j= k, \atop 0\leq k \leq 1}\int\!\!\!\int_{\mathcal{D}_{t}} \big(\Lambda(u)|{\underline{L}}\partial_x^k\Phi|+{\underline{\Lambda}}(\underline{u})|L\partial_x^k\Phi|\big)\big|Q(\partial \partial_x^i \Phi,\partial \partial_x^j \Phi)\big|\\ &\leq 2C_0C_1\varepsilon^2+C_0\underbrace{\sum_{i+j= k, \atop 0\leq k \leq 1}\int\!\!\!\int_{\mathcal{D}_{t}} \Lambda(u)|{\underline{L}}\partial_x^k\Phi|\big|Q(\partial \partial_x^i \Phi,\partial \partial_x^j \Phi)\big|}_{\mathbf{I}}\\ &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +C_0\sum_{i+j= k, \atop 0\leq k \leq 1}\int\!\!\!\int_{\mathcal{D}_{t}}{\underline{\Lambda}}(\underline{u})|L\partial_x^k\Phi|\big|Q(\partial \partial_x^i \Phi,\partial \partial_x^j \Phi)\big|. \end{split} \end{equation} To bound the nonlinear terms, in view of the symmetry, it suffices to bound the term $\mathbf{I}$ in \eqref{energy estimates}. Since $Q$ is a null form, we have \begin{equation*} \big|Q(\partial \partial_x^i \Phi,\partial \partial_x^j \Phi)\big|\lesssim |L\partial_x^i \Phi||{\underline{L}}\partial_x^j \Phi|+|{\underline{L}}\partial_x^i \Phi||L\partial_x^j \Phi|. \end{equation*} Therefore, we may bound $\mathbf{I}$ by \begin{equation}\label{first bound on I} \mathbf{I}\lesssim \sum_{i+j= k, \atop 0\leq k \leq 1}\int\!\!\!\int_{\mathcal{D}_{t}} \Lambda(u)|{\underline{L}}\partial_x^k\Phi||L\partial_x^i \Phi||{\underline{L}}\partial_x^j \Phi|.
\end{equation} We may classify the terms in the integrand into two cases according to $i=0$ or $i=1$. \medskip \noindent {\bf Case 1: $i=0$}. We have to bound the term \begin{align*} \mathbf{I_1}=\int\!\!\!\int_{\mathcal{D}_{t}} \Lambda(u)|{\underline{L}}\partial_x^k\Phi||L \Phi||{\underline{L}}\partial_x^j \Phi|. \end{align*} Applying Lemma \ref{lemma pointwise bound} to $|L\Phi|$, we have \begin{align*} \mathbf{I_1}&\lesssim \int\!\!\!\int_{\mathcal{D}_{t}} \frac{\varepsilon}{{\underline{\Lambda}}(\underline{u})^\frac{1}{2}}\Lambda(u)|{\underline{L}}\partial_x^k\Phi||{\underline{L}}\partial_x^j \Phi|\\ &\lesssim \int\!\!\!\int_{\mathcal{D}_{t}} \frac{\varepsilon}{{\underline{\Lambda}}(\underline{u})^\frac{1}{2}}\Big(\Lambda(u)|{\underline{L}}\partial_x^k\Phi|^2+ \Lambda(u)|{\underline{L}}\partial_x^j \Phi|^2\Big). \end{align*} Since the spacetime region $\mathcal{D}_{t}$ is foliated by $\underline{\mathcal{C}}_{\underline{u}}^t$ for $\underline{u}\in \mathbb{R}$, by Fubini's theorem, we have \begin{align*} \mathbf{I_1}&\lesssim \int_\mathbb{R}\Big[\int_{\underline{\mathcal{C}}_{\underline{u}}^t} \frac{\varepsilon}{{\underline{\Lambda}}(\underline{u})^\frac{1}{2}}\Big(\Lambda(u)|{\underline{L}}\partial_x^k\Phi|^2+ \Lambda(u)|{\underline{L}}\partial_x^j \Phi|^2\Big)\Big]d\underline{u}\\ &= \int_\mathbb{R}\frac{\varepsilon}{{\underline{\Lambda}}(\underline{u})^\frac{1}{2}}\Big(\underbrace{\int_{\underline{\mathcal{C}}_{\underline{u}}^t} \Lambda(u)|{\underline{L}}\partial_x^k\Phi|^2+ \Lambda(u)|{\underline{L}}\partial_x^j \Phi|^2}_{\leq \mathcal{F}(t)}\Big)d\underline{u}\\ &\lesssim \int_\mathbb{R}\frac{\varepsilon^3}{{\underline{\Lambda}}(\underline{u})^\frac{1}{2}}d\underline{u}. \end{align*} On the other hand, we know that ${\underline{\Lambda}}(\underline{u})^\frac{1}{2}\sim (1+|\underline{u}|)^{1+\delta}$ as $|\underline{u}|\rightarrow \infty$; thus, since $1+\delta>1$, the above integral is finite. As a consequence, we obtain that \begin{align*} \mathbf{I_1}\lesssim \varepsilon^3.
\end{align*} \noindent {\bf Case 2: $i=1$}. In this case, we must have $j=0$ and $k=1$, since $i+j =k \leq 1$. We now have to bound the term \begin{align*} \mathbf{I_1'}&=\int\!\!\!\int_{\mathcal{D}_{t}} \Lambda(u)|{\underline{L}}\partial_x\Phi||L \partial_x \Phi||{\underline{L}} \Phi|\\ &=\int\!\!\!\int_{\mathcal{D}_{t}} \underbrace{\Big({\underline{\Lambda}}(\underline{u})^{-\frac{1}{4}}\Lambda(u)^\frac{1}{2}|{\underline{L}}\partial_x\Phi|\Big)}_{L^2_tL^2_x}\underbrace{\Big({\underline{\Lambda}}(\underline{u})^\frac{1}{2}|L\partial_x \Phi|\Big)}_{L^\infty_tL^2_x}\underbrace{\Big({\underline{\Lambda}}(\underline{u})^{-\frac{1}{4}}\Lambda(u)^\frac{1}{2}|{\underline{L}} \Phi|\Big)}_{L^2_tL^\infty_x}\\ &\leq \underbrace{\Big(\int\!\!\!\int_{\mathcal{D}_{t}} \frac{\Lambda(u)|{\underline{L}}\partial_x\Phi|^2}{{\underline{\Lambda}}(\underline{u})^\frac{1}{2}}\Big)^\frac{1}{2}}_{\mathbf{I_2}}\underbrace{\sup_{t\in[0,T^*]}\Big(\int_{\Sigma_{t}} {\underline{\Lambda}}(\underline{u})|L\partial_x\Phi|^2\Big)^\frac{1}{2}}_{\mathbf{I_3}}\underbrace{\Big(\int_{0}^t \|\frac{\Lambda(u)^\frac{1}{2}}{{\underline{\Lambda}}(\underline{u})^{\frac{1}{4}}}|{\underline{L}} \Phi|\|^2_{L^\infty(\Sigma_\tau)}d\tau\Big)^\frac{1}{2}}_{\mathbf{I_4}}. \end{align*} The term $\mathbf{I_2}$ can be treated in a similar manner as for $\mathbf{I_1}$: \begin{align*} (\mathbf{I_2})^2 &\leq \int_\mathbb{R}\int_{\underline{\mathcal{C}}_{\underline{u}}^t} \frac{1}{{\underline{\Lambda}}(\underline{u})^\frac{1}{2}} \Lambda(u)|{\underline{L}}\partial_x\Phi|^2 d\underline{u}\\ &= \int_\mathbb{R}\frac{1}{{\underline{\Lambda}}(\underline{u})^\frac{1}{2}}\Big(\underbrace{\int_{\underline{\mathcal{C}}_{\underline{u}}^t} \Lambda(u)|{\underline{L}}\partial_x\Phi|^2}_{\leq \mathcal{F}_1(t)}\Big)d\underline{u}\\ &\lesssim \int_\mathbb{R}\frac{\varepsilon^2}{{\underline{\Lambda}}(\underline{u})^\frac{1}{2}}d\underline{u}.
\end{align*} Since ${\underline{\Lambda}}(\underline{u})^{-\frac{1}{2}}$ is integrable in $\underline{u}$, we have \begin{align*} \mathbf{I_2} \lesssim \varepsilon. \end{align*} The term $\mathbf{I_3}$ is part of the energy norm $\mathcal{E}_1(t)$, thus, by \eqref{bootstrap assumption}, we have \begin{align*} \mathbf{I_3} \lesssim \varepsilon. \end{align*} For the term $\mathbf{I_4}$, according to Lemma \ref{lemma pointwise bound different weights}, we have \begin{align*} \mathbf{I_4} &\lesssim \Big(\int_{0}^t \|\frac{\Lambda(u)^\frac{1}{2}}{{\underline{\Lambda}}(\underline{u})^\frac{1}{4}}{\underline{L}}\Phi(t,x)\|^2_{L^2(\Sigma_\tau)}+\|\frac{\Lambda(u)^\frac{1}{2}}{{\underline{\Lambda}}(\underline{u})^\frac{1}{4}}{\underline{L}}\partial_x\Phi(t,x)\|^2_{L^2(\Sigma_\tau)}d\tau\Big)^\frac{1}{2}\\ &\lesssim \Big(\int\!\!\!\int_{\mathcal{D}_{t}} \frac{\Lambda(u)|{\underline{L}}\Phi|^2}{{\underline{\Lambda}}(\underline{u})^\frac{1}{2}}\Big)^\frac{1}{2}+\Big(\int\!\!\!\int_{\mathcal{D}_{t}} \frac{\Lambda(u)|{\underline{L}}\partial_x\Phi|^2}{{\underline{\Lambda}}(\underline{u})^\frac{1}{2}}\Big)^\frac{1}{2}. \end{align*} Both of the terms are in the same form as $\mathbf{I_2}$. Thus they can be bounded in the same manner. We then have \begin{align*} \mathbf{I_4} \lesssim \varepsilon. \end{align*} Finally, combining the two cases, we obtain \begin{align*} \mathbf{I} \lesssim \varepsilon^3. \end{align*} \bigskip By putting all the estimates together in \eqref{energy estimates}, for some universal constant $C_4$, we obtain that for all $t\in [0,T^*]$, \begin{equation*} \mathcal{E}(t)+\mathcal{F}(t)\leq 2C_0C_1\varepsilon^2+C_4\varepsilon^3. \end{equation*} We then take $\varepsilon_0$ such that \begin{equation*} \varepsilon_0\leq \frac{2C_0C_1}{C_4}. \end{equation*} Therefore, for $\varepsilon\leq \varepsilon_0$ and for all $t\in [0,T^*]$, we have \begin{align*} \mathcal{E}(t)+\mathcal{F}(t)&\leq 2C_0C_1\varepsilon^2+C_4\varepsilon^2 \times \frac{2C_0C_1}{C_4}\\ &\leq 4C_0C_1\varepsilon^2.
\end{align*} This proves the improved estimate \eqref{bootstrap assumption improved} and completes the proof of the main theorem.
\section{Introduction} Many $C^*$-algebras can be modeled as groupoid $C^*$-algebras (see e.g. \cite{Renault}), which allows the use of the additional structural information to answer general questions in the theory of $C^*$-algebras. For example, J. L. Tu showed in \cite{Tu} that groupoids which satisfy the Haagerup property (e.g. amenable groupoids) have $C^*$-algebras which satisfy the UCT. The collection of $C^*$-algebras to which this viewpoint applies is expanded further by considering twisted groupoid $C^*$-algebras. In \cite{Renault-Cartan}, J. Renault shows that every Cartan pair corresponds to a pair $(C^*(G,\sigma),C_0(\G0))$, where $G$ is an \'etale groupoid with a twist $\sigma$. One of the strengths of the groupoid approach to $C^*$-algebras comes from the ability to give geometric interpretations to $C^*$-algebra properties. For example, C. Laurent-Gengoux, J. L. Tu, and P. Xu conjecture in \cite{GTX} a geometric interpretation of classes in $K_0(C^*(G,\sigma))$ for Lie groupoids $G$ with a twist $\sigma$ as twisted vector bundles over the underlying groupoid $G$. In \cite{FG}, E. Gillaspy and C. Farsi extend these ideas to locally compact Hausdorff groupoids and give positive results for a class of \'etale groupoids. Our main objective in this paper is to construct inverse systems of groupoids which induce direct systems of $C^*$-algebras and for which the groupoid $C^*$-algebra of the inverse limit groupoid is equal to the direct limit of the induced direct system of $C^*$-algebras. A key observation (see \cref{morphisms}) is that there are morphisms between groupoids with Haar systems such that the pullback map preserves the convolution product. We call such maps Haar measure preserving. A consequence of this observation is that inverse systems of groupoids with Haar measure preserving bonding maps induce direct systems of $C^*$-algebras.
In other words, we can construct a ``spectral'' picture at the level of groupoids for certain inductive limits of $C^*$-algebras. The following is our main existence theorem for inverse limits of groupoids, proven in \cref{proofoftheorema}: \begin{restatable}{LetterTheorem}{limitexists}\label{mainthm.limitexists} Let $\{G_\alpha,\sigma_\alpha, \{\mu_\alpha^y:y\in \G0_\alpha\},q^\alpha_\beta,A\}$ be an inverse system of groupoids with Haar systems and 2-cocycles and with bonding maps which are proper, continuous, surjective, Haar system preserving and cocycle preserving. The inverse limit groupoid $G= \varprojlim_{\alpha}G_\alpha$ exists and has a Haar system of measures $\{\mu^x:x\in \G0\}$ and 2-cocycle $\sigma$ such that $(q_\alpha)_*(\mu^x) = \mu_\alpha^{q_\alpha(x)}$ and $q_\alpha^*(\sigma) = \sigma_\alpha$. Moreover, the pullback morphisms induce a direct system $\{C_c(G_\alpha,\sigma_\alpha), (q^\alpha_\beta)^*,A\}$ of convolution algebras that extends to a direct system of maximal completions (i.e. $C^*(G,\sigma) = \varinjlim_\alpha C^*(G_\alpha,\sigma_\alpha)$). \end{restatable} In a recent paper (see \cite{BussSims}), A. Buss and A. Sims show that $C^*$-algebras that are not isomorphic to their opposite algebras cannot be groupoid $C^*$-algebras. Nonetheless, there are many examples of $C^*$-algebras that admit groupoid models, although uncovering the underlying groupoid structure of a $C^*$-algebra, if it exists, can be a non-trivial task. It is shown in \cite{EP} by R. Exel and E. Pardo that every Kirchberg $C^*$-algebra is the groupoid $C^*$-algebra of an \'etale groupoid. Another common technique for creating groupoids with prescribed groupoid $C^*$-algebras is via \'etale equivalence relations on the Cantor set, see \cite{DPS} or \cite{P}. It is known that groupoid $C^*$-algebras of \'etale groupoids $G$ are simple if and only if $G$ is minimal and principal (see \cite{BCFS}).
Hence, if one can find minimal and principal \'etale equivalence relations on the Cantor set with the correct $K$-theory, one can then appeal to the classification program of Elliott. One possible application of \cref{mainthm.limitexists} is to use an inductive limit description of a specific $C^*$-algebra to construct an inverse system of groupoids whose dual is the given direct system. However, \cref{mainthm.limitexists} is not general enough to handle some of the standard examples. In \cite{AM}, the first author and A. Mitra plan to adjust \cref{mainthm.limitexists} to construct groupoids whose groupoid $C^*$-algebras are, respectively, any given UHF-algebra, infinite tensor powers of finite dimensional $C^*$-algebras, the Jiang-Su algebra, and the Razak-Jacelon algebra. Inverse systems of groupoids of the type mentioned in \cref{mainthm.limitexists} are also constructed for our approximations of $\sigma$-compact groupoids. This part of the paper arose out of an investigation of possible connections between geometric property (T) from coarse geometry and property (T) for topological groupoids. Many groupoid results are stated for second countable groupoids; unfortunately, the groupoids arising from coarse geometry are not second countable, though they are $\sigma$-compact. The well-known fact that $\sigma$-compact spaces are inverse limits of second countable spaces suggested to us a way of creating an ``easy'' bridge from results known in the second countable case to results about $\sigma$-compact groupoids. We accomplish this goal with the following theorem, proven in \cref{guts}: \begin{LetterTheorem}\label{mainthm.approximation} Let $(G,\sigma)$ be a locally compact, Hausdorff and $\sigma$-compact groupoid with Haar system of measures $\{\mu^x:x\in G^0\}$ and $2$-cocycle $\sigma:\G2\to {\mathbb T}$.
One can obtain $(G,\sigma)$ as the inverse limit of an inverse system of second countable, locally compact, and Hausdorff groupoids $\{(G_\alpha,\sigma_\alpha),q^\alpha_\beta,A\}$ in the category of locally compact groupoids with proper continuous groupoid morphisms. Moreover, there are induced Haar systems of measures and 2-cocycles on the $G_\alpha$'s, compatible with the bonding maps, such that the pullback morphisms induce a direct system of topological $*$-algebras $\{C_c(G_\alpha,\sigma_\alpha),(q^\alpha_\beta)^*,A\}$ with $C_c(G,\sigma) = \varinjlim_\alpha C_c(G_\alpha,\sigma_\alpha)$. This direct system can be extended to the maximal $C^*$-algebra completions. \end{LetterTheorem} In the above theorem, some of the properties of $G$ can be passed on to the approximation groupoids $G_\alpha$; for example, if $G$ is \'etale or transitive, the construction can be modified such that the same is true for all $G_\alpha$ (see \cref{section.approximationproperties}). The reason we did not state any results for the case of reduced groupoid $C^*$-algebras is that the reduced norms of the approximations are not necessarily compatible, not even with the reduced norm on $C_c(G)$. This is an issue even in the case of groups, as can be seen from \cite{BKKO}. The strategy to prove \cref{mainthm.approximation} is based on a classical construction from uniform space theory: all uniform structures on a set $X$ are inverse limits of pseudo-metric uniform structures on $X$. The idea with this approximation technique is to start with a uniform cover $\mathcal{U}$ of $X$, construct a ``minimal'' uniform substructure on $X$ that contains $\mathcal{U}$, and throw away all other uniform covers. The basics of this construction are described in \cref{topapprox}.
In our case, we start with a uniform cover and show that one can construct ``minimal'' uniform structures on the underlying set of the groupoid $G$ such that the resulting metrizable space will be locally compact and Hausdorff and can be endowed with a groupoid structure. We have to further sharpen our approximations if we want to also account for Haar systems and groupoid 2-cocycles. The interested reader might compare our construction for the proof of \cref{mainthm.approximation} to the groupoid approximations constructed for \'etale groupoids by Censor and Markiewicz in \cite{CM}; one notable difference is that our approximations are deconstructive whereas the approximations in their paper are constructive. One novelty about our approximations is that our groupoid quotients (algebraic quotients) allow us to deform the object space and the space of arrows simultaneously. As an application of our approximation construction in \cref{mainthm.approximation}, we show how to easily extend the maximal version of Renault's Equivalence Theorem. The notion of equivalence for groupoids was introduced and was shown to be connected to Morita equivalence of maximal groupoid $C^*$-algebras by J. Renault in \cite{Renault1} and soon thereafter by P. Muhly, J. Renault, and D. Williams in \cite{MRW}. In \cref{renaultequivalencetheorem}, we prove the following theorem: \begin{Theorem}[Equivalence Theorem]\label{equivalencetheorem} Let $G$ and $H$ be $\sigma$-compact groupoids with Haar systems of measures. If $G$ and $H$ are equivalent then $C^*(G)$ is Morita equivalent to $C^*(H)$. \end{Theorem} The purpose of including a proof for the above is to demonstrate how our approximation technique might be applied to extend known results from second countable to $\sigma$-compact groupoids. It came to our attention after we finished this work that the equivalence theorem is indeed true in full generality and was proven concurrently and independently by A. Buss, R. Holkar, and R.
Meyer in \cite{BHM} using their generalization of the disintegration theorem of Renault. \vskip10pt\noindent \textbf{Acknowledgements} The authors would like to extend their gratitude to Bill Chen, Adam Dor-On, Saak Gabrielyan, Claire Anantharaman-Delaroche, Shirly Geffen, Jean Renault, and Dana Williams for their feedback and suggestions during the process of writing this paper. We would like to offer special thanks to Joav Orovitz for his tremendous help throughout the project. This project would not exist without him. The first author would also like to thank Michael Levin for all his helpful conversations and useful advice throughout the duration of this project. \section{Preliminaries on Groupoids and Their \texorpdfstring{$C^*$-Algebras}{C*-Algebras}} \label{section.groupoidsintro} We follow the conventions and notation (except for second countability assumptions) of the survey paper \cite{Buneci} by M. Buneci; we highlight the most important ones for ease of reference. A \textbf{groupoid} is a small category in which every morphism is invertible. A groupoid $G$ is a \textbf{topological groupoid} if $\G1$ is equipped with a locally compact and Hausdorff topology for which the inverse and partial multiplication functions are continuous. We denote the partial multiplication map by $m$, and usually write $gh$ for $m(g, h)$; we use function composition order when composing arrows (i.e.\ $gh$ is defined if and only if $s(g) = t(h)$). The object space $\G0 \subset \G1$ is given the subspace topology; then the source and target maps, $s(z) = z^{-1}z$ and $t(z) = zz^{-1}$ respectively, are continuous. For $n\ge2$, the set $G^{(n)}$ of composable $n$-tuples is given the subspace topology induced by the product topology on $G^n = G \times G \times \ldots \times G$ ($n$ times). \begin{Definition}\label{morphism} A \textbf{morphism of topological groupoids} is a continuous groupoid morphism (functor).
\end{Definition} \begin{Definition}\label{opendef} A topological groupoid is said to be \textbf{$\sigma$-compact} (resp. \textbf{paracompact}) if $\G1$ and hence\footnote{Recall that $\G0$ is a closed subset of $\G1$ and that paracompactness and $\sigma$-compactness are weakly hereditary properties.} $\G0$ have $\sigma$-compact (resp. paracompact) topologies. \end{Definition} \begin{Remark} One can show that $\G2$ is a closed set in $G \times G$ (using the Hausdorff property of $\G0$ and continuity of $s$ and $t$). Therefore, if $\G1$ is $\sigma$-compact (resp. paracompact), then so is $\G2$. \end{Remark} \begin{Notation} Let $G$ be a groupoid and let $x\in \G0$. We define $G^x$ to be the collection of arrows in $G$ that have target $x$, i.e. all $g\in G$ with $t(g) = x$. \end{Notation} \begin{Definition}\label{Haarysituation} Let $G$ be a topological groupoid. A \textbf{Haar system of measures on $G$} is a collection $\{\mu^x:x\in \G0\}$ of positive regular Radon measures on $G$ such that: \begin{enumerate} \item $\mu^x$ is supported on $G^x$. \item For each $f\in C_c(G)$, the function $x\to \int_{G}f(g)d\mu^x(g)$ is continuous on $\G0$. \item For all $g\in \G1$ and $f\in C_c(G)$, the following equality holds: $$\int_{G}f(h)d\mu^{t(g)}(h) = \int_{G}f(gh)d\mu^{s(g)}(h).$$ \end{enumerate} \end{Definition} Note that if $K$ is a compact set of arrows, then the values $\mu^x(K)$ are uniformly bounded across the whole Haar system. This simple observation will be needed in proving continuity results later, so we state it below as a lemma; the proof is straightforward and hence omitted. \begin{noproofLemma}\label{compactmaxmeasure} If $K \subset \G1$ is a compact set, then $\sup_{x\in \G0} \mu^x(K) < \infty$.
\end{noproofLemma} \begin{Definition}\label{defcocycle} A \textbf{2-cocycle} is a map $\sigma : \G2 \to {\mathbb T}$ such that whenever $(g, h), (h, k) \in \G2$ we have $$\sigma(g, h)\sigma(gh, k) = \sigma(g, hk)\sigma(h, k).$$ \end{Definition} The map $\sigma(g, h) = 1$ for all $(g, h) \in \G2$ is always a 2-cocycle, called the \textbf{trivial cocycle}; the set of 2-cocycles on $G$ forms a group under pointwise multiplication and pointwise inverse. \begin{Definition} If $(G,\sigma_G)$ and $(H,\sigma_H)$ are locally compact groupoids with $2$-cocycles, a morphism $q:(G,\sigma_G) \to (H, \sigma_H)$ is said to be \textbf{cocycle preserving} if it is a proper morphism of groupoids such that $\sigma_G(g,h) = \sigma_H(q(g),q(h))$ for all $(g,h)\in \G2$. \end{Definition} \begin{Definition}\label{approximate} Let $G$ be a topological groupoid. An inverse system $\{G_\alpha, q^\alpha_\beta:G_\alpha \to G_\beta\}_{\alpha \in A}$ of topological groupoids with proper and surjective morphisms $q^\alpha_\beta$ and with directed indexing set $A$ is said to be an \textbf{inverse approximation of $G$} if \begin{enumerate} \item $q^\alpha_\alpha = id_{G_\alpha}$ for all $\alpha$, \item for each $\alpha \ge \beta \in A$ there exists $q^\alpha_\beta:G_\alpha \to G_\beta$ and, moreover, $q^\beta_\gamma \circ q^\alpha_\beta = q^\alpha_\gamma$ whenever $\alpha \ge \beta \ge \gamma$, and \item $\varprojlim_\alpha G_\alpha = G$ in the category of topological groupoids (with proper continuous groupoid homomorphisms). We denote the canonical projections from $G$ to the inverse system by $q_\alpha:G\to G_\alpha$. \end{enumerate} \end{Definition} If $(G, \sigma)$ is a topological groupoid with Haar system $\{\mu^x:x\in \G0\}$, we let $C_c(G,\sigma)$ denote the collection of compactly supported continuous complex valued functions on $G$. With the following multiplication, adjoint operation, and norm $\|\cdot\|_I$, $C_c(G,\sigma)$ becomes a topological *-algebra.
\label{ccdefinition} $$ f*g(x) = \int_{G} f(xy)g(y^{-1})\sigma(xy,y^{-1})\,d\mu^{s(x)}(y) $$ $$ f^*(x) = \overline{f(x^{-1})\sigma(x,x^{-1})}$$ $$\|f\|_I = \max \left\{\sup_{x \in \G0} \int_{G} |f(g)|\,d\mu^x(g), \sup_{x \in \G0} \int_{G} |f(g^{-1})|\,d\mu^x(g)\right\}.$$ The\footnote{The designation ``The'' here is actually too strong, since in general Haar systems on groupoids are not unique. For our purposes, $G$ comes with a fixed Haar system, so we do not make a note of the Haar system in our notation. However, one should keep in mind that different Haar systems can give rise to different convolution algebras. It is known that, in the second countable case, these algebras are all Morita equivalent (see section 4 of \cite{Buneci}).} \textbf{maximal twisted groupoid $C^*$-algebra of $G$}, denoted by $C^*(G,\sigma)$, is defined to be the completion of $C_c(G, \sigma)$ with the following norm $$\|f\|_{max} = \sup_{\pi} \|\pi(f)\|,$$ where $\pi$ runs over all continuous (with respect to the norm $\|\cdot\|_I$) *-representations of $C_c(G, \sigma)$. The maximal groupoid $C^*$-algebra of $G$ is obtained via the same construction when $\sigma$ is the trivial cocycle. \section{Pushing Haar Systems to Quotients and \texorpdfstring{\\}{} Induced Morphisms of \texorpdfstring{$C^*$}{C*}-algebras}\label{morphisms} As an introduction to the ideas of this section, consider first the case when $G$ and $H$ are locally compact \emph{groups}. Suppose $\mu_G$ is a Haar measure on $G$, and $\phi:G\to H$ is a proper and surjective topological group homomorphism. It is well-known that $\phi_* (\mu_G) := \mu_G \circ \phi^{-1}$ is a Haar measure on $H$; we will denote this measure by $\mu_H$. The usual pullback of $\phi$ induces a ${\mathbb C}$-module homomorphism $\phi^*: C_c(H)\to C_c(G)$. We claim that it is a $*$-algebra morphism. Let $f_1, f_2 \in C_c(H)$ and $y \in G$. We define $F_1, F_2 \in C_c(H)$ by $F_1(h) = f_1(\phi(y)h)$ and $F_2(h) = f_2(h^{-1})$ for $h \in H$.
We have \begin{align*} (\phi^*(f_1) \ast \phi^*(f_2))(y) &= \int_G f_1(\phi(yx))f_2(\phi(x^{-1}))\,d\mu_G(x) \\ &= \int_G (F_1 \cdot F_2)(\phi(x))\,d\mu_G(x) \\ &= \int_H (F_1 \cdot F_2)(z)\,d\mu_H(z) \\ &= \int_H f_1(\phi(y)z)f_2(z^{-1})\,d\mu_H(z) \\ &= (f_1 * f_2)(\phi(y)) = (\phi^*(f_1 \ast f_2))(y). \end{align*} An even easier computation shows that $\phi^*$ preserves the adjoint and hence is a *-morphism. The main theme of this paper is the approximation of topological groupoids by their images under proper surjective morphisms, allowing us to approximate groupoid $C^*$-algebras by subalgebras, namely by the groupoid $C^*$-algebras of the approximation groupoids. It is not clear that if $G$ is a topological groupoid with Haar system then such an image of $G$ (under a proper surjective morphism) will have a Haar system. Even if it does admit a Haar system, it does not seem obvious to the authors that the pullback map should induce a *-morphism of convolution algebras. It is the purpose of this section to establish a criterion for (proper) morphisms $G\to H$ of groupoids such that \begin{enumerate} \item The pullback map is a *-morphism from $C_c(H)$ to $C_c(G)$ (endowed with the algebra operations and norms described on p. \pageref{ccdefinition}). \item A Haar system on $G$ passes to a Haar system on $H$. \end{enumerate} We first establish a criterion for when morphisms of groupoids with Haar systems of measures induce *-morphisms of convolution algebras. \begin{Definition}\label{haarpreserving} Let $G$ and $H$ be locally compact groupoids with Haar systems of measures $\{\mu^x:x\in \G0\}$ and $\{\nu^y:y\in H^{(0)}\}$. A groupoid morphism $q:G\to H$ is said to be \textbf{Haar system preserving} if $q$ is proper and satisfies either of the following two equivalent (see \cref{pushingforward}) conditions: \begin{enumerate} \item For all $z\in H^{(0)}$ and for all $x \in q^{-1}(z)$, we have that $q_*\mu^x = \nu^z$. 
\item For all $f\in C_c(H)$, for all $z\in H^{(0)}$ and for all $x \in q^{-1}(z)$ we have $\int_G (f \circ q) \, d\mu^x = \int_H f \, d\nu^z$. \end{enumerate} \end{Definition} \cref{haarpreserving} gives a large class of morphisms of groupoids that induce $*$-morphisms of groupoid $C^*$-algebras, but does not cover all possibilities. For example, the inclusion ${\mathbb Z} \hookrightarrow {\mathbb Z} \times {\mathbb Z}/2{\mathbb Z}$ does induce a $*$-morphism of convolution algebras (the restriction morphism) but is not Haar system preserving. In \cite{AM}, the first author and A. Mitra add the required flexibility to cover this case by considering partial morphisms. In analogy with the situation for groups, proper groupoid morphisms which are Haar system preserving induce *-morphisms of the convolution algebras: \begin{Proposition}\label{inducedmorphism} Let $q:(G,\sigma_G)\to (H,\sigma_H)$ be a cocycle and Haar system preserving morphism of locally compact Hausdorff groupoids with Haar systems $\{\mu^x:x\in \G0\}$ and $\{\nu^y:y\in H^{(0)}\}$, respectively. The pullback map $q^*:C_c(H,\sigma_H)\to C_c(G,\sigma_G)$ is a *-morphism of topological *-algebras. If additionally $q$ is surjective, then $q^*$ is I-norm preserving. \end{Proposition} \begin{proof} The calculations to show that $q^*$ respects convolution and adjoints are similar to those for group morphisms discussed at the beginning of this section, and are thus omitted. We check that $q^*$ is continuous (and an isometry when $q$ is surjective). Since $q$ is Haar system preserving, by definition, if $f\in C_c(H)$ and $z \in H^{(0)}$, then $\int_H |f| \,d\nu^z = \int_G |f \circ q| \, d\mu^x$ for all $x \in q^{-1}(z)$. From the definition of I-norm (see p. \pageref{ccdefinition}), it thus follows easily that the I-norm of $q^*f$ in $C_c(G, \sigma_G)$ is less than or equal to the I-norm of $f$ in $C_c(H, \sigma_H)$, with equality if $q$ is surjective.
\end{proof} \begin{Fact}\label{pushingforward} We give here a short proof of the fact that if $X$ and $Y$ are locally compact Hausdorff spaces and $f:X\to Y$ is a proper continuous function, then the pushforward of a regular Radon measure on $X$ is a regular Radon measure on $Y$. It is easy to check inner regularity and local finiteness of the pushforward measure. To prove outer regularity, use the fact that proper maps to locally compact spaces are closed. Then one can show that, for every $B\subset Y$ and every open neighborhood $U$ of $f^{-1}(B)$, the set $V = (f^{-1}(f(U^c)))^c$ is an open and saturated\,\footnote{Recall that a subset $A\subset X$ is saturated with respect to $f$ provided that $f^{-1}(f(A)) = A$.} neighborhood of $f^{-1}(B)$ such that $V \subset U$. Saturated open sets get mapped to open sets by closed maps, completing the proof. This observation about the pushforward of a regular Radon measure, along with the Riesz-Markov-Kakutani Theorem, gives us the equivalence of the two conditions presented in \cref{haarpreserving}. \end{Fact} Below, we discuss some conditions under which a Haar system of measures on a groupoid can be pushed to a quotient: \begin{Proposition}\label{Haarquotient} Let $G$ and $H$ be topological groupoids and let $q:G\to H$ be a proper surjective morphism. Suppose moreover that $q$ is topologically a quotient map. If $G$ has a Haar system of measures $\{\mu^x:x\in \G0\}$ such that for all $f\in C_0(H)$, for all $z\in H^{(0)}$ and for all $x,y\in q^{-1}(z)$ we have $$\int_G (f \circ q) \, d\mu^x = \int_G (f \circ q) \, d\mu^y,$$ then $H$ admits a natural Haar system of measures $\{\nu^y:y\in H^{(0)}\}$ that makes $q$ Haar system preserving. \end{Proposition} \begin{proof} For each $x \in H^{(0)}$ and each Borel subset $E\subset H$ we define $\nu^x(E) = q_*\mu^y(E)$ for any $y\in q^{-1}(x)$. By \cref{haarpreserving} and \cref{pushingforward}, the measure $\nu^x$ is well-defined and a regular Radon measure. 
We check that $\{\nu^x : x \in \gpdH0\}$ is a Haar system; the fact that $q$ is then Haar system preserving follows immediately. Let $h\in H^{(1)}$, $g\in q^{-1}(h)$ and $f\in C_c(H)$. Notice \begin{align*} \int_H f(y) \, d\nu^{t(h)}(y) = \int_{H} f(y)\,d(q_* \mu^{t(g)})(y) &= \int_{G} (q^*f)(y)\,d\mu^{t(g)}(y) \\ & = \int_{G} (q^*f)(gy)\,d\mu^{s(g)}(y) \\ & = \int_{H} f(hy)\,d(q_*\mu^{s(g)})(y) = \int_H f(hy)\,d\nu^{s(h)}(y), \end{align*} and so the left invariance condition holds. For the continuity of the Haar system, let $f\in C_c(H)$ and $\Omega \subset {\mathbb C}$ be an open set. Let $V = \{x\in H^{(0)}: \int_{H} f(y) d\nu^x(y) \in \Omega\}$. Since $q$ is proper, $f \circ q \in C_c(G)$; by the continuity of the Haar system on $G$, the set $W = \{z\in \G0: \int_G (f \circ q)(y) d\mu^z(y) \in \Omega \}$ is open in $\G0$. The conditions on $q$ and the definitions of the measures $\nu^x$ ensure that $q^{-1}(V) = W$; since $q$ is a quotient map, $V$ must be open. The continuity of the Haar system $\{\nu^x : x \in \gpdH0\}$ follows. \end{proof} \subsection{Examples of Haar System Preserving Morphisms} The concept of modeling $C^*$-algebras over topological spaces (i.e. performing operations on a tensor factor of $C_0(X)$ for some locally compact Hausdorff space $X$) has had a tremendous impact on the theory of $C^*$-algebras, including in classification. In \cite{ENST}, Elliott et al., as a stepping stone to proving that the decomposition rank of $\mathcal{Z}$-stable subhomogeneous $C^*$-algebras is at most $2$, prove that one can locally approximate any unital subhomogeneous $C^*$-algebra by noncommutative CW-complexes which, in the commutative case, have exactly the same topological dimension. Jiang and Su in \cite{JS}, Razak in \cite{R}, Jacelon in \cite{J}, and Evans and Kishimoto in \cite{EK} successfully use interval algebras and the folding thereof to classify large classes of algebras and to create new examples of $C^*$-algebras.
The following proposition is straightforward to prove and provides a powerful tool for modeling groupoids over topological spaces in an analogous way. \begin{Proposition}\label{Exampleshaarmeasurepreserving} Let $G$ be a topological groupoid with Haar system and let $f:X\to Y$ be a proper continuous function of locally compact Hausdorff spaces. The morphism $id_G\times f: G\times X\to G\times Y$ is Haar system preserving (here, we take the Haar system on $G\times X$ to be $\mu^x \times \delta_y$ for $(x,y)\in \G0\times X$). \end{Proposition} For the following examples, let $G$ be a topological groupoid with Haar system of measures. \begin{enumerate} \item Let $X= \{1,2, \hdots, n\}$ be an $n$-point set and notice that $G\times X \to G$ is Haar system preserving by \cref{Exampleshaarmeasurepreserving}, and the pullback morphism $C^*(G) \to \oplus_{i=1}^n C^*(G)$ of the projection is $f\mapsto \oplus_{i=1}^n f$. \item More generally, let $X$ be a compact Hausdorff space and notice that the projection $\pi:G\times X \to G$ is Haar system preserving by \cref{Exampleshaarmeasurepreserving} and observe that the pullback morphism $\pi^*:C^*(G) \to C^*(G)\otimes C(X)$ is given by $f \mapsto f\otimes 1_{C(X)}$. \item Let $X= [0,1]^n$ and let $Y= S^{n-1}$ and let $i:S^{n-1}\to [0,1]^n$ denote the inclusion. Notice that $G\times S^{n-1} \to G\times [0,1]^n$ defined by $id_G\times i$ is Haar system preserving. This example is extremely important for future work of the first author in building groupoid models of noncommutative cell complexes. \end{enumerate} We conclude this section with examples of canonical morphisms of groupoids that have problematic pullbacks. In \cite{AM}, these constructions will be adjusted so that the pullbacks become unital maps between the groupoid $C^*$-algebras, allowing the authors to recover direct limit descriptions of standard $C^*$-algebras and construct new examples.
For each $n$, let $G_n$ denote the smallest groupoid whose object space consists of $n$ points and for which there exists an arrow between any two objects; i.e.\ $G_n$ is the pair groupoid $\{1,2,3,\hdots n\}^2$. For example, the following are depictions of $G_2$ and $G_3$ respectively: \begin{center} \begin{minipage}[c]{.25\linewidth} \begin{tikzpicture} \node (a) at (0, 0) {}; \node (b) at (2, 0) {}; \filldraw (a) circle(0.05); \filldraw (b) circle(0.05); \draw[->] (a) to[out=-150, in=-90] (-0.5, 0) to[out=90, in=150] (a); \draw[->] (b) to[out=-30, in=-90] (2.5, 0) to[out=90, in=30] (b); \draw[->] (a) to[out=20, in=160] (b); \draw[->] (b) to[out =-160, in=-20] (a); \end{tikzpicture} \end{minipage} \hspace{.5in} \begin{minipage}[c]{.25\linewidth} \begin{tikzpicture} \node (a) at (0, 0) {}; \node (b) at (2, 0) {}; \node (c) at (1,1.7) {}; \filldraw (a) circle(0.05); \filldraw (b) circle(0.05); \filldraw (c) circle(0.05); \draw[->] (a) to[out=-150, in=-90] (-0.5, 0) to[out=90, in=150] (a); \draw[->] (b) to[out=-30, in=-90] (2.5, 0) to[out=90, in=30] (b); \draw[->] (c) to[out=120, in=180] (1, 2.3) to[out=0, in=60] (c); \draw[->] (a) to[out=20, in=160] (b); \draw[->] (b) to[out = -160, in=-20] (a); \draw[->] (a) to[out=80, in=-140] (c); \draw[->] (c) to[out=-100, in=40] (a); \draw[->] (b) to[out=100, in=-40] (c); \draw[->] (c) to[out=-80, in=140] (b); \end{tikzpicture} \end{minipage} \end{center} It is straightforward to check that the groupoid $C^*$-algebra $C^*(G_n)$ is the $n \times n$ matrix algebra $\mathbb{M}_n$. The most natural maps to consider here are the projections $G_n\times G_m \to G_n$. Note that one needs to weight the Haar systems appropriately to make the projection map Haar system preserving; even worse, the pullback morphism takes an $n\times n$ matrix $M$ to the matrix $\sum_{1\leq i,j\leq m} e_{i,j}\otimes M$ and hence the pullback map will not be unital.
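The identification $C^*(G_n) \cong \mathbb{M}_n$ can be verified directly from the convolution formula on p. \pageref{ccdefinition}. The following numerical sketch (the helper names are ours, and we assume the counting-measure Haar system on $G_n$, the trivial cocycle, and $0$-indexed objects) checks that convolution on the pair groupoid is exactly matrix multiplication and that the adjoint is the conjugate transpose:

```python
import itertools
import numpy as np

def convolve(f, g, n):
    """Convolution on the pair groupoid G_n = {0,...,n-1}^2 with the
    counting-measure Haar system and trivial cocycle.  For x = (i,k), the
    arrows y in G^{s(x)} = G^k are y = (k,j); then xy = (i,j) and
    y^{-1} = (j,k), so (f*g)(i,k) = sum_j f(i,j) g(j,k): the matrix product."""
    return {(i, k): sum(f[(i, j)] * g[(j, k)] for j in range(n))
            for i, k in itertools.product(range(n), repeat=2)}

def adjoint(f, n):
    """f^*(x) = conj(f(x^{-1})) for the trivial cocycle: conjugate transpose."""
    return {(i, k): np.conj(f[(k, i)])
            for i, k in itertools.product(range(n), repeat=2)}

n = 4
rng = np.random.default_rng(0)
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
f = {(i, j): A[i, j] for i, j in itertools.product(range(n), repeat=2)}
g = {(i, j): B[i, j] for i, j in itertools.product(range(n), repeat=2)}

as_matrix = lambda h: np.array([[h[(i, k)] for k in range(n)] for i in range(n)])
assert np.allclose(as_matrix(convolve(f, g, n)), A @ B)   # convolution = matrix product
assert np.allclose(as_matrix(adjoint(f, n)), A.conj().T)  # adjoint = conjugate transpose
```

This is only a finite-dimensional sanity check; the weighting issue for the projections $G_n\times G_m\to G_n$ discussed above is untouched by it.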
One can also see that any candidate bonding map $G_n\times G_m \to G_n$ will essentially have the same problem, and that problem stems from the fact that the pullback of a function $f$ supported everywhere on $G_n$ will be supported everywhere on $G_n\times G_m$, and hence cannot be of the form $M\mapsto M\otimes 1_{\mathbb{M}_m}$. \section{Inverse Systems: Proof of Theorem A}\label{proofoftheorema} In this section we prove \cref{mainthm.limitexists}, restated here from the introduction for ease of reference: \limitexists* \begin{proof} Let $G = \varprojlim_\alpha G_\alpha$ in the category of locally compact Hausdorff spaces (which exists and projects surjectively onto each of the factors in the inverse system). Let $q_\alpha:G\to G_\alpha$ denote the projections onto the pieces of the inverse system; by assumption, the $q_\alpha$'s are all proper and continuous. It is also easy to see that $G$, as a set, carries a groupoid structure such that the projections $q_\alpha$ are groupoid morphisms. We claim that the inversion and multiplication operations on $G$ are continuous. To see that $m$ is continuous, let $U \subset G$ be open, and let $(x,y)\in m^{-1}(U)$. Because $G$ is a subspace of the product $\Pi_\alpha G_\alpha$, there exist $k \ge 1$, indices $\alpha_1, \ldots, \alpha_k$ and open sets $U_{\alpha_i} \subset G_{\alpha_i}$ such that $ xy\in \bigl(U_{\alpha_1}\times U_{\alpha_2} \times \hdots \times U_{\alpha_k} \times \Pi_{\alpha \notin \{\alpha_i: 1\leq i\leq k\}}G_\alpha\bigr)\cap G \subset U$. As the multiplication\footnote{We use $m$ for the multiplication operation of any groupoid; the meaning should always be clear from context.} $m$ on $G_\alpha$ is continuous, for each $i$ there exists an open set $W_{\alpha_i} \subset \G2_{\alpha_i}$ containing $(q_{\alpha_i}(x),q_{\alpha_i}(y))$ such that $W_{\alpha_i}\subset m^{-1}(U_{\alpha_i})$.
Notice that $(q_\alpha(x), q_\alpha(y))_\alpha \in \bigl(W_{\alpha_1} \times W_{\alpha_2} \times \hdots \times W_{\alpha_k} \times \Pi_{\alpha \notin \{\alpha_i: 1\leq i\leq k\}} \G2_\alpha\bigr)\cap \G2 \subset m^{-1}(U)$. The same method of proof can be used to show that inversion is continuous. It follows that $G$ is a topological groupoid. We define $\sigma$ to be the inverse limit of the maps $\sigma_\alpha$; it is straightforward to see that it is continuous and that it satisfies the cocycle condition. Notice that, for each $x\in \G0$, $\{G_\alpha,\mu^{q_\alpha(x)}_\alpha,q^\beta_\alpha,A\}$ is an inverse limit of topological Borel measure spaces (see Definition 6 of \cite{Choksi}) that satisfies the maximal sequentiality condition (see Definition 4 of \cite{Choksi}) because our bonding maps are proper. By Theorem 2.1 of \cite{Choksi}, there exists a measure $\nu^x$ on $G$, defined on a $\sigma$-subalgebra of the Borel $\sigma$-algebra that contains a basis for the topology of $G$ and contains all compact subsets of $G$, such that the pushforward measure of $\nu^x$ along $q_\alpha$ is equal to $\mu_\alpha^{q_\alpha(x)}$ for each $\alpha$. It is easy to see that $\nu^x$ is positive as it is an inverse limit of positive measures (see the proof of Theorem 2.1 on page 325 of \cite{Choksi}). Using the Riesz-Markov-Kakutani Theorem, there exists a unique regular Radon measure $\mu^x$ such that integration against $\mu^x$ agrees with integration against $\nu^x$ for functions in $C_c(G)$. We claim that $\{\mu^x:x\in \G0\}$ is a Haar system for $G$. Note first that the support of $\nu^x$, and hence also of $\mu^x$, must be contained in $G^x$ by construction. To see that the system $\{\mu^x:x\in \G0\}$ is continuous, let $f\in C_c(G)$ and $\epsilon >0$. Choose $M \ge \sup_{y\in \G0} \mu^y(\mathrm{supp}(f))$ with $M > 0$ ($M$ exists by \cref{compactmaxmeasure}).
Using a standard partition of unity argument, we can find $\alpha\in A$ and $h_\alpha \in C_c(G_\alpha)$ such that for $h = q_\alpha^*(h_\alpha)$ we have $|f - h| \leq \frac{\epsilon}{3M}$ and $\mathrm{supp}(h) \subset \mathrm{supp}(f)$. Let $x\in \G0$ and notice that $U_\alpha := \{y\in \G0_\alpha: \left|\int h_\alpha \, d\mu^y_{\alpha} - \int h_\alpha \, d\mu^{q_\alpha(x)}_{\alpha}\right| < \frac{\epsilon}{3}\}$ is an open subset in $\G0_\alpha$ and hence its pre-image $q_\alpha^{-1}(U_\alpha)$ is an open set in $\G0$ containing $x$. Notice also that if $y \in q^{-1}_\alpha(U_\alpha)$ then we have \begin{align*} \left|\int_G f d\mu^y - \int_G f d\mu^x\right| \leq \left|\int_G f d\mu^y-\int_G h d\mu^y\right| + \left|\int_G h d\mu^y-\int_G h d\mu^x\right| + \left|\int_G h d\mu^x - \int_G f d\mu^x\right|. \end{align*} By the choice of $h$, we have that $\left|\int_G f d\mu^y-\int_G h d\mu^y\right| + \left|\int_G h d\mu^x-\int_G f d\mu^x\right| \leq 2\epsilon/3$ and, by properties of the pushforward measure, we have that $ \left|\int_G h d\mu^y-\int_G h d\mu^x\right| = \left|\int_{G_\alpha} h_\alpha d\mu_\alpha^{q_\alpha(y)}-\int_{G_\alpha} h_\alpha d\mu_\alpha^{q_\alpha(x)}\right| \leq \epsilon/3$. Thus $y \in \{z\in \G0: \left|\int_G fd\mu^z - \int_G fd\mu^x\right|< \epsilon\}$. It therefore follows that the collection $\{\mu^x:x\in \G0\}$ satisfies the continuity assumption in \cref{Haarysituation}. The proof of left invariance follows essentially from the same kind of argument by using the fact that the measures $\mu^x_\alpha$ are left invariant. $G$ is thus a topological groupoid with Haar system of measures and with 2-cocycle, and the projection maps $q_\alpha$ are clearly Haar system preserving and cocycle preserving. It follows from \cref{inducedmorphism} that the pullbacks of the projection mappings induce $I$-norm embeddings from the direct system $\{C_c(G_\alpha,\sigma_\alpha),(q^\alpha_\beta)^*,A\}$ into $C_c(G,\sigma)$.
The fact that $C^*(G,\sigma)$ is the direct limit of the algebras $C^*(G_\alpha,\sigma_\alpha)$ follows from the fact that the union of the images $\bigcup_{\alpha}q_\alpha^*\bigl(C_c(G_\alpha,\sigma_\alpha)\bigr)$ is dense in $C^*(G,\sigma)$. \end{proof} \section{Inverse Approximation of Uniform Spaces}\label{topapprox} The goal here is to describe how uniform spaces (which we will be working with later) can be presented as inverse limits of metrizable spaces. Our approach will be to use covers, so we begin by reviewing some of the relevant terminology and notation. We suggest \cite{Engelking} or \cite{Isbell} as standard references on uniform spaces. Let $\mathcal{U}$ and $\mathcal{V}$ be covers of a set $X$. $\mathcal{U}$ is said to \textbf{refine} $\mathcal{V}$ (equivalently, $\mathcal{V}$ \textbf{coarsens} $\mathcal{U}$), written $\mathcal{U}\prec \mathcal{V}$, if each element of $\mathcal{U}$ is contained in some element of $\mathcal{V}$. \begin{Definition} Let $X$ be a set, $A\subset X$, and $\mathcal{U}$ be a cover of $X$. We define the \textbf{star of $A$ against $\mathcal{U}$}, denoted by $st(A,\mathcal{U})$, to be the set $\cup\{U\in \mathcal{U}: U\cap A\neq \emptyset\}$. \end{Definition} \begin{Remark} The prototypical example of starring is given in metric spaces. If $X$ is a metric space and $\mathcal{U}$ is the cover of $X$ by $\epsilon$-balls then the star of a subset $A\subset X$ against $\mathcal{U}$ is contained in the $2\epsilon$-neighborhood of $A$ (they are equal if $X$ is a geodesic metric space). \end{Remark} \begin{Definition} The \textbf{star of a cover ${\mathcal{U}}$ against another cover ${\mathcal{V}}$} is the cover $$st({\mathcal{U}},{\mathcal{V}}) := \left\{st(U,\mathcal{V}) :U\in {\mathcal{U}}\right\}.$$ A cover $\mathcal{U}$ is said to \textbf{star refine} a cover $\mathcal{V}$ if $st({\mathcal{U}},{\mathcal{U}})$ refines ${\mathcal{V}}$.
We will write $\mathcal{U} \leq \mathcal{V}$ if $\mathcal{U}$ star refines $\mathcal{V}.$ \end{Definition} \begin{Definition} A \textbf{uniform space} is a set $X$ equipped with a family $\Lambda = \{\mathcal{U}_\lambda\}$ of covers (called the \textbf{uniform covers} of $X$) such that \begin{itemize} \item $\Lambda$ is closed under coarsening. \item For any $\mathcal{U}_1, \ldots, \mathcal{U}_n \in \Lambda$ there exists $\mathcal{V} \in \Lambda$ such that $\mathcal{V} \leq \mathcal{U}_j$ for all $j = 1, \ldots, n$. \end{itemize} A uniform space $X$ is called \textbf{Hausdorff} if, in addition, it satisfies: \begin{itemize} \item For each $x,y\in X$ with $x\neq y$ there exists $\mathcal{U} \in \Lambda$ such that there is no $U\in \mathcal{U}$ with $x,y\in U$. \end{itemize} \textbf{Unless stated otherwise, all the uniform spaces we consider in the following will be assumed to be Hausdorff.} The only non-Hausdorff uniform structures we will work with are pseudo-metric (non-metric) uniform structures. A function $f:X\to Y$ between uniform spaces is \textbf{uniformly continuous} if the pre-images of uniform covers are uniform covers. \end{Definition} As the name indicates, such a structure is used to abstract uniform properties of metric spaces, such as uniform continuity and uniform convergence. \begin{Definition}\label{normalsequencespace} A \textbf{normal sequence of covers} of a set $X$ is a sequence $\{\mathcal{U}_n:n \ge 0\}$ of covers of $X$ such that $\mathcal{U}_{n+1} \leq \mathcal{U}_{n}$ for all $n\ge 0$. \end{Definition} It is well known that a normal sequence of covers $\{\mathcal{U}_n:n \ge 0\}$ on $X$ defines a pseudo-metric on $X$. Here is an outline of the procedure: For elements $x,y \in X$, let $n(x,y)$ denote the maximum integer $k$ such that $x$ and $y$ are both contained in an element of $\mathcal{U}_k$, setting $n(x,y)=\infty$ if $x$ and $y$ share an element of $\mathcal{U}_k$ for every $k$. Let $\rho :X\times X \to [0,1)$ be defined by $\rho(x,y) = 2^{-n(x,y)}$, with the convention that $2^{-\infty} = 0$.
We observe that $\rho$ itself is not necessarily a pseudo-metric (because it may not satisfy the triangle inequality), but can be used to define a pseudo-metric $d$ via $d(x,y) = \inf \sum_{i=1}^{n-1} \rho(x_i,x_{i+1})$ where the infimum is taken over all chains $x = x_1, x_2, \hdots, x_n =y$ in $X$; $d$ satisfies the triangle inequality by definition. As shown in the proof of Theorem 14 on page 7 of \cite{Isbell}, the cover by balls of radius 1 determined by the pseudo-metric $d$ refines $\mathcal{U}_1$. By an induction argument, the cover by balls of radius $\frac{1}{2^n}$ refines $\mathcal{U}_{n+1}$. The uniform structure generated by the resulting pseudo-metric is the structure whose uniform covers are precisely those which coarsen $\mathcal{U}_n$ for $n\ge 0$. We will write $\langle X, \{\mathcal{U}_n:n \ge 0\}\rangle$ to denote the resulting uniform structure. \begin{Definition}\label{cofinalrefinement} Let $\{\mathcal{U}_n\}_n$ and $\{\mathcal{V}_n\}_n$ be two normal sequences of covers. We will say that $\{\mathcal{V}_n\}_n$ \textbf{cofinally refines} $\{\mathcal{U}_n\}_n$ if for every $m\ge 0$ there exists $k(m)$ such that $\mathcal{V}_{k(m)}\leq \mathcal{U}_m$. \end{Definition} The following lemma demonstrates the importance of cofinal refinement. \begin{Lemma}\label{cofinallemma} Let $\{\mathcal{U}_n\}$ and $ \{\mathcal{V}_n\}$ be normal sequences of covers. $\{\mathcal{U}_n\}$ cofinally refines $ \{\mathcal{V}_n\}$ if and only if the identity map $id_X:(X,\{\mathcal{U}_n\}) \to (X,\{\mathcal{V}_n\})$ is uniformly continuous. \end{Lemma} If $X$ is a uniform space then it is known that $X$ is the inverse limit of metrizable uniform spaces where the inverse limit is taken in the category of uniform spaces. Indeed, let $\{\mathcal{U}_n:n\ge 0\}$ be a normal sequence of uniform covers of $X$ and let $\langle X,\{\mathcal{U}_n\}\rangle$ denote the resulting pseudo-metric space.
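As a simple illustration of the pseudo-metric construction above, we include the following (artificial) toy example; it is not needed in the sequel.

\begin{Example} Let $X=\{a,b,c\}$, let $\mathcal{U}_0=\{X\}$, and let $\mathcal{U}_n=\{\{a,b\},\{c\}\}$ for $n\ge 1$. Since $st(\{a,b\},\mathcal{U}_n)=\{a,b\}$ and $st(\{c\},\mathcal{U}_n)=\{c\}$, each cover star refines its predecessor, so $\{\mathcal{U}_n:n\ge 0\}$ is a normal sequence. Here $n(a,b)=\infty$, while $n(a,c)=n(b,c)=0$; hence $\rho(a,b)=0$ and $\rho(a,c)=\rho(b,c)=1$. Passing to chains does not decrease these values, so $d=\rho$. The resulting pseudo-metric does not separate $a$ from $b$, and the metric quotient of $(X,d)$ is the two-point space $\{[a],[c]\}$. \end{Example}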
It is not difficult to show that the identity map $id_X:X\to \langle X,\{\mathcal{U}_n\}\rangle$ is uniformly continuous. One can show that if $\{\mathcal{U}_n:n\ge 0\}$ and $\{\mathcal{V}_n:n\ge 0\}$ are normal sequences of uniform covers then there exists a normal sequence of uniform covers that cofinally refines both. One thus has an inverse system indexed by the normal sequences of covers and ordered by cofinal refinement. It is not hard to show that the inverse limit of this system is precisely the given uniform space $X$. In order to show that $X$ is a limit of metrizable spaces, we modify the construction by taking, in each pseudo-metric space, the quotient that identifies points which cannot be distinguished by the pseudo-metric. A uniform structure on a set $X$ generates a topology in a way that is analogous to the way metrics define topologies. One defines it by declaring a set $A\subset X$ to be a neighborhood of a point $x\in X$ if there exists a uniform cover $\mathcal{U}$ of $X$ such that $st(\{x\},\mathcal{U}) \subset A$. We say that a collection of covers is a \textbf{base} for a uniform structure if it forms a uniform structure when one takes all coarsenings of covers in the given collection. \begin{Definition}\label{paracompactness}(see Theorems 5.1.9 and 5.1.12 of \cite{Engelking}) A Hausdorff topological space $X$ is said to be \textbf{paracompact} if it satisfies any of the following equivalent conditions: \begin{enumerate} \item Every open cover of $X$ has a locally finite open refinement. \item For every open cover $\mathcal{U}$, there exists a partition of unity whose carriers refine $\mathcal{U}$. \item\label{starthatspace} Every open cover admits an open star refinement. \item \label{uniform} The collection of all open covers is a base for a uniform structure that generates the given topology. \end{enumerate} \end{Definition} Condition \cref{uniform} says that paracompact spaces can be viewed as uniform spaces.
In fact, they are the largest class of topological spaces which can be endowed with a uniform structure such that uniform concepts correspond directly to topological ones.\footnote{To address the possible objection that completely regular spaces should be that class, note that the collection of all open covers does not necessarily define a base for a uniform structure. Therefore, one cannot guarantee that every continuous function between completely regular spaces is uniformly continuous with respect to any uniform structure that generates their topology.} Recall that a Hausdorff topological space is said to be \textbf{Lindel\"of} if every open cover admits a countable open refinement. It is well known that every Lindel\"of space is paracompact and hence can be viewed as a uniform space. It is easy to show that, for locally compact spaces, Lindel\"of is just a different guise for $\sigma$-compactness. Just to stress this (well-known) fact, which we will rely on in the future, we highlight it as a lemma: \begin{Lemma} Let $X$ be a locally compact Hausdorff space. $X$ is Lindel\"of if and only if $X$ is $\sigma$-compact. \end{Lemma} \begin{Lemma}\label{coverorfunction} Let $X$ be a Lindel\"of space and let $\{X_\alpha,p^\alpha_\beta,A\}$ be an inverse approximation of $X$ by metric spaces. For every $\alpha \in A$ denote by $q_\alpha$ the projection map $X \to X_\alpha$, and for each $n$ let $\mathcal{V}^\alpha_n$ be the cover of $X$ given by the pre-images under $q_\alpha$ of the cover of $X_\alpha$ by $\frac{1}{2^n}$-balls. The following conditions are equivalent: \begin{enumerate}[label=(\arabic*)] \item \label{item.coverrefinement} For every normal sequence $\{\mathcal{U}_n:n\ge 0\}$ of open covers for $X$, there exists $\alpha$ such that the normal sequence $\{\mathcal{V}^\alpha_n : n \ge 0\}$ cofinally refines $\{\mathcal{U}_n:n\ge 0\}$.
\item \label{item.fnpullback} For every continuous function $f:X\to Y$ where $Y$ is a separable metric space, there exists $\alpha$ such that $f$ is a pullback of a uniformly continuous function $f_\alpha:X_\alpha\to Y$; i.e.\ $f = f_\alpha \circ q_\alpha$. \end{enumerate} \end{Lemma} \begin{proof} Assume that condition \cref{item.coverrefinement} holds. Let $f:X\to Y$ be a continuous function (not necessarily proper) where $Y$ is just some separable metric space. For each $n \ge 0$, let $\mathcal{W}_n$ be the pre-image under $f$ of the cover of $Y$ by balls of radius $2^{-n}$. Notice that $\{\mathcal{W}_n\}$ is a normal sequence of covers of $X$. By assumption, there exists $\alpha\in A$ such that $\{\mathcal{W}_n\}$ is cofinally refined by $\{\mathcal{V}_n^\alpha\}$. Notice that if $q_\alpha(x) = q_\alpha(y)$ then it must be the case that $f(x) = f(y)$. We define a continuous function $f_\alpha:X_\alpha \to Y$ by $f_\alpha(z) = f(x)$ for any $x \in q_\alpha^{-1}(z)$. It is straightforward to show that $f_\alpha$ is uniformly continuous on $X_\alpha$. Evidently, we have that $f = f_\alpha \circ q_\alpha$. To show that \cref{item.fnpullback} implies \cref{item.coverrefinement}, note that we may assume that every normal sequence consists of countable covers, as the Lindel\"of property guarantees that these are cofinal (under the order of cofinal refinement) in the poset of normal sequences of covers. Every normal sequence $\{\mathcal{U}_n\}$ consisting of countable covers induces a separable metric space. By assumption, the projection $p$ of $X$ on the induced metric space $\overline{X}$ has the property that, for some $\alpha\in A$, $p$ is the pullback along $q_\alpha$ of a uniformly continuous function $p_\alpha:X_\alpha\to \overline{X}$. It follows that the pre-images under $p_\alpha$ of the covers of $\overline{X}$ by $\frac{1}{2^n}$-balls form a normal sequence on $X_\alpha$.
Because the normal sequence of covers of $X_\alpha$ by balls of radius $\frac{1}{2^n}$ is cofinal (under cofinal refinement) in the collection of normal sequences of covers on $X_\alpha$, it follows that the normal sequence of covers of $X_\alpha$ by balls of radius $\frac{1}{2^n}$ cofinally refines the pre-image of the covers of $\overline{X}$ by balls of radius $\frac{1}{2^n}$. Hence, $\{\mathcal{V}^\alpha_n\}$ cofinally refines $\{\mathcal{U}_n\}$. \end{proof} \begin{Definition}[Definition/Notation]\label{exhaustion} Let $X$ be a locally compact and $\sigma$-compact space. By an \textbf{exhaustion of $X$ by compact subsets} we will mean a nested collection $\{K_n:n\ge 0\}$ of compact neighborhoods such that $\bigcup_{n\ge 0} int(K_n) = X$ (where $int(A)$ denotes the interior of a subset $A\subset X$). \end{Definition} \begin{Proposition}\label{propermapping} Let $X$ be a locally compact and $\sigma$-compact space and let $\{\mathcal{U}_n:n \ge 0\}$ be a normal sequence of locally finite open covers by relatively compact sets. Let $d(x, y)$ be the resulting pseudo-metric on $X$ (see the discussion following \cref{normalsequencespace}). Let $\overline{X}$ denote the metric quotient of $(X,d)$ and let $q:(X,d)\to \overline{X}$ denote the quotient map. The following properties are satisfied: \begin{enumerate}[label=(\arabic*)] \item $id_X : X \to (X,d)$ is a proper map. Moreover, the composition of $id_X$ with the metrizable quotient map $q:(X,d) \to \overline{X}$ is proper. \item The quotient map $q:(X,d) \to \overline{X}$ is an open map; in fact, for every $x\in X$, we have $q(B(x,\epsilon)) = B(\overline{x},\epsilon)$. \item \label{nesting} $st(x, \mathcal{U}_{n + 1}) \subset B(x, \frac{1}{2^n}) \subset st(x, \mathcal{U}_n)$, where $B(x, \frac{1}{2^n})$ is the ball around $x$ of radius $\frac{1}{2^n}$ (measured with respect to the pseudo-metric $d$).
\end{enumerate} \end{Proposition} \begin{proof} The proofs of (2) and (3) follow directly from the definitions, except for the inclusion $B(x, \frac{1}{2^n}) \subset st(x, \mathcal{U}_n)$, which follows by applying induction to the argument given in the proof of Theorem 1 in \cite{Isbell}. To see (1), notice that the cover of $X$ by balls of radius $1$ refines $\mathcal{U}_0$ and therefore the balls of radius 1 in $X$ are relatively compact subsets of $X$. If $K_1$ is a compact subset of $(X, d)$ then it can be covered by finitely many balls of radius 1; hence $id_X^{-1}(K_1)$ is contained in a finite union of relatively compact subsets of $X$ and is therefore compact ($id_X^{-1}(K_1)$ is closed by the continuity of $id_X$). A similar argument works to show that $q : (X, d) \to \overline{X}$ is proper, based on the fact that the image of the 1-balls in $(X, d)$ must also be relatively compact in $\overline{X}$, and the pre-image of a ball of radius $1$ of $\overline{X}$ is exactly a ball of radius $1$ in $(X, d)$. The composition of two proper maps is proper, concluding the argument. \end{proof} \section{Approximations of \texorpdfstring{$\sigma$}{sigma}-compact groupoids: \texorpdfstring{\\}{}Proof of Theorem B}\label{guts} We want to apply the ideas of \cref{topapprox} to groupoids in such a way that the resulting quotient object also has a groupoid structure. To that end, we have to cover the objects and arrows separately by normal sequences; the interplay between the arrow covers and the object covers is a bit subtle, as can be seen from \cref{example.needsintersectionrefinement} and the technical conditions in \cref{Coverings}. 
\begin{Example} \label{example.needsintersectionrefinement} Consider the groupoid $G$ pictured below, consisting of 4 objects $\{x_1, x_2, x_3, x_4\}$ and all possible arrows $g_{ij}$ from $x_i$ to $x_j$ (in order to make the picture more readable, only a few arrow labels are shown): \begin{center} \begin{tikzpicture} \node (e1) at (0, 0){}; \node (e2) at (2, 0){}; \node (e3) at (2, -2){}; \node (e4) at (0, -2){}; \filldraw (e1) circle(0.04) node[anchor=north]{}; \filldraw (e2) circle(0.04) node[anchor=north]{}; \filldraw (e3) circle(0.04) node[anchor=north]{}; \filldraw (e4) circle(0.04) node[anchor=north]{}; \draw[->] (e1) to[out=-150, in=-90] (-.5, 0) node[anchor=east]{$x_1$} to[out=90, in=150] (e1); \draw[->] (e2) to[out=-30, in=-90] (2.5, 0) node[anchor=west]{$x_2$} to[out=90, in=30] (e2); \draw[->] (e3) to[out=-30, in=-90] (2.5, -2) node[anchor=west]{$x_3$} to[out=90, in=30] (e3); \draw[->] (e4) to[out=-150, in=-90] (-.5, -2) node[anchor=east]{$x_4$} to[out=90, in=150] (e4); \draw[->] (e1) to[out=15, in = 165]node[pos=.5, yshift=2mm]{$g_{12}$} (e2); \draw[->] (e2) to[out=-165, in = -15] (e1); \draw[->] (e4) to[out=-15, in = -165]node[pos=.5, yshift=-2mm]{$g_{43}$} (e3); \draw[->] (e3) to[out=165, in = 15] (e4); \draw[->] (e1) to[out = -105, in = 105](e4); \draw[->] (e4) to[out=75, in = -75] (e1); \draw[->] (e2) to[out = -105, in = 105](e3); \draw[->] (e3) to[out=75, in = -75] (e2); \draw[->] (e1) to[out = -30, in = 120] (e3); \draw[->] (e3) to[out = 150, in = -60] (e1); \draw[->] (e2) to[out = -150, in = 60] (e4); \draw[->] (e4) to[out = 30, in = -120] (e2); \end{tikzpicture} \end{center} We endow $G$ with the discrete topology. Consider the following open sets on $G$: $U_{00} = \{g_{11}, g_{33}\}$ and $U_{ij} = \{g_{ij}\}$ for all possible combinations of $i,j$ not equal to $1,1$ and $3,3$. 
For $n \geq 1$, the collection $\mathcal{U}_{n}^1$ consisting of all these sets $\{U_{ij}\}$ (including $U_{00}$) is then a cover of the arrow space; $\mathcal{U}_{n}^0 = \{U_{00}, U_{22}, U_{44}\}$ is the restriction of this cover to the object space, and a cover of the object space in its own right (note that $x_i = g_{ii}$). Since the $\mathcal{U}_n$'s form a normal sequence of covers, we can apply the construction of \cref{topapprox} to get a quotient space in which $x_1$ and $x_3$ are identified (since $U_{00}$ appears in all covers). However, it should be apparent that if we want to place a groupoid structure on the resulting object, we have a problem with multiplication: $[g_{12}]$ has source $x_1$ and $[g_{43}]$ has target $x_3$, so since $x_1 \sim x_3$ we would expect the two to be composable; however, it is clear that it is not possible to define $[g_{12}] \cdot [g_{43}]$ in a way that is compatible with the structure on $G$. The concept of intersection separating refinement which we introduce below (\cref{intersectionseparatingrefinement}) is needed in order to ensure that this kind of situation cannot occur and multiplication on the quotient is well-defined. To check if the condition holds for our particular example, we construct a new collection of sets, one for each element $g_{ii}$, consisting of the intersection of all sets in $s(\mathcal{U}_n^1)$ which contain $g_{ii}$; this leads us to the collection of sets $U_{g_{ii}} = \{g_{ii}\}$. Then $\mathcal{V} = \{U_{g_{ii}} : i = 1, \ldots,4\}$ is a cover of the object space, and one condition in the intersection separating requirement is that $\mathcal{U}_{n + 1}^0$ should refine $\mathcal{V}$; this is clearly not the case, because $U_{00}$ is contained in $\mathcal{U}_{n + 1}^0$ but is not a subset of any set in $\mathcal{V}$. In other words, the normal sequence described here would be disqualified from consideration by condition \cref{intersectionseparatingcondition} of \cref{Coverings}. 
\end{Example} By Proposition I.4 from \cite{Westman}, if $G$ has a continuous Haar system of measures then the target and source maps are necessarily open. Our proof of \cref{Coverings} does not need $G$ to have a Haar system, but does need the target and source maps to be open. \begin{Notation}\label{groupoidcover} If $\mathcal{U}$ is a cover of a set $X$ and $A\subset X$, we write $\mathcal{U}|_{A}$ for the collection of elements of $\mathcal{U}$ that intersect $A$. Let $G$ be a topological groupoid. An \textbf{open cover} $\mathcal{U}$ of $G$ will consist of a pair $\{\mathcal{U}^0,\, \mathcal{U}^1\}$ of open covers of $\G0$ and $\G1$, respectively. \end{Notation} \begin{Definition}\label{intersectionseparatingrefinement} Let $\mathcal{U}$ be a \emph{finite} open cover of a locally compact Hausdorff space $X$. For each $x\in X$, let $U_x$ be the intersection of all elements of $\mathcal{U}$ that contain $x$. We define the \textbf{intersection refinement of $\mathcal{U}$} to be the cover $\mathcal{U}' = \{U_x:x\in X\}$ (see \cite{CM} for a similar concept). Notice that $\mathcal{U}'$ must also be a finite cover. Define an equivalence relation on $X$ by $x\sim y$ if $U_x=U_y$, and let $[[x]]$ denote the equivalence class of $x\in X$ under this equivalence relation. It is possible that the set $[[x]]$ is neither open nor closed in $X$. We say a cover $\mathcal{V}$ is an \textbf{intersection separating refinement of $\mathcal{U}$}, denoted $ \mathcal{V} \leq_{int} \mathcal{U}$, if: \begin{itemize} \item $\mathcal{V}$ is a refinement of the intersection refinement $\mathcal{U}'$ described earlier, and \item whenever $\overline{[[x]]} \cap \overline{[[y]]} = \emptyset$ for some $x, y \in X$ then no open set in $\mathcal{V}$ contains elements from both sets $\overline{[[x]]}$ and $\overline{[[y]]}$ simultaneously.
\end{itemize} Such a refinement always exists, because the equivalence relation defines a finite partition of $X$ and the normality of $X$ guarantees that we can always separate two closed disjoint sets by disjoint open sets. \end{Definition} \begin{Definition}\label{groupoidexhuastion} Suppose $G$ is a $\sigma$-compact groupoid and the source and target maps are open. Let $\{K_n\}$ be an exhaustion of $G$ by compact sets as in \cref{exhaustion}. Then $\{K_n':= K_n\cup t(K_n)\cup s(K_n)\}$ is also an exhaustion of $G$ by compact sets, and $K_n'|_{\G0}$ is an exhaustion of $\G0$ (both satisfying the requirement of \cref{exhaustion}). We call an exhaustion of a $\sigma$-compact groupoid that has been obtained in this manner a \textbf{groupoid exhaustion}. By construction, $K_n'$ has the property that $s(K_n'), t(K_n') \subset K_n'$. \end{Definition} The key step of our groupoid approximation argument is the following result, which gives a method for choosing a normal sequence of covers with very specific properties. The consequence of these properties is explained in \cref{CoveringsExplanation}. \begin{Proposition}\label{Coverings} Let $G$ be a $\sigma$-compact groupoid with open source and target maps and let $\{K_n\}$ be a groupoid exhaustion of $G$ by compact sets (as in \cref{groupoidexhuastion}). Let $\{\mathcal{W}_n:n\ge0\}$ be a given normal sequence (see \cref{normalsequencespace}) of open covers of $G$ (see \cref{groupoidcover}). There exists a normal sequence of countable and locally finite open covers $\{\mathcal{U}_n\}_{n\ge 0}$ of $G$ such that for each $n \geq 1$ we have: \begin{enumerate}[label=(\arabic*)] \item \label{proper} $\mathcal{U}_n$ consists of relatively compact open sets. \item \label{refines} $\mathcal{U}_n\leq \mathcal{W}_n$. \item \label{stinclusion} $ s(\mathcal{U}^1_{n}), t(\mathcal{U}^1_{n}) \leq \mathcal{U}^0_n$. 
\item \label{intersectionseparatingcondition} $\restrict{\mathcal{U}^0_{n+1}}{K_{n}} \leq_{ int}\; t(\mathcal{U}_n^1|_{K_n})\cup s(\mathcal{U}_n^1|_{K_n})$. \item \label{subspacecondition} $\mathcal{U}_{n+1}^0 \leq \mathcal{U}_n^1|_{\G0}$. \item \label{contmult} $m(\restrict{\mathcal{U}^1_{n + 1}}{K_n}, \restrict{\mathcal{U}^1_{n + 1}}{K_n}) \leq \mathcal{U}^1_{n}$. \item \label{inverse} $(\mathcal{U}^1_{n + 1})^{-1} \leq \mathcal{U}^1_{n}$. \end{enumerate} If additionally $G$ is equipped with a Haar system of measures $\{\mu^x:x\in \G0\}$, then the sequence of covers can be chosen such that, for each $n \ge 0$, we have: \begin{enumerate}[resume*] \item \label{haaryfiber} Fix $\{f^n_j:j \in J_n\}$ a finite partition of unity of $K_n$ whose carriers refine $\mathcal{U}^1_n$. Let $(\lambda_j)_j \subset {\mathbb C}$ be any sequence with $|\lambda_j| < n$. For each open set $U\in \mathcal{U}^1_{n+1}$ and for each $x,y\in s(U)$ we have \begin{equation} \label{eqn.approximatespartition} \left|\mathlarger{\int}_{G} \left(\sum_{j} \lambda_j f^n_j\right)\, d\mu^x - \mathlarger{\int}_{G} \left(\sum_{j} \lambda_j f^n_j\right)\, d\mu^y\right| < \frac{1}{n}. \end{equation} \end{enumerate} Moreover, if $\sigma : \G2 \to {\mathbb T}$ is a 2-cocycle, we choose $\{\mathcal{V}_n\}_n$ a normal sequence of finite open covers of ${\mathbb T}$ such that $\sup_{V\in \mathcal{V}_n}diam(V) \to 0$ as $n \to \infty$ and we can require that the sequence $\{\mathcal{U}_n\}$ satisfies: \begin{enumerate}[resume*] \item \label{cocyclecond} $\sigma(\restrict{\mathcal{U}^1_n}{K_n}, \restrict{\mathcal{U}^1_n}{K_n}) \leq \mathcal{V}_n$. \end{enumerate} \end{Proposition} \begin{proof} Let $\mathcal{U}_0^0$ be any relatively compact open cover of $\G0$ which star refines $\mathcal{W}_0^0$, and $\mathcal{U}_0^1$ any relatively compact open cover of $\gpdG1$ which star refines $\mathcal{W}_0^1$, $s^{-1}(\mathcal{U}_0^0)$ and $t^{-1}(\mathcal{U}_0^0)$ simultaneously. 
This ensures $\mathcal{U}_0$ satisfies \cref{proper}-\cref{stinclusion}; conditions \cref{intersectionseparatingcondition}-\cref{haaryfiber} deal with the interplay of consecutive covers in the normal sequence, so do not apply to $\mathcal{U}_0$. If we need to satisfy \cref{cocyclecond} as well, then we modify the construction of $\mathcal{U}^1_0$ as described later in this proof. Since $\mathcal{U}_n$ will be chosen to refine $\mathcal{U}_0$ for all $n \geq 1$, the fact that the sets in $\mathcal{U}_0$ are relatively compact ensures that the sets in each $\mathcal{U}_n$ are also relatively compact, hence we do not need to consider condition \cref{proper} in the rest of the construction. Now, assume $\mathcal{U}_n$ has been chosen for some $n\ge 0$. To construct $\mathcal{U}_{n + 1}^0$ we need to satisfy \cref{intersectionseparatingcondition} and \cref{subspacecondition}. Since $s$ and $t$ are open maps, $s(\restrict{\mathcal{U}_n^1}{K_n}) \cup t(\restrict{\mathcal{U}_n^1}{K_n})$ is a finite open cover of $K_n \cap \gpdG0$, so we can apply the construction described in \cref{intersectionseparatingrefinement} to get an intersection separating refinement $\mathcal{V}$ of it. Choose any star refinement of $\restrict{\mathcal{U}_n^1}{\gpdG0}$ and take $\mathcal{U}_{n + 1}^0$ to be a refinement of it such that $\restrict{\mathcal{U}_{n + 1}^0}{K_n}$ also refines $\mathcal{V}$. We construct separately, for $i = \ref{stinclusion}, \ref{contmult}, \ldots, \ref{cocyclecond}$, open covers $\mathcal{V}_i$ of $\gpdG1$ such that $\mathcal{V}_i$ satisfies condition $(i)$ (that is, $s(\mathcal{V}_3), t(\mathcal{V}_3) \leq \mathcal{U}_{n + 1}^0$, and so on for other values of $i$).
Then there exists a locally finite open cover (which we will take to be our $\mathcal{U}_{n + 1}^1$) which star-refines $\mathcal{U}_n^1$, $\mathcal{W}_n^1$ and all the $\mathcal{V}_i$'s simultaneously (the existence of such a cover is a consequence of paracompactness); $\mathcal{U}_{n + 1}$ would thus be a star-refinement of $\mathcal{U}_n$ satisfying all of \ref{refines}-\ref{cocyclecond}. Covers satisfying conditions \ref{stinclusion} and \ref{inverse} are easily found using the continuity of the groupoid operations, so we only have to verify that we can find covers $\mathcal{V}_6$, $\mathcal{V}_8$, and $\mathcal{V}_9$. We now proceed to choose a cover $\mathcal{V}_6$ satisfying $m(\restrict{\mathcal{V}_6}{K_n},\restrict{\mathcal{V}_6}{K_n})\leq \mathcal{U}^1_n$. Denote by $K$ the compact set $(K_n\times K_n) \cap \G2$. Let $\mathcal{W}^{(2)}$ be a finite open refinement of $\restrict{m^{-1}(\mathcal{U}^1_n)}{K}$ consisting of open sets of the form $(U\times V) \cap \G2$ where $U,V\subset \G1$ are open sets (such sets form a basis for the topology on $\G2$), and let $\mathcal{W}$ be the collection of sets $U \times V$ such that $(U \times V) \cap \G2$ appears in $\mathcal{W}^{(2)}$. Let $N = \cup_{W \in \mathcal{W}} W$ (an open neighbourhood of $\G2$). Extend $\mathcal{W}$ to a cover $\widetilde{\mathcal{W}}$ of $K_n \times K_n$ by adding in the following open sets: for each $(g, h) \in (K_n \times K_n) \setminus \G2$, pick a neighbourhood $U \times V$ of $(g, h)$ such that $U, V \subset G$ are open and $(U \times V) \cap \G2 = \emptyset$ (we can do this since $\G2$ is a closed set), and choose from these sets a finite subcover of the compact set $(K_n \times K_n) \setminus N$. Let $\mathcal{F}$ be the collection of open sets $U$ for which there exists a $V$ such that either $U \times V$ or $V \times U$ is in the cover $\widetilde{\mathcal{W}}$. Note that $\mathcal{F}$ is a finite cover of $K_n$. 
For each $g\in K_n$, let $$U_g = \bigcap\limits_{U\in \mathcal{F},\, g\in U} U,$$ and $\mathcal{V} =\{U_g:g\in K_n\}$. Since $\mathcal{F}$ is a finite collection, so is $\mathcal{V}$; moreover, $\mathcal{V}$ is an open cover of $K_n$ (in fact, $\mathcal{V}$ is the intersection refinement of $\mathcal{F}$, see \cref{intersectionseparatingrefinement}). We claim that $(\mathcal{V} \times \mathcal{V}) \cap \G2$ refines $\mathcal{W}^{(2)}$. Consider $U_g \times U_h$ for some $U_g, U_h \in \mathcal{V}$. If $(g, h) \in U \times V$ for some $U \times V \in \mathcal{W}$ then by construction $U_g \times U_h \subset U \times V$, whence $(U_g \times U_h) \cap \G2 \subset (U \times V) \cap \G2 \in \mathcal{W}^{(2)}$. On the other hand, if $(g, h) \in (K_n \times K_n) \setminus N$, then $(g, h) \in (U \times V)$ for some $U \times V \in \widetilde{\mathcal{W}} \setminus \mathcal{W}$, in which case $U_g \times U_h \subset U \times V \subset (K_n \times K_n) \setminus \G2$, and so $(U_g \times U_h) \cap \G2 = \emptyset$. It follows that $(\mathcal{V} \times \mathcal{V}) \cap \G2 \prec \mathcal{W}^{(2)} \prec \restrict{m^{-1}(\mathcal{U}_n^1)}{K}$. So $\mathcal{V}$ satisfies the multiplication condition on $K_n$, and to finish we just have to extend $\mathcal{V}$ to a cover of $G$. We do this by adding in any cover of $G \setminus K_n$ by relatively compact sets, and we define $\mathcal{V}_6$ to be the resulting cover. Note that the construction of $\mathcal{V}_9$ can be performed in a similar manner to that of $\mathcal{V}_6$, using the continuity of $\sigma : \G2 \to {\mathbb T}$ and the chosen normal sequence of covers for ${\mathbb T}$; as a consequence, we omit the details. Suppose now that $G$ is equipped with a Haar system of measures and we also want to satisfy condition \cref{haaryfiber}. 
By the continuity of the Haar system, $x \mapsto \int_G f^n_j\,d\mu^x$ is continuous for each $j$, so using this and the triangle inequality we can find a cover $\mathcal{V}_8^0$ of $\G0$ such that for $x, y \in V \in \mathcal{V}_8^0$ and $|\lambda_j| < n$ we have $$\left|\int_G \left(\sum\limits_j \lambda _j f_j^n\right)\,d\mu^x - \int_G \left(\sum\limits_j \lambda_j f_j^n\right)\,d\mu^y\right| < \frac{1}{n}.$$ Choose $\mathcal{V}_8 = s^{-1}(\mathcal{V}_8^0)$. Use the covers $\mathcal{V}_i$ and $\mathcal{W}_n$ to construct a new cover $\mathcal{U}_{n + 1}$ as already described, concluding the proof. \end{proof} \begin{Definition} We call a normal sequence of covers which satisfies properties \cref{proper}-\ref{inverse} in \cref{Coverings} a \textbf{groupoid normal sequence} for $G$ (we include condition \ref{haaryfiber} as well if $G$ has a Haar system of measures, and condition \ref{cocyclecond} if there is an associated 2-cocycle). \end{Definition} \begin{Remark} \label{CoveringsExplanation} In \cref{Coverings} we constructed a normal sequence which we constrained by a list of requirements. We briefly explain the significance of each of these properties towards the construction of the approximation groupoid $G_\alpha$. \begin{itemize} \item The normal sequence $\{\mathcal{W}_n\}$ appearing in the hypotheses would usually come from a sequence of functions $\{f_n\}$, as explained in \cref{coverorfunction}. This normal sequence is used in \cref{refines}, which says we can choose $\{\mathcal{U}_n\}$ such that the functions $f_n$ induce uniformly continuous functions $\overline{f}_n$ on the resulting approximation groupoid $G_\mathcal{U}$, in such a way that each $f_n$ is the pullback of $\overline{f}_n$. This technique will be used to show $C_c(G)$ is the limit of continuous compactly supported functions on the approximations, see \cref{sexyapproximationtheorem}. \item \cref{proper} guarantees that the quotient map is a proper map (as shown in \cref{propermapping}). 
\item \cref{stinclusion} ensures that, in the quotient groupoid resulting from such a normal sequence, the target and source maps are well defined and continuous (as shown in \cref{quotientconstruction}). \item \cref{intersectionseparatingcondition} guarantees that our quotients are bona fide groupoids and not inverse semigroupoids (i.e. if $g$ and $h$ are arrows in the quotient with the target of $g$ equaling the source of $h$ we will have that $hg$ is an arrow, see Claim 3 in the proof of \cref{quotientconstruction}). \item \cref{subspacecondition} gives us that the object space topology induced by the covers $\{\mathcal{U}^0_n\}$ is the subspace topology defined by the inclusion $\gpdG0 \subseteq \gpdG1$ with the topology on $\gpdG1$ induced by the normal sequence $\{\mathcal{U}^1_n\}$. \item \cref{contmult} makes sure that the partial multiplication on the quotient groupoid is well-defined and continuous (as shown in \cref{quotientconstruction}). \item \cref{inverse} gives us that the inversion map is a homeomorphism of our quotient (as shown in \cref{quotientconstruction}). \item \cref{haaryfiber} allows us to push the Haar system to a Haar system on the quotient groupoid (as shown in \cref{Haarsystemapproximation}). \item \cref{cocyclecond} ensures we can push the cocycle $\sigma$ on $G$ to a cocycle $\sigma_\alpha$ on the quotient $G_\alpha$ (as shown in \cref{Haarsystemapproximation}). \item \cref{haaryfiber} and \cref{cocyclecond} guarantee that the induced map on convolution algebras $C_c(G_\alpha,\sigma_\alpha) \hookrightarrow C_c(G,\sigma)$ is a $*$-embedding (as shown in \cref{Haarsystemapproximation}). \end{itemize} \end{Remark} The following is well known and will be used throughout the proofs in this section. \begin{Fact}\label{closed} Let $X$ and $Y$ be locally compact Hausdorff spaces. If $f:X\to Y$ is a proper and continuous function then $f$ is closed. 
\end{Fact} \begin{Theorem} \label{quotientconstruction} Suppose $G$ is a $\sigma$-compact groupoid with open source and target maps. For any normal sequence $\{\mathcal{W}_n:n\ge 0\}$ of open covers (see \cref{normalsequencespace}) there exists a pseudo-metric structure $(G,d)$ on $G$ such that the induced metrizable quotient $G_\alpha$ of $(G,d)$ is a second countable, locally compact, Hausdorff groupoid. Moreover, each morphism in the chain $\begin{tikzcd} G\arrow{r}{id_G} & (G,d) \arrow{r}{p_\alpha} & G_\alpha \end{tikzcd}$ is proper and continuous, and the pre-image of the cover of $G_\alpha$ by balls of radius $\frac{1}{2^n}$ refines $\mathcal{W}_n$. \end{Theorem} \begin{proof} Let $\{K_n\}$ be a groupoid exhaustion by compact neighborhoods of $G$ (see \cref{groupoidexhuastion}), and use \cref{Coverings} to construct a groupoid normal sequence $\{\mathcal{U}_n\}_{n\ge 0}$ satisfying \cref{proper}-\cref{inverse} for $G$. Let $d^i:\Gi \times \Gi \to [0,\infty)$ be the pseudo-metric induced on $\Gi$ by the sequence of covers $\{\mathcal{U}^i_n:n\ge 0\}$ for $i =0,1$ (see \cref{topapprox} for details). Recall that we defined an equivalence relation $x\sim^i y$ if and only if for every $n \ge 0$ there exists $U\in \mathcal{U}_n^i$ such that $x,y\in U$, and that the Hausdorff (and hence metrizable) quotient of $(G, d)$ is the quotient by this relation. Denote the equivalence class of $x\in G^{(i)}$ by $[x]^i$. Let $\Gi_\alpha$ be the Hausdorff quotient of the pseudo-metric space $(\Gi, d^i)$, and denote by $p^i_\alpha$ the quotient map $(\Gi, d^i) \to \Gi_\alpha$. Let $q_\alpha^i:\Gi\to \Gi_\alpha$ be equal to $p^i_\alpha \circ id_G$. By \cref{propermapping}, $q_\alpha^i$ is a proper and continuous map; as a consequence, $G_\alpha$ is locally compact. We still need to check that $G_\alpha$ has an induced groupoid structure and, furthermore, that $G_\alpha$ with the induced topology is a second countable topological groupoid.
\vskip10pt\noindent \textbf{Claim:} Condition \cref{stinclusion} for a groupoid normal sequence (see \cref{Coverings}) guarantees that $g\sim^1 h$ implies both $s(g) \sim^0 s(h)$ and $t(g)\sim^0 t(h)$. Since $g\sim^1 h$, we have that, for every $n \in {\mathbb N}$, there exists $U\in \mathcal{U}^1_n$ such that $g,h\in U$; then, as $s(U)$ is contained in an element $V$ of $\mathcal{U}_n^0$ by condition \cref{stinclusion}, we get $s(g), s(h) \in V \in \mathcal{U}_n^0$. Hence $s(g)\sim^0 s(h)$, and similarly $t(g)\sim^0 t(h)$. This shows that we can define source and target maps from $\G1_\alpha$ to $\G0_\alpha$, which we will also call $s$ and $t$ (as the meaning will be clear from context), as follows: $$s([g]^1) = [s(g)]^0 \text{ and } t([g]^1) = [t(g)]^0.$$ \vskip10pt\noindent \textbf{Claim:} $s,t : \G1_\alpha\to \G0_\alpha$ are continuous. We show this for $s$; the proof that $t$ is continuous is similar. Consider $V$ a relatively compact open neighborhood of $[x]^0 \in \G0_\alpha$. Let $g \in \G1$ be any element of $s^{-1}(x)$; we will find a neighbourhood $V_1$ of $[g]^1$ in $\G1_\alpha$ such that $s(V_1) \subseteq V$. Since $(p_\alpha^0)^{-1}(V)$ is open in $(\gpdG0, d^0)$, by \cref{propermapping} there exists an $l \in {\mathbb N}$ such that $W = st(x,\mathcal{U}_l^0)$ satisfies $W \subset (p^0_\alpha)^{-1}(V)$. Let $U = st(g,\mathcal{U}_l^1)$. Any $U' \in \mathcal{U}_l^1$ which contains $g$ necessarily has $x \in s(U')$; by assumption \cref{stinclusion} on the covers, there exists some $W' \in \mathcal{U}^0_l$ for which $s(U') \subset W'$, and $x \in W'$ implies $W' \subset W$. Since $U$ is the union of all such sets $U'$ it follows that $s(U) \subset W$, where by construction $W$ was chosen such that $p_\alpha^0(W) \subset V$. Applying \cref{propermapping} again, $U_1 := B(g, \frac{1}{2^l}) \subset U$.
Then $V_1 := p_\alpha^1(U_1)$ is an open set in $\G1_\alpha$ (since $p_\alpha^1$ is an open map) and we have $$s(V_1) = s(p_\alpha^1(U_1)) = p_\alpha^0(s(U_1)) \subset p_\alpha^0(s(U)) \subset p_\alpha^0(W) \subset V,$$ allowing us to conclude that $s : \G1_\alpha \to \G0_\alpha$ is continuous. \vskip10pt\noindent \textbf{Claim}: The topology on $\gpdG0_\alpha$ induced by the covers $\{\mathcal{U}_n^0\}$ is the same as the subspace topology induced by the inclusion $\G0_\alpha \subset \G1_\alpha$. From condition \cref{subspacecondition} on the groupoid normal sequence (i.e. $\mathcal{U}_{n+1}^0 \leq \mathcal{U}_n^1|_{\G0}$ for $n \geq 1$) we get $st(x,\mathcal{U}_{n + 1}^0) \subseteq st(x,\mathcal{U}^1_{n})|_{\G0}$ for $n\ge 0$. Hence the topology induced by the covers $\{\mathcal{U}_n^0\}$ is finer than the subspace topology on $\G0_\alpha$ induced by the covers $\{\mathcal{U}^1_n\}$. Conversely, since $s : \G1_\alpha \to \G0_\alpha$ is continuous (as proven above), its restriction $\restrict{s}{\G0_\alpha}$, which is the identity map, is still continuous; hence the subspace topology on $\G0_\alpha$ induced by the topology on $\G1_\alpha$ is finer than the topology on $\G0_\alpha$ induced by the covers $\{\mathcal{U}_n^0\}$. \vskip10pt\noindent We now turn our attention to the partial multiplication on $G_\alpha$. Define $\G2_\alpha$ to be the collection of pairs $([g]^1,[h]^1)$ of elements of $\G1_\alpha$ with $[s(g)]^0 = [t(h)]^0$, and $m_\alpha:\G2_\alpha \to \G1_{\alpha}$ by $$m_\alpha([g]^1,[h]^1) = [\{g'h' \in G: g'\in [g]^1, h'\in [h]^1 \text{ and } s(g') = t(h')\}].$$ The above will not define a multiplication operation on $\G2_\alpha$ if there exist $g, h \in G$ with $[s(g)]^0 = [t(h)]^0$ for which the set on the right hand side of the definition is empty. In the next claim we show this situation cannot occur.
\vskip10pt\noindent \textbf{Claim:} Condition \cref{intersectionseparatingcondition} on the groupoid normal sequence gives us that if $g,h\in G$ satisfy $[s(g)] = [t(h)]$ then there exist $g' \sim g$ and $h' \sim h$ such that $s(g') = t(h')$, i.e. $(g', h') \in \G2$. We will use the notation and terminology from \cref{intersectionseparatingrefinement}. Note that $\restrict{\mathcal{U}_n^1}{K_n}$ is a finite cover of $K_n$; denote the sets in the intersection refinement cover $(s(\restrict{\mathcal{U}_n^1}{K_n}) \cup t(\restrict{\mathcal{U}_n^1}{K_n}))'$ of $s(K_n)\cup t(K_n)$ by $U^n_x$; that is, $U^n_x = (\cap_U\, s(U)) \cap (\cap_V\, t(V))$, where the intersections are taken over all $U \in \mathcal{U}^1_n$ which intersect $K_n$ and $s^{-1}(x)$ and over all $V \in \mathcal{U}^1_n$ which intersect $K_n$ and $t^{-1}(x)$. Choose $N \ge 0$ large enough so that $st(g, \mathcal{U}_N^1), st(h, \mathcal{U}_N^1) \subset int(K_N)$. If there exists $n \ge N$ such that $\overline{U^n_{s(g)}} \cap \overline{U^n_{t(h)}} =\emptyset$ then $\overline{[[s(g)]]}\cap \overline{[[t(h)]]} = \emptyset$, and since $\restrict{\mathcal{U}_{n+1}^0}{K_n}\leq_{int}\; (s(\mathcal{U}_n^1|_{K_n}) \cup t(\mathcal{U}_n^1|_{K_n}))$, it must be the case that $\mathcal{U}^0_{n + 1}$ separates $s(g)$ and $t(h)$; that is, $[s(g)] \neq [t(h)]$, contradicting our assumption. Hence we must have $\overline{U^n_{s(g)}} \cap \overline{U^n_{t(h)}} \neq \emptyset$ for every $n\ge N$. Clearly $U^n_{s(g)} \subset s(st(g, \mathcal{U}^1_n))$; since each $U \in \mathcal{U}^1_n$ is relatively compact and $s$ is continuous, it is easy to see that $s(\cl{U}) = \cl{s(U)}$, so $\cl{s(st(g, \mathcal{U}^1_n))} = s(\,\cl{st(g, \mathcal{U}^1_n)}\,)$, whence $\cl{U^n_{s(g)}} \subset s(\,\cl{st(g, \mathcal{U}^1_n)}\,)$, and similarly with $s$ replaced by $t$ and $g$ replaced by $h$. We can thus conclude that $s(\cl{st(g, \mathcal{U}^1_n)}) \cap t(\cl{st(h, \mathcal{U}^1_n)}) \not= \emptyset$ for all $n \geq N$.
For $n \geq N$ let $T_n := s(\cl{st(g, \mathcal{U}^1_n)}) \cap t(\cl{st(h, \mathcal{U}^1_n)})$, a closed (in fact, compact) set contained in the compact set $K_N$. Note that $\{T_n\}$ is a decreasing sequence of closed sets when ordered by inclusion, and hence has the finite intersection property. It follows that there exists a $z \in \cap_{n \geq N} T_n$. From the definition of $T_n$, for each $n$ there exists a $g_n \in \cl{st(g, \mathcal{U}^1_n)}$ such that $s(g_n) = z$ and an $h_n \in \cl{st(h, \mathcal{U}^1_n)}$ such that $t(h_n) = z$. Since $\{g_n\}, \{h_n\} \subset K_N$ we can find limit points $g'$ of $\{g_n\}$ and $h'$ of $\{h_n\}$ respectively. By the continuity of the source and target maps we must have $s(g') = t(h') = z$. Moreover, from the construction it is clear that $d(g_n, g) \to 0$, so it follows that $g \sim g'$ and similarly $h \sim h'$, concluding the proof of the claim. \vskip10pt\noindent \textbf{Claim:} $m_\alpha([g]^1,[h]^1) = [gh]^1$ for $(g, h) \in \G2$. Fix any $(g, h) \in \G2$. We check that $m_\alpha([g]^1,[h]^1) \subset [gh]^1$ (the other inclusion is obvious from the definition). Suppose $g'\in [g]^1$ and $h'\in [h]^1$ are such that $g'h'$ is defined. Choose $n_0 \geq 2$ large enough so $g, h, g', h' \in K_{n_0}$. For every $n \geq n_0$, there exist $U, V \in \mathcal{U}_n$ such that $g,g' \in U$ and $h,h'\in V$ (since $g \sim g'$ and $h \sim h'$); it follows from condition \ref{contmult} on the groupoid normal sequence that $gh,g'h' \in m(U \cap K_n, V \cap K_n) \subset W$ for some $W$ an element of $\mathcal{U}_{n-1}$. Since this holds for any $n \geq n_0$, we get $gh \sim g'h'$, so $g'h' \in [gh]^1$. \vskip10pt\noindent \textbf{Claim:} $m_\alpha : \G2_\alpha \to \G1_\alpha$ is continuous. Showing that $m_\alpha$ is continuous is similar to the proof that the source map is continuous, though, as in other claims involving the multiplication, one has to make sure to work in the framework of a specific $K_n$ for $n$ large enough.
\vskip10pt\noindent \textbf{Claim:} There is a well-defined, continuous inverse operation on $\G1_\alpha$. Define $([g]^1)^{-1} = [g^{-1}]^1$. From the definition of multiplication, it is immediate that $m_\alpha([g]^1,([g]^1)^{-1}) = [id_{t(g)}]^1$ and that $m_\alpha(([g]^1)^{-1},[g]^1) = [id_{s(g)}]^1$. Again, one can check that the inversion mapping on $G$ with the induced pseudo-metric is continuous to conclude that the inversion map is continuous on the metric quotient $G_\alpha$. \vskip10pt\noindent It follows that $G_\alpha$ is a topological groupoid. Each of the functions in the composition $$\begin{tikzcd} G\arrow{r}{id_G} & (G,d) \arrow{r}{p_\alpha} & G_\alpha \end{tikzcd}$$ is continuous. By Theorem 14 on page 7 in \cite{Isbell}, the pre-image of the cover of $G_\alpha$ by balls of radius $1$ refines $\mathcal{W}_0$, and using induction on the same argument we get that the pre-image of balls of radius $\frac{1}{2^{n}}$ refines $\mathcal{W}_n.$ \end{proof} \begin{Theorem} \label{Haarsystemapproximation} Suppose $G$ is a $\sigma$-compact groupoid equipped with a Haar system of measures $\{\mu^x:x\in \G0\}$ and a 2-cocycle $\sigma : \G2 \to {\mathbb T}$. Given any normal sequence of open covers $\mathcal{W}_n$ of $G$, one can apply the construction of \cref{quotientconstruction} to get a second countable approximation $G_\alpha$ of $G$. Moreover, the construction can be modified so $G_\alpha$ admits a Haar system of measures and a 2-cocycle $\sigma_\alpha$ for which the canonical pullback map $C_c(G_\alpha, \sigma_\alpha) \to C_c(G, \sigma)$ is a $*$-morphism of topological $*$-algebras (with respect to the I-norm or inductive limit topology). \end{Theorem} \begin{proof} Let $\{K_n\}$ be a groupoid exhaustion by compact neighborhoods of $G$ (see \cref{groupoidexhuastion}), and use \cref{Coverings} to construct a groupoid normal sequence $\{\mathcal{U}_n\}$ satisfying conditions \ref{proper}-\ref{cocyclecond}.
The proof of \cref{quotientconstruction} shows how to construct the quotient groupoid $G_\alpha$ (and proves that the various groupoid operations are well defined); below, we show that we also get an induced Haar system on $G_\alpha$, and an induced 2-cocycle. Denote by $(G, d)$ the groupoid $G$ with the topology generated by the pseudo-metric $d$ induced by $\{\mathcal{U}_n\}$. We begin with the easier part, which is to construct $\sigma_\alpha$. Notice that $d(x,y) = 0$ for $(x, y) \in \G2$ implies, by condition \cref{cocyclecond} in \cref{Coverings}, that $\sigma(x) = \sigma(y)$. We may therefore define $\sigma_\alpha([x]) = \sigma(x)$. It is straightforward to check that this is continuous and satisfies the cocycle condition (see \cref{defcocycle}). We move on to address the statements about Haar systems. Denote by $p_\alpha$ the metric quotient map $(G, d) \to G_\alpha$. Consider $f \in C_c(G_\alpha)$, and let $\tilde{f}$ denote the composition $G \xrightarrow{id_G} (G,d) \xrightarrow{p_\alpha} G_\alpha \xrightarrow{f} {\mathbb C}$. We will show that, given $\epsilon > 0$, there exists an $n$ such that if $x, y \in U \in \mathcal{U}_n^0$ then $|\int\tilde{f}\,d\mu^x - \int \tilde{f}\,d\mu^y|< \epsilon$; hence $x \sim y$ implies that $\int \tilde{f}\,d\mu^x = \int \tilde{f}\,d\mu^y$. Since $p_\alpha \circ\, id_G$ is proper (see \cref{quotientconstruction}), $\tilde{f} \in C_c(G)$. Choose $M > \sup \mu^x(\supp(\tilde{f}))$ with $M > 0$, where the supremum is taken over all $x \in \gpdG0$ (the fact that the supremum exists follows from \cref{compactmaxmeasure}). Let $\epsilon' =\frac{\epsilon}{3M}$. Cover $\supp(\tilde{f})$ by open sets $\{V_1, \ldots, V_{k}\}$ where each $V_i$ is of the form $\{y\in G: |\tilde{f}(x_i) - \tilde{f}(y)| < \epsilon'\}$ for some $x_i \in \G0$. By a Lebesgue number lemma argument and by \cref{propermapping}, there exists $m\ge 0$ such that, for each fixed $i$ and each $x \in V_i$, we have $U_x := st(x,\mathcal{U}^1_m) \subset V_i$.
We may assume that $m$ has also been chosen large enough so that $|\tilde{f}(x)|< m$ for all $x \in G$ and that $\supp(\tilde{f}) \subset int(K_m)$ (both of which we can do due to the fact that $\supp(\tilde{f})$ is compact), and also such that $\frac{1}{m} < \frac{\epsilon}{3}$. Consider the (finite) partition of unity $\{f^m_j\}_{j \in J_m}$ that we relied on in the construction of the groupoid normal sequence (see Property \ref{haaryfiber} of \cref{Coverings}). We approximate $\tilde{f}$ by these functions as follows: for each $j$, if $\supp(f^m_j) \subset \supp(\tilde{f})$ then choose any $x_j \in \text{int}(\supp(f^m_j))$ and let $\lambda_j = \tilde{f}(x_j)$, else let $\lambda_j = 0$. Note that $|\lambda_j| < m$ for each $j$. Since the supports of the partition $\{f^m_j\}$ are subordinate to the cover $\{\mathcal{U}_m\}$, for any $x \in \supp(f^m_j)$ we have $|\tilde{f}(x) - \lambda_j| < \epsilon'$ (this is still the case when $\lambda_j = 0$ by choice, since if $\supp(f^m_j) \not\subset \supp(\tilde{f})$ then there exists a $y \in \supp(f^m_j)$ for which $\tilde{f}(y) = 0$, and so $|\tilde{f}(x)| < \epsilon'$ on $\supp(f^m_j)$). We then have for any $x \in K_m$ \begin{align*} |\tilde{f}(x) - \sum\limits_{j \in J_m} \lambda_j f^m_j(x)| &= |\tilde{f}(x) \sum\limits_{j \in J_m} f^m_j(x) - \sum\limits_{j \in J_m} \lambda_j f^m_j(x)| \\ &\leq \sum\limits_{j \in J_m} |\tilde{f}(x) - \lambda_j| \cdot f^m_j(x) \end{align*} For any fixed $j$, if $x \in \supp(f_j^m)$ then $|\tilde{f}(x) - \lambda_j| < \epsilon'$ by construction, and otherwise $f^m_j(x) = 0$. In either case it is true that $|\tilde{f}(x) - \lambda_j|\cdot f^m_j(x) \leq \epsilon' f^m_j(x)$, and since the $f^m_j$'s add up to 1 at each $x \in K_m$ we get $$|\tilde{f}(x) - \sum\limits_{j \in J_m} \lambda_j f^m_j(x)| \leq \epsilon',$$ as desired. Moreover, by construction, $\supp\left(\sum \lambda_j f^m_j\right) \subset \supp(\tilde{f}) \subset int(K_m)$. 
It now easily follows that, for any $x \in \gpdG0$, $$\left|\int_G \tilde{f}\,d\mu^x - \int_{G} \sum_{j} \lambda_j f_j^m\,d\mu^x\right| \leq \epsilon' \mu^x(\supp(\tilde{f})) <\epsilon'M = \frac{\epsilon}{3}.$$ By Property \cref{haaryfiber} of the groupoid normal sequence, for any $U \in \mathcal{U}^1_m$ and $x, y \in s(U)$ we have (using that $|\lambda_j| < m$, which we arranged in the choice of $m$ and the $\lambda_j$) $$\left|\int_{G} \sum_{j} \lambda_j f_j^m\,d\mu^x - \int_{G} \sum_{j} \lambda_j f_j^m\,d\mu^y\right| < \frac{1}{m}<\frac{\epsilon}{3}.$$ Hence the triangle inequality gives us that for any $U \in \mathcal{U}^1_m$ and $x, y \in s(U)$ we have $$\left|\int_{G} \tilde{f}\,d\mu^x - \int_{G} \tilde{f}\,d\mu^y\right| < \epsilon.$$ As a consequence of property \cref{intersectionseparatingcondition} of the groupoid normal sequence, $\restrict{\mathcal{U}^0_{m + 1}}{K_m}$ refines $\{s(U) \cap K_m : U \in \mathcal{U}^1_m\}$. Hence, for the given $\epsilon > 0$, if we take any $V \in \mathcal{U}^0_{m + 1}$, then $V \cap K_m \subseteq s(U)$ for some $U \in \mathcal{U}^1_m$, and so $x, y \in V \cap K_m$ implies \begin{equation} \left|\int_{G} \tilde{f}\,d\mu^x - \int_{G} \tilde{f} \,d\mu^y\right| < \epsilon. \label{eqn.Haarsystemapproximation.continuity} \end{equation} In particular it follows that for $x \sim y$ we must have $\int_G \tilde{f} \,d\mu^x = \int_G \tilde{f} \,d\mu^y$. For each $[x] \in \gpdG_\alpha^0$, we can thus define a positive linear functional on $\mathcal{C}_c\left(G_\alpha\right)$ by $f \mapsto \int_G \tilde{f} \,d\mu^x$, where $x$ is chosen to be any representative of $[x]$. By the Riesz-Markov-Kakutani theorem, this defines a unique regular Radon measure $\mu^{[x]}$ on $G_\alpha$ (supported on $G_\alpha^{[x]}$); we claim that the collection $\{\mu^{[x]} : [x] \in G_\alpha^{0}\}$ defines a Haar system. The fact that $$\int_{G_\alpha} f(y)\,d\mu^{[t(g)]} = \int_{G_\alpha} f(gy) \,d\mu^{[s(g)]}$$ follows as in the proof of \cref{Haarquotient}.
Next, we need to prove continuity of $[x] \mapsto \int_{G_\alpha} f(y)\,d\mu^{[x]}$ for fixed $f \in \mathcal{C}_c(G_\alpha)$. Consider $[x] \in \G0_\alpha$ with representative $x \in \G0$, and choose $N \geq 1$ such that $st(x, \mathcal{U}_N) \subset K_{N - 1}$. Given $\epsilon > 0$ we can use the argument that established \cref{eqn.Haarsystemapproximation.continuity} to find an $n > N$ such that for any $V \in \mathcal{U}_{n}^0$ and $y, z \in V \cap K_{n - 1}$ we have \begin{equation*} \left|\int_{G} \tilde{f}\,d\mu^y - \int_{G} \tilde{f} \,d\mu^z\right| < \epsilon. \end{equation*} In the pseudo-metric $d$, the cover by $\frac{1}{2^n}$-balls refines $\mathcal{U}_n$, so there exists a $W \in \mathcal{U}_n$ such that $\{y: d(x, y) < \frac{1}{2^n}\} \subset W$; since $st(x, \mathcal{U}_n) \subset st(x, \mathcal{U}_N) \subset K_{N - 1}$, we have $W \subset K_{n - 1}$. It follows that for all $[y] \in \G0_\alpha$ for which $d([x], [y]) < \frac{1}{2^n}$ we have \begin{equation*} \left|\int_{G_\alpha} f\,d\mu^{[x]} - \int_{G_\alpha} f\,d\mu^{[y]}\right| < \epsilon, \end{equation*} proving the continuity condition of the Haar system. This concludes the proof of the fact that $\{\mu^{[x]} : [x] \in \G0_\alpha\}$ is a Haar system on $G_\alpha$. The fact that the pullback map induces a *-morphism of convolution algebras follows from \cref{inducedmorphism}. \end{proof} \begin{Theorem}\label{sexyapproximationtheorem} If $(G,\sigma)$ is a $\sigma$-compact groupoid with a Haar system of measures and a 2-cocycle $\sigma:\G2\to {\mathbb T}$, then $G$ admits an inverse approximation by second countable and locally compact groupoids $\{(G_\alpha,\sigma_\alpha), p^\beta_\alpha, A\}$ as in \cref{approximate} and, furthermore, each groupoid in the approximation admits a Haar system of measures which makes all the bonding maps and projection mappings of the inverse system Haar system preserving. \end{Theorem} \begin{proof} Let $\mathcal{N}$ denote the collection of groupoid normal sequences on $G$.
$\mathcal{N}$, with the order $\alpha \leq \beta$ if $\beta$ cofinally refines $\alpha$, is directed, because the collection of groupoid normal sequences is cofinal in the collection of all normal sequences. If $\gamma \leq \theta$ then the equivalence relation determined by the normal sequence $\theta$ has smaller equivalence classes than the equivalence relation determined by $\gamma$; furthermore, there is a unique function $q^\theta_\gamma :G_\theta \to G_\gamma$ such that $q_\gamma = q^\theta_\gamma \circ q_\theta$. One can also easily check that $q^\theta_\gamma$ is a proper surjective groupoid homomorphism. $\mathcal{N}$ induces an inverse system $\{G_\alpha, q^\alpha_\beta:G_\alpha\to G_\beta, \mathcal{N}\}$ of groupoids. We claim that $G$ is the inverse limit of this system with projection maps given by $\{q_\alpha:G\to G_\alpha:\alpha \in \mathcal{N}\}$. It is easy to see that the natural map $G\to \varprojlim_\alpha G_\alpha$ given by $g \mapsto (q_\alpha(g))_\alpha$ is a homeomorphism (see \cref{topapprox}) as the groupoid normal sequences are, by \cref{Coverings}, cofinal (in the order induced by cofinal refinement) in the directed set of all normal sequences of covers for $G$. It is clearly a groupoid homomorphism and thus it is a topological groupoid isomorphism. In the case where $G$ has a Haar system of measures and 2-cocycle $\sigma$, we know that $G_\alpha$ can be chosen to have a Haar system of measures and cocycle $\sigma_\alpha$ such that the canonical pullback morphisms $C_c(G_\alpha,\sigma_\alpha) \to C_c(G,\sigma)$ are *-morphisms. The maps $q^\alpha_\beta:G_\alpha \to G_\beta$ have the property that $(q^\alpha_\beta)^*:C_c(G_\beta,\sigma_\beta) \to C_c(G_\alpha,\sigma_\alpha)$ are *-morphisms for all $\alpha \ge \beta$ (this follows from essentially the same argument as \cref{Haarsystemapproximation}).
By purely topological considerations, we have that $\varinjlim_\alpha C_c(G_\alpha,\sigma_\alpha) = C_c(G,\sigma)$ in the category of topological *-algebras over ${\mathbb C}$. \end{proof} \begin{Corollary} Let $G$ and $\{G_\alpha,p^\alpha_\beta,A\}$ be as in \cref{sexyapproximationtheorem}. The canonical pullback maps induce $I$-norm preserving $*$-embeddings $C_c(G_\alpha,\sigma_\alpha) \to C_c(G_\beta,\sigma_\beta)$ of twisted convolution algebras when $\beta \ge \alpha$ and, furthermore, we have that $C_c(G,\sigma) = \varinjlim_\alpha C_c(G_\alpha,\sigma_\alpha)$ in the category of topological *-algebras over ${\mathbb C}$. \end{Corollary} \begin{Corollary} Let $G$ be a $\sigma$-compact groupoid with a Haar system of measures and a 2-cocycle $\sigma:\G2\to {\mathbb T}$ and let $\{(G_\alpha,\sigma_\alpha),p^\alpha_\beta,A\}$ be the inverse approximation as in the previous theorem. The maximal groupoid $C^*$-algebra functor takes the directed system $\{C_c(G_\alpha,\sigma_\alpha),(p^\alpha_\beta)^*,A\}$ to a directed system $\{C^*(G_\alpha,\sigma_\alpha),(p^\alpha_\beta)^*,A\}$ and, moreover, $\varinjlim_\alpha C^*(G_\alpha,\sigma_\alpha) = C^*(G,\sigma)$. \end{Corollary} \begin{proof} The first assertion follows from \cref{inducedmorphism}. We just need to prove the second assertion about direct limits. Recall that a $C^*$-algebra $A$ is the direct limit of a directed system $\{A_\alpha,p^\alpha_\beta,D\}$ if there exists a mapping of the directed system to $A$ that commutes with the system and such that the union of the images of the $C^*$-algebras in the system is dense in $A$. It is evident that the mapping $i_\alpha:C_c(G_\alpha,\sigma_\alpha) \hookrightarrow C_c(G,\sigma)$ induces a morphism $j_\alpha:C^*(G_\alpha,\sigma_\alpha) \to C^*(G,\sigma)$ and it is also clear that these morphisms commute with all the pullback morphisms in the directed system.
Notice also that the union of the images clearly contains $C_c(G,\sigma)$ and hence is dense. \end{proof} Applying the construction of \cref{mainthm.approximation} to the specific examples of groups and topological spaces respectively, we easily obtain the following two corollaries, stated here for future reference: \begin{Corollary} If $G$ is a $\sigma$-compact group with cocycle $\sigma$ then there exists an inverse approximation $\{(G_\alpha,\sigma_\alpha),p^\alpha_\beta,A\}$ of $G$ by second countable groups $G_\alpha$ with cocycle $\sigma_\alpha$ and proper, continuous, and cocycle-preserving bonding homomorphisms $p^\alpha_\beta$ such that $C^*(G,\sigma)$ (the maximal completion) is an inductive limit of the directed system $\{C^*(G_\alpha,\sigma_\alpha),(p^\alpha_\beta)^*,A\}$. \end{Corollary} \begin{Corollary} If $X$ is a locally compact, Hausdorff, and $\sigma$-compact space then there exists an inverse approximation $\{X_\alpha,p^\alpha_\beta,A\}$ of $X$ by locally compact, Hausdorff, and second countable spaces $X_\alpha$ with proper and continuous bonding maps such that $C_0(X)$ is the direct limit of the dual directed system $\{C_0(X_\alpha),(p^\alpha_\beta)^*,A\}$. \end{Corollary} \subsection{Properties of the Approximations} \label{section.approximationproperties} In this section, we discuss some basic properties of groupoids, and whether or not the approximation construction preserves those properties, or can be suitably modified so the property passes to the approximation groupoids. \subsubsection{Transitivity} If $G$ is transitive (that is, for every $x, y \in \G0$ there exists $g \in \G1$ such that $s(g) = x$ and $t(g) = y$), it is not at all hard to see from the construction in \cref{quotientconstruction} that each of the quotient groupoids $G_\alpha$ is also transitive: in the quotient groupoid we have $s([g]) = [s(g)]$ and $t([g]) = [t(g)]$ for any $g \in \G1$. \subsubsection{Principality} If $G$ is a principal groupoid (i.e.
the stabilizer groups $G_x^x$ are trivial for all $x \in \G0$), it is not necessarily true that $G_\alpha$ will also be principal. For example, consider the groupoid $G$ (and the approximation) shown below: \null\hfill \begin{minipage}[t]{.2\linewidth} Groupoid $G$: \\ \begin{tikzpicture} \node (a) at (0, 0) {}; \node (b) at (1, 0) {}; \filldraw (a) circle(0.05); \filldraw (b) circle(0.05); \draw[->] (a) to[out=-150, in=-90] (-0.5, 0) node[anchor=east]{$x$} to[out=90, in=150] (a); \draw[->] (b) to[out=-30, in=-90] (1.5, 0) node[anchor=west]{$y$} to[out=90, in=30] (b); \draw[->] (a) to[out=30, in=150] node[midway, label={[yshift=-.15cm]$g$}]{} (b); \draw[->] (b) to[out = -120, in=-60] node[midway, anchor=north]{$h$} (a); \end{tikzpicture} \end{minipage} \hfill \begin{minipage}[t]{.25\linewidth} Covers: \\ $U^0 = \{x, y\}$ \\ $U^1 = \{x, y\}, V^1 = \{g, h\}$ \end{minipage} \hfill \begin{minipage}[t]{.28\linewidth} Approximation groupoid: \begin{center} \begin{tikzpicture} \node (a) at (0, 0) {}; \filldraw (a) circle(0.05); \draw[->] (a) to[out=180, in=-150] (-0.5, .5) node[label={[yshift=-.2cm, xshift=-.25cm]$x \sim y$}]{} to[out=30, in=100] (a); \draw[->] (a) to[out=80, in=150] (0.5, .5) node[label={[yshift=-.2cm, xshift=.1cm]$g \sim h$}]{} to[out=-30, in=0] (a); \end{tikzpicture} \end{center} \end{minipage} \hfill\null On the arrows of $G$ we place the discrete topology, and we use the groupoid normal sequence of covers $\mathcal{U}^0_n = \{U^0\}$, $\mathcal{U}^1_n = \{U^1, V^1\}$ to perform the approximation. Note that $G$ is a principal groupoid, and the normal sequence $\mathcal{U}$ satisfies the conditions of \cref{Coverings}; however, the quotient groupoid (${\mathbb Z}_2$, also shown above) is not principal. An interesting question to address is whether or not the constructions of the covers can be changed so that the approximation groupoids are, in general, principal when $G$ is.
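The collapse in this example can also be checked mechanically. The following sketch is purely illustrative (a hypothetical finite encoding of the groupoid above, not part of the construction itself): for a constant sequence of covers, $a \sim b$ holds exactly when some cover set contains both $a$ and $b$, so the equivalence classes can be computed by a union-find merge over the cover sets.

```python
# Illustrative finite model (hypothetical encoding): units x, y and
# non-identity arrows g: x -> y, h: y -> x of the principal groupoid G above.
# Covers: U^0 = {{x, y}} on units, and the non-identity arrows sit in {g, h}.

def quotient(elements, cover):
    """Classes of the relation generated by 'lie in a common cover set'."""
    parent = {e: e for e in elements}

    def find(e):
        while parent[e] != e:
            e = parent[e]
        return e

    for U in cover:                      # merge everything sharing a cover set
        first, *rest = sorted(U)
        for other in rest:
            parent[find(other)] = find(first)
    roots = {find(e) for e in elements}
    return [{e for e in elements if find(e) == r} for r in roots]

unit_classes = quotient({"x", "y"}, [{"x", "y"}])
arrow_classes = quotient({"g", "h"}, [{"g", "h"}])
print(len(unit_classes), len(arrow_classes))  # prints: 1 1
```

There is a single class of units $[x] = [y]$ and a single class of non-identity arrows $[g] = [h]$; since $[g]^{-1} = [h] = [g]$, the quotient is indeed the two-element group ${\mathbb Z}_2$.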
\subsubsection{\'Etaleness} Recall that a topological groupoid $G$ is said to be \textbf{\'etale} if the source map $s$ and the target map $t$ are both local homeomorphisms. In the case when $G$ is \'etale, we can modify the construction so that the approximation groupoids $G_\alpha$ are also \'etale. We use the fact that a groupoid $G$ is \'etale if and only if $\G0$ is open in $G$ and $G$ has a Haar system of measures (cf. page 2 of \cite{Renault}). Since $\G0$ is clopen in $G$, in constructing any of the approximation groupoids one can modify the cover $\mathcal{U}^1_0$ of $G$ so that, if $U\in \mathcal{U}_0^1$, then either $U \subset \G0$ or $U \subset (G \setminus \G0)$. Then, in the pseudo-metric induced by $\{\mathcal{U}_n\}$, the sets $\G0$ and $G \setminus \G0$ have distance 1 from each other, and so $\G0_\alpha$ is clopen in the approximation groupoid $G_\alpha$. Since $G_\alpha$ has an induced Haar system of measures by \cref{sexyapproximationtheorem}, it follows that $G_\alpha$ is \'etale. \section{Equivalence of Groupoids}\label{renaultequivalencetheorem} The main result in this section is \cref{equivalence}, which shows that equivalent $\sigma$-compact groupoids have Morita equivalent $C^*$-algebras. We will use the terminology from Section 1 of \cite{SW}. Recall that, for a groupoid $G$, a subset $A\subset \G0$ is said to be \textbf{full} if $G \cdot A = \G0$. If $G$ is a groupoid and $A\subset \G0$, then we use the notation $G(A)$ to denote the subgroupoid of $G$ whose arrows begin and end in $A$; i.e. the set of all $g\in G$ with $s(g),t(g) \in A$. \begin{Definition} If $G$ and $H$ are topological groupoids, we say that a groupoid $L$ is a \textbf{linking groupoid for $H$ and $G$} if $L^{(0)} = H^{(0)}\sqcup \G0$, where $\G0$ and $\gpdH0$ are full clopen subsets of $\gpdL0$, and $L(\G0) = G$ and $L(\gpdH0) = H$.
\end{Definition} If $G$ and $H$ have Haar systems of measures, then a linking groupoid $L$ for $G$ and $H$ also has a Haar system of measures (see Lemma 4 of \cite{SW}) that restricts to the Haar systems on $G$ and $H$, respectively. \begin{Definition} Suppose $Z$ is a topological space for which there is a continuous and open map $r_Z : Z \to \gpdG0$ (called a \textbf{moment map}). Define $$G \ast Z = \{(g, z) \in G \times Z : s(g) = r_Z(z)\}$$ with the relative product topology. A \textbf{left action} of $G$ on $Z$ is a continuous map $G \ast Z \to Z$ such that $r_Z(z) \cdot z = z$ for all $z \in Z$ and if $(g, h) \in \gpdG2$ and $(h, z) \in G \ast Z$ then $(gh, z) \in G \ast Z$ and $gh \cdot z = g \cdot (h \cdot z)$. Say that the action is \textbf{free} if $g \cdot z = z$ implies $g = r_Z(z)$. The action is called \textbf{proper} if the map $G \ast Z \to Z \times Z$ given by $(g, z) \mapsto (g \cdot z, z)$ is proper. A \textbf{right action} is defined analogously, with the difference that the moment map is denoted $s_Z : Z \to \gpdG0$ and $z \cdot g$ is defined if and only if $s_Z(z) = t(g)$. \end{Definition} For a left action of $G$ on $Z$, $\lorbit{G}{Z}$ denotes the orbit space of the action; for a right action, we use the notation $\rorbit{Z}{G}$. \begin{Definition} $G$ and $H$ are said to be \textbf{equivalent} if there exists a locally compact Hausdorff space $Z$, called a \textbf{$(G,H)$-equivalence}, such that the following conditions hold: \begin{enumerate} \item $Z$ is a free and proper left $G$-space. \item $Z$ is a free and proper right $H$-space. \item The actions of $G$ and $H$ on $Z$ commute. \item The moment map $r_Z : Z \to \gpdG0$ induces a homeomorphism of $\rorbit{Z}{H}$ onto $\G0$. \item The moment map $s_Z : Z \to \gpdH0$ induces a homeomorphism of $\lorbit{G}{Z}$ onto $\gpdH0$.
\end{enumerate} \end{Definition} Two groupoids $G$ and $H$ are equivalent if and only if there exists a linking groupoid for $G$ and $H$ (Lemma 3 of \cite{SW} shows how to construct a linking groupoid from an equivalence; conversely, if $L$ is a linking groupoid for $G$ and $H$, and we let $A = \gpdG0$, $B = \gpdH0$, then one can show that $Z = L^A_B$ is a $(G, H)$-equivalence). Note that these results are usually stated under the assumptions that $G$ and $H$ are second countable; however, the proofs do not use the second countability assumption and go through without change for $\sigma$-compact groupoids. The following lemma was suggested to the authors by Dana Williams and its proof is essentially the same as the one given in his book \cite{Williams} (which is currently in draft stage) on groupoid $C^*$-algebras. \begin{Lemma} Let $G$ and $H$ be $\sigma$-compact groupoids with open source and target maps, and let $Z$ be a $(G,H)$-equivalence. Then $Z$ must itself be $\sigma$-compact. \end{Lemma} \begin{proof} By assumption, $G$ and $H$ are $\sigma$-compact; since $\lorbit{G}{Z}$ is homeomorphic to $\gpdH0$, it is $\sigma$-compact as well. Let $\lorbit{G}{Z} = \cup_n K_n$ be an exhaustion of $\lorbit{G}{Z}$ by compact neighbourhoods. Using the fact that the orbit map $\pi : Z \to \lorbit{G}{Z}$ is continuous and open (see Proposition 2.1 of \cite{MW3}), we can lift each $K_n$ to a compact set $T_n$ in $Z$ such that $\pi(T_n) = K_n$. Namely, at each $z \in Z$ choose a relatively compact open neighbourhood $U_z$. Then $\{\pi(U_z)\}$ is an open cover of $K_n$, so necessarily it has a finite subcover $\{\pi(U_{z_1}), \ldots, \pi(U_{z_k})\}$. Let $T_n = \mathop{\cup}\limits_{i = 1}^k \left(\overline{U}_{z_i} \cap \pi^{-1}(K_n)\right)$. Let $G = \cup_n C_n$ be an exhaustion of $G$ by compact neighborhoods.
Since the action of $G$ on $Z$ is continuous, it follows that $\{C_n \cdot T_n\}$ is an exhaustion of $Z$ by compact neighborhoods, concluding the proof that $Z$ is $\sigma$-compact. \end{proof} As shown in Lemma 3 of \cite{SW}, the linking groupoid associated to a $(G,H)$-equivalence $Z$ is topologically the disjoint union $G\sqcup H\sqcup Z \sqcup Z^{op}$ with object space $\G0\sqcup {H^{(0)}}$. Since $G$, $H$ and $Z$ are all $\sigma$-compact, it follows that the linking groupoid associated to an equivalence between $\sigma$-compact groupoids with open source and target maps is $\sigma$-compact. Moreover, every linking groupoid comes from such an equivalence, leading to the following result: \begin{Corollary} Let $G$ and $H$ be $\sigma$-compact groupoids with open source and target maps and let $L$ be a linking groupoid for $G$ and $H$. Then $L$ is $\sigma$-compact. \end{Corollary} If $L$ is a linking groupoid for $G$ and $H$ we can approximate $L$ in such a way that we get equivalent approximations for $G$ and $H$. \begin{Proposition}\label{approximatethatlink} Let $G$ and $H$ be $\sigma$-compact groupoids with Haar systems and let $L$ be a $\sigma$-compact linking groupoid for $G$ and $H$. We may assume $L$ is endowed with a Haar system that restricts to the given Haar systems for $G$ and $H$. There exists an approximation $\{L_\alpha,q^\alpha_\beta,A\}$ for $L$ (as in \cref{mainthm.approximation}), with projection maps $q_\alpha : L \to L_\alpha$, such that if we let $H_\alpha = q_\alpha(H)$ and $G_\alpha = q_\alpha(G)$ then: \begin{enumerate} \item $\{G_\alpha, \restrict{q^\alpha_\beta}{G_\alpha}, A\}$ is an approximation for $G$ and $\{H_\alpha, \restrict{q^\alpha_\beta}{H_\alpha}, A\}$ is an approximation for $H$, and \item $L_\alpha$ is a linking groupoid for $G_\alpha$ and $H_\alpha$.
\end{enumerate} \end{Proposition} \begin{proof} We need to tweak the proof of \cref{mainthm.approximation} to require that the covers $\mathcal{U}_n$ in the groupoid normal sequences separate between $G$ and $H$, in the sense that if $U$ is an open set in one of the covers such that $U\cap G \neq \emptyset$ then $U\cap H = \emptyset$, and vice versa; however, it should be clear that this requirement can be enforced, since $G$ and $H$ are clopen sets in $L$. This guarantees that the images of $G$ and $H$ are clopen sets in the approximation $L_\alpha$ determined by the covers $\{\mathcal{U}_n\}$. It follows in particular that $L_\alpha^{(0)}$ is a disjoint union of the clopen sets $\G0_\alpha$ and $H^{(0)}_\alpha$. The fact that $\G0_\alpha$ is full in $L_\alpha$, and $L_\alpha(\G0_\alpha) = G_\alpha$ (and similarly for $\gpdH0$) follows immediately from the fact that the quotient map $q_\alpha : L \to L_\alpha$ commutes with both the source and target maps and the action of $L$ on $\gpdG0$. Hence $L_\alpha$ is a linking groupoid for $G_\alpha$ and $H_\alpha$. \end{proof} \begin{Theorem}[Equivalence Theorem] \label{equivalence} Let $G$ and $H$ be $\sigma$-compact groupoids with Haar systems. If $G$ and $H$ are equivalent then $C^*(G)$ and $C^*(H)$ are Morita equivalent. \end{Theorem} \begin{proof} Recall how the equivalence theorem works in the second countable setting (modulo technicalities that we will not need to consider here). Let $L$ be a linking groupoid for the second countable groupoids $G$ and $H$. The idea is to show that $C^*(L)$ is a linking algebra for $C^*(G)$ and $C^*(H)$. Notice first that the characteristic functions $p_G$ and $p_H$ of $\G0$ and ${H^{(0)}}$, respectively, are projections inside the multiplier algebra $M(C^*(L))$; write $p = p_G$ and $q = p_H$. One can show that $pC^*(L)q$ is an imprimitivity $C^*(G)\text{-}C^*(H)$ bimodule. In the $\sigma$-compact case, we have the exact same framework.
It is immediate that $pC^*(L)q$ is a $C^*(G)\text{-}C^*(H)$ Hilbert bimodule and that it satisfies all the requisite properties for being a $C^*(G)\text{-}C^*(H)$ imprimitivity Hilbert bimodule except for the fullness condition (Definition 3.1 in \cite{RW}). By \cref{approximatethatlink} and \cref{mainthm.approximation}, we know that \begin{itemize} \item $C^*(L)$ is the direct limit of the subalgebras $C^*(L_\alpha)$ (and analogously for $G$ and $H$ using the same approximation) \item $L_\alpha$ is a linking groupoid for $G_\alpha$ and $H_\alpha$ for each $\alpha$. \end{itemize} It is clear that the projections $p$ and $q$ are equal to the pullbacks of the projections $p_\alpha$ and $q_\alpha$ (coming from the characteristic functions on $\G0_\alpha$ and $H_{\alpha}^{(0)}$) in $M(C^*(L_\alpha))$ (under the induced mapping $M(C^*(L_\alpha)) \to M(C^*(L))$). We know that the embedded copy of $p_\alpha C^*(L_\alpha) q_\alpha$ in $pC^*(L)q$ (with the inherited bimodule structure) is an imprimitivity bimodule for $C^*(G_\alpha)$ and $C^*(H_\alpha)$. It follows that the closure of $\operatorname{span}\{\langle x,y\rangle_{C^*(G)}: x,y \in pC^*(L)q\}$ contains $C^*(G_\alpha)$ for all $\alpha$. As $\bigcup_\alpha C^*(G_{\alpha})$ is dense in $C^*(G)$, it follows that $\operatorname{span}\{\langle x,y\rangle_{C^*(G)}: x,y \in pC^*(L)q\}$ is dense in $C^*(G)$. The same argument holds for $C^*(H)$ and so it follows that $pC^*(L)q$ is an imprimitivity bimodule for $C^*(G)$ and $C^*(H)$. \end{proof}
\section{\label{sec:introduction} Introduction} The dawn of multi-messenger astronomy has been one of the greatest achievements of the scientific community in the last decade. Combining neutrino observations with measurements of cosmic rays, electromagnetic radiation and gravitational waves will be crucial to solve long-standing problems in astrophysics \cite{PhysRevLett.107.251101} and may push forward our current horizons in fundamental physics \cite{Amelino-Camelia:2016ohi}. Nonetheless, the observation of astrophysical neutrinos provides on its own a deeper understanding of neutrino physics. Neutrino oscillation phenomena, such as sterile neutrinos \cite{Aartsen:2017bap} and non-standard interactions \cite{1742-6596-718-6-062011}, are just some of the topics whose understanding relies on the observation of astrophysical neutrinos. The largest neutrino telescope to date is the IceCube Neutrino Observatory at the geographic South Pole, whose first sensors were deployed during the austral summer of 2004-2005 and which has been producing data since February 2005 \cite{ACHTERBERG2006155}. After six years of data taking \cite{Aartsen:2017mau}, from early 2010 to early 2016 for a total of 2078 days, 50 neutrino events with deposited energies above 60 TeV have provided evidence for the existence of an extraterrestrial neutrino flux. Only three events with deposited energy above 1 PeV have been observed, with the 2 PeV event being the most energetic one. The discovery of this flux has motivated a vigorous program of studies to unravel its origin \cite{Aartsen:2017eiu} and its properties \cite{Palomares-Ruiz:2015mka,Aartsen:2016xlq,Aartsen:2013vja}. IceCube detects neutrinos by observing the Cherenkov light produced by charged particles created in neutrino interactions as they transit the ice within the detector. At this range of energies, neutrinos interact predominantly via deep-inelastic scattering with nuclei in the detector material.
There are two possible interactions: charged-current (CC) or neutral-current (NC) interactions. In both cases a cascade of hadrons is created at the neutrino interaction vertex, and for CC interactions this shower is accompanied by an outgoing charged lepton, which may itself trigger further overlaid cascades. IceCube events have two basic topologies: tracks and showers. Considering the energies involved in this analysis, we assume tracks are made only by $\nu_{\mu}$ CC interactions and by $\nu_{\tau}$ CC interactions in which the tau lepton decays into $\nu_{\tau} \mu \nu_{\mu}$. Showers instead are those events without visible muon tracks and are formed by particle showers near the neutrino vertex. While the particle content of showers created by final-state hadrons, electrons, and taus is different, the IceCube detector is currently insensitive to the difference. This means that a shower is produced in $\nu_{e}$ CC interactions, in $\nu_{\tau}$ CC interactions (where the produced $\tau$ does not decay in the muonic channel), and in all-flavor NC interactions. In previous works IceCube data have been analyzed and discussed in detail (see Refs. \cite{Aartsen:2017eiu,Chianese:2017jfa,Palomares-Ruiz:2015mka} and references therein) using a maximum-likelihood approach over the whole collection of events. Although useful information about the energy behavior and the flavor composition has already been extracted, an inference analysis of the properties of each single neutrino event has never been performed. This work also differs from previous analyses in the statistical approach used: having to deal with just one single event at a time, the frequentist approach is unsuitable and may be misleading. For this reason we prefer the Bayesian approach, which we discuss in Sec. \ref{sec:analysis}. The structure of the paper is as follows. We start in Sec. \ref{sec:Flux} by describing our assumptions on the energy and flavor fluxes. Sec.
\ref{sec:scattering} provides a description of deep-inelastic scattering, the energy-dependent cross section of neutrino-nucleon interactions and the neutrino energy loss. Branching fractions of the $\tau$-decay channels, along with the energy distribution of the decay products, are presented in Sec. \ref{sec:tau_decay}. A summary of all parameters used in this analysis and a brief description of the Bayesian method can be found in Sec. \ref{sec:analysis}. Finally, in Sec. \ref{sec:results} we highlight our results and discuss their implications. \section{\label{sec:Flux} Assumptions on neutrino fluxes} Working with neutrinos requires knowledge of their energy and flavor, which are not direct observables. We can only infer these quantities from the deposited energy in the detector and from the event topology. This is possible only if we know \emph{a priori} the expected fluxes of incoming-neutrino energy and flavor. We assume an equal spectrum and flux for neutrinos and anti-neutrinos at the Earth. When fitting the currently available data, the assumption that neutrino and anti-neutrino flavor fractions are the same at the Earth seems reasonable \cite{Nunokawa:2016pop}. All parameters and their properties, if not otherwise specified, are obtained for the sum of the neutrino and anti-neutrino contributions. For the sake of brevity, here and in the rest of this article, we also imply anti-neutrinos when we speak of neutrinos and we will refer to both neutrinos and anti-neutrinos as $\nu$. The flavor ratio at the Earth $(f_e : f_{\mu} : f_{\tau} )_{\oplus}$ is one of the most studied properties of astrophysical neutrinos. This is mainly due to the fact that the flavor ratio of astrophysical neutrinos is both a probe of the sources of high-energy cosmic rays and a test of fundamental particle physics. A deviation from the expected flavor ratio at the Earth would be a signal of new physics in the neutrino sector.
Consistently with the $(1 : 1 : 1 )_{\oplus}$ flavor ratio at Earth commonly expected \cite{Majumdar:2006px} and with the results reported by the IceCube collaboration in Ref. \cite{Aartsen:2015ivb}, a uniformly-distributed prior probability for the neutrino flavor will be used in this analysis. Thus \begin{equation} f(\ell) = \begin{cases} 1/3, & \ell = e \\ 1/3, & \ell = \mu \\ 1/3, & \ell = \tau , \end{cases} \end{equation} where $\ell$ is the leptonic flavor and $f(\ell)$ its probability distribution. Astrophysical neutrinos from cosmic accelerators are generically expected to have a hard energy spectrum. Waxman and Bahcall \cite{Waxman:1998yy} predicted a cosmic neutrino flux proportional to $E_{\nu}^{-2}$, as originally predicted by Fermi. But the spectral index may depend on the source properties and the acceleration mechanism, as pointed out in some recent works (see Ref. \cite{Bell:2013gga} for instance). It is also possible that the neutrino fluxes may be described by more than one component \cite{Chianese:2017jfa,Chianese:2017nwe}. In this analysis we assume a single astrophysical component parametrized in terms of an unbroken power law per neutrino flavor described by two parameters, the normalization $\Phi_{\text{astro}}$ at 100 TeV neutrino energy and the spectral index $\gamma$: \begin{equation} \Phi(E_{\nu} )_{\nu + \bar{\nu}} = \Phi_{\text{astro}} \cdot \left( \frac{E_{\nu}}{100 \; \text{TeV}} \right)^{- \gamma}. \end{equation} Since in this analysis we are mainly interested in inferring the energy $E_{\nu}$ of each single neutrino, the only parameter that matters for us is the spectral index $\gamma$. The most recent estimate of the spectral index for the high-energy astrophysical neutrinos observed by IceCube is given in Ref. \cite{Aartsen:2016xlq}, in which it was found that the 6-year data are well described by an isotropic, unbroken power-law flux with a hard spectral index of $\gamma = 2.13 \pm 0.13$.
Thus our prior distribution for the spectral index will be a normal distribution with mean value $2.13$ and standard deviation $0.13$: \begin{equation} \mathcal{N}(\gamma \, | \, 2.13, 0.13) = \frac{1}{0.13 \sqrt{2\pi}}\, e^{-\left( \gamma - 2.13 \right)^2 / 0.0338}. \end{equation} According to the cut considered by the IceCube collaboration, we consider only neutrino events with deposited energy in the range 60 TeV--3 PeV, which is a cut often used when performing statistical analyses of the astrophysical flux. The minimum deposited energy of 60 TeV is intended to eliminate most of the expected atmospheric muon background events (in our work we neglect the contribution of atmospheric neutrinos to the neutrino fluxes), while the limit of 3 PeV deposited energy would discard the Glashow resonance at $E_{\nu} \simeq 6.3$ PeV \cite{PhysRev.118.316}, which should give rise to yet-unobserved events in the few-PeV region. From all these considerations we use \begin{equation} \frac{(\gamma -1) \, E_{\nu}^{-\gamma}}{(60 \, \text{TeV})^{-\gamma +1} - (3 \, \text{PeV})^{-\gamma +1 } } \end{equation} with $E_{\nu} \in$ [60 TeV, 3 PeV], as our prior probability distribution for the neutrino energy $E_{\nu}$. \section{\label{sec:scattering} Neutrino-nucleon deep-inelastic scattering} Our current knowledge of the proton's parton distributions allows us to calculate the neutrino-nucleon cross sections with confidence up to neutrino energies of about 10 PeV \cite{Gandhi:1995tf}. At neutrino energies $E_{\nu}$ above some 10 GeV, as relevant for this analysis, neutrino-nucleon reactions are dominated by deep-inelastic scattering.
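As a side note, the truncated power-law energy prior of the previous section can be sampled exactly by inverting its cumulative distribution function. The following Python sketch is our own illustration (not code from the paper's analysis chain):

```python
import numpy as np

# Sketch (not the paper's code): draw E_nu from the truncated power-law
# prior f(E) = (gamma - 1) E^(-gamma) / (Emin^(1-gamma) - Emax^(1-gamma))
# on [60 TeV, 3 PeV] by inverting its CDF.  All energies are in TeV.
EMIN, EMAX = 60.0, 3000.0

def sample_energy_prior(gamma, size, rng=None):
    """Inverse-CDF sampling of the truncated power-law energy prior."""
    rng = np.random.default_rng(rng)
    u = rng.random(size)
    a, b = EMIN ** (1.0 - gamma), EMAX ** (1.0 - gamma)
    # CDF: F(E) = (a - E^(1-gamma)) / (a - b); solve F(E) = u for E
    return (a - u * (a - b)) ** (1.0 / (1.0 - gamma))

samples = sample_energy_prior(gamma=2.13, size=100_000, rng=0)
```

By construction $u=0$ maps to 60 TeV and $u=1$ to 3 PeV, so every draw automatically respects the chosen energy range.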
The processes that go into our evaluation are the CC channel, where the $\nu$ scatters off a quark in the nucleon N via exchange of a virtual W-boson, $$ \nu_{\ell} N \rightarrow X + \ell $$ and the NC channel, via exchange of a virtual Z-boson, $$ \nu_{\ell} N \rightarrow X + \nu_{\ell} , $$ where $\ell = \{ e, \mu, \tau \}$ , and $X$ represents hadrons. In Fig. \ref{fig:CC_NC_plot} both interactions are schematically represented. \begin{figure}[!h] \begin{tikzpicture} \begin{feynman} \vertex (a) {\(\nu_{\ell}\)}; \vertex [below right=1.5cm of a] (b); \vertex [above right=1.2cm of b] (c) {\( \ell \)}; \vertex [below =1cm of b, blob] (d) {\contour{white}{$ $}}; \vertex [left=of d] (e) {\( N \)} ; \vertex [ right=of d] (f) {\( X \)}; \diagram* { (a) -- [fermion, edge label'=$E_{\nu}$] (b), (b) -- [fermion, edge label'=$(1-y) E_{\nu}$] (c), (b) -- [boson, edge label'=\(W \)] (d), (e) -- [fermion, thick] (d), (d) -- [fermion, thick, edge label'=$y E_{\nu}$] (f), }; \end{feynman} \end{tikzpicture} $\qquad$ \begin{tikzpicture} \begin{feynman} \vertex (a) {\(\nu_{\ell}\)}; \vertex [below right=1.5cm of a] (b); \vertex [above right=1.2cm of b] (c) {\( \nu_{\ell} \)}; \vertex [below =1cm of b, blob] (d) {\contour{white}{$ $}}; \vertex [left=of d] (e) {\( N \)} ; \vertex [ right=of d] (f) {\( X \)}; \diagram* { (a) -- [fermion, edge label'=$E_{\nu}$] (b), (b) -- [fermion, edge label'=$(1-y) E_{\nu}$] (c), (b) -- [boson, edge label'=\(Z \)] (d), (e) -- [ fermion, thick] (d), (d) -- [fermion, thick, edge label'=$y E_{\nu}$] (f), }; \end{feynman} \end{tikzpicture} \caption{ Diagrams for charged (left) and neutral (right) current neutrino-nucleon interaction. Time runs from left to right and the flavor index $\ell$ represents $e$, $\mu$, or $\tau$. } \label{fig:CC_NC_plot} \end{figure} The neutrino-nucleon CC and NC cross-sections have been measured by several experiments. A complete review can be found in Ref. 
\cite{Gandhi:1998ri}, from which we report in Tables \ref{tab:sigma_CC} and \ref{tab:sigma_NC} the values of the cross sections for CC and NC interactions, respectively, for given energy values in the range 60 TeV--3 PeV. \def\arraystretch{1.5} \begingroup \squeezetable \begin{table}[!h] \caption{Charged-current cross sections for neutrino, anti-neutrino and their sum for neutrino-nucleon interactions.} \begin{ruledtabular} \begin{tabular}{lccc} $E_{\nu}$ [TeV] & $\sigma_{CC}^{\nu}$ [$10^{-33} \, cm^2$] & $\sigma_{CC}^{\bar{\nu}}$ [$ 10^{-33} \, cm^2$] & $\sigma_{CC}$ [$10^{-33} \, cm^2$] \\ \hline 60 & 0.1514 & 0.1199 & 0.2713 \\ 100 & 0.2022 & 0.1683 & 0.3705 \\ 250 & 0.3255 & 0.2909 & 0.6164 \\ 600 & 0.4985 & 0.4667 & 0.9652 \\ $10^3$ & 0.6342 & 0.6051 & 1.2393 \\ $2.5 \cdot 10^3$ & 0.9601 & 0.9365 & 1.8966 \\ \end{tabular} \end{ruledtabular} \label{tab:sigma_CC} \end{table} \endgroup \def\arraystretch{1.5} \begingroup \squeezetable \begin{table}[!h] \caption{Neutral-current cross sections for neutrino, anti-neutrino and their sum for neutrino-nucleon interactions.} \begin{ruledtabular} \begin{tabular}{lccc} $E_{\nu}$ [TeV] & $\sigma_{NC}^{\nu}$ [$10^{-33} \, cm^2$] & $\sigma_{NC}^{\bar{\nu}}$ [$ 10^{-33} \, cm^2$] & $\sigma_{NC}$ [$ 10^{-33} \, cm^2$] \\ \hline 60 & 0.05615 & 0.04570 & 0.10185 \\ 100 & 0.07667 & 0.06515 & 0.14182 \\ 250 & 0.1280 & 0.1158 & 0.2438 \\ 600 & 0.2017 & 0.1901 & 0.3918 \\ $10^3$ & 0.2600 & 0.2493 & 0.5093 \\ $2.5 \cdot 10^3$ & 0.4018 & 0.3929 & 0.7947 \\ \end{tabular} \end{ruledtabular} \label{tab:sigma_NC} \end{table} \endgroup For the purpose of this analysis, we need to know the probability that a neutrino interacts with a nucleon via the CC or NC channel. In order to estimate this probability we use the values given in Tables \ref{tab:sigma_CC} and \ref{tab:sigma_NC}, from which we get the fraction of NC events \begin{equation} \frac{\sigma_{NC}}{\sigma_{CC}+ \sigma_{NC} }. \label{eq:NC_frac} \end{equation} These values are then fitted, as shown in Fig.
\ref{fig:fit_NC}, in order to obtain the following parametrization in terms of $\epsilon = \log_{10}( E_{\nu} / \text{TeV})$ \begin{equation} A_1 + A_2 \cdot \ln (\epsilon - A_3 ), \label{eq:NC_prior} \end{equation} with $A_1 = 0.2595$, $A_2 = 0.0313$ and $A_3 = 0.2484$. From Fig. \ref{fig:fit_NC} one can see that the probability that a neutrino interacts with a nucleon via NC is $\sim 30 \%$ and, in the range of energies we are interested in, depends only slightly on the neutrino energy $E_{\nu}$. Eq. \ref{eq:NC_prior} will then be used as the prior probability for NC interactions. \begin{figure}[!h] \includegraphics[width=0.9\linewidth,height=0.22\textheight]{figures/fig2.pdf} \caption{Fraction of NC events. Points are taken from Tables \ref{tab:sigma_CC} and \ref{tab:sigma_NC} using Eq. \ref{eq:NC_frac}, while the curve is obtained from Eq. \ref{eq:NC_prior}. } \label{fig:fit_NC} \end{figure} An important parameter that plays a crucial role in this analysis is the inelasticity parameter $y$: as schematically shown in Fig. \ref{fig:CC_NC_plot}, in both CC and NC interactions a fraction ($1-y$) of the neutrino energy $E_{\nu}$ goes to the final-state lepton; the remaining fraction $y$ goes to the final-state hadrons. The differential cross section for CC interactions in terms of $y$ and of the Bjorken scaling variable $x$ (the fraction of the nucleon momentum carried by the struck quark) is given by (in natural units $ \hslash = c= 1$) \begin{equation} \frac{d \sigma_{CC}}{dy dx} =\frac{2 G_F^2 M_N E_{\nu}}{\pi} \left( \frac{M_{W}^2}{Q^2+M_W^2} \right)^2 \left( q + (1-y)^2\bar{q} \right). \label{eq:CC_cros} \end{equation} Likewise, the NC differential cross section is given by \begin{equation} \frac{d \sigma_{NC}}{dy dx} =\frac{2 G_F^2 M_N E_{\nu}}{\pi} \left( \frac{M_{Z}^2}{Q^2+M_Z^2} \right)^2 \left( q^0 + (1-y)^2\bar{q}^0 \right).
\label{eq:NC_cros} \end{equation} In these equations $q$, $\bar{q}$, $ q^0$ and $\bar{q}^0$ are quark and antiquark distribution functions \cite{Connolly:2011vc, Gandhi:1998ri}, $M_N$, $M_W$ and $M_Z$ are respectively the nucleon, W and Z masses, $G_F$ is the Fermi coupling constant and $Q^2 \approx 2xyE_{\nu}M_N$ is the negative four-momentum transfer squared. In order to simulate the $y$-distributions given by Eqs. \ref{eq:CC_cros} and \ref{eq:NC_cros} in our code, we use the algorithm described in Ref. \cite{Connolly:2011vc}. \begin{figure}[!h] \includegraphics[width=0.95\linewidth,height=0.17\textheight]{figures/fig3.png} \caption{Average $y$ as a function of neutrino energy $E_{\nu}$, for CC (solid lines) and NC (dashed) reactions. Figure taken from Ref. \cite{Gandhi:1995tf}. } \end{figure} \section{\label{sec:tau_decay} $\tau$-decay channels} When a $\nu_{\tau}$ and a nucleon interact via a CC interaction, a $\tau$ of energy $E_{\tau} = (1-y) E_{\nu}$ is produced. The $\tau$ is the heaviest of the leptons, with a mass $m_{\tau}$ of 1.78 GeV, and it has a very short lifetime of about $3 \cdot 10^{-13} \, s$. It can decay in the leptonic channel or in the hadronic channel, as shown schematically in Fig. \ref{fig:tau_decay_plot}. The leptonic decays have a total branching fraction of $\sim 35 \%$ and the hadronic decays have a total branching fraction of $\sim 65 \%$, which is consistent with the expected branching fractions when the color charges of the quarks are included.
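As an aside, the NC-fraction parametrization of Eq. \ref{eq:NC_prior} can be checked numerically against the tabulated cross sections of Tables \ref{tab:sigma_CC} and \ref{tab:sigma_NC}. The sketch below is our own consistency check (not the paper's code); taking the logarithm in Eq. \ref{eq:NC_prior} as natural, the fit reproduces the tabulated points to better than $10^{-3}$:

```python
import math

# Consistency check (our own sketch): NC-fraction fit A1 + A2*ln(eps - A3),
# eps = log10(E_nu/TeV), versus the tabulated nu+nubar cross sections
# (sigma_CC, sigma_NC) in units of 1e-33 cm^2.
A1, A2, A3 = 0.2595, 0.0313, 0.2484

SIGMA = {  # E_nu [TeV]: (sigma_CC, sigma_NC)
    60: (0.2713, 0.10185), 100: (0.3705, 0.14182),
    250: (0.6164, 0.2438), 600: (0.9652, 0.3918),
    1000: (1.2393, 0.5093), 2500: (1.8966, 0.7947),
}

def nc_fraction_fit(e_nu_tev):
    """Fitted probability of an NC (rather than CC) interaction."""
    eps = math.log10(e_nu_tev)
    return A1 + A2 * math.log(eps - A3)

for e_nu, (cc, nc) in SIGMA.items():
    assert abs(nc_fraction_fit(e_nu) - nc / (cc + nc)) < 1e-3
```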
\begin{figure}[!h] \begin{tikzpicture} \begin{feynman} \vertex (a) {\(\tau^{-}\)}; \vertex [right=1 cm of a] (b); \vertex [above right=1 cm of b] (f1) {\(\nu_{\tau}\)}; \vertex [below right=1 cm of b] (c); \vertex [above right=1 cm of c] (f2) {\(\overline \nu_{e} , \overline \nu_{\mu}\)}; \vertex [below right=1 cm of c] (f3) {\(e^{-}, \mu^{-} \)}; \diagram* { (a) -- [fermion] (b) -- [fermion] (f1), (b) -- [boson, edge label'=\(W^{-}\)] (c), (c) -- [anti fermion] (f2), (c) -- [fermion] (f3), }; \end{feynman} \end{tikzpicture} \begin{tikzpicture} \begin{feynman} \vertex (a) {\(\tau^{-}\)}; \vertex [right=1 cm of a] (b); \vertex [above right=1 cm of b] (f1) {\(\nu_{\tau}\)}; \vertex [below right=1 cm of b] (c); \vertex [ right=1 cm of c] (f2) ; \vertex [ right=1.5 cm of f2] (f3) ; \diagram* { (a) -- [fermion] (b) -- [fermion] (f1), (b) -- [boson, edge label'=\(W^{-}\)] (c), (c) -- [anti fermion, half left, edge label=\( \bar{u}\)] (f2), (c) -- [ fermion, half right, edge label=\( d \)] (f2), (f2) -- [ fermion, very thick, edge label= hadrons] (f3), }; \end{feynman} \end{tikzpicture} \caption{ Diagrams for leptonic (left) and hadronic (right) decay of the $\tau$ lepton. Time runs from left to right. } \label{fig:tau_decay_plot} \end{figure} The branching fractions into the various decay channels are approximately \cite{Dutta:2000jv} \begin{align} \begin{split} 0.18 \quad & \text{for} \; \tau \rightarrow \nu_{\tau} e \nu_e, \\ 0.18 \quad & \text{for} \; \tau \rightarrow \nu_{\tau} \mu \nu_{\mu}, \\ 0.12 \quad & \text{for} \; \tau \rightarrow \nu_{\tau} \pi ,\\ 0.26 \quad & \text{for} \; \tau \rightarrow \nu_{\tau} \rho , \\ 0.13 \quad & \text{for} \; \tau \rightarrow \nu_{\tau} a_1, \\ 0.13 \quad & \text{for} \; \tau \rightarrow \nu_{\tau} X \quad ( X \neq\pi , \rho, a_1).
\\ \end{split} \label{eq:br_fraction} \end{align} Due to its very short lifetime, the track produced inside the detector by the $\tau$ generally has a length of $ 50 \, m \cdot \left( E_{\tau} / \text{PeV} \right) $ \cite{Xu:2017yxo}. At energies below a PeV, the double-cascade signature is difficult to distinguish from a single cascade, due to the sparse spacing of the digital optical modules. Thus a track produced by a $\tau$ below a few PeV is unresolvable by IceCube. At higher energies ($ \gtrsim 1$ PeV) a signature of $\nu_{\tau}$ CC interactions would be two cascades joined by a short track, referred to as a ``double bang'', which has not yet been observed. Considering the energies of our interest, in this analysis we assume that $\nu_{\tau}$ CC interactions followed by $\tau \rightarrow \nu_{\tau} \mu \nu_{\mu}$ are indistinguishable from a track event produced in $\nu_{\mu}$ CC interactions, while all the other $\tau$-decay channels produce a shower event. The $\nu_{\tau}$ spectrum for leptonic $\tau$ decays has the following form in terms of $z= E_{\nu_{\tau}} / E_{\tau}$ \cite{LIPARI1993195} \begin{equation} \frac{d\sigma}{dz} \propto \left(\frac{5}{3} -3 z^2+ \frac{4 z^3}{3} \right) - P_{\tau} \left( \frac{1}{3}-3 z^2+ \frac{8 z^3}{3} \right) , \label{eq:z_lepton} \end{equation} while the $\nu_{\ell}$ ($\ell = \{e , \mu \}$) spectrum in terms of $z'= E_{\nu_{\ell}} / E_{\tau}$ reads \begin{equation} \frac{d\sigma}{dz'} \propto \left(2 -6 {z'}^2+4 z'^3 \right) - P_{\tau} \left(-2+ 12 z' -18 z'^2+8 z'^3 \right) , \label{eq:zp_lepton} \end{equation} where $P_{\tau}$ is the polarization of the $\tau$. In the case of hadronic decays $ \tau \rightarrow \nu_{\tau} X$, the distribution depends on the kind of hadrons produced. An approximation of the distribution for each hadronic channel $i$, in terms of $z= E_{\nu_{\tau}} / E_{\tau}$ and $r_i = m_i^2/m_{\tau}^2$, can be found in Ref.
\cite{Dutta:2000jv}: \begin{widetext} \begin{equation} \frac{d\sigma}{dz} \propto \begin{cases} \frac{1}{1-r_{\pi}} \theta(1- r_{\pi} - z) + P_{\tau} \frac{2z -1 + r_{\pi}}{(1-r_{\pi})^2} \theta(1- r_{\pi} - z), & \tau \rightarrow \nu_{\tau} \pi ,\\ \frac{1}{1-r_{\rho}} \theta(1- r_{\rho} - z) + P_{\tau} \left( \frac{2z -1 + r_{\rho}}{1-r_{\rho}} \right) \left( \frac{1 - 2 r_{\rho}}{1+ 2 r_{\rho}} \right) \theta(1- r_{\rho} - z), & \tau \rightarrow \nu_{\tau} \rho, \\ \frac{1}{1-r_{a_1}} \theta(1- r_{a_1} - z) + P_{\tau} \left( \frac{2z -1 + r_{a_1}}{1-r_{a_1}} \right) \left( \frac{1 - 2 r_{a_1}}{1+ 2 r_{a_1}} \right) \theta(1- r_{a_1} - z), & \tau \rightarrow \nu_{\tau} a_1, \\ \frac{1}{0.3} \theta(0.3-z), & \tau \rightarrow \nu_{\tau} X \quad ( X \neq\pi , \rho, a_1). \end{cases} \label{eq:z_hadron} \end{equation} \end{widetext} For the energies of our interest we have $m_{\tau}/E_{\tau} \ll 1$, thus it is safe to assume \cite{Hagiwara:2003di,Bourrely:2004iy} that the $\tau$ is almost fully polarized, i.e., $P_{\tau} = 1$. The distributions in Eqs. \ref{eq:z_lepton}, \ref{eq:zp_lepton} and \ref{eq:z_hadron}, along with their respective branching fractions in Eq. \ref{eq:br_fraction}, will then be used as prior distributions in those CC interactions involving a $\nu_{\tau}$ and its subsequent decay. \section{\label{sec:deposited_energy} Deposited energy} All charged particles produced in the neutrino-nucleon interaction propagate through the ice emitting Cherenkov radiation. This Cherenkov radiation is ultimately measured by the IceCube detectors, producing a deposited energy $E_{dep.}$ which is proportional to the total energy $E_{\nu}$ of the neutrino. Each channel has different efficiencies when it comes to producing a measured energy deposition in the IceCube detector. First of all, one has to distinguish electromagnetic cascades from hadronic cascades, which are both recognized in the detector as showers.
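The $\tau$-decay priors of the previous section lend themselves to direct sampling. The following sketch is our own illustration (not the paper's code): it draws a decay channel according to Eq. \ref{eq:br_fraction} and an energy fraction $z$ for the leptonic channels, whose $P_{\tau}=1$ spectrum from Eq. \ref{eq:z_lepton} reduces to $\tfrac{4}{3}(1-z^3)$:

```python
import random

# Sketch (not the paper's code): sample a tau-decay channel according to
# the branching fractions of Eq. (br_fraction), and the energy fraction
# z = E_nutau / E_tau for the leptonic channels via rejection sampling.
BRANCHING = [
    ("e", 0.18), ("mu", 0.18), ("pi", 0.12),
    ("rho", 0.26), ("a1", 0.13), ("X", 0.13),
]

def sample_channel(rng):
    """Draw a decay channel with the tabulated branching fractions."""
    u, acc = rng.random(), 0.0
    for name, br in BRANCHING:
        acc += br
        if u < acc:
            return name
    return BRANCHING[-1][0]  # guard against floating-point rounding

def sample_z_leptonic(rng):
    """Rejection-sample z from (4/3)(1 - z^3), the P_tau = 1 nu_tau spectrum."""
    while True:
        z = rng.random()
        if rng.random() < 1.0 - z ** 3:  # uniform envelope, acceptance 3/4
            return z

rng = random.Random(42)
draws = [(sample_channel(rng), sample_z_leptonic(rng)) for _ in range(10)]
```

The mean of the leptonic $z$ distribution is $0.4$, i.e. on average the outgoing $\nu_{\tau}$ carries 40\% of the $\tau$ energy.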
For electromagnetic cascades one can safely assume that the deposited energy equals the energy of the electron produced in the neutrino-nucleon interaction. On the other hand, the deposited energy in hadronic cascades is less reliable, due to the presence of more neutral particles like neutrons, to large losses due to the binding energies in hadronic processes, and to a higher Cherenkov threshold for hadrons \cite{Kowalski2004Search}. Following Ref. \cite{Palomares-Ruiz:2015mka}, with $E_X$ the energy of the cascade-initiating particle, we define the deposited energy in a hadronic cascade as \begin{equation} E_h (E_X) = \left( 1 - f \cdot \left( \frac{E_X}{E_0} \right)^{-m} \right) \cdot E_X, \end{equation} where $f = 0.533$, $E_0 = 0.399 \, GeV$ and $ m = 0.130$, resulting from a fit to simulations of hadronic cascades \cite{Kowalski2004Search}. For track events, since the lifetime of a muon is much larger than the time it takes to cross the detector, a fraction of the initial muon energy $E_{\mu}$ is lost. The average energy $E_t$ deposited along a track by a muon can be obtained using the parametrization given in Ref. \cite{Palomares-Ruiz:2015mka} \begin{equation} E_{t} (E_{\mu} ) = F_{\mu} \cdot ( E_{\mu} + a/b), \end{equation} where $a=0.206 \, GeV/m$, $b=3.21 \cdot 10^{-4} \, m^{-1}$ and $F_{\mu} = 0.119$. If the track is produced in a tau decay of energy $E_{\tau}$, one also has to take into account that a significant fraction of tau leptons would escape the detector volume before decaying, so that $F_{\mu} $ has to be multiplied by a factor given by \begin{equation} \frac{1 + p_1 \cdot (E_{\tau}/10 \, PeV)}{1 + q_1 \cdot (E_{\tau}/10 \, PeV) + q_2 \cdot (E_{\tau}/10 \, PeV)^2}, \end{equation} where $p_1 = 0.984 $, $q_1 = 1.01 $ and $ q_2 = 1.03 $ \cite{Palomares-Ruiz:2015mka}.
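The parametrizations above translate into a pair of small helper functions. This is a sketch with the constants quoted in the text (all energies in GeV), not the paper's actual implementation:

```python
# Sketch of the deposited-energy parametrizations above.  All energies are
# in GeV; the constants come from the fits quoted in the text.
F_HAD, E0_HAD, M_HAD = 0.533, 0.399, 0.130   # hadronic-cascade fit
A_MU, B_MU, F_MU = 0.206, 3.21e-4, 0.119     # muon-track parametrization

def e_hadronic(e_x):
    """Deposited (electromagnetic-equivalent) energy of a hadronic cascade."""
    return (1.0 - F_HAD * (e_x / E0_HAD) ** (-M_HAD)) * e_x

def e_track(e_mu, e_tau=None):
    """Average energy deposited along a muon track.

    If the muon comes from a tau decay, e_tau (GeV) applies the
    escape-probability correction factor to F_mu.
    """
    f_mu = F_MU
    if e_tau is not None:
        x = e_tau / 1.0e7  # E_tau in units of 10 PeV
        f_mu *= (1.0 + 0.984 * x) / (1.0 + 1.01 * x + 1.03 * x ** 2)
    return f_mu * (e_mu + A_MU / B_MU)
```

For a 100 TeV hadronic cascade, for instance, roughly 89\% of the energy is visible, and the visible fraction grows slowly with $E_X$.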
Finally, for all the neutrino-nucleon interactions we have considered, the total deposited energy $E_{dep.}$ is given by \begin{widetext} \begin{equation} E_{dep.} = \begin{cases} E_h ( E_X) , & \quad NC, \\ E_h ( E_X) +E_{\ell} , & \nu_e \, CC, \\ E_h ( E_X) + E_t(E_{\ell}) , & \nu_{\mu} \, CC, \\ E_h ( E_X) +E_{\ell} \cdot (1 - z-z') , & \nu_{\tau} \, CC \quad \tau \rightarrow \nu_{\tau} e \nu_e, \\ E_h ( E_X) +E_t \left( E_{\ell} \cdot (1 - z-z') \right) , & \nu_{\tau} \, CC \quad \tau \rightarrow \nu_{\tau} \mu \nu_{\mu}, \\ E_h ( E_X) + E_h \left( E_{\ell} \cdot (1-z) \right), & \nu_{\tau} \, CC \quad \tau \rightarrow\nu_{\tau} X, \\ \end{cases} \label{eq:en_dep} \end{equation} \end{widetext} where $E_X = y E_{\nu}$ and $E_{\ell} = (1-y) E_{\nu}$ with $\ell = \{e, \mu, \tau \}$, while $y$ has been discussed in Sec. \ref{sec:scattering}. The parameters $z$ and $z'$, which are respectively $E_{\nu_{\tau}}/ E_{\tau}$ and $E_{\nu_{e,\mu}}/E_{\tau}$, have been discussed in Sec. \ref{sec:tau_decay}. For both topologies the energy deposited within the detector can be reconstructed within $\sim 15 \%$ above 10 TeV \cite{Aartsen:2013vja}. Thus one has to distinguish between the true deposited energy $E_{dep.}$ and the observed deposited energy $E_{dep.}^{obs.}$. For this analysis we simply assume that $E_{dep.}^{obs.}$ follows a normal distribution with mean value given by $E_{dep.}$ and standard deviation $\sigma_{E_{dep.}}$: \begin{align} \begin{split} \mathcal{N} & (E_{dep.}^{obs.} \, | \, E_{dep.},\sigma_{E_{dep.}}) = \\ = & \frac{1}{\sigma_{E_{dep.}} \sqrt{2\pi}}\, e^{-\left( E_{dep.}^{obs.} - E_{dep.} \right)^2 / \left( 2\sigma_{E_{dep.}}^2 \right)} \text{.} \end{split} \end{align} For each neutrino event the value of $\sigma_{E_{dep.}}$ is taken from the uncertainty in the deposited energy provided by IceCube \cite{Aartsen:2015zva,Aartsen:2017mau}. \section{\label{sec:analysis}Analysis} In Table \ref{tab:parameters} we summarize all parameters and the sequence of events that, given a neutrino with energy $E_{\nu}$, cause an observed deposited energy $E_{dep.}^{obs.}$ in the detector. In a certain sense, we need to go backwards through the whole chain of events in order to infer the neutrino energy $E_{\nu}$ from the observed deposited energy $E_{dep.}^{obs.}$ and its topology (which we sometimes abbreviate as ``top.'' in the rest of this paper). In this section we briefly describe how this goal can be achieved using Bayesian inference. \def\arraystretch{1.7} \begingroup \squeezetable \begin{table*} \caption{ Table of all parameters used in this analysis along with their associated prior probability distributions. The right two columns show the sections and the references where these parameters are discussed in detail. The values of $a$ and $b$ in the $z$-parameter row can be obtained from Eq. \ref{eq:z_hadron} with $P_{\tau} = 1$.} \begin{ruledtabular} \begin{tabular}{llcc} Parameters & Prior probability distribution & Sec. & Ref.
\\ \hline \multicolumn{4}{c}{Flux parameters} \\ \hline $r \quad$ (anti-neutrino/neutrino ratio) & $\delta( r -1)$ & \ref{sec:Flux} & \cite{Nunokawa:2016pop} \\ $\ell \quad$ (neutrino flavor) & $\begin{aligned} 1/3, \quad & \ell = e \\ 1/3, \quad & \ell = \mu \\ 1/3, \quad & \ell = \tau \end{aligned}$ & \ref{sec:Flux} & \cite{Aartsen:2015ivb} \\ $ \gamma \quad$ (spectral index) & $\mathcal{N}(\gamma \, | \, 2.13, 0.13) $ & \ref{sec:Flux} & \cite{Aartsen:2016xlq} \\ $E_{\nu} \quad$ (neutrino energy) & $(\gamma-1) \, E_{\nu}^{-\gamma}/ \left( (60 \, \text{TeV})^{-\gamma+1} - (3 \, \text{PeV})^{-\gamma+1} \right) $ & \ref{sec:Flux} & \cite{Aartsen:2016xlq} \\ \hline \multicolumn{4}{c}{Deep-inelastic scattering parameters} \\ \hline $k \quad$ (nucleon-neutrino interaction) & $\begin{aligned} A_1 + A_2 \cdot \ln( \epsilon- A_3), \quad & k = \text{NC} \\ 1- A_1 - A_2 \cdot \ln( \epsilon- A_3), \quad & k = \text{CC} \\ \end{aligned}$ & \ref{sec:scattering} & \cite{Connolly:2011vc, Gandhi:1995tf} \\ $y \quad$ (inelasticity parameter) & $d \sigma_k ( E_{\nu})/d y \quad$ (see Eq. 
\ref{eq:CC_cros} and \ref{eq:NC_cros}) & \ref{sec:scattering} & \cite{Connolly:2011vc,Gandhi:1995tf} \\ \hline \multicolumn{4}{c}{$\tau$-decay parameters} \\ \hline $j \quad$ ($\tau$-decay channel) & $\begin{aligned} 0.18, \quad & j= \tau \rightarrow \nu_{\tau} e \nu_e \\ 0.18, \quad & j= \tau \rightarrow \nu_{\tau} \mu \nu_{\mu} \\ 0.12, \quad & j= \tau \rightarrow \nu_{\tau} \pi \\ 0.26, \quad & j= \tau \rightarrow \nu_{\tau} \rho \\ 0.13, \quad & j= \tau \rightarrow \nu_{\tau} a_1 \\ 0.13, \quad & j= \tau \rightarrow \nu_{\tau} X \quad ( X \neq\pi , \rho, a_1) \end{aligned}$ & \ref{sec:tau_decay} & \cite{Dutta:2000jv} \\ $ z \quad$ (energy fraction $E_{\nu_{\tau}}/E_{\tau}$) & $\begin{aligned} 4/3 \left( 1 - z^3 \right), \quad & \text{if } j= \tau \rightarrow \nu_{\tau} e \nu_e \; \text{or} \;\nu_{\tau} \mu \nu_{\mu} \\ % \left(a_{\pi} + b_{\pi} \cdot z \right) \theta(1- r_{\pi} - z) , \quad & \text{if } j= \tau \rightarrow \nu_{\tau} \pi \\ % \left(a_{\rho} + b_{\rho} \cdot z \right) \theta(1- r_{\rho} - z), \quad & \text{if } j= \tau \rightarrow \nu_{\tau} \rho \\ \left(a_{a_1} + b_{a_1} \cdot z \right) \theta(1- r_{a_1} - z), \quad & \text{if } j= \tau \rightarrow \nu_{\tau} a_1 \\ 1 / 0.3 \, \theta(0.3-z), \quad & \text{if } j= \tau \rightarrow \nu_{\tau} X \quad ( X \neq\pi , \rho, a_1) \end{aligned}$ & \ref{sec:tau_decay} & \cite{Hagiwara:2003di, LIPARI1993195, Dutta:2000jv} \\ % $ z' \quad$ (energy fraction $E_{\ell}/E_{\tau}$) & $ 4 - 12 z' + 12 z'^2 - 4 z'^3, \quad \text{if } j= \tau \rightarrow \nu_{\tau} e \nu_e \; \text{or} \; \nu_{\tau} \mu \nu_{\mu}$ & \ref{sec:tau_decay} & \cite{LIPARI1993195, Dutta:2000jv} \\ % \hline \multicolumn{4}{c}{Deposited Energy} \\ \hline % $E_{dep.}^{obs.} \;$ (observed deposited energy) & $\; \mathcal{N}(E_{dep.}^{obs.} \, | \, E_{dep.},\sigma_{E_{dep.}}) \quad $ with $ E_{dep.}$ defined in Eq. 
\ref{eq:en_dep} & \ref{sec:deposited_energy} & \cite{Palomares-Ruiz:2015mka, Aartsen:2013vja} \\ \end{tabular} \end{ruledtabular} \label{tab:parameters} \end{table*} \endgroup As is usually done in the literature, let $D$ denote the observed data, in our case the deposited energy $E_{dep.}$ and the event topology (track or shower), and let $\theta$ denote the model parameters, which are summarized in the first column of Table \ref{tab:parameters}. Formal inference then requires setting up a joint probability distribution $f(D, \theta)$ (here and in the rest of this paper we will simply write $f$ for all distributions). This joint distribution comprises two parts: a prior distribution $f(\theta)$ (see the second column of Table \ref{tab:parameters}) and a likelihood $f(D | \theta)$. Defining $f(\theta)$ and $f(D | \theta)$ gives the full probability distribution \begin{equation} f(D, \theta) = f(D | \theta) \cdot f(\theta). \end{equation} Having observed $D$, one can then obtain the distribution of $\theta$ conditional on $D$ by applying Bayes' theorem \begin{equation} f(\theta | D) = \frac{f(D | \theta) \cdot f(\theta)}{\int f(D | \theta) \cdot f(\theta) \, d \theta}. \label{eq:bayes} \end{equation} This is called the posterior distribution of $\theta$ and is the object of our Bayesian-inference analysis. From the posterior distribution of $\theta$ one can then obtain the expected value of a given parameter by integrating over the remaining parameters, or study the dependence between parameters $x$ and $y$ by applying the product rule $f(x| y , D) = f(x , y| D) / f(y | D)$. From Eq. \ref{eq:bayes}, one recovers the maximum-likelihood approach as a special case that holds under particular conditions, such as many data points and vague priors, which are clearly not satisfied in this analysis. In theory, Bayesian methods are straightforward: the posterior distribution contains everything one needs to carry out inference.
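As a heavily simplified, one-parameter illustration of the Bayes update above (our own toy example, not the paper's analysis: a power-law prior on $E_{\nu}$, a Gaussian likelihood in the deposited energy, and an assumed fixed deposition fraction of $0.9$ standing in for the full chain of Table \ref{tab:parameters}), the posterior can be evaluated directly on a grid:

```python
import numpy as np

# Toy one-parameter Bayes update: infer E_nu (TeV) from an observed
# deposited energy, assuming E_dep = 0.9 * E_nu (an illustrative stand-in
# for the full chain) and a Gaussian likelihood of width sigma.
GAMMA = 2.13                                  # fixed spectral index
E = np.linspace(60.0, 3000.0, 5000)           # grid over E_nu [TeV]
dE = E[1] - E[0]
prior = E ** (-GAMMA)                         # unnormalized power-law prior

def posterior(e_obs, sigma, frac=0.9):
    """Normalized posterior f(E_nu | e_obs) on the grid."""
    likelihood = np.exp(-0.5 * ((e_obs - frac * E) / sigma) ** 2)
    post = prior * likelihood
    return post / (post.sum() * dE)

p = posterior(e_obs=200.0, sigma=30.0)
e_mean = (E * p).sum() * dE                   # posterior mean of E_nu
```

The steeply falling prior pulls the posterior mean below the naive estimate $e_{obs}/0.9 \simeq 222$ TeV, the same qualitative effect the full analysis exhibits for NC and $\nu_{\tau}$ events.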
In practice, the posterior distribution can be difficult to estimate precisely. A useful tool to derive the posterior distribution of Eq. \ref{eq:bayes} is the Markov Chain Monte Carlo (MCMC) technique. In an MCMC, instead of each point being generated independently of the others (as in a plain Monte Carlo), the sequence of generated points performs a kind of random walk in parameter space. Moreover, the probability of jumping from one point to another depends only on the last point and not on the entire previous history (this is the defining property of a Markov chain). In particular, for this work we performed the MCMC using the Gibbs sampling algorithm \cite{doi:10.1080/01621459.2000.10474335}, in order to explore the entire parameter space of the posterior distribution. This allows us to derive the unknown and potentially complex distribution $f(\theta | D)$ and to estimate all the neutrino properties we are interested in. The results of this inference analysis are presented and discussed in Sec. \ref{sec:results}. \section{\label{sec:results}Results and Conclusion} In Table \ref{tab:results} we show, for each of the 37 shower events above 60 TeV, denoted by its ID number and observed-deposited energy $E_{dep.}^{obs.}$, the mean values (mean) and the standard deviations (s.d.) of the posterior distribution of the neutrino energy $E_{\nu}$. The mean and s.d. values are given assuming different flavors $\ell$ ($e$, $\mu$ or $\tau$) and types of interaction $k$ (CC or NC); for the meaning of the parameters $\ell$ and $k$ we refer the reader to Table \ref{tab:parameters}. In the last columns, one can also find for each neutrino the probability $f( \ell | E_{dep.}^{obs.}, \text{top.} )$ of being of electronic, muonic or tauonic flavor and the probability $f(k | E_{dep.}^{obs.}, \text{top.} )$ of having scattered off a nucleon via a CC or NC interaction. We show the same results for the 13 track events above 60 TeV in Table \ref{tab:results_track}.
In this case, however, the probability of being electronic and the probabilities of having scattered off a nucleon via a CC or NC interaction are absent: as we learned in the previous sections, tracks can only be produced in CC interactions by muonic or tauonic neutrinos. For shower events the neutrino energy $E_{\nu}$ is, as expected, approximately equal to the observed-deposited energy $E_{dep.}^{obs.}$ only in $\nu_e$ CC interactions, where the uncertainty (given by the s.d.) of $E_{\nu}$ is also approximately equal to the uncertainty $\sigma_{E_{dep.}}$ in the observed-deposited energy. For $\nu_{\tau}$ CC interactions and all-flavor NC interactions the situation is instead different: due mainly to the neutrino energy lost in neutrino-nucleon deep-inelastic scattering and to the $\tau$-decay products escaping the detector, the neutrino energy turns out to be higher than the observed-deposited energy, with a more dispersed distribution. This behaviour is manifest in Fig. \ref{fig:plots_PDF_shower}, where the posterior distributions \begin{equation} f( E_{\nu} | \ell, k, E_{dep.}^{obs.}, \text{top.}) \end{equation} are shown for two shower events, with observed-deposited energies $E_{dep.}^{obs.} =(88.4 \pm 12.5) $ TeV and $E_{dep.}^{obs.} =(2003.7 \pm 261.5) $ TeV (the most energetic event). The bottom part of these plots also shows the neutrino-energy distribution $f(E_{\nu} | E_{dep.}^{obs.}, \text{top.})$ obtained making no assumption on $\ell$ and $k$, i.e., marginalizing over these parameters \begin{widetext} \begin{equation} f(E_{\nu} | E_{dep.}^{obs.}, \text{top.} ) = \sum_{\ell, k} f(E_{\nu} | \ell, k , E_{dep.}^{obs.}, \text{top.} ) \cdot f(\ell | E_{dep.}^{obs.}, \text{top.} ) \cdot f(k | E_{dep.}^{obs.}, \text{top.} ). \end{equation} \end{widetext} As one can see from Fig.
\ref{fig:plots_PDF_shower}, for showers, if one has to guess the neutrino energy, the observed-deposited energy in the detector is the best choice, since this value is approximately equal to the mode of the distribution $f(E_{\nu} | E_{dep.}^{obs.}, \text{top.})$. The mean value, instead, is affected by the pronounced tail at higher energy produced by NC and $\nu_{\tau}$ CC interactions. Thus the mean value of $E_{\nu}$ turns out to be higher than the observed-deposited energy: in Fig. \ref{fig:plots_E_shower} we show, for different kinds of interaction, the neutrino-energy mean value as a function of the observed-deposited energy, together with the \emph{relative standard deviation} (RSD), defined as the ratio of the standard deviation to the mean value, which measures the dispersion of a probability distribution. We show the same plots for track events: in Fig. \ref{fig:plots_PDF_track} one can find the posterior distributions for two track events, while in Fig. \ref{fig:plots_E_track} we show the neutrino-energy mean value and RSD as a function of the observed-deposited energy. The main difference between the energy distributions for showers $f(E_{\nu} | E_{dep.}^{obs.}, \text{shower})$ and tracks $f(E_{\nu} | E_{dep.}^{obs.}, \text{track})$ is that for the latter the distribution mode is higher than the observed-deposited energy, while the distribution tail is more pronounced at higher energy. This is mainly due to the fact that tracks are produced by muons, whose energy loss in the detector is only a fraction of the neutrino energy $E_{\nu}$. An important feature that emerges from this analysis, in particular from the right plots of Figs. \ref{fig:plots_E_shower} and \ref{fig:plots_E_track}, is that, as we approach higher observed-deposited energy, the neutrino-energy distributions become less dispersed, as illustrated by the decreasing values taken by the RSD at higher energy.
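This trend can be checked directly with the marginal (all-flavor, CC+NC) mean and s.d. values quoted in Table \ref{tab:results}, here for a low-energy shower (ID 11) and the most energetic one (ID 35):

```python
# RSD = s.d. / mean, computed from the marginal posterior values of
# Table (tab:results) for shower events ID 11 (~88 TeV deposited)
# and ID 35 (~2000 TeV deposited).
def rsd(mean, sd):
    """Relative standard deviation of a distribution."""
    return sd / mean

rsd_low = rsd(121.0, 124.0)    # event 11: mean 121 TeV, s.d. 124 TeV
rsd_high = rsd(2085.0, 346.0)  # event 35: mean 2085 TeV, s.d. 346 TeV
print(rsd_low, rsd_high)       # the high-energy event is far less dispersed
```

The RSD drops by roughly a factor of six between these two events, in line with the right panels of Fig. \ref{fig:plots_E_shower}.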
This behaviour can be understood by taking into account the very steep fall of the neutrino spectrum: at higher energy the right tail of the neutrino-energy distribution becomes less pronounced, because higher energies are less frequent, and this results in a smaller relative dispersion of the density distribution. \begin{figure*} \includegraphics[width=0.49\linewidth,height=0.35\textheight]{figures/fig5a.pdf} \includegraphics[width=0.49\linewidth,height=0.35\textheight]{figures/fig5b.pdf} \caption{Posterior probability distributions $f( E_{\nu} | \ell, k, E_{dep.}^{obs.}, \text{shower})$ of the neutrino energy for two shower events. In the top panel the distributions assuming $\nu_e$ CC interaction (solid line), $\nu_{\tau}$ CC interaction (dashed line) and NC interaction (dotted line) are shown. In the bottom panel the distribution $f( E_{\nu} | E_{dep.}^{obs.}, \text{shower})$, obtained marginalizing over $\ell$ and $k$, is shown.} \label{fig:plots_PDF_shower} \end{figure*} \begin{figure*} \includegraphics[width=0.48\linewidth,height=0.35\textheight]{figures/fig6a.pdf} \includegraphics[width=0.48\linewidth,height=0.35\textheight]{figures/fig6b.pdf} \caption{Posterior probability distributions $f( E_{\nu} | \ell, k, E_{dep.}^{obs.}, \text{track})$ of the neutrino energy for two track events. In the top panel the distributions assuming $\nu_{\mu}$ CC interaction (solid line) and $\nu_{\tau}$ CC interaction (dashed line) are shown.
In the bottom panel the distribution $f( E_{\nu} | E_{dep.}^{obs.}, \text{track})$, obtained marginalizing over $\ell$ and $k$, is shown.} \label{fig:plots_PDF_track} \end{figure*} \begin{figure*} \includegraphics[width=0.49\linewidth,height=0.3\textheight]{figures/fig7a.pdf} \includegraphics[width=0.49\linewidth,height=0.3\textheight]{figures/fig7b.pdf} \caption{The mean value of the neutrino energy $E_{\nu}$ (left) and its RSD value (right) are shown as a function of the observed-deposited energy in shower events making no assumption (solid line), assuming $\nu_e$ CC interaction (dashed line), $\nu_{\tau}$ CC interaction (dotted line) and NC interaction (dot-dashed line).} \label{fig:plots_E_shower} \end{figure*} \begin{figure*} \includegraphics[width=0.49\linewidth,height=0.3\textheight]{figures/fig8a.pdf} \includegraphics[width=0.49\linewidth,height=0.3\textheight]{figures/fig8b.pdf} \caption{The mean value of the neutrino energy $E_{\nu}$ (left) and its RSD value (right) are shown as a function of the observed-deposited energy in track events making no assumption (solid line), assuming $\nu_{\mu}$ CC interaction (dashed line) and $\nu_{\tau}$ CC interaction (dotted line).} \label{fig:plots_E_track} \end{figure*} The very steep fall of the neutrino spectrum (with a spectral index of $\sim 2.13$) also plays a crucial role when estimating the posterior flavor probabilities $f( \ell | E_{dep.}^{obs.}, \text{shower})$. For instance, considering that a $\nu_{\mu}$ produces a shower only in NC interactions, one would expect \emph{a priori} that the probability for a shower event of being generated by a muonic neutrino is $\sim \, 10 \%$, since $\sim 30 \, \%$ is the probability for a neutrino to scatter via a NC interaction (see Fig. \ref{fig:fit_NC}) and $1/3$ is the probability of being muonic. Instead, our Bayesian inference gives a value of $\sim 4 \, \% $.
As mentioned above, this value, which is smaller than the expected one, can be explained only by considering the neutrino spectrum and the existence of other mechanisms that are more efficient at producing a deposited energy (such as the $\nu_{e}$ CC interaction): higher energies are less frequent, so the more particles can escape the detector after a neutrino-nucleon interaction, the smaller the chance that this interaction occurred in the detector. From these considerations it is not surprising that a neutrino producing a shower event has the best chance of being electronic: from Table \ref{tab:results} its flavor is electronic with probability $\sim 62 \, \%$, muonic with $\sim 4 \, \%$ and tauonic with $\sim 34 \, \%$. For track events the situation is simpler: we need to consider only $\nu_{\mu}$ and $\nu_{\tau}$ (followed by $\tau \rightarrow \nu_{\tau} \mu \nu_{\mu}$) CC interactions. Both interactions have a similar efficiency in producing a deposited energy (as illustrated by the top panels of Fig. \ref{fig:plots_PDF_track}), so for track events the most important quantity to consider, when estimating the chance of being muonic or tauonic, is the branching fraction of the muonic $\tau$-decay channel (see Eq. \ref{eq:br_fraction}). Our Bayesian inference for track events (see Table \ref{tab:results_track}) assigns neutrinos a $\sim 87 \, \%$ chance of being muonic and a $\sim 13 \, \%$ chance of being tauonic. Performing an inference analysis of neutrino fluxes over the whole data sample goes beyond the scope of this work, as we are interested only in inferring the properties of each single neutrino event. However, it is worth noticing that, combining our flavor probabilities with the observed track-to-shower ratio, we obtain that the expected flavor ratio $(1 : 1 : 1 )_{\oplus}$ is currently disfavored.
The current track-to-shower ratio for events above 60 TeV is 13/50, thus the flavor flux is approximately given by \begin{widetext} \begin{equation} \sim \frac{1}{50} \left( 37 \cdot 62 \, \% \, : \, 37 \cdot 4 \, \% + 13 \cdot 87 \, \% \, : \, 37 \cdot 34 \, \% + 13 \cdot 13 \, \% \right)_{\oplus} \propto (1 : 0.54 : 0.63 )_{\oplus}. \end{equation} \end{widetext} This finding is in agreement with previous results obtained independently in other analyses \cite{Aartsen:2015ivb,PALOMARESRUIZ2016433, Palladino:2015zua}. Thus the currently observed track-to-shower ratio implies that the electronic flavor is almost two times more frequent than the muonic or tauonic flavor. Although it is not statistically significant at present, and a complete discussion of its implications goes beyond the scope of this paper, this result may be explained either by a misidentification of tracks as showers (according to IceCube the fraction of track misidentification is about $\sim 30 \, \%$ \cite{Aartsen:2015ivb}, while the reverse, i.e., a shower being misclassified as a track, is very rare \cite{Aartsen:2014muf}) or, even more compellingly, by some new physics beyond the standard model. Therefore, a further investigation in this direction will be crucial as more data are collected. In this work we performed, for the first time, a detailed Bayesian inference analysis for each of the 50 high-energy neutrino events above 60 TeV detected by IceCube in 6 years of data taking. We have shown how, from the observed-deposited energy and the event topology, one can obtain an estimate of the neutrino energy and flavor. We have also explained how this analysis depends on the assumptions made for the neutrino fluxes and for the physics involved in all processes producing shower and track events in the detector. From these assumptions we selected the prior probability distributions which seem, at present, the most reasonable ones.
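The flavor-flux estimate above can be reproduced with a few lines of code; this sketch uses the rounded average posterior probabilities ($\sim 62/4/34\,\%$ for showers, $\sim 87/13\,\%$ for tracks), so the result differs slightly from the quoted $(1 : 0.54 : 0.63)_{\oplus}$, which is based on the full per-event probabilities:

```python
# Approximate flavor-flux ratio from 37 showers and 13 tracks, using the
# rounded average posterior flavor probabilities quoted in the text.
n_shower, n_track = 37, 13
p_shower = {"e": 0.62, "mu": 0.04, "tau": 0.34}  # f(l | shower), rounded
p_track = {"mu": 0.87, "tau": 0.13}              # f(l | track), rounded

counts = {
    "e": n_shower * p_shower["e"],
    "mu": n_shower * p_shower["mu"] + n_track * p_track["mu"],
    "tau": n_shower * p_shower["tau"] + n_track * p_track["tau"],
}
# Normalize to the electronic component to get the (1 : x : y) form.
ratio = {f: counts[f] / counts["e"] for f in counts}
print(ratio)  # roughly (1 : 0.56 : 0.62), close to the quoted (1 : 0.54 : 0.63)
```

Either way, the electronic component comes out nearly twice as large as each of the other two.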
Further investigations in high-energy neutrino physics may change the current situation, improving our knowledge of the prior probability distributions for the parameters involved in this inference analysis. Neutrino astronomy has just started, with IceCube providing the first evidence of astrophysical high-energy neutrinos. Inference analyses such as the one presented here, concerning the properties of each high-energy neutrino, have become pressing in searches for new physics and in order to shed some light on many of the questions raised by the observation of these events. \begingroup \squeezetable \def\arraystretch{1.27} \begin{table*}[!h] \caption{Here we show the relevant properties of the 37 shower events with observed-deposited energy above 60 TeV. The first three columns are, respectively, the ID number, the observed-deposited energy and its uncertainty for each shower event. From the fourth to the eleventh column we give the mean value (mean) and the standard deviation (s.d.) of the posterior distributions of $E_{\nu}$ assuming different neutrino flavors and kinds of interaction. In the last columns the probabilities for each shower event of being generated by an electronic, muonic or tauonic neutrino and of having scattered off a nucleon via a CC or NC interaction are shown. } \begin{ruledtabular} \begin{tabular}{cccccccccccccccc} \multirow{4}{*}{ID } & \multirow{4}{*}{$E_{dep.}^{obs.}$ [TeV]} & \multirow{4}{*}{$\sigma_{E_{dep.}}$ [TeV]} & \multicolumn{8}{c}{$E_{\nu}$ [TeV]} & \multicolumn{5}{c}{prob.
[$ \% $]} \\ \cline{4-11} & & & \multicolumn{2}{c}{CC + NC } & \multicolumn{2}{c}{CC} & \multicolumn{2}{c}{CC} & \multicolumn{2}{c}{NC} & \multirow{3}{*}{$\nu_e$ } & \multirow{3}{*}{$\nu_{\mu} $ } & \multirow{3}{*}{$\nu_{\tau} $ } & \multirow{3}{*}{CC } & \multirow{3}{*}{NC } \\ & & & \multicolumn{2}{c}{$\nu_e + \nu_{\mu} + \nu_{\tau}$} & \multicolumn{2}{c}{$\nu_e$} & \multicolumn{2}{c}{$\nu_{\tau}$} & \multicolumn{2}{c}{$\nu_e + \nu_{\mu} + \nu_{\tau}$} & & & & \\ \cline{4-11} & & & mean & s.d. & mean & s.d. & mean & s.d. & mean & s.d. & & & & \\ \hline 2 & 117 & 15.4 & 162 & 158 & 116 & 17 & 174 & 94 & 343 & 364 & 62.2 & 4.2 & 33.5 & 87.3 & 12.7 \\ 4 & 165.4 & 19.8 & 222 & 178 & 165 & 21 & 245 & 121 & 444 & 399 & 62.5 & 4.0 & 33.5 & 88.0 & 12.0 \\ 9 & 63.2 & 8.9 & 92 & 112 & 64 & 8 & 94 & 48 & 201 & 271 & 60.9 & 4.5 & 34.6 & 86.5 & 13.5 \\ 10 & 97.2 & 12.4 & 134 & 130 & 97 & 13 & 145 & 75 & 276 & 306 & 62.2 & 4.2 & 33.5 & 87.3 & 12.7\\ 11 & 88.4 & 12.5 & 121 & 124 & 88 & 13 & 130 & 68 & 257 & 296 & 62.5 & 4.1 & 33.4 & 87.5 & 12.5\\ 12 & 104.1 & 13.2 & 143 & 146 & 104 & 14 & 154 & 88 & 304 & 345 & 62.5 & 4.1 & 33.4 & 87.7 & 12.3 \\ 14 & 1040.7 & 144.4 & 1195 & 379 & 1020 & 153 & 1402 & 411 & 1671 & 529 & 63.9 & 3.1 & 33.1 & 90.7 & 9.3 \\ 17 & 199.7 & 27.2 & 266 & 203 & 197 & 29 & 294 & 146 & 519 & 435 & 62.2 & 4.2 & 33.6 & 87.5 & 12.5 \\ 19 & 71.5 & 7.2 & 103 & 119 & 73 & 8 & 108 & 54 & 229 & 287 & 62.4 & 4.3 & 33.3 & 87.1 & 12.9\\ 20 & 1140.8 & 142.8 & 1307 & 377 & 1128 & 150 & 1531 & 409 & 1789 & 512 & 64.2 & 3.0 & 32.8 & 91.1 & 8.9 \\ 22 & 219.5 & 24.4 & 297 & 218 & 220 & 26 & 329 & 157 & 572 & 461 & 61.8 & 4.2 & 34.0 & 87.5 & 12.5 \\ 26 & 210 & 29 & 279 & 213 & 207 & 31 & 308 & 154 & 546 & 455 & 62.2 & 4.2 & 33.6 & 87.6 & 12.4 \\ 27 & 60.2 & 5.6 & 88 & 107 & 62 & 6 & 91 & 48 & 195 & 264 & 61.9 & 4.2 & 33.8 & 87.2 & 12.8 \\ 30 & 128.7 & 13.8 & 176 & 152 & 130 & 15 & 192 & 98 & 354 & 350 & 62.4 & 4.1 & 33.5 & 87.8 & 12.2 \\ 33 & 384.7 & 48.6 & 491 & 278 & 381 & 
52 & 561 & 244 & 864 & 532 & 62.5 & 3.9 & 33.6 & 88.2 & 11.8 \\ 35 & 2003.7 & 261.5 & 2085 & 346 & 1969 & 274 & 2333 & 348 & 2414 & 349 & 71.0 & 1.7 & 27.2 & 94.8 & 5.2 \\ 39 & 101.3 & 13.3 & 141 & 148 & 101 & 14 & 150 & 77 & 305 & 349 & 62.1 & 4.2 & 33.6 & 87.2 & 12.8 \\ 40 & 157.3 & 16.7 & 217 & 183 & 158 & 18 & 236 & 113 & 440 & 407 & 62.2 & 4.2 & 33.6 & 87.3 & 12.7 \\ 41 & 87.6 & 10 & 124 & 128 & 88 & 11 & 133 & 78 & 264 & 297 & 62.3 & 4.3 & 33.4 & 87.1 & 12.9 \\ 42 & 76.3 & 11.6 & 106 & 117 & 76 & 12 & 112 & 63 & 228 & 277 & 61.9 & 4.3 & 33.7 & 87.0 & 13.0 \\ 46 & 158 & 16.6 & 218 & 183 & 159 & 18 & 236 & 114 & 442 & 406 & 62.0 & 4.3 & 33.7 & 87.2 & 12.8 \\ 48 & 104.7 & 13.5 & 145 & 145 & 104 & 15 & 156 & 83 & 307 & 339 & 62.0 & 4.3 & 33.7 & 87.3 & 12.7 \\ 51 & 66.2 & 6.7 & 95 & 105 & 67 & 7 & 102 & 61 & 204 & 244 & 61.9 & 4.4 & 33.7 & 86.9 & 13.1 \\ 52 & 158.1 & 18.4 & 214 & 175 & 158 & 20 & 235 & 117 & 427 & 391 & 62.5 & 4.1 & 33.4 & 87.7 & 12.3 \\ 56 & 104.2 & 10 & 145 & 133 & 106 & 11 & 157 & 78 & 298 & 311 & 62.3 & 4.1 & 33.6 & 87.6 & 12.4 \\ 57 & 132.1 & 18.1 & 182 & 172 & 131 & 19 & 194 & 101 & 386 & 393 & 62.3 & 4.2 & 33.4 & 87.2 & 12.8 \\ 59 & 124.6 & 11.7 & 174 & 156 & 126 & 13 & 189 & 99 & 355 & 354 & 62.2 & 4.2 & 33.6 & 87.4 & 12.6 \\ 60 & 93 & 12.9 & 129 & 132 & 92 & 14 & 138 & 75 & 274 & 305 & 62.0 & 4.4 & 33.6 & 86.9 & 13.1\\ 64 & 70.8 & 8.1 & 102 & 113 & 72 & 9 & 107 & 58 & 219 & 265 & 61.9 & 4.5 & 33.7 & 86.7 & 13.3\\ 66 & 84.2 & 10.7 & 116 & 115 & 84 & 11 & 127 & 71 & 240 & 273 & 62.0 & 4.1 & 33.9 & 87.6 & 12.4\\ 67 & 165.7 & 16.5 & 230 & 193 & 167 & 18 & 249 & 120 & 464 & 424 & 61.7 & 4.3 & 34.0 & 87.0 & 13.0\\ 70 & 98.8 & 12 & 137 & 138 & 99 & 13 & 147 & 83 & 287 & 325 & 62.4 & 4.2 & 33.4 & 87.5 & 12.5\\ 74 & 71.3 & 9.1 & 100 & 110 & 72 & 9 & 106 & 57 & 214 & 262 & 62.0 & 4.3 & 33.7 & 87.0 & 13.0\\ 75 & 164 & 21.4 & 222 & 187 & 163 & 23 & 242 & 126 & 450 & 415 & 62.4 & 4.1 & 33.5 & 87.6 & 12.4\\ 79 & 158.2 & 20.3 & 215 & 182 & 157 & 22 & 
233 & 115 & 435 & 401 & 62.1 & 4.3 & 33.6 & 87.2 & 12.8\\ 80 & 85.6 & 11.1 & 119 & 127 & 85 & 12 & 127 & 65 & 256 & 304 & 62.4 & 4.2 & 33.4 & 87.3 & 12.7\\ 81 & 151.8 & 21.6 & 203 & 174 & 150 & 23 & 222 & 112 & 410 & 392 & 62.4 & 4.1 & 33.5 & 87.6 & 12.4 \end{tabular} \end{ruledtabular} \label{tab:results} \end{table*} \endgroup \begingroup \squeezetable \def\arraystretch{1.27} \begin{table*}[!h] \caption{Same as for Table \ref{tab:results}, but for the 13 track events with observed-deposited energy above 60 TeV.} \begin{ruledtabular} \begin{tabular}{ccccccccccc} \multirow{4}{*}{ID } & \multirow{4}{*}{$E_{dep.}^{obs.}$ [TeV]} & \multirow{4}{*}{$\sigma_{E_{dep.}}$ [TeV]} & \multicolumn{6}{c}{$E_{\nu}$ [TeV]} & \multicolumn{2}{c}{prob. [$ \% $]} \\ \cline{4-9} & & & \multicolumn{2}{c}{CC} & \multicolumn{2}{c}{CC} & \multicolumn{2}{c}{CC} & \multirow{3}{*}{$\nu_{\mu} $ } & \multirow{3}{*}{$\nu_{\tau} $ } \\ & & & \multicolumn{2}{c}{ $ \nu_{\mu} + \nu_{\tau}$} & \multicolumn{2}{c}{$\nu_{\mu}$ } & \multicolumn{2}{c}{ $\nu_{\tau}$} & & \\ \cline{4-9} & & & mean & s.d. & mean & s.d.
& mean & s.d.& & \\ \hline 3 & 78.7 & 10.8 & 218 & 158 & 215 & 146 & 232 & 221 & 86.8 & 13.2\\ 5 & 71.4 & 9 & 197 & 142 & 195 & 131 & 209 & 198 & 86.8 & 13.2\\ 13 & 252.7 & 25.9 & 719 & 495 & 721 & 488 & 703 & 538 & 87.1 & 12.9\\ 23 & 82.2 & 8.6 & 226 & 163 & 224 & 150 & 240 & 232 & 86.8 & 13.2\\ 38 & 200.5 & 16.4 & 571 & 395 & 570 & 382 & 577 & 469 & 87.0 & 13.0\\ 44 & 84.6 & 7.9 & 235 & 170 & 233 & 154 & 254 & 251 & 86.7 & 13.3\\ 45 & 429.9 & 57.4 & 1071 & 650 & 1083 & 657 & 986 & 587 & 87.3 & 12.7\\ 47 & 74.3 & 8.3 & 207 & 150 & 205 & 136 & 222 & 217 & 86.8 & 13.2\\ 62 & 75.8 & 7.1 & 217 & 156 & 214 & 142 & 235 & 229 & 86.7 & 13.3\\ 63 & 97.4 & 9.6 & 275 & 197 & 272 & 181 & 296 & 280 & 86.8 & 13.2\\ 71 & 73.5 & 10.5 & 200 & 149 & 197 & 134 & 216 & 225 & 86.7 & 13.3\\ 76 & 126.3 & 12.7 & 356 & 253 & 352 & 235 & 379 & 348 & 86.7 & 13.3\\ 82 & 159.3 & 15.5 & 451 & 316 & 450 & 302 & 463 & 395 & 87.0 & 13.0\\ % \end{tabular} \end{ruledtabular} \label{tab:results_track} \end{table*} \endgroup \section*{Acknowledgements} We are grateful to Giovanni Amelino-Camelia for his encouragement in writing this paper and we are grateful to Gennaro Miele for some valuable comments on an earlier version of this manuscript.
\section{Introduction} \label{sIntro} The present paper concerns the relationship between sum-and-distance systems and sum systems, and their general structure, including a construction method for all such systems. Roughly speaking, a sum-and-distance system consists of several component sets of natural numbers such that the sums comprising one element, or its negative, of each set generate a prescribed target set, specifically an arithmetic progression. More simply, a sum system consists of several sets of non-negative integers such that the sums formed by choosing exactly one term from each set generate a sequence of consecutive integers. For the precise definitions, see Section \ref{sDefi} below. \par Two-component sum-and-distance systems arise naturally when we consider square arrays of consecutive integers with certain symmetry properties. The algebraic properties of square matrices with different types of symmetries were recently explored in \cite{rSupAlg}, also giving construction formulae for the various types. In that paper, the matrix entries were assumed to be general real numbers, allowing the symmetry classes to form direct summands in a ${\mathbb Z}_2$-graduation of the matrix algebra over ${\mathbb R}$. However, an additional level of complication is introduced when we require the matrix entries to be integers or, more specifically, a consecutive sequence of integers, such as in a magic square or a principal reversible square \cite{rOB}. \par A reversible square $M$ is an $n \times n$ matrix with the properties of column and line reversal symmetry (R) and the vertex sum property (V) (see \cite{rSupAlg} and equations (\ref{eRcub}), (\ref{eVdef}) in Section \ref{sprc} below). Such a matrix also has the associated symmetry that any two entries in diametrically opposite positions with respect to the centre of the matrix add up to the same constant $2 w$ (see \cite{rSupAlg}, Lemma 7.1). 
Subtracting $w$ from each matrix entry, we obtain a reversible square $M_0$ whose entries sum up to $0.$ If $n = 2 \nu$ is even, it is then of the form (\cite{rSupAlg} Theorem 7.2) \begin{align} M_0 &= \rc 2 \pmatrix{ J(1_\nu\,a^T + b\,1_\nu^T) J & J (-1_\nu\,a^T + b\,1_\nu^T) \cr (1_\nu\,a^T - b\,1_\nu^T) J & -1_\nu\,a^T - b\,1_\nu^T}, \label{ewtless}\end{align} where $1_\nu \in {\mathbb R}^\nu$ is the vector with all entries equal to 1, $J \in {\mathbb R}^{\nu \times \nu}$ is the matrix which has entries 1 on the anti-diagonal and 0 elsewhere, and $a, b \in {\mathbb R}^\nu$ are arbitrary vectors. If the reversible square $M$ is to contain exactly the integers $1, \dots, n^2,$ then the weight $w$ can be found as the average of all entries, $w = (n^2 + 1)/2;$ hence the weightless reversible square $M_0$ will have as entries the numbers \begin{align} \left\{-\frac{n^2-1}2, -\frac{n^2-1}2 + 1, \dots, \frac{n^2-1}2 - 1, \frac{n^2-1}2 \right\}. \nonumber\end{align} Considering that multiplication by $J$ on the left or right just inverts the order of the rows or columns of a matrix, respectively, we find from (\ref{ewtless}) that the sums $\pm a_j \pm b_k,$ with $j, k \in \{1, \dots, \nu\}$ and independently chosen signs, must generate each odd number from $-n^2 + 1$ to $n^2 - 1$ exactly once. In other words, the sets of entries of the vectors $a$ and $b$ form a two-component non-inclusive sum-and-distance system as defined in Section \ref{sDefi} below. 
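As an illustrative example (ours, not taken from the cited references), for $n = 4$, i.e. $\nu = 2$, the sets $\{1, 3\}$ and $\{4, 12\}$ form such a system, which a short script can verify:

```python
# Verify that A = {1, 3}, B = {4, 12} form a two-component non-inclusive
# sum-and-distance system for n = 4: the signed sums ±a ± b must produce
# each odd number from -(n^2 - 1) = -15 to n^2 - 1 = 15 exactly once.
from itertools import product

A, B = [1, 3], [4, 12]
values = sorted(sa * a + sb * b
                for a, b in product(A, B)
                for sa, sb in product((1, -1), repeat=2))
target = list(range(-15, 16, 2))  # odd numbers -15, -13, ..., 15
print(values == target)  # True: each odd value is hit exactly once
```

Inserting these vectors $a = (1, 3)^T$, $b = (4, 12)^T$ into (\ref{ewtless}) and adding the weight $w = (n^2+1)/2$ then yields a $4 \times 4$ reversible square on $1, \dots, 16$.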
\par If $n = 2 \nu + 1$ is odd, then the weightless reversible square $M_0$ will have the form (\cite{rSupAlg} Theorem 7.2) \begin{align} M_0 &= \pmatrix{J(1_\nu\,a^T + b\,1_\nu^T)J & J b & J(-1_\nu\,a^T + b\,1_\nu^T) \cr (J a)^T & 0 & - a^T \cr (1_\nu\,a^T - b\,1_\nu^T)J & - b & -1_\nu\,a^T - b\,1_\nu^T } \nonumber\end{align} with vectors $a, b \in {\mathbb R}^\nu,$ and by the same reasoning as above, we find that, for the reversible square $M$ to contain the integers $1, \dots, n^2,$ the sums $\pm a_j \pm b_k,$ where $j, k \in \{1, \dots, \nu\}$ and the signs are chosen independently, taken together with the entries $\pm a_j, \pm b_j$ $(j \in \{1, \dots, \nu\}),$ must generate exactly the integers $1, 2, \dots, (n^2-1)/2$ and their negatives. In other words, the sets of entries of the vectors $a$ and $b$ form a two-component inclusive sum-and-distance system, as defined in Section \ref{sDefi} below. \par Similarly, sum-and-distance systems appear in a certain type of rank 3 associated magic squares (cf.\ \cite{rBloRep} Theorem 9). As shown in Lemma 3.1 and Theorem 4.1 of \cite{rSupAlg} (see also \cite{rBloRep} Theorem 1), a $2\nu \times 2 \nu$ matrix $M$ will have all rows and columns adding up to the same number, and also the associated symmetry described above, if, after subtracting the weight $w,$ it has the form \begin{align} M_0 &= \rc 2 \pmatrix{J(J V^T + W J)J & J(-J V^T + W J) \cr (J V^T - W J)J & -J V^T - W J} \nonumber\end{align} with matrices $V, W \in {\mathbb R}^{\nu \times \nu}$ whose rows add up to 0. Specifically, if $\nu$ is even and we choose vectors $v, w$ with entries $\pm 1$ which, for each vector, add up to 0, and further vectors $a, b \in {\mathbb R}^\nu,$ and set $V = a\,v^T,$ $W = b\,w^T,$ then the resulting matrix $M$ (after adding the weight $w = (n^2+1)/2$) will be an associated magic square with entries $1, \dots, n^2$ if and only if the sets of entries of $a$ and $b$ form a two-component non-inclusive sum-and-distance system. 
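A minimal illustrative case (again ours, not from the references): for $n = 3$, i.e. $\nu = 1$, the choice $a = 1$, $b = 3$ satisfies the inclusive condition, as a short check confirms:

```python
# Verify that a = 1, b = 3 form a two-component inclusive sum-and-distance
# system for n = 3: the signed sums ±a ± b together with ±a and ±b must
# produce each nonzero integer from -(n^2 - 1)/2 = -4 to 4 exactly once.
from itertools import product

a, b = 1, 3
values = sorted([sa * a + sb * b for sa, sb in product((1, -1), repeat=2)]
                + [a, -a, b, -b])
target = [v for v in range(-4, 5) if v != 0]
print(values == target)  # True
```

Together with the central entry $0$, these values fill the weightless $3 \times 3$ square, i.e. the classical $3 \times 3$ construction with entries $1, \dots, 9$ after adding $w = 5$.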
\par As a final example, we mention most perfect squares; these are square matrices of even dimensions which, in addition to having all rows and columns adding up to the same number, have the properties that all $2 \times 2$ submatrices have the same sum of entries, and that all pairs of entries half the matrix size apart on any diagonal add up to the same number. By \cite{rSupAlg} Theorem 6.2 any $2 \nu \times 2 \nu$ most perfect square, with even $\nu,$ is, after subtracting the weight from each entry, of the form \begin{align} M_0 &= \pmatrix{a\,\S_\nu^T + \S_\nu\,b^T & a\,\S_\nu^T - \S_\nu\,b^T \cr -a\,\S_\nu^T + \S_\nu\,b^T & -a\,\S_\nu^T - \S_\nu\,b^T}, \nonumber\end{align} where $a, b \in {\mathbb R}^\nu$ are any vectors and $\S_\nu = (1, -1, 1, -1, \dots, 1, -1)^T \in {\mathbb R}^\nu.$ Again we see that $M$ will have entries $1, \dots, (2 \nu)^2$ if and only if the sets of entries of the vectors $2 a$ and $2 b$ form a two-component non-inclusive sum-and-distance system. \par Sum systems are conceptually simpler. They are directly related to reversible cuboids, the multidimensional analogues of reversible squares and rectangles, as shown in Theorem \ref{tssten} below. Further, it is one of the results of the present study that sum systems are in one-to-one correspondence with sum-and-distance systems (Theorems \ref{tSdsSs1}, \ref{tSdsSs2}). \par We remark that sum systems can be interpreted as discrete local coordinate systems for a set of consecutive integers, generalising the base $q$ decimal representation. 
Indeed, the integers $0, 1, \dots, q^m-1$ can be uniquely represented in the form \begin{align} \sum_{j=1}^m a_j q^{j-1}, \nonumber\end{align} where $a_j \in \{0, 1, \dots, q-1\},$ so the sets \begin{align*} &\{0, 1, 2, \dots, q-1\}, \{0, q, 2q, \dots, q^2-q\}, \{0, q^2, 2q^2, \dots, q^3-q^2\},\\ &\qquad\qquad\dots, \{0, q^{m-1}, 2 q^{m-1}, \dots, q^m - q^{m-1}\} \end{align*} form an $m$-component sum system in the sense defined in Section \ref{sDefi} below. Using this system as a basis, the $m$ entries, one taken from each component set, which add up to a given number can be considered as that number's discrete coordinates. In general, sum systems will have a considerably more complicated structure than the above simple arithmetic progressions, and it is one of the main results of the present paper to provide a constructive description of the general sum system (see Theorem \ref{tssbuild}). \par Research on some related topics has been undertaken previously, including the study of arithmetic progressions arising in the sum of two sets of integers \cite{rB}, \cite{rgreen}; comparing the sizes of the sum set and the difference set of a set with itself \cite{rMO}, \cite{rR}, \cite{rN}; for an overview of this subject, see \cite{rGr}. However, it seems that despite the simplicity of the concepts, sum systems and sum-and-distance systems, as studied here, have not attracted much attention in the mathematical literature, and our present results are new. \par The paper is organised as follows. After giving the definitions of sum-and-distance systems and sum systems in Section \ref{sDefi}, we use a polynomial factorisation method to show in Section \ref{sSdsSs} that there is a one-to-one relationship between sum-and-distance systems and sum systems of suitable size. It is fairly straightforward to see that a sum-and-distance system generates a corresponding sum system, but the fact that every sum system arises in this way is not obvious. 
In Section \ref{sprc}, we explore the connection between $m$-component sum systems and $m$-dimensional principal reversible cuboids, which are generalisations of Ollerenshaw and Br\'ee's principal reversible squares \cite{rOB} from square matrices to more general order $m$ tensors. This shows that the structure of sum systems (and hence, by means of the bijection, of sum-and-distance systems) can be fully understood in terms of the construction of principal reversible cuboids. In Section \ref{sconstr}, we establish that the structure of the latter is essentially recursive, in the sense that any principal reversible cuboid arises from glueing offset copies of a maximal principal reversible subcuboid together. Finally, in Section \ref{sjof} we show that, due to this recursive property, every principal reversible cuboid can be constructed by means of a chain of building operators with parameters arising from a joint ordered factorisation of the cuboid's dimensions, thus linking the structure of principal reversible cuboids with number theoretic properties of their sizes. As a result, we obtain the general structure of the component sets of sum systems as nested arithmetic progressions. We conclude with some examples which illustrate how sum systems and sum-and-distance systems arise from joint ordered factorisations. \section{Definition of sum-and-distance systems and sum systems} \label{sDefi} Arithmetic progressions play a central role in the present paper. We use the notation \def\ap#1{\langle #1 \rangle} \def\Ap#1{\left\langle #1 \right\rangle} $\ap m := \{0, 1, \dots, m-1\}$ for any $m \in {\mathbb N},$ so the arithmetic progression with start value $a,$ step size $s$ and $N$ terms can be expressed as $s \ap N + a \ (= \{a, a+s, a+2s, \dots, a+(N-1)s \}).$ \par Note that we use the standard convention that $A + B = \{x + y : x \in A, y \in B\}$ and $a A + b = \{a x + b : x \in A\}$ for sets $A, B \subset {\mathbb R}$ and $a, b \in {\mathbb R}$ throughout. 
As usual, $A - B = A + (-B).$ We write $|M|$ for the cardinality of a finite set $M.$ \par\medskip {\it Definition.} Let $\nu, \mu \in {\mathbb N}.$ We call a pair of sets $\{a_1, \dots, a_\nu\}, \{b_1, \dots, b_\mu\} \subset {\mathbb N}$ a {\it (non-inclusive) sum-and-distance system\/} if \begin{align} \{|a_j \pm b_k| : j \in \{1, \dots, \nu\}, k \in \{1, \dots, \mu\}\} &= 2 \ap{2 \nu \mu} + 1. \nonumber\end{align} The set of pairs is called an {\it inclusive sum-and-distance system\/} if \begin{align} \{|a_j \pm b_k|, a_j, b_k : j \in \{1, \dots, \nu\}, k \in \{1, \dots, \mu\}\} &= \ap{2 \nu \mu + \nu + \mu} + 1. \nonumber\end{align} \par\medskip\noindent The target set for a non-inclusive sum-and-distance system, $2 \ap{2 \nu \mu} +1 = \{1, 3, 5, \dots, 4 \nu \mu -1\},$ differs from that for an inclusive sum-and-distance system, $\ap{2 \nu \mu + \nu + \mu} + 1 = \{1, 2, \dots, 2 \nu \mu + \nu + \mu\},$ in that the former only has odd integers; this difference is motivated by the situations outlined above in which sum-and-distance systems arise, and the reason for it will be made transparent by Theorems \ref{tSdsSs1}, \ref{tSdsSs2}. \par Sum-and-distance systems can be equivalently characterised by a target set of positive and negative numbers in the following way. 
\begin{lemma}\label{lfulltarget} Let $\{a_1, \dots, a_\nu\}, \{b_1, \dots, b_\mu\} \subset {\mathbb N},$ $\nu, \mu \in {\mathbb N}.$ \par\medskip\noindent (a) These sets form a non-inclusive sum-and-distance system if and only if \begin{align} \{\pm a_j \pm b_k : j \in \{1, \dots, \nu\}, k \in \{1, \dots, \mu\}\} = 2 \ap{4 \nu \mu} - 4 \nu \mu + 1, \nonumber\end{align} where the signs $\pm$ are chosen independently, so there are 4 elements of the set for each pair $(j, k).$ \par\medskip\noindent (b) These sets form an inclusive sum-and-distance system if and only if \begin{align} \{\pm a_j \pm b_k, \pm a_j, \pm b_k, 0 : j \in \{1, \dots, \nu\}, &k \in \{1, \dots, \mu\}\}\nonumber \\ &= \ap{(2 \nu + 1) (2 \mu + 1)} - 2 \nu \mu - \nu - \mu, \nonumber\end{align} where the signs $\pm$ are chosen independently, so there are 8 elements of the set for each pair $(j, k).$ \end{lemma} \begin{proof} (a) The sums $\pm a_j \pm b_k$ will give exactly the sums and absolute distances $|a_j \pm b_k|$ and their negatives $-|a_j \pm b_k|,$ so the resulting set will be the union of the target set of the non-inclusive sum-and-distance system with its negative; this can be written as the step-2 arithmetic progression on the right-hand side. \par\medskip\noindent (b) The sums $\pm a_j \pm b_k$ give the same results as in (a), and including the elements $\pm a_j$ and $\pm b_k,$ we obtain the union of the target set of the inclusive sum-and-distance system with its negative. By adding the element 0 to the set, we can complete this to the arithmetic progression on the right-hand side. \end{proof} The above lemma motivates the following generalisation. \par\medskip {\it Definition.} Let $m \in {\mathbb N}$ and $A_j \subset {\mathbb N}$ $(j \in \{1, \dots, m\}).$ Then we call $A_1, A_2, \dots, A_m$ a {\it (non-inclusive) $m$-part sum-and-distance system\/} if \begin{align} \sum_{j=1}^m (A_j \cup (-A_j)) &= 2 \Ap{2^m \prod_{j=1}^m |A_j|} - 2^m \prod_{j=1}^m |A_j| + 1. 
\nonumber\end{align} We call $A_1, A_2, \dots, A_m$ an {\it inclusive $m$-part sum-and-distance system\/} if \begin{align} \sum_{j=1}^m (A_j \cup \{0\} \cup (-A_j)) &= \Ap{\prod_{j=1}^m (2|A_j|+1)} - \rc 2 \left(\prod_{j=1}^m (2|A_j|+1) - 1 \right). \nonumber\end{align} \par\medskip {\it Definition.} Let $n_1, n_2 \in {\mathbb N}+1.$ \hfill\break We call a pair of sets $A = \{a_1, \dots, a_{n_1}\}, B = \{b_1, \dots, b_{n_2}\} \subset {\mathbb N}_0$ a {\it sum system\/} if \begin{align} A + B &= \ap{n_1 n_2}, \nonumber\end{align} i.e. in explicit form, \begin{align} \{a_j + b_k : j \in \{1, \dots, n_1\}, k \in \{1, \dots, n_2\}\} &= \{0, 1, \dots, n_1 n_2 - 1\}. \nonumber\end{align} More generally, we call a collection of $m$ sets $A_1, A_2, \dots, A_m \subset {\mathbb N}_0,$ each of cardinality at least 2, an {\it $m$-part sum system\/} if \begin{align} \sum_{k=1}^m A_k &= \Ap{\prod_{k=1}^m \left| A_k \right|}; \nonumber\end{align} note that \begin{align} \sum_{k=1}^m A_k = \left\{ \sum_{k=1}^m a_k : a_k \in A_k \ (k \in \{1, \dots, m\})\right \}. \nonumber\end{align} \par\medskip\noindent Since the number 0 in the target set can only arise as a sum of 0s, as all numbers in the sets are non-negative, it follows that each component set of a sum system contains the number 0. \section{Correspondence between sum-and-distance systems and sum systems} \label{sSdsSs} Given a finite set $M \subset {\mathbb N}_0,$ we can associate with it the polynomial \begin{align} p_M(x) &= \sum_{j \in M} x^j. \label{epoly}\end{align} More generally, for a finite set $M \subset {\mathbb Z},$ we have an associated Laurent polynomial (\ref{epoly}) which may include negative powers. \par Specifically for the arithmetic progression $M = s \ap N + a,$ where $s, N \in {\mathbb N}$ and $a \in {\mathbb N}_0,$ we find \begin{align} p_{s \ap N + a}(x) &= \sum_{j=0}^{N-1} x^{a+j s} = x^a \sum_{j=0}^{N-1} (x^s)^j.
\nonumber\end{align} Clearly this polynomial has root $0$ with multiplicity $a;$ it is also evident that $1$ is not a root, nor are the other $s$th roots of unity. Hence, to find the further roots of this polynomial, we may assume $x^s \neq 1$ and observe that \begin{align} p_{s \ap N + a}(x) &= x^a \sum_{j=0}^{N-1} (x^s)^j = x^a\,\frac{1 - (x^s)^N}{1 - x^s} = x^a\,\frac{1 - x^{s N}}{1 - x^s}, \nonumber\end{align} which shows that the non-zero roots of $p_{s \ap N + a}$ are exactly the ($s N$)th roots of unity which are not $s$th roots of unity; in particular, they all lie on the complex unit circle. \par\medskip {\it Definition.} A polynomial $p$ of degree $d$ is called {\it palindromic\/} if it is equal to its reciprocal polynomial, i.e.\ if $p(x) = x^{d} p(\rc x),$ so \begin{align} p(x) = \sum_{j=0}^{d} \alpha_j\, x^j \nonumber\end{align} with $\alpha_j = \alpha_{d - j}$ $(j \in \{0, \dots, d\}).$ \par\medskip\noindent The results of this section will rely on the following key observation. \begin{lemma}\label{lkey} Let $p$ be a polynomial with real coefficients and with all its roots situated on the complex unit circle. \begin{description} \item{(a)} If all roots of $p$ are non-real, then $p$ is palindromic and of even degree. \item{(b)} If all roots of $p$ are non-real except for the simple root $-1,$ then $p$ is palindromic and of odd degree.
\end{description} \end{lemma} \begin{proof} (a) As the polynomial has real coefficients, its (non-real) roots come in complex conjugate pairs, say $\{r_j, \overline{r_j} \mid j \in \{1, \dots, m\}\}.$ Thus \begin{align} p(x) &= \prod_{j=1}^m (x - r_j) (x - \overline{r_j}) = x^{2m} \prod_{j=1}^m \left(1 - \frac{r_j}x\right) \left(1 - \frac{\overline{r_j}}x\right) \nonumber\\ &= x^{2m} \prod_{j=1}^m r_j\,\overline{r_j}\,\left(\rc{r_j} - \rc x\right) \left(\rc{\overline{r_j}} - \rc x\right) = x^{2m} \prod_{j=1}^m \left(\rc x - \overline{r_j}\right) \left(\rc x - r_j\right) \nonumber\\ &= x^{2m}\,p\left(\rc x\right), \nonumber\end{align} with $2m$ the degree of the polynomial; here we used that $r_j\,\overline{r_j} = |r_j|^2 = 1$ and hence $\rc{r_j} = \overline{r_j},$ as all roots lie on the unit circle. So $p$ is palindromic. \par\medskip\noindent (b) The polynomial $p$ can be factorised as $p(x) = (x + 1) \tilde p(x),$ where $\tilde p$ only has non-real roots situated on the unit circle. Writing \begin{align} p(x) = \sum_{j=0}^{d} \alpha_j\,x^j, \qquad & \tilde p(x) = \sum_{j=0}^{d-1} \tilde \alpha_j\,x^j, \nonumber\end{align} where $d$ is the degree of the polynomial $p,$ a straightforward calculation gives \begin{align} \alpha_j &= \cases{\tilde \alpha_0 & if j=0, \cr \tilde \alpha_j + \tilde \alpha_{j-1} & if j \in \{1, \dots, d-1\}, \cr \tilde \alpha_{d-1} & if j = d;} \label{ecoefform}\end{align} and hence it follows by recursion that $\tilde \alpha_j \in {\mathbb R}$ $(j \in \{0, \dots, d-1\}),$ since $p$ has real coefficients. Therefore we can apply part (a) to find that $\tilde p$ is palindromic of even degree, i.e. $\tilde \alpha_j = \tilde \alpha_{d-1-j}$ $(j \in \{0, \dots, d-1\}).$ Hence $p$ is of odd degree, and we deduce from (\ref{ecoefform}) that \begin{align} \alpha_{d} &= \tilde \alpha_{d-1} = \tilde \alpha_0 = \alpha_0, \nonumber\\ \alpha_j &= \tilde \alpha_j + \tilde \alpha_{j-1} = \tilde \alpha_{d-1-j} + \tilde \alpha_{d-1-j+1} = \alpha_{d-j} \quad (j \in \{1, \dots, d-1\}), \nonumber\end{align} so $p$ is palindromic.
\end{proof} Using this result, we can show that the component sets of sum systems always have a palindromic structure, too, in the following sense. \begin{theorem}\label{tsspalin} Let $m \in {\mathbb N}.$ Suppose the sets $A_1, A_2, \dots, A_m \subset {\mathbb N}_0$ form an $m$-part sum system. Then, for each $j \in \{1, \dots, m\},$ \begin{align} A_j = (\max A_j) - A_j, \nonumber\end{align} i.e. $x \in A_j$ if and only if $(\max A_j - x) \in A_j.$ \par Moreover, if all component sets $A_j$ have odd cardinality, then $\max A_j$ is even for every $j \in \{1, \dots, m\};$ if at least one component set has even cardinality, then $\max A_j$ is odd for exactly one $j \in \{1, \dots, m\}.$ \end{theorem} \begin{proof} Denoting the elements of the set $A_j$ by $a^{(j)}_1, a^{(j)}_2, \dots, a^{(j)}_{d_j},$ where $d_j = |A_j|,$ and setting $d = \prod \limits_{k=1}^m |A_k|,$ we find \begin{align} \prod_{j=1}^m p_{A_j}(x) &= \left(\sum_{k_1=1}^{d_1} x^{a^{(1)}_{k_1}}\right) \left(\sum_{k_2=1}^{d_2} x^{a^{(2)}_{k_2}}\right) \cdots \left(\sum_{k_m=1}^{d_m} x^{a^{(m)}_{k_m}}\right) \nonumber\\ &= \sum_{k_1=1}^{d_1} \sum_{k_2=1}^{d_2} \cdots \sum_{k_m=1}^{d_m} x^{a^{(1)}_{k_1} + a^{(2)}_{k_2} + \cdots + a^{(m)}_{k_m}} = \sum_{j=0}^{d-1} x^j = \frac{1 - x^d}{1 - x}, \label{esspoly}\end{align} where we used the sum system property $\sum \limits_{k=1}^m A_k = \ap d$ in the penultimate step. This shows that the polynomials $p_{A_j}$ form a factorisation of the polynomial on the right-hand side of (\ref{esspoly}). Now we distinguish between two cases. \par\medskip {\it 1st case.\/} If $d = \prod \limits_{j=1}^m d_j$ is odd, i.e. if all $d_j$ are odd, then the polynomial on the right-hand side of (\ref{esspoly}) has no real roots; its roots are the non-real $d$th roots of unity. Hence, for any $j \in \{1, \dots, m\},$ $p_{A_j}$ has only non-real roots situated on the complex unit circle, and it has real coefficients (in fact, coefficients in $\{0, 1\}$). 
Hence, by Lemma \ref{lkey} (a), $p_{A_j}$ has even degree and is palindromic, which gives the stated property for $A_j.$ \par\medskip {\it 2nd case.\/} If at least one of the component set cardinalities $d_j$ is even, then $d$ is even, so $-1$ is a (simple) root of the polynomial on the right-hand side of (\ref{esspoly}). Therefore exactly one of the polynomials $p_{A_j}$ has the root $-1;$ w.l.o.g. we may assume that $p_{A_1}$ is this polynomial. Then for any $j \in \{2, \dots, m\},$ the same reasoning as in the first case shows that $p_{A_j}$ has even degree and is palindromic, while, by Lemma \ref{lkey} (b), $p_{A_1}$ is palindromic of odd degree. \end{proof} This observation allows us to establish the following bijection between sum-and-distance systems and sum systems. \begin{theorem}\label{tSdsSs1} Let $m \in {\mathbb N}$ and suppose the non-empty sets $A_1, A_2, \dots, A_m \subset {\mathbb N}$ form an $m$-part non-inclusive sum-and-distance system. For $j \in \{1, \dots, m\},$ let \begin{align} \tilde A_j &:= \rc 2\,\max A_j + \rc 2\,(A_j \cup (-A_j)) = \left \{ \frac{(\max A_j) - a}2, \frac{(\max A_j) + a}2 : a \in A_j \right\}; \nonumber\end{align} then $\tilde A_1, \tilde A_2, \dots, \tilde A_m$ form an $m$-part sum system, where each part has even cardinality. \par Conversely, suppose the sets $\tilde A_1, \tilde A_2, \dots, \tilde A_m \subset {\mathbb N}_0$ form an $m$-part sum system, where each component set has even cardinality. Then, for each $j \in \{1, \dots, m\},$ denoting the elements of $\tilde A_j$ by $0 = \alpha_1 < \alpha_2 < \cdots < \alpha_{2 \nu_j},$ let \begin{align} A_j &:= \{\alpha_{\nu_j + k} - \alpha_{\nu_j + 1 - k} : k \in \{1, \dots, \nu_j\} \}; \nonumber\end{align} then the sets $A_1, A_2, \dots, A_m$ form an $m$-part non-inclusive sum-and-distance system. 
\end{theorem} \begin{proof} We find for the set sum \begin{align} \sum_{j=1}^m \tilde A_j &= \rc 2 \sum_{j=1}^m \max A_j + \rc 2 \sum_{j=1}^m (A_j \cup (-A_j)) \nonumber\\ &= \rc 2 \sum_{j=1}^m \max A_j + \Ap{2^m \prod_{j=1}^m |A_j|} - 2^{m-1} \prod_{j=1}^m |A_j| + \rc 2 \nonumber\\ &= \Ap{2^m \prod_{j=1}^m |A_j|} = \Ap{\prod_{j=1}^m |\tilde A_j|}, \nonumber\end{align} bearing in mind that \begin{align} \rc 2 \sum_{j=1}^m \max A_j &= 2^m \prod_{j=1}^m |A_j| - 1 - 2^{m-1} \prod_{j=1}^m |A_j| + \rc 2, \nonumber\end{align} as the sum of the largest elements of the component sets of a sum-and-distance system gives the largest element of its target set. \par For the converse, we note that for each $j \in \{1, \dots, m\},$ the component set $\tilde A_j$ of the sum system has palindromic symmetry by Theorem \ref{tsspalin}, i.e. its ordered elements satisfy \begin{align} \alpha_{\nu_j+k} + \alpha_{\nu_j+1-k} &= \alpha_{2 \nu_j} \qquad (k \in \{1, \dots, \nu_j\}). \nonumber\end{align} Hence \begin{align} \alpha_{\nu_j+k} - \alpha_{\nu_j+1-k} &= 2 \alpha_{\nu_j+k} - \alpha_{2\nu_j} \nonumber\end{align} and also \begin{align} -(\alpha_{\nu_j+k} - \alpha_{\nu_j+1-k}) &= 2 \alpha_{\nu_j+1-k} - \alpha_{2 \nu_j}, \nonumber\end{align} which gives \begin{align} A_j \cup (-A_j) &= \{2 \alpha_{\nu_j+k} - \alpha_{2 \nu_j} : k \in \{1, \dots, \nu_j\}\} \cup \{2 \alpha_{\nu_j+1-k} - \alpha_{2 \nu_j} : k \in \{1, \dots, \nu_j\}\} \nonumber\\ &= 2 \tilde A_j - \max \tilde A_j. \nonumber\end{align} Therefore \begin{align} \sum_{j=1}^m (A_j \cup (-A_j)) &= \sum_{j=1}^m (2 \tilde A_j - \max \tilde A_j) = 2 \sum_{j=1}^m \tilde A_j - \sum_{j=1}^m \max \tilde A_j \nonumber\\ &= 2 \Ap{\prod_{j=1}^m |\tilde A_j|} - \left(\prod_{j=1}^m |\tilde A_j| - 1 \right) \nonumber\\ &= 2 \Ap{2^m \prod_{j=1}^m |A_j|} - 2^m \prod_{j=1}^m |A_j| + 1, \nonumber\end{align} as required.
\end{proof} \begin{theorem}\label{tSdsSs2} Let $m \in {\mathbb N}$ and suppose the non-empty sets $A_1, A_2, \dots, A_m \subset {\mathbb N}$ form an $m$-part inclusive sum-and-distance system. For $j \in \{1, \dots, m\},$ let \begin{align} \tilde A_j := \max A_j + (A_j \cup \{0\} \cup (-A_j)); \nonumber\end{align} then $\tilde A_1, \tilde A_2, \dots, \tilde A_m$ form an $m$-part sum system, where each part has odd cardinality. \par Conversely, suppose the sets $\tilde A_1, \tilde A_2, \dots, \tilde A_m \subset {\mathbb N}_0$ form an $m$-part sum system, where each component set has odd cardinality. Then, for each $j \in \{1, \dots, m\},$ denoting the elements of $\tilde A_j$ by $0 = \alpha_1 < \alpha_2 < \cdots < \alpha_{2 \nu_j+1},$ let \begin{align} A_j &:= \{\textstyle{\rc 2}\,(\alpha_{\nu_j+1+k} - \alpha_{\nu_j+1-k}) : k \in \{1, \dots, \nu_j\}\}; \nonumber\end{align} then the sets $A_1, A_2, \dots, A_m$ form an $m$-part inclusive sum-and-distance system. \end{theorem} \begin{proof} In analogy to the proof of Theorem \ref{tSdsSs1}, we find the set sum \begin{align} \sum_{j=1}^m \tilde A_j &= \sum_{j=1}^m \max A_j + \sum_{j=1}^m (A_j \cup \{0\} \cup (-A_j)) \nonumber\\ &= \sum_{j=1}^m \max A_j + \Ap{\prod_{j=1}^m (2|A_j|+1)} - \rc 2 \prod_{j=1}^m (2|A_j|+1) + \rc 2 \nonumber\\ &= \Ap{\prod_{j=1}^m (2|A_j|+1)} = \Ap{\prod_{j=1}^m |\tilde A_j|}. \nonumber\end{align} \par For the converse, we use the fact that for each $j \in \{1, \dots, m\},$ the component set $\tilde A_j$ of the sum system has palindromic symmetry by Theorem \ref{tsspalin}, which gives \begin{align} \alpha_{\nu_j+1+k} + \alpha_{\nu_j+1-k} &= \alpha_{2\nu_j+1} \qquad (k \in \{0, \dots, \nu_j\}) \nonumber\end{align} (bearing in mind that $\alpha_1 = 0$), and in particular $2 \alpha_{\nu_j+1} = \alpha_{2\nu_j+1}.$ Thus $\alpha_{2\nu_j+1} = \max \tilde A_j$ is even, which also follows from Theorem \ref{tsspalin}, as all component sets of the sum system have odd cardinality.
Hence \begin{align} {\textstyle \rc 2}\,(\alpha_{\nu_j+1+k} - \alpha_{\nu_j+1-k}) &= \alpha_{\nu_j+1+k} - {\textstyle \rc 2}\,\alpha_{2\nu_j+1} = \alpha_{\nu_j+1+k} - \alpha_{\nu_j+1} \in {\mathbb N}, \nonumber\end{align} and \begin{align} -{\textstyle \rc 2}\,(\alpha_{\nu_j+1+k} - \alpha_{\nu_j+1-k}) &= \alpha_{\nu_j+1-k} - {\textstyle \rc 2}\,\alpha_{2\nu_j+1} = \alpha_{\nu_j+1-k} - \alpha_{\nu_j+1} \in -{\mathbb N} \nonumber\end{align} $(k \in \{1, \dots, \nu_j\}).$ Consequently, \begin{align} A_j &\cup \{0\} \cup (-A_j) \nonumber\\ &= \{\alpha_{\nu_j+1+k} - \alpha_{\nu_j+1} : k \in \{1, \dots, \nu_j\}\} \cup \{0\} \cup \{\alpha_{\nu_j+1-k} - \alpha_{\nu_j+1} : k \in \{1, \dots, \nu_j\}\} \nonumber\\ &= \{\alpha_k - \alpha_{\nu_j+1} : k \in \{1, \dots, 2 \nu_j + 1\}\} = \tilde A_j - {\textstyle \rc 2} \max \tilde A_j. \nonumber\end{align} This gives \begin{align} \sum_{j=1}^m (A_j \cup \{0\} \cup (-A_j)) &= \sum_{j=1}^m \tilde A_j - \rc 2 \sum_{j=1}^m \max \tilde A_j = \Ap{\prod_{j=1}^m |\tilde A_j|} - \rc 2 \left(\prod_{j=1}^m |\tilde A_j| - 1 \right) \nonumber\\ &= \Ap{\prod_{j=1}^m (2|A_j|+1)} - \rc 2 \left(\prod_{j=1}^m (2|A_j|+1) - 1 \right), \nonumber\end{align} proving the claim. \end{proof} {\it Remark.\/} Note that sum systems with odd cardinality throughout correspond to inclusive sum-and-distance systems, and the tight target set (containing consecutive integers) of the latter is related to the fact that the maximum of each component set of the sum system is even, as apparent from the proof of Theorem \ref{tSdsSs2}. However, sum systems with even cardinality do not have this property, and hence their corresponding non-inclusive sum-and-distance systems have a sparser target set containing consecutive odd integers only. Thus the discrepancy between inclusive and non-inclusive sum-and-distance systems resolves into the simple dichotomy between odd and even cardinality of the component sets when considering the sum systems.
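For concreteness, the transforms of Theorems \ref{tSdsSs1} and \ref{tSdsSs2} can be traced on small examples in a few lines of Python; the systems below are illustrative choices of ours. One checks directly that $\{2,6\},\{1\}$ is a non-inclusive, and $\{1\},\{3\}$ an inclusive, sum-and-distance system.

```python
from itertools import product

def is_sum_system(*sets):
    """Test whether the set sum of the components is {0, ..., prod |A_k| - 1}."""
    d = 1
    for S in sets:
        d *= len(S)
    return {sum(t) for t in product(*sets)} == set(range(d))

def noninclusive_to_sum_system(A):
    """Forward map of Theorem tSdsSs1: 1/2 max A + 1/2 (A u (-A)).
    Integer division is exact here: by the theorem the images are integers."""
    M = max(A)
    return {(M - a) // 2 for a in A} | {(M + a) // 2 for a in A}

def sum_system_to_noninclusive(T):
    """Converse map of Theorem tSdsSs1 (0-based indices), for even |T|."""
    a, n = sorted(T), len(T) // 2
    return {a[n + k] - a[n - 1 - k] for k in range(n)}

def inclusive_to_sum_system(A):
    """Forward map of Theorem tSdsSs2: max A + (A u {0} u (-A))."""
    M = max(A)
    return {M - a for a in A} | {M} | {M + a for a in A}

# Non-inclusive {2, 6}, {1} (target {1, 3, 5, 7}) maps to the sum system
# {0, 2, 4, 6}, {0, 1} with target {0, ..., 7}, and back again:
assert noninclusive_to_sum_system({2, 6}) == {0, 2, 4, 6}
assert is_sum_system({0, 2, 4, 6}, {0, 1})
assert sum_system_to_noninclusive({0, 2, 4, 6}) == {2, 6}

# Inclusive {1}, {3} (target {1, 2, 3, 4}) maps to the sum system
# {0, 1, 2}, {0, 3, 6} with target {0, ..., 8}:
assert is_sum_system(inclusive_to_sum_system({1}), inclusive_to_sum_system({3}))
```

Note how the component cardinalities double under the first map and pass from $|A_j|$ to $2|A_j|+1$ under the second, in accordance with the parity dichotomy just described.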
\par We remark further that at the level of sum systems, there is no reason to require that the components have all odd or all even cardinality. A sum system with mixed parity will, by the transforms given in Theorems \ref{tSdsSs1} and \ref{tSdsSs2}, correspond to a hybrid inclusive/non-inclusive sum-and-distance system, but we do not pursue this correspondence further in the present study. \section{Principal reversible cuboids and sum systems} \label{sprc} In this section we shall extend the definition of reversible square matrices, which can be considered as order 2 tensors, to general order $m$ tensors. We use multiindex notation, i.e. tensor components are indexed by coordinate vectors $k \in {\mathbb N}^m,$ which have a partial ordering given by \begin{align} k \le n \iff k_j \le n_j \ (j \in \{1, \dots, m\}) \qquad (k, n \in {\mathbb N}^m). \nonumber\end{align} The root element of the tensor (corresponding to the top left entry of a matrix) has index $1_m = (1, 1, \dots, 1) \in {\mathbb N}^m.$ We shall also use the standard unit vectors $e_j \in {\mathbb N}^m$ $(j \in \{1, \dots,m\}),$ where $(e_j)_l = \delta_{j l}$ $(j, l \in \{1, \dots, m\}),$ i.e. $e_j$ has $j$th entry 1 and all other entries 0. \par\medskip {\it Definition.} Let $m \in {\mathbb N}$ and $n \in {\mathbb N}^m.$ Then $M \in {\mathbb N}_0^n$ is called an {\it order\/} $m$ {\it tensor\/} (of dimensions $n_1, n_2, \dots, n_m$). It has entries $M_k = M_{k_1, k_2, \dots, k_m} \in {\mathbb N}_0$ $(k \in {\mathbb N}^m, k \le n).$ \par For $j < m,$ we call any subtensor where $m-j$ indices are fixed while the remaining $j$ indices vary in the range determined by $n$ an {\it order $j$ slice\/} of $M.$ \par\medskip {\it Remark.\/} Strictly speaking, the order of the tensor is $|\{j \in \{1, \dots, m\} : n_j > 1\}| \le m,$ so it has order {\it at most\/} $m.$ The order will be exactly $m$ if $n \in ({\mathbb N}+1)^m.$ However, we allow $n \in {\mathbb N}^m$ for ease of reference later. 
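The multiindex conventions above can be made concrete in a few lines of Python; the order-$3$ example tensor below, whose entries simply encode their own index vectors, is a purely illustrative choice of ours.

```python
from itertools import product

# An order-3 tensor of dimensions n = (2, 3, 4), indexed 1-based as in the text;
# the entry at index k = (k1, k2, k3) encodes its coordinates as 100*k1 + 10*k2 + k3.
n = (2, 3, 4)
M = {k: 100 * k[0] + 10 * k[1] + k[2]
     for k in product(*(range(1, d + 1) for d in n))}

def leq(k, m):
    """The componentwise partial ordering k <= m of index vectors."""
    return all(ki <= mi for ki, mi in zip(k, m))

assert M[1, 1, 1] == 111                       # the root element, index 1_m
assert leq((1, 2, 1), n) and not leq((1, 4, 1), n)

# An order-2 slice: fix the first index (here to 2) and let the others vary.
slice_23 = {(k2, k3): M[2, k2, k3]
            for k2 in range(1, n[1] + 1) for k3 in range(1, n[2] + 1)}
assert len(slice_23) == n[1] * n[2]
```

Representing a tensor as a dictionary keyed by index vectors mirrors the notation $M_k$ directly and keeps the partial ordering explicit.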
\par\medskip\noindent The following is an extension of the vertex-cross sum property (V) of matrices which states that the two pairs of diagonally opposite corners of any rectangular submatrix add up to the same number \cite{rSupAlg}. \par\medskip {\it Definition.} Let $M \in {\mathbb N}_0^n,$ $n \in {\mathbb N}^m,$ $m \in {\mathbb N} + 1.$ Then we say that $M$ has the {\it vertex cross sum property\/} (V) if and only if every order 2 slice of M has the property (V) for matrices, i.e. if \begin{align} M_{k_1, \dots, k_i, \dots, k_j, \dots, k_m} + M_{k_1, \dots, k_i', \dots, k_j', \dots, k_m} &= M_{k_1, \dots, k_i, \dots, k_j', \dots, k_m} + M_{k_1, \dots, k_i', \dots, k_j, \dots, k_m} \label{eVdef}\end{align} for all $1\le i < j \le m$ and $k_1, \dots, k_m, k_i', k_j' \in {\mathbb N}$ such that $k_l, k_l' \le n_l$ $(l \in \{1, \dots, m\}).$ \begin{lemma}\label{lVprop} Let $M \in {\mathbb N}_0^n,$ $n \in {\mathbb N}^m,$ $m \in {\mathbb N}+1.$ Then $M$ has property {\rm (V)} if and only if \begin{align} M_k &= \sum_{j=1}^m M_{1_m + (k_j-1)e_j} - (m-1) M_{1_m} \qquad (k \in {\mathbb N}^m, k \le n). \label{eVprop}\end{align} \end{lemma} \begin{proof} Suppose $M$ has property (V). We shall show by induction on $l \in \{2, \dots, m\}$ that for any $k \in {\mathbb N}^m,$ $k \le n,$ and any cardinality $l$ subset $\{j_1, j_2, \dots, j_l\} \subset \{1, 2, \dots, m\},$ \begin{align} M_{1_m + \sum_{r=1}^l (k_{j_r}-1)e_{j_r}} &= \sum_{r=1}^l M_{1_m + (k_{j_r}-1)e_{j_r}} - (l-1) M_{1_m}. \label{eVind}\end{align} For $l = 2,$ property (V) gives \begin{align} M_{1_m + (k_{j_1}-1)e_{j_1} + (k_{j_2}-1)e_{j_2}} &= M_{1_m + (k_{j_1}-1)e_{j_1}} + M_{1_m + (k_{j_2}-1)e_{j_2}} - M_{1_m}. \nonumber\end{align} Now suppose $l \in \{2, \dots, m-1\}$ is such that identity (\ref{eVind}) holds for up to $l$ terms. 
Then, again by property (V), we find \begin{align} M_{1_m + \sum_{r=1}^{l+1} (k_{j_r}-1)e_{j_r}} &= M_{1_m + \sum_{r=1}^{l-1} (k_{j_r}-1)e_{j_r} + (k_{j_l}-1)e_{j_l}} + M_{1_m + \sum_{r=1}^{l-1} (k_{j_r}-1)e_{j_r} + (k_{j_{l+1}}-1)e_{j_{l+1}}} \nonumber\\ &\qquad - M_{1_m + \sum_{r=1}^{l-1} (k_{j_r}-1)e_{j_r}} \nonumber\\ &= \sum_{r=1}^l M_{1_m + (k_{j_r}-1)e_{j_r}} + \sum_{r \in \{1, \dots,l-1\} \cup \{l+1\}} M_{1_m + (k_{j_r}-1)e_{j_r}} \nonumber\\ &\qquad - 2\,(l-1) M_{1_m} - \sum_{r=1}^{l-1} M_{1_m + (k_{j_r}-1)e_{j_r}} + (l-2) M_{1_m} \nonumber\\ &= \sum_{r=1}^{l+1} M_{1_m + (k_{j_r}-1)e_{j_r}} - l\,M_{1_m}. \nonumber\end{align} The identity (\ref{eVprop}) now follows when we take $l = m$ in (\ref{eVind}), which forces $\{j_1, j_2, \dots, j_m\} = \{1, 2, \dots, m\}.$ The converse follows directly from applying identity (\ref{eVprop}) to (\ref{eVdef}). \end{proof} The preceding lemma shows that if the root entry $M_{1_m} = 0,$ then each entry of the order $m$ tensor $M$ is the sum of the entries on the axes for each of its index coordinates, i.e. \begin{align} M_k &= M_{k_1,1,\dots,1} + M_{1, k_2, 1, \dots, 1} + \cdots + M_{1, \dots, 1, k_m}. \nonumber\end{align} This means that overall the set of entries of $M$ is equal to the sum set of the sets of entries on each coordinate axis, where all but one entry of the index vector are kept equal to 1. This gives the following connection with sum systems. \begin{theorem}\label{tssten} Let $m \in {\mathbb N},$ $n \in ({\mathbb N}+1)^m$ and $M \in {\mathbb N}_0^n$ an order $m$ tensor with property {\rm (V),} $M_{1_m} = 0$ and set of entries \begin{align} \{M_k : k \in {\mathbb N}^m, k \le n\} &= \Ap{\prod_{j=1}^m n_j}. \nonumber\end{align} Then the sets $A_1, A_2, \dots, A_m \subset {\mathbb N}_0,$ \begin{align} A_j &= \{M_{1_m + k e_j} : k \in \ap{n_j}\} \qquad (j \in \{1, \dots, m\}) \label{eMcoord}\end{align} form an $m$-part sum system. 
\end{theorem} \begin{proof} The statement follows from Lemma \ref{lVprop} when we note that the identity (\ref{eVprop}) will turn into \begin{align} M_k &= \sum_{j=1}^m M_{1_m + (k_j-1)e_j} \qquad (k \in {\mathbb N}^m, k \le n), \label{eMsum}\end{align} and that the set of entries of $M$ is equal to the target set for the sum system. \end{proof} Conversely, given an $m$-part sum system and choosing the entries on the coordinate axes of $M$ such that they satisfy (\ref{eMcoord}) and $M_{1_m} = 0,$ it is clear that defining the remaining entries via (\ref{eMsum}) will result in an order $m$ tensor with property (V). \par In fact, $M$ can be considered as an $m$-dimensional tabular representation of the sum system with a certain arrangement of the elements of each component set. \par There is some freedom of choice in assigning the elements of the component sets of a sum system to tensor entries so as to satisfy (\ref{eMcoord}), with only the constraint that $M_{1_m} = 0.$ In order to establish a bijection, we introduce the following generalisation of Ollerenshaw and Br\'ee's definition of a principal reversible square \cite{rOB}. \par\medskip {\it Definition.} We call an order $m$ tensor $M \in {\mathbb N}_0^n,$ $n \in ({\mathbb N}+1)^m,$ $m \in {\mathbb N},$ a {\it principal reversible $m$-cuboid\/} if $M$ has property (V), its set of entries is \begin{align} \{M_k : k \in {\mathbb N}^m, k \le n\} &= \Ap{\prod_{j=1}^m n_j}, \nonumber\end{align} and for every $j \in \{1, \dots, m\},$ every row in the $j$th direction is arranged in strictly increasing order, i.e. $M_{k} < M_{k + l e_j} \ (k \in {\mathbb N}^m, 1 \le l \le n_j-k_j).$ \par\medskip\noindent Putting the elements of the sum system component $A_j$ onto the $j$th coordinate axis of $M,$ we obtain the following relationship by virtue of Theorem \ref{tssten}.
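The construction just described is easy to carry out in code. The following Python sketch builds the order-$2$ tensor determined by the $2$-part sum system $\{0,1,2,3\}, \{0,4,8,12\}$ (an illustrative example of ours, with $0$-based indices in place of the $1$-based indices of the text) and verifies the three defining properties of a principal reversible cuboid.

```python
from itertools import product

# Axis entries taken from the 2-part sum system {0,1,2,3}, {0,4,8,12};
# the remaining entries are defined by M_k = a_{1,k_1} + a_{2,k_2}, cf. (eMsum).
A = [[0, 1, 2, 3], [0, 4, 8, 12]]
n = [len(a) for a in A]
M = {k: sum(A[j][k[j]] for j in range(len(A)))
     for k in product(*map(range, n))}

# The set of entries is <n_1 n_2> = {0, ..., 15}:
assert set(M.values()) == set(range(n[0] * n[1]))

# Every row in each direction is arranged in strictly increasing order:
for k in M:
    for j in range(2):
        succ = list(k)
        succ[j] += 1
        if succ[j] < n[j]:
            assert M[k] < M[tuple(succ)]

# Vertex cross sum property (V) for every rectangular subarray:
for (i1, i2), (j1, j2) in product(M, M):
    assert M[i1, i2] + M[j1, j2] == M[i1, j2] + M[j1, i2]

# Recovering the component sets from the coordinate axes, cf. (eMcoord):
assert {M[k, 0] for k in range(n[0])} == set(A[0])
assert {M[0, k] for k in range(n[1])} == set(A[1])
```

In this example $M$ is simply the square with entries $M_{k_1,k_2} = k_1 + 4 k_2,$ i.e. the numbers $0, \dots, 15$ written column by column, the most basic $4 \times 4$ principal reversible square.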
\begin{corollary}\label{cssprc} Let $m \in {\mathbb N}.$ There is a bijection between the principal reversible $m$-cuboids with dimension vector $n \in ({\mathbb N}+1)^m$ and the $m$-part sum systems $A_1, \dots, A_m$ with cardinalities $|A_j| = n_j$ $(j \in \{1, \dots, m\}).$ \end{corollary} \par\medskip\noindent In conjunction with Theorem \ref{tsspalin}, this shows that principal reversible $m$-tensors also have a generalised form of the row and column reversal symmetry (R) defined for matrices \cite{rSupAlg}, as follows. \begin{theorem}\label{trevsym} Let $m \in {\mathbb N},$ $n \in ({\mathbb N}+1)^m,$ and let $M \in {\mathbb N}_0^n$ be a principal reversible $m$-cuboid. Then $M$ has the line reversal symmetry {\rm (R),} i.e. for all $j \in \{1, \dots, m\}$ and any $k \in {\mathbb N}^m, k \le n,$ \begin{align} &M_{k_1, \dots, k_{j-1}, l, k_{j+1}, \dots, k_m} + M_{k_1, \dots, k_{j-1}, n_j+1-l, k_{j+1}, \dots, k_m} \nonumber\\ &= M_{k_1, \dots, k_{j-1}, 1, k_{j+1}, \dots, k_m} + M_{k_1, \dots, k_{j-1}, n_j, k_{j+1}, \dots, k_m} \qquad (l \in \{1, \dots, n_j\}). \label{eRcub}\end{align} \end{theorem} \begin{proof} Let $A_1, A_2, \dots, A_m$ be the sum system corresponding to $M$ by Theorem \ref{tssten}. Then, for each $j \in \{1, \dots, m\},$ \begin{align} 0 &= M_{1_m} < M_{1_m + e_j} < M_{1_m + 2 e_j} < \cdots < M_{1_m + (n_j - 1) e_j} = \max A_j \nonumber\end{align} are the elements of the component set $A_j,$ which by Theorem \ref{tsspalin} has the palindromic property \begin{align} M_{1_m + k e_j} + M_{1_m + (n_j -1-k) e_j} &= M_{1_m + (n_j-1) e_j} + M_{1_m} \qquad (k \in \ap{n_j}). 
\nonumber\end{align} This proves the identity (\ref{eRcub}) along the coordinate axes of $M;$ the general case follows by observing that Lemma \ref{lVprop} gives the representation \begin{align} M_k &= M_{k_1, \dots, k_{j-1}, 1, k_{j+1}, \dots, k_m} + M_{1_m + (k_j-1) e_j} \qquad (j \in \{1, \dots, m\}, k \in {\mathbb N}^m, k \le n) \nonumber\end{align} for the entries of $M.$ \end{proof} \section{Structure and construction of principal reversible cuboids} \label{sconstr} Throughout this section, let $m \in {\mathbb N},$ $n \in {\mathbb N}^m \setminus \{1_m\},$ and consider a principal reversible cuboid $M \in {\mathbb N}_0^n.$ For a multiindex $\tilde n \in {\mathbb N}^m,$ $\tilde n \le n,$ we write \begin{align} M_{[\tilde n]} := (M_k)_{k \le \tilde n} \nonumber\end{align} for the subcuboid of $M$ which has dimensions $\tilde n$ and includes the root entry $M_{1_m}.$ Moreover, we define \begin{align} \mu_{\tilde n} := \min \{N \in {\mathbb N} : N \neq M_k \ (k \le \tilde n)\}, \nonumber\end{align} the smallest positive integer not appearing as an entry in $M_{[\tilde n]}.$ Then we have the following characterisation of principal reversible subcuboids of $M.$ \begin{lemma}\label{lPRsC} For $\tilde n \le n,$ \begin{align} \mu_{\tilde n} &\le \prod_{j=1}^m \tilde n_j, \label{ePRsC}\end{align} with equality if and only if $M_{[\tilde n]}$ is a principal reversible cuboid. \end{lemma} \begin{proof} $M_{[\tilde n]}$ inherits the ordering property and (V) from $M.$ Hence it is a principal reversible cuboid if and only if \begin{align} \{M_k : k \le \tilde n\} &= \Ap{\prod_{j=1}^m \tilde n_j}. 
\nonumber\end{align} If this is the case, then evidently (\ref{ePRsC}) holds; if it is not the case, then $M_{[\tilde n]},$ having $\prod_{j=1}^m \tilde n_j$ different entries, must skip some element of $\Ap{\prod_{j=1}^m \tilde n_j},$ so $\mu_{\tilde n} < \prod_{j=1}^m \tilde n_j.$ \end{proof} By Lemma \ref{lVprop}, the entries of $M$ arise as sums of the corresponding entries along the coordinate axes of $M.$ Let us define \begin{align} a_{j,k} := M_{1_m + k e_j} &\qquad (j \in \{1, \dots, m\}, k \in \ap{n_j}); \nonumber\end{align} then the identity (\ref{eVprop}), with $M_{1_m} = 0,$ gives \begin{align} M_{1_m + k} &= \sum_{j=1}^m M_{1_m + k_j e_j} = \sum_{j=1}^m a_{j, k_j} \qquad (k \in {\mathbb N}_0^m, 1_m + k \le n). \label{eMina}\end{align} The following observation shows that for any subcuboid (containing the root element) $M_{[\tilde n]},$ the smallest missing integer $\mu_{\tilde n}$ appears on a coordinate axis of $M,$ just outside $M_{[\tilde n]}.$ \begin{lemma}\label{lfindmu} Let $\tilde n \le n,$ $\tilde n \neq n.$ Then there is $j \in \{1, \dots, m\}$ such that $\mu_{\tilde n} = a_{j, \tilde n_j}.$ \end{lemma} \begin{proof} By definition, $\mu_{\tilde n}$ is the smallest entry of $M$ outside $M_{[\tilde n]},$ \begin{align} \mu_{\tilde n} &= \min\{M_{\hat n} : \hat n \le n, \hat n \not \le \tilde n\}. 
\nonumber\end{align} By the increasing arrangement of all lines parallel to coordinate axes, we have for any $\hat n$ and any $j \in \{1, \dots, m\}$ that $a_{j, \hat n_j -1} = M_{1_m + (\hat n_j -1)e_j} \le M_{\hat n},$ so \begin{align} \mu_{\tilde n} &= \min \{a_{j, \hat n_j -1} : j \in \{1, \dots, m\}, \tilde n_j < \hat n_j \le n_j\} \nonumber\\ &= \min \{a_{j, \tilde n_j} : j \in \{1, \dots, m\}, \tilde n_j < n_j\}, \nonumber\end{align} by the increasing arrangement of $a_{j, \cdot}.$ \end{proof} If the cuboid in Lemma \ref{lfindmu} arises by truncating a principal reversible subcuboid in one direction only, the smallest missing integer must appear on the axis of the direction of truncation, since the other directions would lead outside the larger enclosing principal reversible subcuboid. \begin{corollary}\label{ccutmu} Let $\tilde n \in {\mathbb N}^m,$ $\tilde n \le n,$ such that $M_{[\tilde n]}$ is a principal reversible subcuboid. Let $j \in \{1, \dots, m\}$ and $\hat n \in {\mathbb N}^m$ such that $\hat n_i = \tilde n_i$ $(i \neq j)$ and $\hat n_j < \tilde n_j.$ Then $\mu_{\hat n} = a_{j, \hat n_j}.$ \end{corollary} The following statement gives an extension of this beyond the confines of the principal reversible subcuboid. \begin{lemma}\label{lmuchain} Let $\tilde n \le n,$ $\tilde n \neq n,$ be such that $M_{[\tilde n]}$ is a proper principal reversible subcuboid of $M,$ and let $j \in \{1, \dots, m\}.$ Suppose for some $k_0 \in {\mathbb N},$ $k_0 < \tilde n_j,$ \begin{align} a_{j, \tilde n_j + k} &= a_{j, k} + \prod_{i=1}^m \tilde n_i \qquad (k \in \ap{k_0}).
\nonumber\end{align} Then $\mu_{\tilde n + k_0 e_j} = a_{j, k_0} + \prod_{i=1}^m \tilde n_i.$ \end{lemma} \begin{proof} By definition, $\mu_{\tilde n + k_0 e_j}$ is the smallest positive integer not in the set $\{M_{\hat n} : \hat n \le \tilde n + k_0 e_j\}.$ Since, by Lemma \ref{lPRsC}, $\{M_{\hat n} : \hat n \le \tilde n\} = \Ap{\prod_{i=1}^m \tilde n_i},$ we have in fact that $\mu_{\tilde n + k_0 e_j} $ is the smallest positive integer not in the set \begin{align} \{M_{\hat n} &: \hat n_i \le \tilde n_i \ (i \neq j), \tilde n_j < \hat n_j \le \tilde n_j + k_0\} \nonumber\\ &= \left \{a_{j, \tilde n_j + k} + \sum_{i \neq j} a_{i, k_i} : k_i \in \ap{\tilde n_i} \ (i \neq j), k \in \ap{k_0} \right\} \nonumber\\ &= \left\{ \prod_{i=1}^m \tilde n_i + \sum_{i=1}^m a_{i, k_i} : k_i \in \ap{\tilde n_i} \ (i \neq j), k_j \in \ap{k_0} \right\} \nonumber\\ &= \prod_{i=1}^m \tilde n_i + \{M_{\hat n} : \hat n_i \le \tilde n_i \ (i \neq j), \hat n_j \le k_0\}, \nonumber\end{align} using (\ref{eMina}) in the first and the hypothesis of the Lemma in the second equality. Taking the minimum on both sides, we find $\mu_{\tilde n + k_0 e_j} = \mu_{\tilde n + (k_0 - \tilde n_j) e_j} + \prod_{i=1}^m \tilde n_i.$ Corollary \ref{ccutmu} gives $\mu_{\tilde n + (k_0 - \tilde n_j) e_j} = a_{j, k_0},$ and the statement follows. \end{proof} The following lemma provides the key to understanding the structure of principal reversible cuboids. Essentially it shows that, starting from a principal reversible subcuboid, finding the entry of $M$ giving the next integer in sequence and adding the slice in the corresponding direction to the subcuboid, and continuing in this way, the next integer in sequence will always be found in the same direction as the previous one, until the addition of slices has completed a larger principal reversible subcuboid (or exhausted $M$). 
Thus the next integer in sequence can only appear in a new direction if the starting point is a complete principal reversible subcuboid, not a general subcuboid. \begin{lemma}\label{lMkey} Suppose $M_{[\tilde n]}$ is a proper principal reversible subcuboid of $M,$ $\tilde n \le n,$ $\tilde n \neq n,$ such that for some $j \in \{1, \dots, m\}$ and some $K \in \{1, \dots, \tilde n_j-1\},$ \begin{align} a_{j, \tilde n_j + k} &= \mu_{\tilde n + k e_j} \qquad (k \in \ap{K}). \nonumber\end{align} Then either $M_{[\tilde n + K e_j]}$ is a principal reversible cuboid, or $\mu_{\tilde n + K e_j} = a_{j, \tilde n_j + K}.$ \end{lemma} \begin{proof} Applying Lemma \ref{lmuchain} recursively to $k_0 \in \{1, \dots, K\},$ we find that \begin{align} a_{j, \tilde n_j + K -1} &= \mu_{\tilde n + (K-1) e_j} = a_{j, K-1} + \prod_{i=1}^m \tilde n_i, \nonumber\end{align} and \begin{align} \mu_{\tilde n + K e_j} &= a_{j,K} + \prod_{i=1}^m \tilde n_i. \label{e2}\end{align} Hence the slice of $M$ with indices $\hat n_i \le \tilde n_i$ $(i \neq j),$ $\hat n_j = \tilde n_j + K$ has entries \begin{align} S_1 &:= \{M_{\hat n} : \hat n_i \le \tilde n_i \ (i \neq j), \hat n_j = \tilde n_j + K\} \nonumber\\ &= \left\{ \sum_{i=1}^m a_{i,k_i} : 0 \le k_i \le \tilde n_i - 1 \ (i \neq j), k_j = \tilde n_j + K - 1 \right\} \nonumber\\ &= \left\{ a_{j,K-1} + \prod_{r=1}^m \tilde n_r + \sum_{i \neq j} a_{i,k_i} : 0 \le k_i \le \tilde n_i - 1 \ (i \neq j) \right\} \nonumber\end{align} using (\ref{eMina}).
Now suppose that $\mu_{\tilde n + K e_j} \neq a_{j, \tilde n_j + K}.$ By Lemma \ref{lfindmu}, there is then $l~\in~\{1, \dots, m\},$ $l \neq j,$ such that $\mu_{\tilde n + K e_j} = a_{l, \tilde n_l}.$ This means (again by (\ref{eMina})) that the slice of $M$ with indices $\hat n_i \le \tilde n_i$ $(i \neq l),$ $\hat n_l = \tilde n_l + 1$ has entries \begin{align} S_2 &:= \{M_{\hat n} : \hat n_i \le \tilde n_i \ (i \neq l), \hat n_l = \tilde n_l + 1\} \nonumber\\ &= \left\{\sum_{i=1}^m a_{i, \tilde k_i} : 0 \le \tilde k_i \le \tilde n_i - 1 \ (i \neq l), \tilde k_l = \tilde n_l \right\}. \nonumber\end{align} As the index sets appearing in the definitions of $S_1$ and $S_2$ are disjoint and all entries of $M$ are different, it follows that $S_1 \cap S_2 = \emptyset.$ \par Now if $M_{[\tilde n + (K- \tilde n_j) e_j]}$ is a principal reversible subcuboid, then $M_{[\tilde n + K e_j]}$ will have entries \begin{align} \Ap{\prod_{i=1}^m \tilde n_i} \cup \left(\prod_{i=1}^m \tilde n_i + \Ap{K \prod_{i \neq j} \tilde n_i}\right) &= \Ap{(\tilde n_j + K) \prod_{i \neq j}\tilde n_i} \nonumber\end{align} and hence be a principal reversible subcuboid. If, on the other hand, $M_{[\tilde n + (K- \tilde n_j) e_j]}$ is not a principal reversible subcuboid, then by Lemma \ref{lPRsC} and Corollary \ref{ccutmu}, \begin{align} a_{j,K} &= \mu_{\tilde n + (K- \tilde n_j) e_j} < K \prod_{i \neq j} \tilde n_i; \nonumber\end{align} also, \begin{align} a_{j, \tilde n_j - K} &= \mu_{\tilde n - K e_j} \le (\tilde n_j-K) \prod_{i \neq j} \tilde n_i, \nonumber\end{align} so $\displaystyle a_{j,K} + a_{j, \tilde n_j - K} < \prod_{i=1}^m \tilde n_i.$ As $M_{[\tilde n]}$ is a principal reversible subcuboid with entries $\displaystyle \Ap{\prod_{i=1}^m \tilde n_i},$ there are suitable indices $k_i \in \ap{\tilde n_i}$ $(i \in \{1, \dots, m\})$ such that \begin{align} a_{j,K} + a_{j, \tilde n_j - K} &= \sum_{i=1}^m a_{i, k_i}. 
\nonumber\end{align} Setting $\tilde k_j := \tilde n_j - 1 - k_j \in \ap{\tilde n_j},$ this equation can be written in the form \begin{align} a_{j,K} + a_{j,\tilde n_j-K} &= a_{j,\tilde n_j - 1 - \tilde k_j} + \sum_{i \neq j} a_{i,k_i}. \label{e1}\end{align} By the reversal symmetry of the principal reversible subcuboid $M_{[\tilde n]}$ (Theorem \ref{trevsym}), the numbers $0 = a_{j,0} < a_{j,1} < \cdots < a_{j,\tilde n_j-1}$ have the property \begin{align} a_{j, \tilde n_j -1} &= a_{j,r} + a_{j, \tilde n_j-1-r} \qquad (r \in \ap{\tilde n_j}), \nonumber\end{align} so in particular $a_{j, \tilde n_j-1-\tilde k_j} = a_{j, \tilde n_j-1} - a_{j, \tilde k_j}$ and $a_{j,\tilde n_j - K} = a_{j, \tilde n_j-1} - a_{j, K-1}.$ Hence equation (\ref{e1}) is equivalent to \begin{align} a_{j,K} + a_{j, \tilde k_j} &= a_{j,K-1} + \sum_{i \neq j} a_{i,k_i}. \nonumber\end{align} Taking into account (\ref{e2}), we hence find \begin{align} S_2 \ni a_{l, \tilde n_l} + a_{j, \tilde k_j} &= \mu_{\tilde n + K e_j} + a_{j, \tilde k_j} = a_{j,K-1} + \prod_{r=1}^m \tilde n_r + \sum_{i \neq j} a_{i,k_i} \in S_1. \nonumber\end{align} This contradicts the fact that $S_1$ and $S_2$ are disjoint. \end{proof} Clearly, given two principal reversible subcuboids of $M,$ one must contain the other, since both contain a consecutive sequence of integers starting from 0 and the entries of $M$ are all different. Therefore the concept of {\it maximality\/} of a proper principal reversible subcuboid of $M$ is well-defined, and there is a unique maximal proper principal reversible subcuboid of $M.$ \begin{theorem}\label{tbasicstruc} Let $m \in {\mathbb N}+1,$ $n \in {\mathbb N}^m \setminus \{1_m\},$ and $M \in {\mathbb N}_0^n$ a principal reversible cuboid.
If $M_{[\tilde n]},$ with $\tilde n \le n,$ $\tilde n \neq n,$ is a maximal proper principal reversible subcuboid, then there is $j \in \{1, \dots, m\}$ such that $\tilde n_i = n_i$ $(i \neq j).$ \end{theorem} \begin{proof} Suppose $\tilde n_j < n_j$ and $\tilde n_k < n_k$ for some $j \neq k.$ Then, by Lemma \ref{lfindmu}, $\mu_{\tilde n} \in \{a_{j,\tilde n_j}, a_{k, \tilde n_k}\};$ w.l.o.g. let $\mu_{\tilde n} = a_{j,\tilde n_j}.$ Then, by Lemma \ref{lMkey}, the next smallest missing number from each extension $M_{[\tilde n + e_j]},$ $M_{[\tilde n + 2 e_j]}, \dots$ of $M_{[\tilde n]}$ in direction $j$ will again be found in direction $j,$ until a larger principal reversible subcuboid $M_{[\tilde n + K e_j]}$ is completed, with some $K > 0.$ As the $k$th entry of the multiindex $\tilde n + K e_j$ is equal to $\tilde n_k < n_k,$ $M_{[\tilde n + K e_j]} \neq M;$ on the other hand, $M_{[\tilde n]}$ is a proper principal reversible subcuboid of $M_{[\tilde n + K e_j]},$ contradicting its maximality. \end{proof} \begin{theorem}\label{tbasicfact} Let $m \in {\mathbb N}+1,$ $n \in {\mathbb N}^m \setminus \{1_m\}$ and $M \in {\mathbb N}_0^n$ a principal reversible cuboid. Then there is some $j \in \{1, \dots, m\}$ and some $\tilde n \in {\mathbb N}^m$ such that $\tilde n_i = n_i$ $(i \neq j),$ $\tilde n_j < n_j,$ $\tilde n_j | n_j$ and $M_{[\tilde n]}$ is a principal reversible subcuboid of $M.$ Moreover, for any $\hat n \le \tilde n$ and $k \in \Ap{\frac{n_j}{\tilde n_j}},$ we have $M_{\hat n + k \tilde n_j e_j} = M_{\hat n} + k \prod_{i=1}^m \tilde n_i.$ \end{theorem} \begin{proof} Since $M$ is not just the trivial $m$-cuboid $(0) \in {\mathbb N}_0^{1_m},$ it has a maximal proper principal reversible subcuboid $M_{[n']},$ with $n' \le n,$ $n' \neq n;$ note that $n' = 1_m$ and hence $M_{[n']} = (0)$ is possible.
By Theorem \ref{tbasicstruc}, there is $j \in \{1, \dots, m\}$ such that $n'_i = n_i$ $(i \neq j)$ and $n'_j < n_j.$ Let $\tilde n \in {\mathbb N}^m$ be such that $\tilde n_i = n_i$ $(i \neq j)$ and such that $\tilde n_j$ is the minimal number for which $M_{[\tilde n]}$ is a principal reversible subcuboid of $M.$ \par By Lemmas \ref{lmuchain} and \ref{lMkey} and eq. (\ref{eMina}), we find \begin{align} M_{\hat n + \tilde n_j e_j} &= \sum_{i \neq j} a_{i, \hat n_i - 1} + a_{j, \hat n_j - 1} + \prod_{r=1}^m \tilde n_r = M_{\hat n} + \prod_{r=1}^m \tilde n_r \nonumber\end{align} for $\hat n \le \tilde n,$ and this makes $M_{[\tilde n + \tilde n_j e_j]}$ a principal reversible subcuboid which is composed of $M_{[\tilde n]}$ and a copy of this cuboid with entries offset by $\prod_{r=1}^m \tilde n_r.$ Applying the same reasoning to this larger subcuboid, if it is not already equal to $M,$ gives the last statement of the Theorem. \par By minimality of $M_{[\tilde n]},$ the principal reversible cuboid $M$ must be composed of a number of complete offset copies of it to contain a complete arithmetic sequence. Hence it follows that $\tilde n_j$ is a divisor of $n_j.$ \end{proof} \section{Building operators and joint ordered factorisations}\label{sjof} Theorem \ref{tbasicfact} has shown that every principal reversible cuboid, except the trivial $(0) \in {\mathbb N}_0^{1_m},$ is composed of shifted copies of a smaller principal reversible cuboid, stacked in one of the $m$ directions. By recursion, this observation gives a description of principal reversible cuboids which can be used to construct them. In order to make the construction more transparent, we introduce building operators which describe the stacking process. \par We shall use the following notation. For $k \in {\mathbb N},$ we denote the arithmetic progression vector by \def\apv#1{\overrightarrow{\ap{#1}}} \def\Apv#1{\overrightarrow{\Ap{#1}}} \begin{align} \apv k &= (0, 1, 2, \dots, k-1).
\nonumber\end{align} Moreover, for $m \in {\mathbb N}$ and any multiindex $n \in {\mathbb N}^m,$ we write $1_{[n]}$ for the cuboid with dimension vector $n$ and all entries equal to 1. \par\medskip {\it Definition.} Let $k, m \in {\mathbb N},$ $j \in \{1, \dots, m\},$ $n \in {\mathbb N}^m,$ $v \in {\mathbb N}_0^k$ and $M \in {\mathbb N}_0^n.$ Then we define the {\it direction $j$ Kronecker product\/} of $v$ with $M$ as $v \otimes_j M,$ where \begin{align} (v \otimes_j M)_{\hat n + l n_j e_j} &= v_{l+1} M_{\hat n} \qquad (\hat n \le n, l \in \ap k). \nonumber\end{align} If $m = 1,$ then this product turns into the standard Kronecker product of the vectors $v \in {\mathbb N}_0^k$ and $w = M \in {\mathbb N}_0^{n_1},$ i.e. \begin{align} (v \otimes w)_{l_1 n_1 + l_2 + 1} &= v_{l_1+1} w_{l_2+1} \qquad (l_1 \in \ap k, l_2 \in \ap{n_1}). \nonumber\end{align} This product is obviously bilinear. \begin{lemma}\label{lprass} The direction $j$ Kronecker product is associative, i.e. for $k_1, k_2, m \in {\mathbb N},$ $n \in {\mathbb N}^m,$ $v \in {\mathbb N}_0^{k_1},$ $w \in {\mathbb N}_0^{k_2}$ and $M \in {\mathbb N}_0^n,$ \begin{align} v \otimes_j (w \otimes_j M) &= (v \otimes w) \otimes_j M. \nonumber\end{align} \end{lemma} \begin{proof} For any $\hat n \le n$ and $l = l_1 k_2 + l_2 \in \ap{k_1 k_2},$ $l_1 \in \ap {k_1},$ $l_2 \in \ap {k_2},$ we find \begin{align} (v \otimes_j (w \otimes_j M))_{\hat n + l n_j e_j} &= (v \otimes_j (w \otimes_j M))_{\hat n + l_2 n_j e_j + l_1 k_2 n_j e_j} \nonumber\\ &= v_{l_1+1} (w \otimes_j M)_{\hat n + l_1 n_j e_j} = v_{l_1+1} w_{l_2+1} M_{\hat n} \nonumber\\ &= (v \otimes w)_{l+1} M_{\hat n} = ((v \otimes w) \otimes_j M)_{\hat n + l n_j e_j}, \nonumber\end{align} as required. 
\end{proof} {\it Definition.} Let $k, m \in {\mathbb N},$ $j \in \{1, \dots, m\}.$ Then we define the {\it building operator\/} \def{\cal B}{{\cal B}} ${\cal B}_{j,k}$ as the operation which turns any cuboid $M \in {\mathbb N}_0^n,$ $n \in {\mathbb N}^m,$ into \begin{align} {\cal B}_{j,k}(M) &= \left(\prod_{r=1}^m n_r \right) \apv k \otimes_j 1_{[n]} + 1_k \otimes_j M \in {\mathbb N}_0^{n + (k-1)n_j e_j}. \nonumber\end{align} \par\medskip\noindent The following observation shows that the composition of two building operators working in the same coordinate direction is just one building operator. \begin{lemma}\label{lcborepeat} Let $k_1, k_2, m \in {\mathbb N}$ and $j \in \{1, \dots, m\}.$ Then ${\cal B}_{j,k_1} \circ {\cal B}_{j,k_2} = {\cal B}_{j,k_1 k_2}.$ \end{lemma} \begin{proof} Let $M \in {\mathbb N}_0^n,$ $n \in {\mathbb N}^m.$ Then \begin{align} {\cal B}_{j,k_1} \circ {\cal B}_{j,k_2} (M) &= {\cal B}_{j,k_1} \left(\left(\prod_{r=1}^m n_r \right) \apv{k_2} \otimes_j 1_{[n]} + 1_{k_2} \otimes_j M \right) \nonumber\\ &= \left(\prod_{r=1}^m n_r \right) k_2\,\apv{k_1} \otimes_j 1_{[n + (k_2-1)n_j e_j]} \nonumber\\ &\qquad + 1_{k_1} \otimes_j \left(\left(\prod_{r=1}^m n_r \right) \apv{k_2} \otimes_j 1_{[n]} + 1_{k_2} \otimes_j M \right) \nonumber\\ &= \left(\prod_{r=1}^m n_r \right) (k_2\,\apv{k_1} \otimes 1_{k_2} + 1_{k_1} \otimes \apv{k_2}) \otimes_j 1_{[n]} + (1_{k_1} \otimes 1_{k_2}) \otimes_j M \nonumber\\ &= \left(\prod_{r=1}^m n_r \right) \apv{k_1 k_2} \otimes_j 1_{[n]} + 1_{k_1 k_2} \otimes_j M = {\cal B}_{j,k_1 k_2} (M), \nonumber\end{align} using Lemma \ref{lprass} in the penultimate line. \end{proof} Applying this setup in conjunction with Theorem \ref{tbasicfact}, we can deduce the following structure theorem for principal reversible cuboids. \begin{theorem}\label{tcbuild} Let $m \in {\mathbb N},$ $n \in {\mathbb N}^m$ and $M \in {\mathbb N}_0^n$ a principal reversible cuboid. 
Then there is a number $L \in {\mathbb N}$ and numbers $j_l \in \{1, \dots, m\},$ $f_l \in {\mathbb N}+1$ $(l \in \{1, \dots, L\})$ such that $\prod_{j_l = j} f_l = n_j$ $(j \in \{1, \dots, m\})$ and \begin{align} M &= {\cal B}_{j_L, f_L} \circ {\cal B}_{j_{L-1}, f_{L-1}} \circ \cdots \circ {\cal B}_{j_1, f_1} ((0)), \label{ecbuild}\end{align} where $(0) \in {\mathbb N}_0^{1_m}$ is the trivial principal reversible cuboid. \hfill\break Without loss of generality, we can assume that $j_l \neq j_{l-1}$ $(l \in \{2, \dots, L\}).$ \end{theorem} \begin{proof} Using the building operator defined above, the statement of Theorem \ref{tbasicfact} can be paraphrased in the following way. There is some $j \in \{1, \dots, m\}$ and $f \in {\mathbb N},$ $f | n_j,$ such that $M = {\cal B}_{j,f} (M_{[\tilde n]}),$ where $\tilde n \in {\mathbb N}^m$ has entries $\tilde n_i = n_i$ $(i \neq j)$ and $\tilde n_j = n_j / f.$ Here $M_{[\tilde n]}$ is again a principal reversible cuboid. Unless this is the trivial cuboid $(0) \in {\mathbb N}_0^{1_m},$ we can again apply Theorem \ref{tbasicfact} to it, and thus recursively obtain the building operator chain in (\ref{ecbuild}). The last statement reflects Lemma \ref{lcborepeat}, which allows fusion of consecutive building operators in the same direction into one. \end{proof} Theorem \ref{tcbuild} shows that principal reversible cuboids are obtained from building operator chains; the coefficients of such a chain arise from factorising the individual dimensions $n_j$ $(j \in \{1, \dots, m\})$ of the principal reversible cuboid, and arranging the factors in a sequence such that consecutive factors in the sequence belong to different coordinate directions. 
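To make the stacking process concrete, the following Python sketch implements the building operator ${\cal B}_{j,k}$ and a chain as in (\ref{ecbuild}) applied to the trivial cuboid. It is our illustration only (not part of the paper); cuboids are stored as dictionaries keyed by 0-based multiindices, and directions $j$ are 0-based.

```python
def build(j, k, M, dims):
    """Building operator B_{j,k}: stack k copies of the cuboid M
    (entries keyed by 0-based multiindex tuples, dimension vector dims)
    in coordinate direction j, the l-th copy offset by l times the
    number of entries of M."""
    N = 1
    for d in dims:
        N *= d                      # number of entries of M
    out = {}
    for idx, val in M.items():
        for l in range(k):
            shifted = list(idx)
            shifted[j] += l * dims[j]
            out[tuple(shifted)] = val + l * N
    new_dims = list(dims)
    new_dims[j] = k * dims[j]
    return out, tuple(new_dims)

def reversible_cuboid(m, factorisation):
    """Apply a building-operator chain to the trivial cuboid (0)."""
    M, dims = {(0,) * m: 0}, (1,) * m
    for j, f in factorisation:      # j is a 0-based direction here
        M, dims = build(j, f, M, dims)
    return M, dims
```

For instance, the chain $(j,f) = (1,2),(2,3),(1,2)$ (in the paper's 1-based labels) produces a $4\times 3$ principal reversible cuboid whose entries are exactly $0,\dots,11$.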
\par Note that in the special (and untypical) case $m = 2,$ this condition (which corresponds to the last sentence in Theorem \ref{tcbuild}) enforces alternation of directions $j_1 = 1,$ $j_2 = 2,$ $j_3 = 1,$ $j_4 = 2,$ etc.\ (or the analogue starting with $j_1 = 2$), ending with either the same or the other direction depending on whether $L$ is odd or even. If $n_1 = n_2$ and we start with $j_1 = 1,$ this gives a building operator chain equivalent to Ollerenshaw and Br\'ee's construction of principal reversible squares \cite{rOB}. However, if $m > 2,$ then the possible patterns are considerably more complex. \par\medskip {\it Definition.} Let $m \in {\mathbb N}$ and $n \in {\mathbb N}^m.$ Then we call \begin{align} ((j_1, f_1), (j_2, f_2), \dots, (j_L, f_L)) \in (\{1, \dots, m\} \times ({\mathbb N}+1))^L, \nonumber\end{align} where $L \in {\mathbb N},$ a {\it joint ordered factorisation\/} of $n = (n_1, \dots, n_m)$ if \begin{align} \prod_{j_l = j} f_l &= n_j \qquad (j \in \{1, \dots, m\}) \nonumber\end{align} and $j_l \neq j_{l-1}$ $(l \in \{2, \dots, L\}).$ \par\medskip\noindent By Theorem \ref{tssten}, the entries on the coordinate axes of a principal reversible cuboid form a sum system (with the entries of each component set appearing in increasing order on the corresponding axis, and the coordinate axes arranged in the order of the smallest non-zero entry of the component sets). Thus the building operator chain of Theorem \ref{tcbuild} also gives rise to a construction for the corresponding sum system, as follows. \begin{theorem}\label{tssbuild} Let $m \in {\mathbb N}.$ Suppose the sets $A_1, A_2, \dots, A_m \subset {\mathbb N}_0$ form a sum system. Let $n_j := |A_j|$ $(j \in \{1, \dots, m\}).$ Then there is a joint ordered factorisation $((j_1, f_1), \dots, (j_L, f_L))$ of $(n_1, \dots, n_m)$ such that \begin{align} A_j &= \sum_{j_l = j} \left(\prod_{s=1}^{l-1} f_s \right) \ap{f_l} \qquad (j \in \{1, \dots, m\}).
\label{essbuild}\end{align} Conversely, given any joint ordered factorisation of $n \in {\mathbb N}^m,$ $(\ref{essbuild})$ generates an $m$-part sum system. \end{theorem} \begin{proof} By Corollary \ref{cssprc}, there is a one-to-one relationship between $m$-part sum systems and principal reversible $m$-cuboids with dimension vector $n \in ({\mathbb N}+1)^m.$ Consider a principal reversible cuboid $M \in {\mathbb N}_0^n$ and its corresponding building operator chain (\ref{ecbuild}). Set $M^{(0)} = (0) \in {\mathbb N}_0^{1_m}$ and $M^{(l)} = {\cal B}_{j_l, f_l} \circ \dots \circ {\cal B}_{j_1, f_1} ((0))$ $(l \in \{1, \dots, L\});$ then $M = M^{(L)}$ and $M^{(l)} = {\cal B}_{j_l, f_l}(M^{(l-1)})$ $(l \in \{1, \dots, L\}).$ By recursion, the number of entries of $M^{(l)}$ (which is equal to the product of its dimensions) will be $F_l := \prod_{s=1}^l f_s.$ \par Let $A_j^{(l)}$ be the set of entries on the $j$-th coordinate axis of $M^{(l)},$ for any $j \in \{1, \dots, m\}$ and $l \in \{0, \dots, L\}.$ Then we find that $A_j^{(0)} = \{0\}$ for all $j \in \{1, \dots, m\},$ and that $A_j^{(l)} = A_j^{(l-1)}$ if $j \neq j_l,$ and \begin{align} A_{j_l}^{(l)} &= A_{j_l}^{(l-1)} \cup (A_{j_l}^{(l-1)} + F_{l-1}) \cup (A_{j_l}^{(l-1)} + 2 F_{l-1}) \cup \cdots \cup (A_{j_l}^{(l-1)} + (f_l-1) F_{l-1}) \nonumber\\ &= A_{j_l}^{(l-1)} + \left(\prod_{s=1}^{l-1} f_s \right) \ap{f_l}; \label{essunion}\end{align} note that, since $M^{(l-1)}$ is a principal reversible cuboid, \begin{align} F_{l-1} &= \prod_{s=1}^{l-1} f_s = \sum_{r=1}^m \max A_r^{(l-1)} + 1 > \max A_{j_l}^{(l-1)}, \nonumber\end{align} so the union in (\ref{essunion}) is a union of disjoint sets. The formula (\ref{essbuild}) follows by recursion, as $A_j = A_j^{(L)}.$ \end{proof} We conclude with some examples to illustrate the workings of Theorem \ref{tssbuild} to construct sum systems, and further the use of Theorems \ref{tSdsSs1}, \ref{tSdsSs2} to obtain corresponding sum-and-distance systems.
\par\medskip {\it Example 1.\/} Take $m = 3,$ $n = (15, 8, 6)$ and consider the joint ordered factorisation \begin{align} ((1,5),(2,2),(1,3),(3,3),(2,2),(3,2),(2,2)); \nonumber\end{align} then, by equation (\ref{essbuild}), we find the corresponding sum system \begin{align} A_1 = &\{0,1,2,3,4,10,11,12,13,14,20,21,22,23,24\}, \nonumber\\ A_2 = &\{0,5,90,95,360,365,450,455\}, \nonumber\\ A_3 = &\{0,30,60,180,210,240\}, \nonumber\end{align} which generates all integers $0, 1, \dots, 6!-1$ exactly once. Rearranging the same factors in a different joint ordered factorisation, \begin{align} ((1,5),(3,3),(2,2),(3,2),(2,2),(1,3),(2,2)), \nonumber\end{align} we obtain a different sum system with component sets of the same cardinalities $n_1, n_2, n_3,$ and the same target set, \begin{align} A_1 = &\{0,1,2,3,4,120,121,122,123,124,240,241,242,243,244\}, \nonumber\\ A_2 = &\{0,15,60,75,360,375,420,435\}, \nonumber\\ A_3 = &\{0,5,10,30,35,40\}. \nonumber\end{align} \par\medskip {\it Example 2.\/} Consider $m = 3$ and $n = (14,8,6),$ all even. Then the joint ordered factorisation \begin{align} ((1,2),(3,3),(2,2),(3,2),(2,2),(1,7),(2,2)) \nonumber\end{align} gives the corresponding sum system \begin{align} \tilde A_1 &= \{0,1,48,49,96,97,144,145,192,193,240,241,288,289\}, \nonumber\\ \tilde A_2 &= \{0,6,24,30,336,342,360,366\}, \nonumber\\ \tilde A_3 &= \{0,2,4,12,14,16\}, \nonumber\end{align} and hence, by Theorem \ref{tSdsSs1}, the non-inclusive sum-and-distance system \begin{align} A_1 = \{1, 95, 97, 191, 193, 287, 289\}, A_2 = \{306, 318, 354, 366\}, A_3 = \{8, 12, 16\}. \nonumber\end{align} \par\medskip {\it Example 3.\/} Consider $m = 3$ and $n = (15, 7, 9),$ all odd. 
Then the joint ordered factorisation \begin{align} ((1,5),(2,7),(3,3),(1,3),(3,3)) \nonumber\end{align} generates the sum system \begin{align} \tilde A_1 &= \{0,1,2,3,4,105,106,107,108,109,210,211,212,213,214\}, \nonumber\\ \tilde A_2 &= \{0,5,10,15,20,25,30\}, \nonumber\\ \tilde A_3 &= \{0,35,70,315,350,385,630,665,700\}, \nonumber\end{align} and further, by Theorem \ref{tSdsSs2}, the inclusive sum-and-distance system \begin{align} A_1 = \{1, 2, 103, 104, 105, 106, 107\}, A_2 = \{5, 10, 15\}, A_3 = \{35, 280, 315, 350\}. \nonumber\end{align} \par\medskip {\it Example 4.\/} For $m = 5$ and $n = (28,20,30,18,12),$ the joint ordered factorisation \begin{align} ((1,7),(2,4),(5,2),(3,2),(4,2),(2,5),(4,9),(3,3),(1,4),(5,3),(3,5),(5,2)) \nonumber\end{align} gives, by formula (\ref{essbuild}), the five-part sum system \begin{align} A_1 &= \{0,1,2,3,4,5,6,30240,30241,30242,30243,30244,30245,30246, 60480, \nonumber\\ &\qquad 60481,60482,60483,60484,60485,60486,90720,90721,90722,90723,90724, \nonumber\\ &\qquad 90725,90726\}, \nonumber\\ A_2 &= \{0,7,14,21,224,231,238,245,448,455,462,469,672,679,686,693,896,903, \nonumber\\ &\qquad 910,917\}, \nonumber\\ A_3 &= \{0,56,10080,10136,20160,20216,362880,362936,372960,373016,383040, \nonumber\\ &\qquad 383096,725760,725816,735840,735896,745920,745976,1088640,1088696, \nonumber\\ &\qquad 1098720,1098776,1108800,1108856,1451520,1451576,1461600,1461656, \nonumber\\ &\qquad 1471680,1471736\}, \nonumber\\ A_4 &= \{0,112,1120,1232,2240,2352,3360,3472,4480,4592,5600,5712,6720,6832, \nonumber\\ &\qquad 7840,7952,8960,9072\}, \nonumber\\ A_5 &= \{0,28,120960,120988,241920,241948,1814400,1814428,1935360,1935388, \nonumber\\ &\qquad 2056320,2056348\}, \nonumber\end{align} which generates the integers $0, 1, 2, \dots, 10!-1,$ each exactly once.
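The sum systems in the examples above can be reproduced mechanically, since formula (\ref{essbuild}) translates directly into a few lines of code. The following Python sketch is our illustration (not part of the paper): it builds each $A_j$ as the sumset, over all $l$ with $j_l = j,$ of $F_{l-1}\cdot\{0,\dots,f_l-1\},$ where $F_{l-1}$ is the running product of the factors.

```python
def sum_system(m, factorisation):
    """Generate the m-part sum system of formula (essbuild) from a
    joint ordered factorisation ((j_1,f_1),...,(j_L,f_L)), with
    1-based directions j as in the paper."""
    A = {j: {0} for j in range(1, m + 1)}
    F = 1                                   # running product F_{l-1}
    for j, f in factorisation:
        # sumset of the current A_j with F * {0, ..., f-1}
        A[j] = {a + F * k for a in A[j] for k in range(f)}
        F *= f
    return {j: sorted(A[j]) for j in A}

# Example 1 of the text: m = 3, n = (15, 8, 6)
A = sum_system(3, [(1, 5), (2, 2), (1, 3), (3, 3), (2, 2), (3, 2), (2, 2)])
```

For the factorisation shown, the three component sets agree with those listed in Example 1, and the triple sums $a_1 + a_2 + a_3$ cover each of $0, 1, \dots, 6!-1$ exactly once.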
\section{Introduction} Given a convex polygon $P$ with $n$ edges, the \emph{minimum all-flush $k$-gon problem} asks for the minimum (with respect to area throughout this paper) $k$-gon whose edges are all flushed with edges of $P$ (i.e. each edge must contain an edge of $P$ as a subregion). In other words, it asks for the minimum $k$-gon that circumscribes $P$ touching $P$ edge-to-edge. This problem was proposed by Aggarwal et al. \cite{kgon94}. They solved it in $O(n\sqrt{k\log n}+n\log n)$ time using their technique for computing the \emph{minimum weight $k$-link path}. For $k = \Omega(\log n)$, this improves over an $O(kn+n\log n)$ time algorithm based on the \emph{matrix-search technique} \cite{kgon87} (however, $O(kn)$ is better when $k$ is regarded as a constant as discussed below), which improves over an $O(kn\log n + n\log^2 n)$ time algorithm implicitly given in \cite{kgon82}. All these previous works follow the same approach: first, choose an arbitrary edge $e$ of $P$ and find the minimum all-flush $k$-gon $Q$ flushed by $e$, which reduces to solving an instance of the minimum weight $k$-link path problem; second, compute the minimum all-flush $k$-gon from $Q$. The term $n\sqrt{k\log n}$ comes from the first step and $n\log n$ comes from the second. Schieber \cite{kgon95soda} slightly improves the first term by optimizing the underlying technique for computing the minimum weight $k$-link path. Yet the second term $n\log n$ is unchanged. To the best of our knowledge, no one has improved this part even for the simplest case of $k=3$. However, for $k=3$, we can solve the dual problem, i.e. computing the maximum area triangle (MAT) inside a convex polygon, in $O(n)$ time \cite{Jin17a,linear-correct}. So, it is interesting to know whether we can also compute in linear time the minimum all-flush triangle (MFT). \smallskip We settle this question affirmatively in this paper by improving the aforementioned second term to $n$ for $k=3$.
In our algorithm, after computing $Q$, we first compute another triangle $Q'$ from $Q$ and then compute the MFT from $Q'$, both in linear time. (However, note that the computation of $Q$ and $Q'$ is combined in this paper and referred to as the initial step of our algorithm; the main difficulty lies in computing the MFT from $Q'$.) \medskip The MFT problem is as fundamental as the \emph{minimum enclosing triangle} problem studied in the literature \cite{Tri-Enclose-Area-nlogn2,Tri-Enclose-Area,linear-correct} and may find similar applications in more realistic problems. By computing the MFT, we obtain a simple container of $P$ to accelerate the polygon collision detection. Moreover, it can be applied in finding a good packing of $P$ into the plane. In the packing problem, we want to pack non-overlapping copies of $P$ in the plane, so that the ratio between the uncovered area and the area covered by the copies is as small as possible. \medskip \noindent \textbf{Literature of the ``dual'' problem.\mbox{ }} For the MAT problem, there is a well-known linear time algorithm given by Dobkin and Snyder \cite{linear-wrong-DS}, which was recently found to be \textbf{incorrect} by Keikha et al.\ \cite{linear-open-arxiv}. Nonetheless, there is a correct but more involved linear solution given by Chandran and Mount \cite{linear-correct} based on the \emph{rotating-caliper technique} \cite{rotatingcaliper}. Jin~\cite{Jin17a} recently reported another linear time algorithm, which is much simpler than the one in \cite{linear-correct}. Another algorithm was reported by Kallus \cite{Kallus17b}. Although the MFT and MAT problems are often viewed as dual (from the combinatorial perspective) \cite{kgon94,kgon95soda}, to our knowledge, there is no reduction from an instance of the MFT problem to an instance of the MAT problem that allows us to translate an algorithm of the latter to the former. See a discussion in appendix~\ref{sect:supply}.
\medskip \noindent \textbf{Rotate-and-Kill technique.\mbox{ }} Jin~\cite{Jin17a} introduced a so-called \emph{Rotate-and-Kill technique} for solving the polygon inclusion problem, which will be applied in this paper for finding the MFT. So, let us briefly review how this technique is applied on the MAT problem. Consider a na\"{\i}ve algorithm for finding the MAT: enumerate a vertex pair $(V,V')$ of $P$ and compute the vertex $V^*$ such that the area of $\triangle VV'V^*$ is maximum. It suffers from enumerating too many pairs of $(V,V')$. In fact, only a few of these pairs are effective, as implied by the following iterative process called Rotate-and-Kill. Let $V+1$ denote the clockwise next vertex of $V$. Jin \cite{Jin17a} designed a constant time subroutine $\mathsf{Kill}(V,V')$, called the \emph{killing criterion}, which returns either $V$ or $V'$, so that $V$ is returned only if (a) no pair in $\{(V,V'+1),(V,V'+2),\ldots\}$ forms an edge of an MAT and $V'$ is returned only if (b) no pair in $\{(V+1,V'),(V+2,V'),\ldots\}$ forms an edge of an MAT. Now, assume a pair $(V,V')$ is given in the current iteration. We kill $V$ if $\mathsf{Kill}(V,V')=V$ and otherwise kill $V'$, and then move on to the next iteration $(V+1,V')$ or $(V,V'+1)$. In this way, only $O(n)$ pairs of $(V,V')$ are enumerated and the algorithm is thus improved to $O(n)$ time.\smallskip In addition to the MAT, \cite{Jin17a} also computes the minimum enclosing triangle optimally by this new technique. Naturally, the authors of \cite{Jin17a} conjecture that their technique is powerful for solving other related problems. A precondition for applying the Rotate-and-Kill technique is that at least one of (a) and (b) holds at any iteration. This is indeed true for many polygon inclusion problems, since the locally optimal solutions in such problems usually admit an interleaving property (see \cite{kgon82} or Definition~\ref{def:interleaving} below) implying that (a) and (b) cannot fail simultaneously.
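In pseudocode-like Python, the loop just reviewed has the following shape. This is only a sketch of ours (not taken from \cite{Jin17a}): `kill` and `score` are problem-specific placeholders, and the wrap-around of vertex indices along the polygon is simplified to a plain upper bound.

```python
def rotate_and_kill(n, kill, score):
    """Generic Rotate-and-Kill loop over vertex pairs (V, Vp).
    kill(V, Vp) returns V only if no pair (V, Vp+1), (V, Vp+2), ...
    can form an edge of an optimal solution, and Vp only if no pair
    (V+1, Vp), (V+2, Vp), ... can; score(V, Vp) evaluates the best
    candidate solution using the pair (V, Vp).  Each iteration
    advances V or Vp by one, so only O(n) pairs are visited."""
    V, Vp = 0, 1
    best = score(V, Vp)
    while V < n and Vp < n:      # simplified termination condition
        if kill(V, Vp) == V:
            V += 1               # (a) holds: advancing V is safe
        else:
            Vp += 1              # (b) holds: advancing Vp is safe
        if V < n and Vp < n:
            best = max(best, score(V, Vp))
    return best
```

With any concrete `kill` obeying the two rules, the loop enumerates every pair that can contribute to an optimal solution while visiting only $O(n)$ pairs in total.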
When the above precondition is satisfied for a given problem, the biggest challenge in applying the technique is that we need an efficient killing criterion specialized to the problem. Usually, a criterion that runs in $O(\mathsf{poly}(\log n))$ or even $O(\log n)$ time is easy to find. Yet we wish to have an (amortized) $O(1)$ time criterion as shown in \cite{Jin17a}. For the MFT problem, although we can borrow the framework in \cite{Jin17a}, we must settle this main challenge by developing new ideas. In fact, our criterion is trickier than the one in \cite{Jin17a}. \medskip \noindent \textbf{Other related work.\mbox{ }} The search for extremal shapes with special properties enclosing or enclosed by a given polygon was initiated in \cite{linear-wrong-DS, kgon82, kgonE-area-FOCS84}, and such problems have since been studied extensively. Chandran and Mount's algorithm \cite{linear-correct} is an extension of O'Rourke et al.'s linear time algorithm \cite{Tri-Enclose-Area} for computing the minimum triangle enclosing $P$. The latter is an improvement over an algorithm of Klee and Laskowski \cite{Tri-Enclose-Area-nlogn2}. The minimum perimeter enclosing triangle can be solved in $O(n)$ time \cite{Tri-Enclose-Peri}. The maximum perimeter enclosed triangle can be solved in $O(n\log n)$ time \cite{kgon82}. \cite{kgon82, kgon87, kgon94, kgon95soda, kgonE-area-FOCS84, kgonE-area-VC85, kgonE-peri-IPL08} studied extremal area / perimeter $k$-gons inside or outside a convex polygon. In particular, the maximum $k$-gon can be computed in $O(n\log n)$ time when $k$ is a constant \cite{kgon87,kgon94,kgon95soda} and it remains open whether this can be optimized to linear time (at least for $k=4$). \cite{3d-Suri-JC02, 3d-Vivien-CGTA04} studied the extremal polytope problems in three dimensional space. Brass and Na \cite{BRASS10} solve another related problem: Given $n$ half-planes (in arbitrary position), find the maximum bounded intersection of $k$ half-planes out of them.
We refer the readers to the introduction of \cite{Jin15} and \cite{Jin17a} for more related work. \smallskip \noindent \textbf{Key motivation.} The well-known rotating-caliper technique is powerful in solving many polygon enclosing problems, but not easy to apply in most polygon inclusion problems. To our knowledge, before the Rotate-and-Kill technique there was no generic technique for solving the polygon inclusion problem, as claimed in \cite{YanKe87square} (noting that \cite{linear-wrong-DS} is wrong). Thus, for attacking the inclusion problems, there is a necessity to further develop the nascent Rotate-and-Kill technique, especially by finding more of its applications. This motivates us to study the MFT problem in this paper (even though it is actually a polygon enclosing problem). Nonetheless, we believe that our result brings some new understanding of the technique that might be helpful for improving other related problems. \subsection{Preliminaries}\label{subsect:pre} Let $v_1,\ldots,v_n$ be a clockwise enumeration of the vertices of the given convex polygon $P$. For each $i$, denote by $e_i$ the directed line segment $\overrightarrow{v_iv_{i+1}}$. We call $e_1,\ldots,e_n$ the $n$ edges of $P$. Assume that no three vertices of $P$ lie on the same line and moreover, all edges of $P$ are \textbf{pairwise-nonparallel}. Let $\ell_i$ denote the extended line of $e_i$, and $\mathsf{p}_i$ denote the half-plane delimited by $\ell_i$ and containing $P$, and $\mathsf{p}^C_i$ denote the complementary half-plane of $\mathsf{p}_i$.\smallskip When three distinct edges $e_i,e_j,e_k$ lie in clockwise order, the region bounded by $\mathsf{p}_i,\mathsf{p}_j,\mathsf{p}_k$ is denoted by $\triangle e_ie_je_k$ and is called an \emph{all-flush triangle}. Throughout, whenever we write $\triangle e_ie_je_k$, we assume that \emph{$e_i,e_j,e_k$ are distinct and lie in clockwise order}. Denote the area of $\triangle e_ie_je_k$ by $\mathsf{Area}(\triangle e_ie_je_k)$.
This area may be unbounded. We can use the following observation to determine the finiteness of $\mathsf{Area}(\triangle e_ie_je_k)$. \begin{definition}[Chasing relation] Edge $e_i$ is \emph{chasing} another edge $e_j$, denoted by $e_i\prec e_j$, if the intersection of $\ell_i,\ell_j$ lies between $e_i,e_j$ clockwise. \end{definition} \begin{observation}\label{obs:finite-condition} $\mathsf{Area}(\triangle e_ie_je_k)$ is finite if and only if: $e_i\prec e_j,e_j\prec e_k$ and $e_k\prec e_i$. \end{observation} \begin{observation}\label{obs:finite-exists} There exists a tuple $(e_i,e_j,e_k)$ such that $e_i\prec e_j,e_j\prec e_k$ and $e_k\prec e_i$. \end{observation} \begin{proof} Choose $e_i$ arbitrarily. Choose $j$ so that $e_i\prec e_j$ but $e_{j+1}\prec e_i$. Let $k=j+1$. \end{proof} For the all-flush triangles with finite areas, we can define the notion of \emph{3-stable}. (Note that finiteness is a prerequisite of being 3-stable because otherwise subsequent lemmas, e.g. Lemma~\ref{lemma:interleaving}, would fail or be too complicated to state; see discussions in Appendix~\ref{sect:supply}.) \begin{definition}\label{def:3-stable} Consider any all-flush triangle $\triangle e_ie_je_k$ with a finite area. Edge $e_i$ is \emph{stable} if no all-flush triangle $\triangle e_{i'}e_je_k$ is smaller than $\triangle e_ie_je_k$; edge $e_j$ is \emph{stable} if no all-flush triangle $\triangle e_ie_{j'}e_k$ is smaller than $\triangle e_ie_je_k$; and edge $e_k$ is \emph{stable} if no all-flush triangle $\triangle e_ie_je_{k'}$ is smaller than $\triangle e_ie_je_k$. Moreover, triangle $\triangle e_ie_je_k$ is \emph{3-stable} if $e_i,e_j,e_k$ are all stable. \end{definition} Combining Observation~\ref{obs:finite-condition} and \ref{obs:finite-exists}, there exist all-flush triangles with finite areas. Moreover, by Definition~\ref{def:3-stable}, if a finite all-flush triangle is not 3-stable, we could find a smaller such triangle. 
Therefore, to find the minimum-area all-flush triangle, it suffices to first compute all the 3-stable triangles and then select the smallest among them. Below we introduce the notion of \emph{interleaving} and an important property of 3-stable triangles, whose corollary shows that there are not too many such triangles. \begin{definition}\label{def:interleaving} Two all-flush triangles $\triangle e_re_se_t$ and $\triangle e_ie_je_k$ are \emph{interleaving} if there is a list of edges $e_{a_1},\ldots,e_{a_6}$ which lie in clockwise order (in a non-strict manner, so neighbors may be identical), in which $\{e_{a_1},e_{a_3},e_{a_5}\}$ equals $\{e_r,e_s,e_t\}$ and $\{e_{a_2},e_{a_4},e_{a_6}\}$ equals $\{e_i,e_j,e_k\}$. \end{definition} \begin{lemma}\label{lemma:interleaving} Any two 3-stable triangles are interleaving. \end{lemma} \begin{corollary}\label{corol:O(n)} There are $O(n)$ 3-stable triangles. \end{corollary} Easy proofs of Lemma~\ref{lemma:interleaving} and Corollary~\ref{corol:O(n)} are deferred to Appendix~\ref{sect:interleavity-unimodal-stable}. \subsection{Overview of our approach}\label{subsect:techover} \noindent \textbf{Initial step.\mbox{ }} We first compute one 3-stable triangle by a somewhat trivial algorithm. Denote the resulting 3-stable triangle by $\triangle e_re_se_t$. Let $J=\{e_s,\ldots,e_t\}$ and $K=\{e_t,\ldots,e_r\}$, where $\{e_x,\ldots,e_y\}$ denotes the set of edges from $e_x$ to $e_y$ clockwise, including $e_x$ and $e_y$. \smallskip \noindent \textbf{The na\"{\i}ve approach.\mbox{ }} For each 3-stable triangle $\triangle e_ie_je_k$, since it must interleave $\triangle e_re_se_t$ (by Lemma~\ref{lemma:interleaving}), we can assume without loss of generality that $e_j\in J$ and $e_k\in K$. Therefore, the following algorithm computes all the 3-stable triangles: enumerate $(e_b,e_c)\in J\times K$ and, for each such edge pair, compute the 3-stable triangle(s) $\triangle e_ie_je_k$ with $(e_j,e_k)=(e_b,e_c)$.
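The interleaving condition of Definition~\ref{def:interleaving} can be tested directly: reading clockwise from one edge of the first triple, each of the three gaps between consecutive edges of the first triple must receive exactly one edge of the second triple, with ties allowed. The following Python sketch is illustrative only; triples are $0$-based edge indices, each given in clockwise order:

```python
from itertools import permutations

def interleaving(n, A, B):
    """Test whether edge triples A and B (each in clockwise order, 0-based)
    of an n-gon interleave: clockwise from A[0], each of the three gaps
    between consecutive edges of A must receive exactly one edge of B.
    Boundary ties are allowed (non-strict clockwise order)."""
    r = A[0]
    ps, pt = (A[1] - r) % n, (A[2] - r) % n   # positions of A relative to A[0]
    gaps = [(0, ps), (ps, pt), (pt, n)]
    rel = [(x - r) % n for x in B]
    for perm in permutations(rel):
        # a relative position 0 may also be read as n (a tie with A[0] itself)
        if all(lo <= q <= hi or (q == 0 and hi == n)
               for q, (lo, hi) in zip(perm, gaps)):
            return True
    return False
```

Trying all $3!$ assignments keeps the sketch simple; a linear merge would of course suffice.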
However, the na\"{\i}ve algorithm costs $\Omega(|J\times K|)$ time, which is $\Omega(n^2)$ in the worst (and most common) case. We say $(e_b,e_c)$ is \emph{dead} if there does not exist an edge $e_a$ such that $\triangle e_ae_be_c$ is 3-stable. Clearly, it is unnecessary to enumerate a dead pair in the above algorithm. Further, there are only $O(n)$ pairs that are not dead, according to Corollary~\ref{corol:O(n)}. Therefore, the above algorithm could be improved if the pairs that are not dead can be found efficiently. \smallskip \noindent \textbf{Rotate-and-Kill.\mbox{ }} Initially, set $(b,c)=(s,t)$, i.e.\ set $e_b,e_c$ to be the first edges in $J,K$ respectively. Iteratively, choose one of the following operations: kill $b$ (i.e.\ $b\leftarrow b+1$), or kill $c$ (i.e.\ $c\leftarrow c+1$), obeying the following rules. \quad \emph{$b$ is killed only if (1) the pairs in $\{(e_{b},e_{c+1}),(e_b,e_{c+2}),\ldots,(e_b,e_r)\}$ are all dead}, and \quad \emph{$c$ is killed only if (2) the pairs in $\{(e_{b+1},e_c),(e_{b+2},e_c),\ldots,(e_t,e_c)\}$ are all dead}. The termination condition is $(b,c)=(t,r)$, i.e.\ $e_b,e_c$ are the last edges in $J,K$ respectively. Provided both rules are obeyed, the iteration eventually reaches the state $(b,c)=(t,r)$, and at that moment \emph{all the pairs that are not dead have been enumerated}. To see this, observe that $(e_t,e_r)$ is not dead (because $\triangle e_se_te_r$ is 3-stable), and observe that, by induction, at each iteration of $(b,c)$, a pair $(e_{b'},e_{c'})$ that is not dead either has been enumerated already or satisfies $e_{b'}\in \{e_b,\ldots, e_t\}$ and $e_{c'}\in \{e_c,\ldots,e_r\}$. The above Rotate-and-Kill process shall be finalized with a function $\mathsf{Kill}(b,c)$, called the \emph{killing criterion}, which tells us whether to kill $b$ or $c$. It returns $b$ only if (1) holds and $c$ only if (2) holds. First, notice that such a criterion does exist, because (1) or (2) holds at each iteration.
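The Rotate-and-Kill loop itself is short; all the subtlety lies in the killing criterion. The following Python skeleton is illustrative only and assumes $s\le t\le r$ with no index wraparound; \texttt{enumerate\_pair} and \texttt{kill} are abstract stubs standing for the per-pair processing and for a killing criterion respecting rules (1) and (2):

```python
def rotate_and_kill(s, t, r, kill, enumerate_pair):
    """Skeleton of the Rotate-and-Kill process.  Starting from
    (b, c) = (s, t), each iteration enumerates the pair (e_b, e_c) and
    then kills b or c as directed by the killing criterion `kill`,
    until (b, c) = (t, r).  Provided `kill` obeys rules (1) and (2),
    every pair that is not dead gets enumerated."""
    b, c = s, t
    while True:
        enumerate_pair(b, c)
        if (b, c) == (t, r):
            break
        # forced moves at the boundary; otherwise ask the criterion
        if c == r or (b != t and kill(b, c) == b):
            b += 1   # kill b
        else:
            c += 1   # kill c
```

Since every iteration advances $b$ or $c$ by one, the loop performs exactly $(t-s)+(r-t)+1=|J|+|K|-1$ iterations.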
Suppose that in some iteration neither (1) nor (2) holds, and assume without loss of generality that $(e_b,e_{c+g})$ and $(e_{b+h},e_c)$ are not dead for some $g,h\geq 1$. This yields two 3-stable triangles $\triangle e_{a_1}e_be_{c+g}$ and $\triangle e_{a_2}e_{b+h}e_c$, which cannot be interleaving, contradicting Lemma~\ref{lemma:interleaving}. The criterion is the kernel of the algorithm; designing it is the crucial part of this paper. \smallskip \noindent \textbf{Logarithmic killing criterion.\mbox{ }} Two criteria trivially obey the rules: return $b$ when (1) holds and $c$ otherwise; or return $c$ when (2) holds and $b$ otherwise. Yet they are not computationally efficient. Computing (1) (or (2)) costs $O(n)$ time by trivial methods, or $O(\log n)$ time by binary searches (see Appendix~\ref{sect:nlogn}). These can only lead to $O(n^2)$ or $O(n\log n)$ time solutions. \smallskip \noindent \textbf{Amortized constant time killing criterion.\mbox{ }} We design an amortized $O(1)$ time killing criterion in Section~\ref{sect:linear}. Briefly, given $(b,c)$, we compute a specific directed line $L=L_{b,c}$ (in $O(1)$ time) and compare it with $P$; we then return $b$ or $c$ depending on whether $P$ lies on the right of $L$. We make sure that the slope of $L$ increases monotonically throughout the entire algorithm, so comparing the convex polygon $P$ with $L$ costs only amortized $O(1)$ time. \newcommand{\mathsf{a}}{\mathsf{a}} \smallskip \noindent \textbf{Compute the 3-stable triangle(s).\mbox{ }} It remains to specify how we compute the 3-stable triangle $\triangle e_ie_je_k$ with $(j,k)=(b,c)$ in (amortized) constant time. We first compute $\mathsf{a}_{b,c}=a$ so that $\mathsf{Area}(\triangle e_ae_be_c)$ is minimum (see Definition~\ref{def:OPT} below for a rigorous definition of $\mathsf{a}_{b,c}$), and then check whether $\triangle e_ae_be_c$ is 3-stable and report it if so. We apply two basic lemmas here.
The unimodality of $\mathsf{Area}(\triangle e_ae_be_c)$ for fixed $b,c$ (Lemma~\ref{lemma:area-unimodal}) states that if $e_a$ is enumerated clockwise along the interval of edges for which $\triangle e_ae_be_c$ is all-flush and $\mathsf{Area}(\triangle e_ae_be_c)$ is finite, this area first decreases and then increases. The bi-monotonicity of $\mathsf{a}_{b,c}$ (Lemma~\ref{lemma:OPTmono}) states that if $e_b$ or $e_c$ moves clockwise along the boundary of $P$, so does $\mathsf{a}_{b,c}$. Because $e_b,e_c$ move clockwise during the Rotate-and-Kill process, $\mathsf{a}_{b,c}$ moves clockwise by the bi-monotonicity and thus can be computed using the unimodality in amortized $O(1)$ time. Checking 3-stability (though not strictly necessary) reduces to checking whether $e_a,e_b,e_c$ are stable in this triangle, which also takes only $O(1)$ time by the unimodality. Pseudocode of our main algorithm is given in Appendix~\ref{sect:supply}. \section{Compute one 3-stable triangle}\label{sect:alg-one} \newcommand{\mathsf{D}}{\mathsf{D}} To present our algorithm, we first give the two basic lemmas mentioned in the last paragraph of Subsection~\ref{subsect:techover}. Their easy proofs are deferred to Appendix~\ref{sect:interleavity-unimodal-stable} due to space limitations. For each $i$, let $\mathsf{D}_i$ denote the vertex furthest from $\ell_i$. Given points $X,X'$ on the boundary of $P$, we denote by $(X\circlearrowright X')$ the portion of the boundary of $P$ that goes clockwise from $X$ to $X'$, excluding the endpoints $X,X'$. \begin{definition}\label{def:OPT} Consider any edge pair $(e_b,e_c)$ such that $e_b \prec e_c$. Notice that ``$e_a\prec e_b$ and $e_c\prec e_a$'' is equivalent to ``$e_a\in (\mathsf{D}_b\circlearrowright \mathsf{D}_c)$''.
We define $\mathsf{a}_{b,c}$ to be the \textbf{smallest} (i.e.\ clockwise first) $a$ such that $\mathsf{Area}(\triangle e_ae_be_c)=\min\bigl\{\mathsf{Area}(\triangle e_{a'}e_be_c) : e_{a'}\in \{e_{c+1},\ldots,e_{b-1}\}\bigr\}.$ For the special case where $\mathsf{D}_b=\mathsf{D}_c$, $\mathsf{Area}(\triangle e_ae_be_c)$ is infinite for every $e_a\in \{e_{c+1},\ldots,e_{b-1}\}$ by Observation~\ref{obs:finite-condition}, and we define $\mathsf{a}_{b,c}$ to be the edge preceding the vertex $\mathsf{D}_b$. See Figure~\ref{fig:Def-OPT} below. We sometimes abbreviate $e_i$ as $i$; accordingly, $\mathsf{a}_{b,c}$ may also denote the edge $e_{\mathsf{a}_{b,c}}$. \end{definition} \begin{figure}[b] \centering\includegraphics[width=.6\textwidth]{Def-OPT}\\ \caption{Illustration of the definition of $\mathsf{a}_{b,c}$.}\label{fig:Def-OPT} \end{figure} \begin{lemma}[Unimodality of $\mathsf{Area}(\triangle e_ae_be_c)$ for fixed $b,c$]\label{lemma:area-unimodal} Given $b,c$ so that $e_b\prec e_c$ and $\mathsf{D}_b\neq \mathsf{D}_c$, the function $\mathsf{Area}(\triangle e_ae_be_c)$ is unimodal for $e_a\in (\mathsf{D}_b \circlearrowright \mathsf{D}_c)$. Specifically, this function \emph{strictly} decreases when $e_a$ is enumerated clockwise from the next edge of $\mathsf{D}_b$ to $\mathsf{a}_{b,c}$; for $e_a=\mathsf{a}_{b,c}$, we have $\mathsf{Area}(\triangle e_{a+1}e_be_c)\geq \mathsf{Area}(\triangle e_ae_be_c)$; and it \emph{strictly} increases when $e_a$ is enumerated clockwise from the \emph{next edge of} $\mathsf{a}_{b,c}$ to the previous edge of $\mathsf{D}_c$. \end{lemma} \begin{lemma}[Bi-monotonicity of $\mathsf{a}_{b,c}$]\label{lemma:OPTmono} Let $E$ denote $\{e_{c+1},\ldots,e_{b-1}\}$ in the following claims. \begin{enumerate} \item Assume $e_b$ is chasing $e_c,e_{c+1}$, so $\mathsf{a}_{b,c},\mathsf{a}_{b,c+1}$ are defined. Notice that these two edges lie in $E$ according to Definition~\ref{def:OPT}. We claim that $\mathsf{a}_{b,c},\mathsf{a}_{b,c+1}$ lie in clockwise order in $E$.
\item Assume $e_b,e_{b+1}$ are chasing $e_c$, so $\mathsf{a}_{b,c},\mathsf{a}_{b+1,c}$ are defined. Notice that these two edges lie in $E$ according to Definition~\ref{def:OPT}. We claim that $\mathsf{a}_{b,c},\mathsf{a}_{b+1,c}$ lie in clockwise order in $E$. \end{enumerate} Here, ``lie in clockwise order'' is meant in a non-strict manner: equality is allowed. \end{lemma} To find a 3-stable triangle, our first goal is to find a triangle with two stable edges. We find it as follows. Assign $r=1$, enumerate an edge $e_s$ clockwise and compute $t_s=\mathsf{a}_{r,s}$ for each $s$, and then select $s$ so that $\mathsf{Area}(\triangle e_re_se_{t_s})$ is minimum. In other words, we compute $s,t$ so that $\triangle e_re_se_t$ is the smallest all-flush triangle with $r=1$. Using the bi-monotonicity of $\{\mathsf{a}_{r,s}\}$ (Lemma~\ref{lemma:OPTmono}) combined with the unimodality of $\mathsf{Area}(\triangle e_re_se_t)$ for fixed $r,s$ (Lemma~\ref{lemma:area-unimodal}), the computation of $t_s$ costs only amortized $O(1)$ time, hence the entire running time is $O(n)$. We claim that $\triangle e_re_se_t$ has a finite area and, moreover, that $e_s,e_t$ are stable in $\triangle e_re_se_t$. By Observation~\ref{obs:finite-condition} and the proof of Observation~\ref{obs:finite-exists}, for any given edge $e_r$ there exist $e_j,e_k$ so that $\triangle e_re_je_k$ has a finite area. This easily implies the finiteness of $\triangle e_re_se_t$. If $e_s$ (or $e_t$) were not stable in $\triangle e_re_se_t$, we could get a smaller triangle $\triangle e_re_{s'}e_t$ (or $\triangle e_re_se_{t'}$) with $r=1$, contradicting the fact that $\triangle e_re_se_t$ is the smallest all-flush triangle with $r=1$.\medskip Now, $e_s$ and $e_t$ are stable in $\triangle e_re_se_t$. If $e_r$ is also stable (which can be determined in $O(1)$ time by Lemma~\ref{lemma:area-unimodal}), we have found a 3-stable triangle. What if $e_r$ is not stable?
By Lemma~\ref{lemma:area-unimodal}, this means either $\mathsf{Area}(\triangle e_{r+1}e_se_t) < \mathsf{Area}(\triangle e_re_se_t)$ or $\mathsf{Area}(\triangle e_{r-1}e_se_t) < \mathsf{Area}(\triangle e_re_se_t)$. \textbf{Assume the former occurs}; our subroutine for this case is given in Algorithm~\ref{alg:I2}. The latter can be handled by a symmetric subroutine as shown in Algorithm~\ref{alg:full}. \begin{algorithm}[h] \caption{Find a 3-stable triangle for the case $\mathsf{Area}(\triangle e_{r+1}e_se_t) < \mathsf{Area}(\triangle e_re_se_t)$.}\label{alg:I2} \While{$r+1\neq s$ and $\mathsf{Area}(\triangle e_{r+1}e_se_t) < \mathsf{Area}(\triangle e_re_se_t)$}{ $r\leftarrow r+1$;\label{code:r++}\\ \Repeat{none of the above two conditions hold}{ \textbf{while} $s+1\neq t \text{ and }\mathsf{Area}(\triangle e_re_{s+1}e_t)<\mathsf{Area}(\triangle e_re_se_t)$ \textbf{do} $s\leftarrow s+1$; \label{code:s++}\\ \textbf{while} $t+1\neq r \text{ and }\mathsf{Area}(\triangle e_re_se_{t+1})<\mathsf{Area}(\triangle e_re_se_t)$ \textbf{do} $t\leftarrow t+1$; \label{code:t++} } } \end{algorithm} To analyze Algorithm~\ref{alg:I2}, we introduce two notions: back-stable and forw-stable. Consider any all-flush triangle $\triangle e_ie_je_k$ with a finite area. Edge $e_i$ is \emph{back-stable} if $\mathsf{Area}(\triangle e_ie_je_k)\leq \mathsf{Area}(\triangle e_{i-1}e_je_k)$ (or $i-1=k$). Edge $e_i$ is \emph{forw-stable} if $\mathsf{Area}(\triangle e_ie_je_k)\leq \mathsf{Area}(\triangle e_{i+1}e_je_k)$ (or $i+1=j$). Symmetrically, we can define back-stable and forw-stable for $e_j$ and $e_k$. Note that back-stable together with forw-stable implies stable; this follows from Lemma~\ref{lemma:area-unimodal}. \begin{observation}\label{Obs:back-forw} Assume $e_b$ is back-stable in $\triangle e_ae_be_c$. See Figure~\ref{fig:Obs-back-forw}. Then, \begin{itemize} \item it is also back-stable in $\triangle e_{a+1}e_be_c$ when $a+1\neq b$ and $\triangle e_{a+1}e_be_c$ is finite.
\item it is also back-stable in $\triangle e_ae_be_{c+1}$ when $c+1\neq a$ and $\triangle e_ae_be_{c+1}$ is finite. \end{itemize} These claims are trivial; see an enhanced version with a proof in Appendix~\ref{sect:interleavity-unimodal-stable} (Observation~\ref{Obs:back-forw-full}). \end{observation} \begin{figure}[h] \centering \includegraphics[width=.24\textwidth]{Obs-back-forw}\\ \caption{Illustration of Observation~\ref{Obs:back-forw}.}\label{fig:Obs-back-forw} \end{figure} \begin{observation}\label{obs:algI2} Throughout Algorithm~\ref{alg:I2}, the following hold. \begin{enumerate} \item $\triangle e_re_se_t$ has a finite area, which strictly decreases after every change of $r,s,t$. \item Edges $e_r,e_s,e_t$ are back-stable in $\triangle e_re_se_t$. \item Edges $e_s,e_t$ are forw-stable after the repeat-until statement (Line~3 to Line~6), and $e_r$ is forw-stable when the algorithm terminates. \end{enumerate} \end{observation} \begin{proof} Part~1 is obvious. Part~3 is easy: whenever one of $e_r,e_s,e_t$ is not forw-stable, the algorithm moves it forward. We prove part~2 in the following. Initially, $\mathsf{Area}(\triangle e_{r+1}e_se_t) < \mathsf{Area}(\triangle e_re_se_t)$, so $\mathsf{Area}(\triangle e_re_se_t) < \mathsf{Area}(\triangle e_{r-1}e_se_t)$ by Lemma~\ref{lemma:area-unimodal}, i.e.\ $e_r$ is back-stable. When $r\leftarrow r+1$ is about to be executed at Line~\ref{code:r++}, $\mathsf{Area}(\triangle e_{r+1}e_se_t) < \mathsf{Area}(\triangle e_re_se_t)$. This means $e_r$ will be back-stable after this statement. Furthermore, by Observation~\ref{Obs:back-forw}, a back-stable edge remains back-stable when we move another edge forward, so $e_r$ remains back-stable when $s$ or $t$ is increased. By similar arguments, $e_s,e_t$ are always back-stable. Notice that initially $e_s,e_t$ are back-stable since they are stable (guaranteed by the previous step).
\end{proof} Algorithm~\ref{alg:I2} terminates eventually according to part~1 of Observation~\ref{obs:algI2}. Moreover, $\triangle e_re_se_t$ is 3-stable at the end; this follows from the other two parts of Observation~\ref{obs:algI2}. Finally, observe that $r$ can never return to $1$, since the initial triangle $\triangle e_re_se_t$ is the smallest one with $r=1$. Moreover, observe that $r,s,t$ can only move clockwise and $e_r,e_s,e_t$ always lie in clockwise order. Together, these imply that the total number of changes of $r,s,t$ is bounded by $O(n)$ and hence Algorithm~\ref{alg:I2} runs in $O(n)$ time. \footnote{ Although Algorithm~\ref{alg:I2} looks similar to the kernel step in \cite{linear-wrong-DS} (by coincidence), our entire algorithm is essentially different from that in \cite{linear-wrong-DS}. Most importantly, our first step for finding the ``2-stable'' triangle sets the initial value of $(r,s,t)$ differently. In addition, our algorithm has an omitted subroutine symmetric to Algorithm~\ref{alg:I2} which handles the case where $e_r$ is forw-stable but not back-stable, whereas \cite{linear-wrong-DS} does not. Unfortunately, some previous reviewers regarded our algorithm in this section as identical to the algorithm in \cite{linear-wrong-DS} and claimed that this part of the algorithm is not original.} \section{Compute all the 3-stable triangles in $O(n)$ time}\label{sect:linear} Recall the framework of our algorithm in Subsection~\ref{subsect:techover}. This section presents the kernel of our algorithm --- the killing criterion. First, we give some observations and a lemma. \begin{definition}\label{def:hyperbola-area} See Figure~\ref{fig:def-trianglearea}. Let $r,r'$ be two rays originating at $O$ and let $h$ be a hyperbola branch admitting $r,r'$ as asymptotes. Construct an arbitrary tangent line of $h$ and assume that it intersects $r,r'$ at points $A,A'$ respectively.
By basic properties of hyperbolas, the area of $\triangle OAA'$ is a constant (independent of the chosen tangent). This area is defined to be the \emph{triangle-area} of $h$, denoted by $\mathsf{Area}(h)$. \end{definition} \begin{observation}\label{obs:area-to-intersection} Let $r,r',O,h$ be as above and let $\phi$ be the quadrant region bounded by $r,r'$ and containing $h$. Consider any halfplane $g$ which contains $O$ and is delimited by a line $l$. \begin{enumerate} \item The area of $g\cap \phi$ is smaller than $\mathsf{Area}(h)$ if and only if $l$ is disjoint from $h$. \item The area of $g\cap \phi$ is identical to $\mathsf{Area}(h)$ if and only if $l$ is tangent to $h$. \item The area of $g\cap \phi$ is larger than $\mathsf{Area}(h)$ if and only if $l$ cuts $h$ (i.e.\ is a secant of $h$). \end{enumerate} \end{observation} Observation~\ref{obs:area-to-intersection} is trivial; its proof is omitted. Recall that $\mathsf{p}_k$ is the half-plane delimited by $\ell_k$ and containing $P$ for each $k$. Let $\phi^+_k=\mathsf{p}_{k-1}\cap \mathsf{p}^C_k$ and $\phi^-_k=\mathsf{p}^C_{k-1}\cap \mathsf{p}_k$ denote the two quadrant regions divided by $\ell_{k-1}$ and $\ell_k$. (Subscripts are taken modulo $n$ in all such places.) \newcommand{\mathsf{h}}{\mathsf{h}} \begin{definition}\label{def:hvkej} Consider any vertex $v_k$. See Figure~\ref{fig:def-h}. \begin{itemize} \item For every $e_j$ such that $e_j\prec e_k$ and $e_j\neq e_{k-1}$, define $\mathsf{h}^+_{v_k,e_j}$ to be the hyperbola branch asymptotic to $\ell_{k-1},\ell_k$ in $\phi^+_k$ whose triangle-area equals the area of $\phi^-_k\cap \mathsf{p}_j$. \item For every $e_j$ such that $e_{k-1}\prec e_j$ and $e_j\neq e_k$, define $\mathsf{h}^-_{v_k,e_j}$ to be the hyperbola branch asymptotic to $\ell_{k-1},\ell_k$ in $\phi^-_k$ whose triangle-area equals the area of $\phi^+_k\cap \mathsf{p}_j$.
\end{itemize} \end{definition} \begin{figure}[h] \begin{minipage}[b]{.25\textwidth} \centering \includegraphics[width=.65\textwidth]{Def-triangle-area}\\ \caption{Illustration of Definition~\ref{def:hyperbola-area}}\label{fig:def-trianglearea} \end{minipage} \quad \begin{minipage}[b]{.25\textwidth} \centering \includegraphics[width=.8\textwidth]{Def-h}\\ \caption{Illustration of Definition~\ref{def:hvkej}}\label{fig:def-h} \end{minipage} \quad \begin{minipage}[b]{.44\textwidth} \centering \includegraphics[width=.75\textwidth]{Obs-stablebc}\\ \caption{Illustration of Observation~\ref{obs:bc-stable}}\label{fig:stablebc} \end{minipage} \end{figure} For convenience, we use two abbreviations in the following: ``intersect'' is short for ``cut or be tangent to'', and ``avoid'' is short for ``be disjoint from or tangent to''. \begin{observation}\label{obs:bc-stable} Assume $\triangle e_ae_be_c$ has a finite area and that $e_b,e_c$ are stable in it. See Figure~\ref{fig:stablebc}. \begin{enumerate} \item $\ell_a$ intersects $\mathsf{h}^-_{v_{b+1},e_c}$ when $b+1\neq c$, and avoids $\mathsf{h}^-_{v_b,e_c}$ when $e_{b-1}\prec e_c$. \item $\ell_a$ intersects $\mathsf{h}^+_{v_c,e_b}$ when $b\neq c-1$, and avoids $\mathsf{h}^+_{v_{c+1},e_b}$ when $e_b\prec e_{c+1}$. \end{enumerate} \textbf{Note}: Applying Observation~\ref{obs:finite-condition} to $\triangle e_ae_be_c$, we get $e_b\prec e_c$. Therefore, each of the four hyperbolas $\mathsf{h}^-_{v_{b+1},e_c}$, $\mathsf{h}^-_{v_b,e_c}$, $\mathsf{h}^+_{v_c,e_b}$, $\mathsf{h}^+_{v_{c+1},e_b}$ is defined under the corresponding condition. \end{observation} \begin{proof} Assume $e_{b+1}\prec e_c$. Because $e_b$ is stable, $\mathsf{Area}(\triangle e_ae_be_c) \leq \mathsf{Area}(\triangle e_ae_{b+1}e_c)$. So the area of $\mathsf{p}_a\cap (\mathsf{p}^C_b\cap \mathsf{p}_{b+1})$ is at least the area of $\mathsf{p}_c\cap \mathsf{p}_b\cap \mathsf{p}^C_{b+1}$. Namely, the former area is no smaller than $\mathsf{Area}(\mathsf{h}^-_{v_{b+1},e_c})$.
Applying Observation~\ref{obs:area-to-intersection}, this means $\ell_a$ intersects $\mathsf{h}^-_{v_{b+1},e_c}$. Assume $e_{b-1}\prec e_c$. This implies that $e_a\neq e_{b-1}$; otherwise $e_a\prec e_c$ and $\triangle e_ae_be_c$ is infinite. Because $e_{b-1}\neq e_a$ and $e_b$ is stable, $\mathsf{Area}(\triangle e_ae_be_c) \leq \mathsf{Area}(\triangle e_ae_{b-1}e_c)$. Therefore, the area of $\mathsf{p}_a\cap (\mathsf{p}^C_{b-1}\cap \mathsf{p}_b)$ is at most the area of $\mathsf{p}_c\cap \mathsf{p}_{b-1}\cap \mathsf{p}^C_b$. In other words, the former area is no larger than $\mathsf{Area}(\mathsf{h}^-_{v_b,e_c})$. This means $\ell_a$ avoids $\mathsf{h}^-_{v_b,e_c}$ by Observation~\ref{obs:area-to-intersection}. \smallskip Claim~2 is proved symmetrically, using the stability of $e_c$. \end{proof} Recall the 3-stable triangle $\triangle e_re_se_t$ and the conditions (1) and (2) in Subsection~\ref{subsect:techover}. Observation~\ref{obs:bc-stable} expresses ``stable'' through line-hyperbola intersection conditions. The following lemma provides sufficient conditions for (1) and (2) in the guise of line-hyperbola intersections. \newcommand{\textcolor[rgb]{0,0,1}{h^+_{v_{c+1},e_b}}}{\textcolor[rgb]{0,0,1}{h^+_{v_{c+1},e_b}}} \newcommand{\textcolor[rgb]{0,0,1}{\hyperbola^-_{v_{b+1},e_{c+1}}}}{\textcolor[rgb]{0,0,1}{\mathsf{h}^-_{v_{b+1},e_{c+1}}}} \newcommand{\textcolor[rgb]{1,0,0}{h^+_{v_{c+1},e_{b+1}}}}{\textcolor[rgb]{1,0,0}{h^+_{v_{c+1},e_{b+1}}}} \newcommand{\textcolor[rgb]{1,0,0}{\hyperbola^-_{v_{b+1},e_c}}}{\textcolor[rgb]{1,0,0}{\mathsf{h}^-_{v_{b+1},e_c}}} \begin{lemma}\label{lemma:kill-strict} Assume $e_b\in \{e_s,\ldots,e_t\}$, $e_c\in \{e_t,\ldots,e_r\}$, $e_b,e_{b+1},e_c,e_{c+1}$ are distinct and $e_b\prec e_{c+1}$.
Note that $\textcolor[rgb]{0,0,1}{h^+_{v_{c+1},e_b}}$, $\textcolor[rgb]{0,0,1}{\hyperbola^-_{v_{b+1},e_{c+1}}}$, $\textcolor[rgb]{1,0,0}{h^+_{v_{c+1},e_{b+1}}}$ and $\textcolor[rgb]{1,0,0}{\hyperbola^-_{v_{b+1},e_c}}$ are defined; see Figure~\ref{fig:kill-strict}. \begin{enumerate} \item \begin{enumerate} \item When some edge pair $(e_b,e_{c'})\in \{(e_b,e_{c+1}),(e_b,e_{c+2}),\ldots,(e_b,e_r)\}$ is not dead, and hence there exists $e_a$ so that $\triangle e_ae_be_{c'}$ is 3-stable, $$\text{$\ell_a$ must (i) intersect both $\textcolor[rgb]{0,0,1}{h^+_{v_{c+1},e_b}}$ and $\textcolor[rgb]{0,0,1}{\hyperbola^-_{v_{b+1},e_{c+1}}}$ and (ii) belong to $\{\ell_{c+2},\ldots,\ell_{b-1}\}$.}$$ \item If (I) \underline{no line in $\{\ell_{c+2},\ldots,\ell_{b-1}\}$ intersects both $\textcolor[rgb]{0,0,1}{h^+_{v_{c+1},e_b}}$ and $\textcolor[rgb]{0,0,1}{\hyperbola^-_{v_{b+1},e_{c+1}}}$}, we can infer that $(e_b,e_{c+1}),(e_b,e_{c+2}),\ldots,(e_b,e_r)$ are all dead, namely, (1) holds. \end{enumerate} \emph{To be clear, throughout this paper, $(e_b,e_{c+1}),(e_b,e_{c+2}),\ldots,(e_b,e_r)$ is empty when $c=r$.}\smallskip \item \begin{enumerate} \item When some edge pair $(e_{b'},e_c)\in \{(e_{b+1},e_c),(e_{b+2},e_c),\ldots,(e_t,e_c)\}$ is not dead, and hence there exists $e_a$ so that $\triangle e_ae_{b'}e_c$ is 3-stable, $$\text{$\ell_a$ must (i) avoid both $\textcolor[rgb]{1,0,0}{h^+_{v_{c+1},e_{b+1}}}$ and $\textcolor[rgb]{1,0,0}{\hyperbola^-_{v_{b+1},e_c}}$ and (ii) belong to $\{\ell_{c+2},\ldots,\ell_{b-1}\}$.}$$ \item If (II) \underline{no line in $\{\ell_{c+2},\ldots,\ell_{b-1}\}$ avoids both $\textcolor[rgb]{1,0,0}{h^+_{v_{c+1},e_{b+1}}}$ and $\textcolor[rgb]{1,0,0}{\hyperbola^-_{v_{b+1},e_c}}$}, we can infer that $(e_{b+1},e_c),(e_{b+2},e_c),\ldots,(e_t,e_c)$ are all dead, namely, (2) holds.
\end{enumerate} \emph{To be clear, throughout this paper, $(e_{b+1},e_c),(e_{b+2},e_c),\ldots,(e_t,e_c)$ is empty when $b=t$.} \end{enumerate} \end{lemma} \begin{figure}[h] \begin{minipage}[b]{.5\textwidth} \centering \includegraphics[width=.95\textwidth]{Obs-mono-hyper1} \end{minipage} \begin{minipage}[b]{.5\textwidth} \centering \includegraphics[width=.835\textwidth]{Obs-mono-hyper2} \end{minipage} \caption{Illustration of the proof of Lemma~\ref{lemma:kill-strict}.}\label{fig:kill-strict} \end{figure} \begin{proof} Claims 1.b and 2.b are the contrapositives of 1.a and 2.a, so we only prove 1.a and 2.a. See Figure~\ref{fig:kill-strict}~(a) and Figure~\ref{fig:kill-strict}~(b) for illustrations of the proofs of 1.a and 2.a respectively. \smallskip \noindent \emph{Proof of 1.a-(i)}. Because $e_{c'}\in \{e_{c+1},\ldots, e_r\}$ and is stable in $\triangle e_ae_be_{c'}$, by the unimodality in Lemma~\ref{lemma:area-unimodal}, $\mathsf{Area}(\triangle e_ae_be_{c+1})\leq \mathsf{Area}(\triangle e_ae_be_c)$. Equivalently, the area of $\mathsf{p}_a\cap (\mathsf{p}_c \cap \mathsf{p}^C_{c+1})$ is at least $\mathsf{Area}(\textcolor[rgb]{0,0,1}{h^+_{v_{c+1},e_b}})$. Applying Observation~\ref{obs:area-to-intersection}, this means $\ell_a$ intersects $\textcolor[rgb]{0,0,1}{h^+_{v_{c+1},e_b}}$. Applying Observation~\ref{obs:bc-stable}.1 to $\triangle e_ae_be_{c'}$, line $\ell_a$ intersects $\textcolor[rgb]{0.44,0.00,0.94}{\mathsf{h}^-_{v_{b+1},e_{c'}}}$. Moreover, $\textcolor[rgb]{0.44,0.00,0.94}{\mathsf{h}^-_{v_{b+1},e_{c'}}}$ is clearly contained in the area bounded by $\textcolor[rgb]{0,0,1}{\hyperbola^-_{v_{b+1},e_{c+1}}}$. Therefore, $\ell_a$ intersects $\textcolor[rgb]{0,0,1}{\hyperbola^-_{v_{b+1},e_{c+1}}}$. \smallskip \noindent \emph{Proof of 1.a-(ii)}. Because $\triangle e_ae_be_{c'}$ is defined, $\ell_a\in \{\ell_{c'+1},\ldots, \ell_{b-1}\}$, which implies (ii). \medskip \noindent \emph{Proof of 2.a-(i)}.
Because $e_{b'}\in \{e_{b+1},\ldots, e_t\}$ and is stable in $\triangle e_ae_{b'}e_c$, by the unimodality in Lemma~\ref{lemma:area-unimodal}, $\mathsf{Area}(\triangle e_ae_{b+1}e_c)\leq \mathsf{Area}(\triangle e_ae_be_c)$. Equivalently, the area of $\mathsf{p}_a\cap (\mathsf{p}^C_b \cap \mathsf{p}_{b+1})$ is at most $\mathsf{Area}(\textcolor[rgb]{1,0,0}{\hyperbola^-_{v_{b+1},e_c}})$. Applying Observation~\ref{obs:area-to-intersection}, this means $\ell_a$ avoids $\textcolor[rgb]{1,0,0}{\hyperbola^-_{v_{b+1},e_c}}$. Applying Observation~\ref{obs:bc-stable}.2 to $\triangle e_ae_{b'}e_c$, line $\ell_a$ avoids $\textcolor[rgb]{0.75,0.52,0.29}{\mathsf{h}^+_{v_{c+1},e_{b'}}}$. Moreover, the area bounded by $\textcolor[rgb]{0.75,0.52,0.29}{\mathsf{h}^+_{v_{c+1},e_{b'}}}$ clearly contains $\textcolor[rgb]{1,0,0}{h^+_{v_{c+1},e_{b+1}}}$. Therefore, $\ell_a$ avoids $\textcolor[rgb]{1,0,0}{h^+_{v_{c+1},e_{b+1}}}$. \smallskip \noindent \emph{Proof of 2.a-(ii)}. Because $\triangle e_ae_{b'}e_c$ is defined, $\ell_a\in \{\ell_{c+1},\ldots, \ell_{b'-1}\}$. Since this triangle is 3-stable, $e_c\prec e_a$. However, the edges $e_{b},\ldots, e_{b'-1}$ are chasing $e_c$, so $e_a$ is not among them. So, $\ell_a\in \{\ell_{c+1},\ldots, \ell_{b-1}\}$. Because $e_b\prec e_{c+1}$, we have $e_{b'}\prec e_{c+1}$. However, $e_a\prec e_{b'}$ because $\triangle e_ae_{b'}e_c$ is 3-stable. So, $a\neq c+1$. Altogether, $\ell_a\in \{\ell_{c+2},\ldots, \ell_{b-1}\}$, i.e.\ (ii) holds. \end{proof} To design a killing criterion as mentioned in Subsection~\ref{subsect:techover}, we look for a condition such that, first, it is easy to compute, and second, the condition itself implies (1) while its negation implies (2). In Lemma~\ref{lemma:kill-strict}, we give two sufficient conditions for (1) and (2), namely (I) and (II) respectively, and thus reduce the problem to finding an easy-to-compute condition which implies (I) and whose negation implies (II).
We design such a condition (X) in the next lemma.\smallskip The assumptions on $b,c$ henceforth follow Lemma~\ref{lemma:kill-strict} unless otherwise stated.\smallskip \noindent \textbf{Notation.} Let $G^+_{b,c},G^-_{b,c},H^+_{b,c},H^-_{b,c}$ denote $\textcolor[rgb]{0,0,1}{h^+_{v_{c+1},e_b}},\textcolor[rgb]{0,0,1}{\hyperbola^-_{v_{b+1},e_{c+1}}},\textcolor[rgb]{1,0,0}{h^+_{v_{c+1},e_{b+1}}},\textcolor[rgb]{1,0,0}{\hyperbola^-_{v_{b+1},e_c}}$ for short. Denote by $L_{b,c}^{GG}$ the common tangent of $G^+_{b,c}$ and $G^-_{b,c}$, and denote the other three common tangents by $L_{b,c}^{HG},L_{b,c}^{GH}$ and $L_{b,c}^{HH}$ correspondingly; see Figures~\ref{fig:kill-L} and \ref{fig:tangents}. We omit the subscripts $b,c$ when they are clear from context. Assume these four common tangents are directed; the direction of such a tangent is from its intersection with $\ell_{c+1}$ to its intersection with $\ell_b$. \begin{lemma}\label{lemma:kill-L} See Figure~\ref{fig:kill-L}. Choose an arbitrary directed line $L$ going from a point in the (open) segment $\overline{(L^{GG}\cap \ell_{c+1})(L^{HH}\cap \ell_{c+1})}$ to a point in the (open) segment $\overline{(L^{GG}\cap \ell_b)(L^{HH}\cap \ell_b)}$. If (X) \underline{$P$ lies on the right of $L$}, then (I) holds: no line $\ell_a\in \{\ell_{c+2},\ldots,\ell_{b-1}\}$ intersects both $G^+_{b,c}$ and $G^-_{b,c}$. Otherwise, (II) holds: no line $\ell_a\in \{\ell_{c+2},\ldots,\ell_{b-1}\}$ avoids both $H^+_{b,c}$ and $H^-_{b,c}$. \end{lemma} \begin{proof} We state two crucial observations. \begin{itemize} \item[(i)] If (I) fails, then a point of $P$ lies on $L^{GG}$ or to its left. \item[(ii)] If (II) fails, then all points of $P$ lie on $L^{HH}$ or to its right. \end{itemize} \noindent \emph{Proof of (i).} Assume (I) fails, so there is $\ell_a\in \{\ell_{c+2},\ldots, \ell_{b-1}\}$ that intersects $G^+_{b,c},G^-_{b,c}$. This means that a point of $e_a$ (and hence of $P$) lies on $L^{GG}$ or to its left.
\noindent \emph{Proof of (ii).} Assume (II) fails, so there is $\ell_a\in \{\ell_{c+2},\ldots, \ell_{b-1}\}$ which avoids $H^+_{b,c}$ and $H^-_{b,c}$. This implies that $P$ lies on $L^{HH}$ or to its right. If $P$ lies on the right of $L$, then no point of $P$ lies on $L^{GG}$ or to its left, so (I) holds by (i). Otherwise, a point of $P$ would lie to the left of $L^{HH}$, so (II) holds by (ii). \end{proof} \begin{figure}[h] \begin{minipage}[b]{.5\textwidth} \centering \includegraphics[width=.84\textwidth]{Lemma-kill-L}\\ \caption{$L^{GG},L^{HH}$ and Lemma~\ref{lemma:kill-L}.}\label{fig:kill-L} \end{minipage} \begin{minipage}[b]{.5\textwidth} \centering \includegraphics[width=.7\textwidth]{Lemma-canput}\\ \caption{$L^{HG},L^{GH}$ and Observation~\ref{obs:d-range}.}\label{fig:tangents} \end{minipage} \end{figure} We can now briefly outline the Rotate-and-Kill process based on criterion (X). In each iteration, after \emph{computing $\mathsf{a}_{b,c}$} and \emph{outputting $\triangle \mathsf{a}_{b,c}e_be_c$} (as a candidate 3-stable triangle), we proceed to the next iteration by either killing $b$ or killing $c$. To select which one to kill, we \emph{choose a line $L$} and compute (X), i.e.\ \emph{determine whether $P$ lies on the right of $L$ or not}. Note that this process is not yet finalized because we have not specified how to choose $L$. Computing (X) takes $O(\log n)$ time since $P$ is convex, or, more satisfactorily, amortized $O(1)$ time if the slope of $L$ changes monotonically throughout the process. Therefore, toward a linear time algorithm, the key is to choose $L$ so that its slope changes monotonically. This is not easy. For example, if we choose $L$ to be one particular common tangent (like $L_{b,c}^{GG}$ or $L_{b,c}^{HH}$) at every iteration, the slope is not monotone; see counterexamples in Appendix~\ref{sect:supply}.
Nevertheless, by choosing $L$ more deliberately as shown below, we obtain the required monotonicity.\smallskip For a directed line $L$, let $d(L)$ denote its direction, which is an angle in $[0,2\pi)$, adopting the convention that $d(\overrightarrow{OA})$ increases when $A$ rotates clockwise around $O$. Let $d_1,d_2$ be the directions opposite to those of $e_{s+1},e_r$, respectively. Without loss of generality, assume that $[d_1,d_2]\subset [0,2\pi)$. \begin{observation}\label{obs:d-range} Let $(e_b,e_c)$ be any pair satisfying the assumption of Lemma~\ref{lemma:kill-strict}.\\ 1. $[d(L^{HG}_{b,c}), d(L^{GH}_{b,c})] \subset [d_1,d_2]$.\\ 2. For $d\in [d(L^{HG}_{b,c}), d(L^{GH}_{b,c})]$, we can compute in constant time a line $L$ with direction $d$ going from a point in the (open) segment $\overline{(L^{GG}\cap \ell_{c+1})(L^{HH}\cap \ell_{c+1})}$ to a point in the (open) segment $\overline{(L^{GG}\cap \ell_b)(L^{HH}\cap \ell_b)}$. \end{observation} \begin{proof} See Figure~\ref{fig:tangents}. Let $L^1_{b,c}$ be the line through $v_{c+1}$ with direction opposite to $e_{b+1}$, and $L^2_{b,c}$ the line through $v_{b+1}$ with direction opposite to $e_c$. Note that the area bounded by $L^1,\ell_b,\ell_{b+1}$ is infinite, whereas $G^-_{b,c}$ bounds a finite triangle area. So $L^1$ intersects $G^-_{b,c}$ by Observation~\ref{obs:area-to-intersection}. Thus $L^1$ intersects $G^-_{b,c}$ and avoids $H^+_{b,c}$. Similarly, $L^2$ intersects $G^+_{b,c}$ and avoids $H^-_{b,c}$. These together imply that $[d(L^{HG}_{b,c}), d(L^{GH}_{b,c})] \subset [d(L^1_{b,c}),d(L^2_{b,c})]$, which further implies Claim~1 because $d(L^1_{s,t})=d_1$ whereas $d(L^2_{t,r})=d_2$. $L^{HG}$ and $L^{GH}$ clearly satisfy the requirement on $L$ in Lemma~\ref{lemma:kill-L}. This implies Claim~2. \end{proof} We present our final killing criterion (with the specification of $L$) in Algorithm~\ref{alg:criterion}.
\begin{algorithm}[h] \caption{Criterion for killing $b$ or $c$.}\label{alg:criterion} \textbf{If} ($b+1=c$) {Return $c$;}\\ \textbf{If} ($e_b$ is not chasing $e_{c+1}$) {Return $b$;}\newline Note: $e_b,e_{b+1},e_c,e_{c+1}$ are now four distinct edges and $e_b\prec e_{c+1}$.\\ Select some direction $d\in [d(L^{HG}_{b,c}), d(L^{GH}_{b,c})]$ (specified in (3) below); \label{code:d}\\ Find any line $L$ in the gray area with direction $d$ (using Observation~\ref{obs:d-range}.2);\\ Compute the supporting line $L'$ of $P$ with direction $d$ such that $P$ is on the right of $L'$.\\ Compare $L'$ with $L$ and thus determine (X). (If $L'$ lies on the right of $L$, so does $P$, and we determine that (X) holds; otherwise, $P$ must intersect $L$, and we determine that (X) fails.) \\ \textbf{If} (X) return $b$; \textbf{Else} return $c$. \end{algorithm} \begin{description} \item[Selection of $d$.] In Line~\ref{code:d}, we choose $d$ according to the following equation: \setcounter{equation}{2} \begin{equation} d_{\mathsf{this-iteration}}=\left\{ \begin{array}{ll} d_{\mathsf{previous-iteration}}, & d_{\mathsf{previous-iteration}}\in [d(L^{HG}_{b,c}), d(L^{GH}_{b,c})];\\ d(L^{HG}_{b,c}), & \hbox{otherwise.} \end{array} \right. \end{equation} \item[Correctness.] If $b+1=c$, no edge in $e_{b+1},\ldots, e_t$ can chase $e_c$, so we can kill $c$. If $e_b$ is not chasing $e_{c+1}$, edge $e_b$ cannot chase any edge in $e_{c+1},\ldots, e_r$, so we can kill $b$. If (X) holds, condition (I) holds by Lemma~\ref{lemma:kill-L} and thus (1) holds by Lemma~\ref{lemma:kill-strict}.1; so we can kill $b$. Otherwise, (II) holds by Lemma~\ref{lemma:kill-L} and thus (2) holds by Lemma~\ref{lemma:kill-strict}.2; so we can kill $c$. \item[Running time.] By the following lemma, we obtain the monotonicity of the slope mentioned above. So it takes amortized $O(1)$ time to compute $L'$. The other steps cost $O(1)$ time.
\end{description} \begin{lemma}\label{lemma:d-mono} Suppose $(b,c)$ is considered in some iteration and $(b',c')$ in a later iteration, where both pairs satisfy the assumption of Lemma~\ref{lemma:kill-strict}. Then $d(L^{GH}_{b',c'})\geq d(L^{HG}_{b,c})$. As a corollary, when (3) is applied, the variable $d$ increases monotonically during the algorithm. \end{lemma} \begin{proof} We first derive the corollary from the first part (see an illustration in Appendix~\ref{sect:supply}). We want to show $d_{\mathsf{previous-iteration}}\leq d_{\mathsf{this-iteration}}$. This trivially holds when $d_{\mathsf{previous-iteration}}\in [d(L^{HG}_{b,c}), d(L^{GH}_{b,c})]$. Consider the other case. Assume without loss of generality that $d_{\mathsf{previous-iteration}}=d(L^{HG}_{b^*,c^*})$, where $(b^*,c^*)$ denotes some previous iteration. We have (i) $d(L^{HG}_{b^*,c^*})\leq d(L^{GH}_{b,c})$ (according to the first part of the lemma) and (ii) $d(L^{HG}_{b^*,c^*})=d_{\mathsf{previous-iteration}}\notin [d(L^{HG}_{b,c}),d(L^{GH}_{b,c})]$ (by assumption). Together, these give $d(L^{HG}_{b^*,c^*})<d(L^{HG}_{b,c})$, in other words, $d_{\mathsf{previous-iteration}}< d_{\mathsf{this-iteration}}$. Either way, $d$ increases (non-strictly). \smallskip We now prove the first part. Let $A=\ell_{b+1}\cap \ell_{c+1}$ and $B=\ell_{b'}\cap \ell_{c'}$. \smallskip \textbf{First, consider the case where $b'>b$ and $c'>c$.} See Figure~\ref{fig:lemma-mono1}. Denote \quad $M^-=\left\{ \begin{array}{ll} v_{b'+1}, & b'=b+1; \\ \ell_{b+1}\cap \ell_{b'}, & b'\geq b+2 \end{array} \right. \text{and } M^+=\left\{ \begin{array}{ll} v_{c'+1}, & c'=c+1; \\ \ell_{c+1}\cap \ell_{c'}, & c'\geq c+2. \end{array} \right.$ Denote the reflections of $A$ around $M^-,M^+$ by $A^-,A^+$ respectively, and the reflections of $B$ around $M^-,M^+$ by $B^-,B^+$ respectively. Let $L=\overrightarrow{A^+A^-}$ and $L'=\overrightarrow{B^+B^-}$.
We state three equalities or inequalities which together imply the key inequality. \quad (i) $d(L_{b,c}^{HG})\leq d(L) $. \qquad (ii) $d(L)=d(\overrightarrow{M^+M^-})=d(L')$. \qquad (iii) $d(L')\leq d(L_{b',c'}^{GH})$. \begin{figure}[h] \begin{minipage}[b]{1\textwidth} \centering \includegraphics[width=.46\textwidth]{Lemma-monoAAa} \quad \includegraphics[width=.46\textwidth]{Lemma-monoAAb} \end{minipage} \begin{minipage}[b]{1\textwidth} \vspace{15pt} \centering \includegraphics[width=.46\textwidth]{Lemma-monoAAc} \quad \includegraphics[width=.46\textwidth]{Lemma-monoAAd} \end{minipage} \begin{minipage}[b]{1\textwidth} \vspace{15pt} \centering \includegraphics[width=.36\textwidth]{Lemma-monoAAe} \qquad \qquad \qquad \includegraphics[width=.36\textwidth]{Lemma-monoAAf} \end{minipage} \caption{Proof of Lemma~\ref{lemma:d-mono} - part I.}\label{fig:lemma-mono1} \end{figure} \noindent \emph{Proof of (i):} This reduces to showing that (i.1) $L$ intersects $H^+_{b,c}$ and (i.2) $L$ avoids $G^-_{b,c}$. Now, let us focus on the objects shown in Figure~\ref{fig:lemma-mono1}~(e). Notice that $A,v_{c+1},M^+,A^+$ lie in this order on line $\ell_{c+1}$. Further, since $|AM^+|=|A^+M^+|$, we know $|A^+v_{c+1}|>|Av_{c+1}|$. Thus the area of $h\cap (\mathsf{p}_c\cap \mathsf{p}^C_{c+1})$, where $h$ denotes the half-plane whose boundary is parallel to $\ell_{b+1}$ and passes through $A^+$, and which contains $v_{c+1}$, is larger than the area of the (yellow) triangle $\mathsf{p}_{b+1}\cap \mathsf{p}^C_c\cap \mathsf{p}_{c+1}$, which equals $\mathsf{Area}(h^+_{v_{c+1},e_{b+1}})$. So the triangle area bounded by $\ell_c,\ell_{c+1}$ and $L$ is even larger than $\mathsf{Area}(h^+_{v_{c+1},e_{b+1}})$. By Observation~\ref{obs:area-to-intersection}, this means $L$ intersects $h^+_{v_{c+1},e_{b+1}}$, i.e.\ (i.1) holds. Assume $A,M^-,v_{b+1},A^-$ lie in this order on $\ell_{b+1}$ (otherwise the order would be $A$, $M^-$, $A^-$, $v_{b+1}$, which is easier).
Similarly, the area bounded by $\ell_b,\ell_{b+1}$ and $L$ is smaller than $\mathsf{Area}(h^-_{v_{b+1},e_{c+1}})$, which by Observation~\ref{obs:area-to-intersection} means that $L$ avoids $h^-_{v_{b+1},e_{c+1}}$, i.e.\ (i.2) holds. \smallskip \noindent \emph{Proof of (iii):} This reduces to showing that (iii.1) $L'$ intersects $H^-_{b',c'}$ and (iii.2) $L'$ avoids $G^+_{b',c'}$. See Figure~\ref{fig:lemma-mono1}~(f); these claims are symmetric to (i.1) and (i.2) respectively, so we omit the proof. \smallskip \textbf{In the following, assume $b=b'$ or $c=c'$.} We discuss four subcases. \begin{itemize} \item[Case~1] $b'=b+1,c'=c$. See Figure~\ref{fig:lemma-mono2}~(a). Note that $H^+_{b,c}=G^+_{b',c'}$ in this case. Let $A^-$ be the reflection of $A$ around $v_{b+1}$ and $B^-$ the reflection of $B$ around $v_{b'+1}$. Let $L$ be the tangent line of $H^+_{b,c}$ that passes through $A^-$, and $L'$ the tangent line of $G^+_{b',c'}$ that passes through $B^-$. We argue that (i) $d(L_{b,c}^{HG})\leq d(L) $ and (iii) $d(L')\leq d(L_{b',c'}^{GH})$ still hold in this case. They follow from the observation that the light-colored triangles in the figure are smaller than their opposite dark-colored triangles, which in turn follows from the facts $|Bv_{b'+1}|=|B^-v_{b'+1}|$ and $|Av_{b+1}|=|A^-v_{b+1}|$. Moreover, the points $v_{b'+1},v_{b+1},B^-,A^-$ clearly lie in this order on $\ell_{b+1}$, and thus $d(L)<d(L')$. Altogether, $d(L^{HG}_{b,c})\leq d(L^{GH}_{b',c'})$. \item[Case~2] $b'=b,c'=c+1$. See Figure~\ref{fig:lemma-mono2}~(b); this case is symmetric to Case~1, so we omit the proof. \begin{figure}[h] \begin{minipage}[b]{.5\textwidth} \centering \includegraphics[width=.8\textwidth]{Lemma-monoA10} \end{minipage} \begin{minipage}[b]{.5\textwidth} \centering \includegraphics[width=.85\textwidth]{Lemma-monoA01} \end{minipage} \caption{Proof of Lemma~\ref{lemma:d-mono} - part II.}\label{fig:lemma-mono2} \end{figure} \item[Case~3] $b'\geq b+2,c'=c$.
\textbf{Hint: This case is more difficult because $G^+_{b',c'}$ is now ``below'' $H^+_{b,c}$, as shown in Figure~\ref{fig:lemma-mono3}~(a); the proof involves several more tricks.} Let $A^-,B^-$ be the reflections of $A,B$ around $M^-$ respectively, let $L$ be the tangent line of $H^+_{b,c}$ that passes through $A^-$, and let $L'$ be the tangent line of $G^+_{b',c'}$ that passes through $B^-$. As in the previous cases, (i) and (iii) hold, so it reduces to showing that $d(L')>d(L)$. \smallskip See Figure~\ref{fig:lemma-mono3}~(b). Draw the line $L_2$ through $B^-$ parallel to $L$; it intersects $\ell_c,\ell_{c+1}$ at $B_1,B_2$ respectively. It reduces to showing that the area bounded by $L_2,\ell_c,\ell_{c+1}$, namely $\mathsf{Area}(\triangle v_{c+1}B_1B_2)$, is smaller than the area bounded by $L',\ell_c,\ell_{c+1}$. The latter equals $\mathsf{Area}(G^+_{b',c'})=\mathsf{Area}(\triangle v_{c+1}BX)$, where $X=\ell_{c+1}\cap \ell_{b'}$. Let $X^-$ be the reflection of $X$ around $M^-$, and let the line through $X^-$ parallel to $L$ intersect $\ell_c,\ell_{c+1}$ at $X_1,X_2$ respectively. Let $D=\ell_{b+1}\cap \ell_{c}$, and let $E$ be the point on $\ell_c$ such that $\overline{XE}$ is parallel to $\overline{AD}$. Clearly, $\mathsf{Area}(\triangle v_{c+1}B_1B_2)<\mathsf{Area}(\triangle v_{c+1}X_1X_2)$ and $\mathsf{Area}(\triangle v_{c+1}BX)>\mathsf{Area}(\triangle v_{c+1}EX)$. So it further reduces to proving that (I) $\mathsf{Area}(\triangle v_{c+1}X_1X_2)<\mathsf{Area}(\triangle v_{c+1}EX)$. \begin{figure}[h] \begin{minipage}[b]{.5\textwidth} \centering \includegraphics[width=.85\textwidth]{Lemma-monoB1} \end{minipage} \begin{minipage}[b]{.5\textwidth} \centering \includegraphics[width=.82\textwidth]{Lemma-monoB2} \end{minipage} \caption{Proof of Lemma~\ref{lemma:d-mono} - part III.}\label{fig:lemma-mono3} \end{figure} Assume $L$ intersects $\ell_c,\ell_{c+1}$ at $A_1,A_2$.
We know (a) $\mathsf{Area}(\triangle v_{c+1}A_1A_2)=\mathsf{Area}(\triangle v_{c+1}DA)$, since $L$ is tangent to $H^+_{b,c}$, and hence (b) $|v_{c+1}A_2|<|v_{c+1}A|$. Since segment $\overline{A^-X^-}$ is a translate of $\overline{XA}$, we get (c) $|AX|=|A_2X_2|$. Combining (b) with (c), $|A_2X_2|/|v_{c+1}A_2|>|AX|/|v_{c+1}A|$, so (d) $|v_{c+1}X_2|/|v_{c+1}A_2| <|v_{c+1}X|/|v_{c+1}A|$. Further, since $\overline{X_1X_2}$ is parallel to $\overline{A_1A_2}$ whereas $\overline{XE}$ is parallel to $\overline{AD}$, facts (a) and (d) together imply (I).\smallskip \item[Case~4] $b'=b,c'\geq c+2$. Symmetric to Case~3. For completeness, we prove it in Appendix~\ref{sect:supply}. \end{itemize} \end{proof} \clearpage
\begin{document} \begin{titlepage} \begin{tabular}{l r} \includegraphics[bb=20bp 00bp 500bp 450bp,clip,scale=0.3]{logo_njit_1.png} \hspace{6cm} & \includegraphics[bb=0bp -200bp 500bp 550bp,clip,scale=0.2]{logo1.png} \end{tabular} \begin{center} \textsc{\LARGE Edge Computing Aware NOMA for 5G Networks}\\[1.5cm] {\Large \textsc{Abbas Kiani}}\\ {\Large \textsc{Nirwan Ansari}}\\ [2cm] {\textsc{TR-ANL-2017-007}\\ \large December 13, 2017} \\[3cm] {\textsc{Advanced Networking Laboratory}}\\ {\textsc{Department of Electrical and Computer Engineering}}\\ {\textsc{New Jersey Institute of Technology}}\\[1.5cm] \vfill \end{center} \end{titlepage} \begin{abstract} With the fast development of the Internet of Things (IoT), the fifth generation (5G) wireless networks need to provide massive connectivity for IoT devices and meet the demand for low latency. To satisfy these requirements, Non-Orthogonal Multiple Access (NOMA) has been recognized as a promising solution for 5G networks to significantly improve the network capacity. In parallel with the development of NOMA techniques, Mobile Edge Computing (MEC) is becoming one of the key emerging technologies to reduce the latency and improve the Quality of Service (QoS) of 5G networks. In order to capture the potential gains of NOMA in the context of MEC, this paper proposes an edge computing aware NOMA technique which exploits the benefits of uplink NOMA in reducing MEC users' uplink energy consumption.
To this end, we formulate a NOMA based optimization framework which minimizes the energy consumption of MEC users via optimizing the user clustering, the computing and communication resource allocation, and the transmit powers. In particular, similar to frequency Resource Blocks (RBs), we divide the computing capacity available at the cloudlet into computing RBs. Accordingly, we explore the joint allocation of the frequency and computing RBs to the users that are assigned to different order indices within the NOMA clusters. We also design an efficient heuristic algorithm for user clustering and RBs allocation, and formulate a convex optimization problem for the power control to be solved independently per NOMA cluster. The performance of the proposed NOMA scheme is evaluated via simulations. \end{abstract} \begin{IEEEkeywords} Mobile edge computing, NOMA, power control \end{IEEEkeywords} \section{Introduction}\label{sec:Introduction} With the fast development of the mobile Internet and the Internet of Things (IoT), mobile data traffic is anticipated to witness explosive growth in the years to come. To support this unprecedented growth, both the academic and industrial communities have conducted extensive research to design the fifth generation (5G) wireless networks. 5G networks are expected to offer significant improvements in wireless network capacity and user experience~\cite{thompson20145g}, and demand spectrally efficient multiple access techniques. To this end, Non-Orthogonal Multiple Access (NOMA) techniques~\cite{saito2013non} have been recognized as promising solutions for 5G and have recently attracted extensive research. In contrast with Orthogonal Multiple Access (OMA) techniques, where the radio resources are allocated orthogonally to multiple users, NOMA allows multiple users to share the same resources. By serving multiple users simultaneously over the same radio resources, more users can be supported, leading to a significant increase in the network capacity.
This improvement comes, nevertheless, at the expense of intra-cell interference as well as additional complexity at the receiver side. To deal with the intra-cell interference and the complexity, NOMA splits the users in the power domain based on their respective channel conditions and employs an efficient Multi-User Detection (MUD) technique, such as Successive Interference Cancellation (SIC), at the receiver side~\cite{kim2013non}. In parallel with the explosive growth of mobile data traffic, our daily life witnesses a significant increase in the demand for running sophisticated applications on mobile devices for social networking, business, etc.~\cite{barbarossa2014communicating}. Moreover, in future user-centric 5G networks, the IoT users participate in sensing and computing tasks, and computation-intensive tasks need to be offloaded to either the cloud or the computing resources at the edge. To this end, Mobile Edge Computing (MEC), which is being standardized by an Industry Specification Group (ISG) launched by the European Telecommunications Standards Institute (ETSI)~\cite{hu2015mobile}, is recognized as one of the key emerging technologies for 5G networks. The idea of MEC is to provide computing capabilities in the proximity of users and within the Radio Access Network (RAN), thereby reducing the latency and improving the Quality of Service (QoS)~\cite{hu2015mobile}. In this paper, we focus on the two aforementioned emerging technologies of 5G, i.e., NOMA and MEC, and propose a novel MEC aware NOMA technique for 5G networks. Our proposed scheme is motivated by the fact that the joint allocation of communication and computing resources greatly improves the performance of the system. In other words, one type of resource may otherwise be wasted owing to congestion in the other type of resource.
While several works such as~\cite{di2013joint,yu2016joint} have investigated the joint allocation of computing and communication resources, none of the existing works considers a joint optimization technique in the context of NOMA with consideration of intra-cell interference. To this end, the current study aims to address this issue by proposing a joint optimization technique to allocate the computing and communication resources based on the requirements of both MEC and NOMA. \textbf{Contributions:} We have made three major contributions. 1) We propose a novel NOMA augmented edge computing model that captures the gains of uplink NOMA in MEC users' energy consumption. Specifically, we design a NOMA based optimization framework that minimizes the energy consumption of MEC users via optimizing the user clustering, the computing and communication resource allocation, and the transmit powers. To this end, similar to frequency Resource Blocks (RBs), we define the notion of computing RBs and investigate the joint allocation of the frequency and computing RBs. More importantly, we consider a time constraint for each edge computing task, and accordingly the minimum data rate requirement of each user is established based on its deadline. 2) We design an efficient heuristic algorithm for user clustering and RBs allocation. Moreover, we formulate a convex optimization problem for the transmission power control to be solved independently per NOMA cluster. 3) We evaluate the performance of our proposed NOMA scheme and the heuristic algorithm via extensive simulations, in which we show the benefits of NOMA in reducing the MEC users' uplink energy consumption. We also evaluate the effects of the computing capacity, and of the strategy used to divide it into computing RBs, on the total energy consumption. \textbf{Related works:} The works related to this paper span MEC, NOMA, and subcarrier scheduling.
In the past few years, a large and cohesive body of work investigated the major challenges of MEC, and researchers came up with a variety of policies and algorithms. Recently, Chiang \textit{et al.}~\cite{chiang2016fog} summarized the opportunities and challenges of edge computing in the networking context of IoT, and Gonzalez \textit{et al.}~\cite{gonzalez2016fog} explored the applications of edge computing in IoT. Yu \textit{et al.}~\cite{yu2016joint} proposed a joint subcarrier and CPU time allocation algorithm for MEC. A hierarchical MEC model designed based on the principle of the LTE-Advanced backhaul network is introduced in~\cite{kiani2017towards}, in which the so-called field, shallow, and deep cloudlets are located hierarchically in three different tiers of the network. A task scheduling scheme for code partitioning over time and the hierarchical cloudlets is also proposed in~\cite{kiani2017optimal}. Moreover, a novel approach to mobile edge computing for the IoT architecture is presented in~\cite{sun2}. A hybrid architecture that harnesses the synergies between edge caching and C-RAN is proposed in~\cite{tandon2016harnessing}. Lee \textit{et al.}~\cite{lee2009proportional} explored the fundamental problem of LTE Single-carrier FDMA uplink scheduling by adopting the conventional time domain proportional fair algorithm. As discussed earlier, this paper proposes a NOMA based model for MEC. Recently, several research studies have identified the potential benefits of NOMA in both the downlink and uplink. For instance, Al-Imari~\textit{et al.}~\cite{al2014uplink} proposed an uplink NOMA scheme that allows more than one user to share the same subcarrier, while joint processing is implemented at the receiver to detect the users' signals.
Zhang~\textit{et al.}~\cite{zhang2016uplink} designed an uplink power control scheme where the eNB distinguishes the multiplexing users in the power domain, and theoretically analyzed the outage performance and the achievable sum-rate of the proposed scheme. Liang~\textit{et al.}~\cite{liang2017non} proposed the so-called non-orthogonal random access (NORA) scheme based on SIC to tackle the access congestion problem. In NORA, the difference in time of arrival is used to identify multiple users. Shipon~\textit{et al.}~\cite{ali2016dynamic} also designed a sum-throughput maximization problem under transmission power constraints for both uplink and downlink NOMA. Moreover, Tabassum~\textit{et al.}~\cite{tabassum2017modeling} characterized the rate coverage probability of a user in a NOMA cluster with a given rank as well as the mean rate coverage probability of all users in the cluster for perfect SIC, imperfect SIC, and imperfect worst case SIC. While there are numerous research activities that investigate the NOMA technique and its benefits in 5G networks, there is no prior work that studies the advantages of NOMA in the context of edge computing. To this end, the current study aims at proposing a novel edge computing aware NOMA model to reduce the uplink energy consumption of MEC users by utilizing the gains of uplink NOMA. Moreover, we take into consideration the deadline requirements of MEC users in the user clustering. The rest of the paper is organized as follows. Section~\ref{sec:System Model} describes the system model and problem formulation. We propose our optimization framework and the corresponding heuristic algorithms in Section~\ref{sec:optimization}. Finally, Sections~\ref{sim:results} and~\ref{conclude} present numerical results and conclude the paper, respectively.
\section{System Model and Problem Formulation}\label{sec:System Model} We consider a single-cell scenario, where one eNB equipped with a cloudlet serves the uniformly distributed edge computing users. Denote $\mathcal{U}=\{1,...,U\}$ as the set of users, each with a task to be offloaded to the cloudlet via the eNB. In the following, the terms users and tasks are used interchangeably. Each task $u$ is characterized by the workload $\lambda_{u}$, i.e., the number of CPU cycles required to complete the execution of the task, and the input $L_u$, i.e., the number of bits that must be transferred from the user to the eNB. \subsection{Communication Resources} We assume that the available bandwidth is divided into a set of frequency resource blocks $\mathcal{R}_f=\{1,...,M_f\}$ and that the bandwidth of each resource block is $B$. According to the NOMA schemes, it is assumed that the users transmit over the resource blocks in a non-orthogonal manner, i.e., more than one user can share the same resource blocks. Therefore, the users are assumed to be divided into different groups called NOMA clusters, where a set of frequency RBs is allocated to each NOMA cluster. Denote $\mathcal{I}=\{1,...,N\}$ as the set of NOMA clusters and $\beta^{r,i}$ as the binary variable indicating the allocation of RB $r\in\mathcal{R}_f$ to NOMA cluster $i\in\mathcal{I}$. Here, $\beta^{r,i}=1$ if RB $r$ is allocated to NOMA cluster $i$, and $\beta^{r,i}=0$ otherwise. Given the principles of NOMA, in which at least two users must share the same frequency resource blocks, we set $N\leq\lfloor\frac{U}{2}\rfloor$. We assume that an efficient MUD technique such as SIC~\cite{kim2013non} is applied at the eNB to decode the message signals, which requires the users in each NOMA cluster to be ordered. We define $\mathcal{J}=\{1,2,...,u_{max}\}$ as the set of the order indices in a cluster.
Here, $u_{max}$ is defined as the maximum number of users allowed to share an RB, thereby limiting the complexity at the receiver side. It is also assumed that $Nu_{max}\geq U$. According to the principles of MUD techniques~\cite{al2014uplink}, the message of the $j$-th user in the cluster is decoded before those of all the users with higher indices. Consequently, the $j$-th user of a cluster experiences interference from all the users $\{j+1,j+2,...\}$ in that cluster. In other words, the first user to be decoded ($j = 1$) will see interference from all the other users $j = 2,...,u_{max}$, the second user to be decoded will see interference from the users $j = 3,...,u_{max}$, and so on. Denote $p_u^r$ as the transmission power of user $u$ over RB $r$ and $\alpha_u^{i,j}$ as the binary variable indicating the assignment of user $u$ to the $j$-th order index in cluster $i$. Here, $\alpha_u^{i,j}=1$ if the assignment occurs, and $\alpha_u^{i,j}=0$ otherwise. The achievable data rate of user $u$ is given by \begin{equation}\label{equ1} R_u=\sum_{i\in\mathcal{I}}\sum_{j\in\mathcal{J}}\alpha_u^{i,j}\sum_{r\in\mathcal{R}_f}\beta^{r,i}B\log(1+\frac{h_u^rp_u^r}{\sigma^2+\sum_{k\in\mathcal{U}\setminus u}\sum_{l=j+1}^{u_{max}}\alpha_k^{i,l}h_k^rp_k^r}) \end{equation} where $h_u^r$ denotes the channel gain between user $u$ and the eNB on RB $r$, and $\sigma^2$ is the noise power. Note that, through $h_u^r$, we allow the channel conditions to vary across RBs as well as users. Therefore, the uplink transmission time of task $u$ is \begin{eqnarray}\label{equ2} T_u=\frac{L_u}{R_u} \end{eqnarray} and the energy consumption of user $u$ is given by \begin{eqnarray}\label{equ3} E_u=T_u\sum_{r\in\mathcal{R}_f}p_u^r \end{eqnarray} \subsection{Computing Resources} Analogous to the communication resources, we assume the computing capacity of the cloudlet is divided into different computing RBs. For example, one computing RB can be one virtual machine or one core.
Denote $\mathcal{R}_c=\{1,...,M_c\}$ as the set of all computing RBs. We also assume that the capacity of one computing RB is equal to $C$ CPU cycles per second. A number of computing RBs is allocated to each order index of each NOMA cluster. Therefore, the computing time of task $u$ is \begin{eqnarray}\label{equ4} Q_u=\sum_{i\in\mathcal{I}}\sum_{j\in\mathcal{J}}\frac{\alpha_u^{i,j}\lambda_u}{x^{i,j}C} \end{eqnarray} where $x^{i,j}$ denotes the number of computing RBs allocated to order index $j$ of NOMA cluster $i$. \section{Optimization Problem}\label{sec:optimization} In this section, we formulate an optimization problem to minimize the sum of the energy consumption of all the users under constraints on the total transmission and computing time. In particular, we enforce a deadline $D_u$ as an upper limit on the total time of task $u$ as follows \begin{eqnarray}\label{equ6} T_u+Q_u\leq D_u \end{eqnarray} Constraint~(\ref{equ6}) is equivalent to the following data rate requirement \begin{eqnarray}\label{equ7} R_u\geq\frac{L_u}{D_u-Q_u} \end{eqnarray} where $Q_u< D_u$.
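The chain from the rate expression (1) to the rate floor (7) can be traced numerically. The sketch below (hypothetical helper names; we take the logarithm in (1) to base 2 so rates are in bits per second, whereas the base is left implicit in the paper) computes the SIC rates of one cluster on a single shared frequency RB and the computing-aware minimum rate of a task:

```python
import math

def noma_uplink_rates(h, p, B=1.0, noise=1.0):
    """SIC rates on one shared frequency RB, following Eq. (1): the j-th
    user in decoding order sees interference only from users j+1, j+2, ..."""
    rates = []
    for j in range(len(h)):
        interference = sum(h[k] * p[k] for k in range(j + 1, len(h)))
        rates.append(B * math.log2(1.0 + h[j] * p[j] / (noise + interference)))
    return rates

def min_rate_requirement(L_u, lam_u, x_u, C, D_u):
    """Constraint C1 via Eqs. (4)-(7): computing takes Q_u = lam_u/(x_u*C)
    seconds, leaving D_u - Q_u seconds to upload the L_u input bits."""
    Q_u = lam_u / (x_u * C)
    if Q_u >= D_u:
        raise ValueError("deadline infeasible: allocate more computing RBs")
    return L_u / (D_u - Q_u)
```

Note how allocating more computing RBs (larger `x_u`) shrinks `Q_u` and hence lowers the rate floor, which is exactly the coupling between computing and communication resources that the joint optimization exploits.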
Therefore, we propose to solve the following optimization problem, \begin{eqnarray}\label{equ8} \text{P1}:\underset{\alpha_u^{i,j},~\beta^{r,i},~p_u^r,~x^{i,j}}{\text{minimize}} \sum_{u\in\mathcal{U}}E_u\nonumber \end{eqnarray} \vspace{-.17in} \begin{eqnarray} s.t.~\text{C1}:R_u\geq\frac{L_u}{D_u-Q_u}~\forall u\in\mathcal{U}\nonumber \end{eqnarray} \vspace{-.17in} \begin{eqnarray} \text{C2}:Q_u\leq D_u~\forall u\in\mathcal{U}\nonumber \end{eqnarray} \vspace{-.17in} \begin{eqnarray} \text{C3}: \sum_{i\in\mathcal{I}}\sum_{j\in\mathcal{J}}\alpha_u^{i,j}=1~\forall u\in\mathcal{U}\nonumber \end{eqnarray} \vspace{-.17in} \begin{eqnarray} \text{C4}: \sum_{u\in\mathcal{U}}\sum_{j\in\mathcal{J}}\alpha_u^{i,j}\geq2~\forall i\in\mathcal{I}\nonumber \end{eqnarray} \vspace{-.17in} \begin{eqnarray} \text{C5}: \sum_{i\in\mathcal{I}}\beta^{r,i}\leq1~\forall r\in\mathcal{R}_f\nonumber \end{eqnarray} \vspace{-.17in} \begin{eqnarray} \text{C6}: \sum_{i\in\mathcal{I}}\sum_{j\in\mathcal{J}}x^{i,j}\leq M_c\nonumber \end{eqnarray} \vspace{-.17in} \begin{eqnarray} \text{C7}: \alpha_u^{i,j}\leq\alpha_u^{i,j-1}~\forall u\in\mathcal{U},~i\in\mathcal{I},~2\leq j\leq u_{max}\nonumber \end{eqnarray} \vspace{-.17in} \begin{eqnarray} \text{C8}:\sum_{r\in\mathcal{R}_f} p_u^r\leq P_u^{max}~\forall u\in\mathcal{U}\nonumber \end{eqnarray} \vspace{-.17in} \begin{eqnarray} \text{C9}: p_u^r\geq0~\forall u\in\mathcal{U},~r\in\mathcal{R}_f\nonumber \end{eqnarray} \vspace{-.17in} \begin{eqnarray} \text{C10}: \alpha_u^{i,j}\in\{0,1\}~\forall u\in\mathcal{U},~i\in\mathcal{I},~j\in\mathcal{J}\nonumber \end{eqnarray} \vspace{-.17in} \begin{eqnarray} \text{C11}: \beta^{r,i}\in\{0,1\}~\forall r\in\mathcal{R}_f,~i\in\mathcal{I}\nonumber \end{eqnarray} \vspace{-.17in} \begin{eqnarray} \text{C12}: x^{i,j}\in\mathds{N}~\forall i\in\mathcal{I},~j\in\mathcal{J}\nonumber \end{eqnarray} Inequality constraint C1 is the computing-aware minimum data rate requirement per user.
$Q_u$ is upper bounded by $D_u$ in constraint C2. The equality constraint C3 ensures that each user is assigned to only one NOMA cluster and only one order index within that NOMA cluster. Constraint C4 ensures that at least two users are assigned to each cluster. In addition, we use the inequality constraint C5 to ensure that each frequency RB is allocated to at most one cluster and C6 to bound the total allocated computing RBs by the available computing RBs. Constraint C7 is designed to give assignment priority to a lower order index in one NOMA cluster over all the higher order indices in the same cluster. Moreover, by constraint C8, the total transmission power of user $u$ is limited to the power budget $P^{max}_{u}$. Finally, constraint C9 restricts the variables $p_u^r$ to nonnegative values, constraints C10 and C11 restrict the variables $\alpha_u^{i,j}$ and $\beta^{r,i}$ to binary choices, and constraint C12 restricts the variables $x^{i,j}$ to integer values. Note that P1 defines a Mixed Integer Non-Linear Programming (MINLP) problem which involves binary, integer and real variables. Finding an optimal solution to this problem is intractable, with a complexity that grows quickly with the number of variables. Furthermore, the objective function in P1 is not in general a convex function. Therefore, in order to reduce the complexity and obtain high quality solutions in reasonable time, we follow a two-phase approach. First, we propose an efficient heuristic algorithm for user clustering and RBs allocation; this heuristic algorithm decides the binary and integer variables. Second, with the binary and integer variables fixed, we formulate a convex optimization problem for the transmission power control to be solved independently per NOMA cluster.
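Any candidate produced by the first phase must satisfy the combinatorial constraints of P1 before the power-control step. A small checker is sketched below, under one reading of C7 (the occupied order indices of a cluster are consecutive, starting from 1); the dictionary encoding of the assignment is ours, for illustration only:

```python
def clustering_feasible(alpha, x, M_c):
    """Check constraints C3, C4, C6 and C7 of P1.

    alpha[u] = (i, j): user u sits at order index j of cluster i (C3 is
    enforced by this encoding: exactly one slot per user).  x[(i, j)] is
    the number of computing RBs given to that slot, and M_c the budget."""
    clusters = {}
    for u, (i, j) in alpha.items():
        clusters.setdefault(i, set()).add(j)
    if any(len(orders) < 2 for orders in clusters.values()):
        return False                          # C4: at least 2 users per cluster
    if sum(x.values()) > M_c:
        return False                          # C6: computing RB budget
    # C7: lower order indices are filled first, so orders form {1, ..., k}
    return all(orders == set(range(1, len(orders) + 1))
               for orders in clusters.values())
```

C5 (each frequency RB in at most one cluster) is omitted here because a map from RBs to clusters satisfies it by construction, just as the map `alpha` satisfies C3.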
\subsection{User Clustering and RBs allocation}\label{sec:heuristic} The pseudo code for the user clustering and RBs allocation is summarized in Algorithm~\ref{alg:1}. As shown in this algorithm, we carry out the user clustering, computing RBs allocation, and frequency RBs allocation in three separate phases. Denote by $\bar{h}_u=\frac{\sum_{r\in\mathcal{R}_f}h_u^r}{M_f}$ the average channel condition of user $u$. In the clustering phase (lines 2-9), we follow a clustering method based on the average channel conditions: the users with higher average channel gains are assigned to the lower order indices of the clusters. By doing so, a user with a higher channel gain does not interfere with the users with lower channel gains, since its interference is canceled out by the SIC receiver. Thus, the users with higher channel gains can transmit with the maximum transmission power, thereby improving the sum-rate of the cluster. Let $u^{i,j}$ be the user assigned to the $j$th index of cluster $i$ and $\mathcal{U}^i$ the set of all users assigned to cluster $i$. Note that $u^{i,j}$ and $\mathcal{U}^i$ are known after the clustering phase. In the next phase, i.e., the computing RBs allocation phase (lines 11-31), we first allocate the computing RBs to satisfy the condition $Q_u<D_u$ for all users. Here, we assume the available number of computing RBs is sufficient to satisfy this condition. Then, for each of the remaining computing RBs, we search the set of all clusters to find a favorite cluster $\hat{i}$. Denote by $Q_{u^{i,j}}^{x^{i,j}+1}$ the computing time of user $u^{i,j}$ if we increase $x^{i,j}$ to $x^{i,j}+1$. The favorite user is identified by comparing the terms $\frac{L_{u^{i,j}}}{D_{u^{i,j}}-Q_{u^{i,j}}^{x^{i,j}}}-\frac{L_{u^{i,j}}}{D_{u^{i,j}}-Q_{u^{i,j}}^{x^{i,j}+1}}$. By doing so, we take into account not only the input sizes of the users but also their deadlines in the computing resource allocation. 
The procedure for the frequency RBs allocation phase is presented in lines 33-54. As shown in lines 33-43, we initially allocate the channels to satisfy the minimum data rate requirement of all users. Denote by $\mathcal{R}^i$ the set of channels allocated to cluster $i$ and by $\mathcal{I}'$ the set of clusters containing users whose minimum data rate requirement is not yet satisfied. Let $\mathcal{R}_{f_0}$ be the set of already allocated channels. The favorite cluster for each channel, i.e., $\hat{i}$, is the cluster that achieves the best sum-rate over that channel among the clusters that still have users with unsatisfied minimum data rates. The sets $\mathcal{R}_{f_0}$ and $\mathcal{R}^i$ and the powers $p_u^r$ are updated accordingly after each channel allocation. Note that we assume the maximum transmit power of each user is equally divided among all the channels allocated to its cluster. Then, for each of the remaining frequency RBs, we search over all clusters and identify a favorite cluster by comparing $\sum_{u\in\mathcal{U}^i}(\frac{L_u}{R_u^{\mathcal{R}^i}}-\frac{L_u}{R_u^{\mathcal{R}^i\cup r}})P_u^{max}$ (lines 44-54). Here, $R_u^{\mathcal{R}^i}$ and $R_u^{\mathcal{R}^i\cup r}$ denote the data rate of user $u$ based on the current allocation and on the current allocation augmented with RB $r$ allocated to cluster $i$, respectively. Note that these data rates are calculated based on the current values of $p_u^r$. In fact, the favorite cluster for each channel is the one that achieves the maximum decrease in the objective function of~P1. After identifying the favorite cluster $\hat{i}$, we update the set $\mathcal{R}^{\hat{i}}$ as well as the powers $p_u^r$ for all users in $\mathcal{U}^{\hat{i}}$. 
\begin{algorithm} \caption{User clustering and RBs allocation} \label{alg:1} \begin{algorithmic}[1] \STATE \textbf{User Clustering} \STATE sort the users in $\mathcal{U}$ such that $\bar{h}_1\geq \bar{h}_2\geq...\geq \bar{h}_U$ \FORALL {$j\in\mathcal{J}$} \IF {$jN\leq U$} \STATE assign users $\{(j-1)N+1,..., jN\}$ as the $j$-th users of clusters $\{1,...,N\}$, respectively \ELSE \STATE assign users $\{(j-1)N+1,..., U\}$ as the $j$-th users of clusters $\{1,...,U-(j-1)N\}$, respectively \ENDIF \ENDFOR \STATE \textbf{Computing RBs Allocation} \FORALL {$i\in\mathcal{I}$} \FORALL {$j\in\mathcal{J}$} \REPEAT \STATE {$x^{i,j}=x^{i,j}+1$} \STATE {$M_c=M_c-1$} \UNTIL condition $Q_{u^{i,j}}<D_{u^{i,j}}$ is satisfied \ENDFOR \ENDFOR \REPEAT \STATE $\hat{q}=0$, $\hat{i}\gets\emptyset$ and $\hat{j}\gets\emptyset$ \FORALL {$i\in\mathcal{I}$} \FORALL {$j\in\mathcal{J}$} \IF {$\frac{L_{u^{i,j}}}{D_{u^{i,j}}-Q_{u^{i,j}}^{x^{i,j}}}-\frac{L_{u^{i,j}}}{D_{u^{i,j}}-Q_{u^{i,j}}^{x^{i,j}+1}}\geq\hat{q}$} \STATE $\hat{i}\gets i$ and $\hat{j}\gets j$ \STATE $\hat{q}\gets \frac{L_{u^{i,j}}}{D_{u^{i,j}}-Q_{u^{i,j}}^{x^{i,j}}}-\frac{L_{u^{i,j}}}{D_{u^{i,j}}-Q_{u^{i,j}}^{x^{i,j}+1}}$ \ENDIF \ENDFOR \ENDFOR \STATE $x^{\hat{i},\hat{j}}=x^{\hat{i},\hat{j}}+1$ \STATE $M_c=M_c-1$ \UNTIL $M_c=0$ \STATE \textbf{Frequency RBs Allocation} \STATE $p_u^r=P_u^{max}~\forall u\in\mathcal{U},r\in\mathcal{R}_f$ \STATE {$\mathcal{R}_{f_0}\gets\emptyset$, $\mathcal{I}'\gets\mathcal{I}$ and $\mathcal{R}^i\gets\emptyset$} \FORALL {$r\in\mathcal{R}_f$} \STATE $\hat{i}=\underset {i\in\mathcal{I}'}{\arg\max}~(\sum_{u\in\mathcal{U}^i}R_u)$ \STATE $\beta^{r,\hat{i}}=1$, $\mathcal{R}_{f_0}\gets\mathcal{R}_{f_0}\cup r$ and $\mathcal{R}^{\hat{i}}=\mathcal{R}^{\hat{i}}\cup r$ \STATE $p_u^r=\frac{P_u^{max}}{|\mathcal{R}^{\hat{i}}|}~\forall u\in\mathcal{U}^{\hat{i}},r\in\mathcal{R}^{\hat{i}}$ \IF {$R_u\geq\frac{L_u}{D_u-Q_u}~\forall u\in\mathcal{U}^{\hat{i}}$} \STATE {$\mathcal{I}'\gets\mathcal{I}'\setminus\hat{i}$} \ENDIF \ENDFOR \STATE 
$\mathcal{R}_f=\mathcal{R}_f\setminus\mathcal{R}_{f_0}$ \FORALL {$r\in\mathcal{R}_f$} \STATE $\hat{e}=0$ and $\hat{i}\gets\emptyset$ \FORALL {$i\in\mathcal{I}$} \IF {$\sum_{u\in\mathcal{U}^i}(\frac{L_u}{R_u^{\mathcal{R}^i}}-\frac{L_u}{R_u^{\mathcal{R}^i\cup r}})P_u^{max}\geq\hat{e}$} \STATE $\hat{i}\gets i$ \STATE $\hat{e}\gets\sum_{u\in\mathcal{U}^i}(\frac{L_u}{R_u^{\mathcal{R}^i}}-\frac{L_u}{R_u^{\mathcal{R}^i\cup r}})P_u^{max}$ \ENDIF \ENDFOR \STATE $\beta^{r,\hat{i}}=1$ and $\mathcal{R}^{\hat{i}}\gets\mathcal{R}^{\hat{i}}\cup r$ \STATE $p_u^r=\frac{P_u^{max}}{|\mathcal{R}^{\hat{i}}|}~\forall u\in\mathcal{U}^{\hat{i}},r\in\mathcal{R}^{\hat{i}}$ \ENDFOR \end{algorithmic} \end{algorithm} In problem P1, we do not take the contiguous RB constraint into consideration. Under the RB contiguity constraint, all the RBs allocated to a single user must be contiguous in frequency~\cite{lee2009proportional}. Therefore, one can extend problem P1 by adding the following contiguity constraint, \begin{eqnarray}\label{equ33} \sum_{r=k_1}^{k_2}\beta^{r,i}=k_2-k_1+1~\forall i\in\mathcal{I},\beta^{k_1,i}=\beta^{k_2,i}=1 \end{eqnarray} While the user clustering and computing RBs allocation phases in Algorithm~\ref{alg:1} remain valid for the extended problem under constraint~(\ref{equ33}), the frequency RBs allocation must be changed to comply with this constraint. Notice that the frequency RBs allocation under the contiguity constraint is known to be NP-hard~\cite{lee2009proportional}. To address this hardness, we can adopt the RB grouping algorithm proposed in~\cite{lee2009proportional}. In a nutshell, the $M_f$ available frequency RBs are divided into $n$ groups, and the frequency RBs allocation phase in Algorithm~\ref{alg:1} is carried out with the granularity of RB groups. As a result, by extending the unit of allocation from a single RB to an RB group, the allocation complies with the contiguity constraint while retaining a wide view of the spectrum. 
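The clustering phase of Algorithm~\ref{alg:1} can be sketched compactly: users are sorted by decreasing average channel gain and filled into the clusters order-index by order-index, so stronger users receive lower SIC orders. In the sketch below, the function and variable names are illustrative, not from the paper's code.

```python
# Clustering phase sketch: sort users by average channel gain h_bar_u and
# assign positions 0..N-1 of the ranking as the first users of clusters
# 1..N, the next N as the second users, and so on.

def cluster_users(avg_gains, num_clusters):
    """Return (cluster, order) dicts keyed by user index (0-based clusters)."""
    ranked = sorted(range(len(avg_gains)), key=lambda u: -avg_gains[u])
    cluster, order = {}, {}
    for pos, u in enumerate(ranked):
        cluster[u] = pos % num_clusters       # which cluster the user joins
        order[u] = pos // num_clusters + 1    # order index j inside it
    return cluster, order

gains = [0.9, 0.1, 0.7, 0.3, 0.5, 0.2]        # average gains of six users
cluster, order = cluster_users(gains, num_clusters=3)
```

With three clusters, the three strongest users become the order-1 users of clusters 1-3, matching lines 2-9 of the algorithm.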
\subsection{Power Control} Given the user clustering and frequency RBs allocation, the binary variables $\alpha_u^{i,j}$ and $\beta^{r,i}$ in problem P1 are fixed to 0 or 1. Given the computing RBs allocation, the integer constraints vanish, and the power control per NOMA cluster follows. In fact, we can now eliminate all the terms that do not depend on the transmit powers. However, the objective function of problem P1 is non-convex in the transmit powers. To this end, for the power control, we propose to minimize the total power consumption of the users instead of their energy consumption, since the communication time is taken care of in constraint C1. In other words, the power consumption minimization problem provides an accurate proxy for the energy consumption minimization, as the communication time is upper bounded through constraint C1. Consider a NOMA cluster of interest $i$. Denote by $\mathcal{U}^i=\{1,...,u_{max}\}$ the set of users assigned to order indices $\{1,...,u_{max}\}$ of cluster $i$, respectively. Recall that $\mathcal{R}^i$ is the set of frequency RBs allocated to cluster $i$. Therefore, the power control optimization problem for cluster $i$ can be written as, \begin{eqnarray}\label{equ9} \text{P2}:\underset{p_u^r}{\text{minimize}}\sum_{u\in\mathcal{U}^i} \sum_{r\in\mathcal{R}^i}p_u^r \nonumber \end{eqnarray} \vspace{-.15in} \begin{eqnarray} s.t.~\text{C1}:R_u\geq\frac{L_u}{D_u-Q_u}~\forall u\in\mathcal{U}^i\nonumber \end{eqnarray} \vspace{-.15in} \begin{eqnarray} \text{C2}:\sum_{r\in\mathcal{R}^i} p_u^r\leq P_u^{max}~\forall u\in\mathcal{U}^i\nonumber \end{eqnarray} \vspace{-.17in} \begin{eqnarray} \text{C3}: p_u^r\geq0~\forall u\in\mathcal{U}^i,~r\in\mathcal{R}^i\nonumber \end{eqnarray} We can prove the following theorems: \begin{theorem} Optimization problem P2 is equivalent to the optimization problem P3. 
\end{theorem} \begin{eqnarray}\label{equ10} \text{P3}:\underset{S_u^r,Z_u^r,R_u}{\text{minimize}} \sum_{u\in\mathcal{U}^i}\sum_{r\in\mathcal{R}^i}e^{S_u^r} \nonumber \end{eqnarray} \vspace{-.15in} \begin{eqnarray} s.t.~\text{C1}:R_u\geq\frac{L_u}{D_u-Q_u}~\forall u\in\mathcal{U}^i\nonumber \end{eqnarray} \vspace{-.15in} \begin{eqnarray} \text{C2}: \sum_{r\in\mathcal{R}^i}e^{S_u^r}\leq P_u^{max}~\forall u\in\mathcal{U}^i\nonumber \end{eqnarray} \vspace{-.15in} \begin{eqnarray} \text{C3}:Z_u^r\leq B\log(1+\frac{h_u^re^{S_u^r}}{\sigma^2+\sum_{k=u+1}^{u_{max}}h_k^re^{S_k^r}})~\forall u\in\mathcal{U}^i\nonumber \end{eqnarray} \vspace{-.15in} \begin{eqnarray} \text{C4}:R_u=\sum_{r\in\mathcal{R}^i}Z_u^r~\forall u\in\mathcal{U}^i\nonumber \end{eqnarray} \vspace{-.15in} \begin{IEEEproof} Problem P2 is not convex in its current form. However, consider the following change of variables \begin{eqnarray}\label{equ11} p_u^r=e^{S_u^r}~\forall u\in\mathcal{U}^i \end{eqnarray} Since~(\ref{equ11}) is clearly a one-to-one mapping, it is always possible to determine the variable $S_u^r$ from $p_u^r$ and vice versa. We also add the new variables $Z_u^r$ and $R_u$ to the problem formulation, where $Z_u^r$ is given by \begin{eqnarray}\label{equ12} Z_u^r=B\log(1+\frac{h_u^re^{S_u^r}}{\sigma^2+\sum_{k=u+1}^{u_{max}}h_k^re^{S_k^r}}) \end{eqnarray} and \begin{eqnarray}\label{equ13} R_u=\sum_{r\in\mathcal{R}^i}Z_u^r \end{eqnarray} Consequently, by substituting the new variables $S_u^r$, $Z_u^r$, and $R_u$ in P2, and after simple algebraic manipulation, we obtain the formulation of problem P3. Note that in the formulation of P3, we have changed the equality relation $Z_u^r=B\log(1+\frac{h_u^re^{S_u^r}}{\sigma^2+\sum_{k=u+1}^{u_{max}}h_k^re^{S_k^r}})$ to the inequality constraint C3. 
This change is needed for the convexity of problem P3, and it does not affect the optimal solution, since at optimality the data rate of user $u$ on RB $r$ cannot be less than $B\log(1+\frac{h_u^re^{S_u^r}}{\sigma^2+\sum_{k=u+1}^{u_{max}}h_k^re^{S_k^r}})$. \end{IEEEproof} \begin{theorem} Optimization problem P3 is convex in the high SINR regime. \end{theorem} \begin{IEEEproof} The objective function of P3 is a sum of exponentials and is thus convex. While it is straightforward to prove the convexity of constraints C1, C2, and C4, the inequality constraint C3 is not convex, since the throughput function is a non-convex function of the powers. A commonly used way to deal with the non-convexity of the throughput function is the approximation $\log(1+x)\approx\log(x)$, which is valid in the high SINR regime~\cite{qiu1999performance,julian2002qos,ram2009distributed}. As a result of this approximation, constraint C3 becomes \begin{eqnarray} Z_u^r+\log(\sigma^2h_u^{r^{-1}}e^{-S_u^r}+\sum_{k=u+1}^{u_{max}}h_u^{r^{-1}}h_k^re^{S_k^r-S_u^r})\leq0~\forall u\in\mathcal{U}^i\nonumber \end{eqnarray} where $\log(\sigma^2h_u^{r^{-1}}e^{-S_u^r}+\sum_{k=u+1}^{u_{max}}h_u^{r^{-1}}h_k^re^{S_k^r-S_u^r})$ is the log of a sum of exponentials of affine functions of the variables, and thus convex. Therefore, problem P3 is convex in the high SINR regime, and the proof is complete. \end{IEEEproof} Hence, problem P3 can be solved by efficient convex optimization techniques such as interior point methods. Note that the complexity of this problem may increase as the number of users per NOMA cluster increases; however, each NOMA cluster is assumed to be limited to a small number of users. Moreover, we assume that the channel coefficients $h_u^r$ are known only to the BS, and this problem is solved centrally at the BS for each NOMA cluster. 
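The convexity argument underlying Theorem 2 hinges on the log-sum-exp function being convex. This can be spot-checked numerically; the sketch below (illustrative, stdlib only) verifies midpoint convexity of log-sum-exp at random points, which is the property that makes the rewritten constraint C3 convex after the substitution $p_u^r=e^{S_u^r}$ and the high-SINR approximation.

```python
# Midpoint-convexity spot-check of log-sum-exp, the term appearing in the
# high-SINR form of constraint C3 (illustrative sketch, not the paper's code).
import math
import random

def log_sum_exp(xs):
    """Numerically stable log(sum(exp(x) for x in xs))."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

random.seed(0)
convex = True
for _ in range(1000):
    a = [random.uniform(-5.0, 5.0) for _ in range(4)]
    b = [random.uniform(-5.0, 5.0) for _ in range(4)]
    mid = [(x + y) / 2.0 for x, y in zip(a, b)]
    # convexity requires f((a+b)/2) <= (f(a) + f(b)) / 2
    if log_sum_exp(mid) > (log_sum_exp(a) + log_sum_exp(b)) / 2.0 + 1e-12:
        convex = False
```

Since log-sum-exp is convex and composition with affine maps preserves convexity, the approximated C3 is a convex constraint in the variables $S_u^r$ and $Z_u^r$.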
\section{Simulation Results}\label{sim:results} \begin{figure} \center \epsfig{file=totalenergy.png,width=.6\linewidth,clip=} \caption{Energy consumption comparison between heuristic and optimal approaches.} \label{fig:1} \end{figure} \begin{figure} \center \epsfig{file=umax2.png,width=.6\linewidth,clip=} \caption{Energy consumption of 10 users versus number of frequency RBs for $u_{max}$=1, 2, 3, and 4.} \label{fig:2} \end{figure} \begin{figure} \center \epsfig{file=SEversusL.png,width=.6\linewidth,clip=} \caption{Spectral efficiency versus average input.} \label{fig:3} \end{figure} \begin{figure} \center \epsfig{file=computingvU.png,width=.6\linewidth,clip=} \caption{Computing time versus number of users for different numbers of computing RBs and computing capacities.} \label{fig:4} \end{figure} \begin{figure} \center \epsfig{file=FIvU.png,width=.6\linewidth,clip=} \caption{Computing time fairness index versus number of users for different numbers of computing RBs and computing capacities.} \label{fig:5} \end{figure} In this section, we evaluate the performance of the proposed edge computing aware NOMA scheme. We consider a single cell with 1 km radius in which the users are uniformly distributed. We set the maximum transmit power of each user to 1 W, the number of available frequency RBs to 30, and the bandwidth of each resource block to 180 kHz. The ITU Pedestrian B fast fading model, the COST231 Hata propagation model for the micro-cell environment~\cite{al2014uplink,3GPP}, and Lognormal shadowing with 8 dB standard deviation are implemented. The noise power spectral density is set to $-173$ dBm/Hz. Unless stated otherwise, we assume a total computing capacity of 300 Giga cycles per second is available as 30 computing RBs, each with a capacity of 10 Giga cycles per second. The workload of each user ($\lambda_u$) is randomly generated according to a uniform distribution between 0.5 Giga cycles and 1 Giga cycle. 
The input of each user ($L_u$) and the deadline ($D_u$) are also randomly generated according to uniform distributions between 5000 bits and 7000 bits, and between 400 ms and 500 ms, respectively. Fig.~\ref{fig:1} compares the performance of the heuristic (Algorithm~\ref{alg:1}) and optimal (problem P1) approaches by providing the total energy consumption for different numbers of users, where $u_{max}=3$. As we can see in this figure, the heuristic algorithm incurs a total energy consumption quite close to that of the optimal approach. Meanwhile, the heuristic algorithm provides its suboptimal solution within a few seconds, while the computation time of the optimal approach grows quickly with the number of users. Fig.~\ref{fig:2} illustrates the effect of $u_{max}$, i.e., the maximum number of users allowed to share a frequency RB, on the total energy consumption of 10 users for different numbers of available frequency RBs. As shown in this figure, first, the energy consumption decreases as the number of available frequency RBs increases; second, the energy consumption improves when $u_{max}$ increases from 1 to 2 and 3, due to the fact that the spectral efficiency improves with $u_{max}$. The energy consumption gap between $u_{max}=1$ and $u_{max}=2$ or 3 is more significant than that between $u_{max}=2$ and $u_{max}=3$. This result is attributed to the fact that the intra-cell interference becomes more considerable as $u_{max}$ increases. Fig.~\ref{fig:3} evaluates the performance of the proposed scheme with $u_{max}=3$ in terms of spectral efficiency. As the results show, the spectral efficiency increases with both the number of users and the average input size of each user. Fig.~\ref{fig:4} shows the impact of the total computing capacity, as well as of its division into computing RBs, on the total energy consumption. 
In particular, we consider two different cases for the total computing capacity, each with two different division scenarios. In the first case, we consider a total computing capacity of 120 Giga cycles per second, which is divided into 30 computing RBs, each with a capacity of 3 Giga cycles per second (first scenario), or into 120 computing RBs, each with a capacity of 1 Giga cycle per second (second scenario). Similarly, for the second case, a total computing capacity of 90 Giga cycles per second is divided into 30 RBs in the first scenario and 90 RBs in the second scenario. As Fig.~\ref{fig:4} shows, the energy consumption can be improved, especially for higher numbers of users, if the computing capacity is divided into smaller RBs, i.e., the scenarios with 120 and 90 RBs. This observation is due to the fact that the users have different amounts of workload, and thus the computing capacity can be allocated more fairly among the users when it is divided into smaller blocks. Moreover, this improvement is more considerable for the case with the capacity of 120 Giga cycles per second, since the difference between its two division scenarios is more pronounced. To understand the reason for the observation in Fig.~\ref{fig:4}, we analyze the performance of the computing RBs allocation scheme in terms of fairness. To this end, we adopt Jain's fairness index~\cite{jain1984quantitative} for the computing time, \begin{eqnarray}\label{equ14} \text{Fairness index}=\frac{(\sum_{u=1}^{U}Q_u)^2}{U\sum_{u=1}^{U}(Q_u)^2} \end{eqnarray} Fig.~\ref{fig:5} shows Jain's fairness index, which is bounded between 0 and 1, for the aforementioned scenarios. As we can see in this figure, the scenarios with computing RB granularity of 1 Giga cycle per second are fairer than those with granularity of 3 Giga cycles per second. 
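The fairness metric of Eq.~(\ref{equ14}) is straightforward to compute; a minimal sketch (illustrative function name):

```python
# Jain's fairness index for the computing times Q_u: equals 1 when all
# users have the same computing time and decreases toward 1/U as the
# allocation becomes more unequal.

def jain_fairness(q):
    U = len(q)
    return sum(q) ** 2 / (U * sum(x * x for x in q))

balanced = jain_fairness([2.0, 2.0, 2.0, 2.0])   # perfectly fair allocation
skewed = jain_fairness([4.0, 0.1, 0.1, 0.1])     # one user dominates
```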
\section{Conclusion}\label{conclude} In this study, we have proposed an edge computing aware NOMA technique, which can leverage the gains of uplink NOMA in reducing MEC users' energy consumption. Specifically, we have formulated a NOMA-based optimization framework that minimizes the energy consumption of MEC users by optimizing the user clustering, the computing and communication resource allocation, and the transmit powers. In particular, we have investigated the joint allocation of the frequency and computing RBs to the users that are assigned to different order indices within the NOMA clusters. We have also designed an efficient heuristic algorithm for user clustering and RBs allocation, and formulated a convex optimization problem for the power control to be solved independently per NOMA cluster. Moreover, we have evaluated and demonstrated the effectiveness of the proposed NOMA scheme in lowering the energy consumption via simulations. \bibliographystyle{IEEEtran}
\section*{$T_5$ and $K_{2,4}$ are SPN} We begin by recalling some known results about the matrices \[S(\btheta)=\left(\begin {array}{ccccc} 1&-\cos({\theta_1})&\cos({\theta_1+\theta_2})&\cos({\theta_4+\theta_5})&-\cos({\theta_5})\\ \noalign{\medskip}-\cos({\theta_1})&1&-\cos({\theta_2})&\cos({\theta_2+\theta_3})&\cos({\theta_5+\theta_1})\\ \noalign{\medskip}\cos({\theta_1+\theta_2})&-\cos({\theta_2})&1&-\cos(\theta_3)&\cos({\theta_3+\theta_4})\\ \noalign{\medskip}\cos({\theta_4+\theta_5})&\cos({\theta_2+\theta_3})&-\cos({\theta_3})&1&-\cos({\theta_4})\\ \noalign{\medskip}-\cos({\theta_5})&\cos({\theta_5+\theta_1})&\cos({\theta_3+\theta_4})&-\cos({\theta_4})&1\end {array} \right),\] where $\btheta\in \R^5_+$ and $\1^T\btheta\le \pi$. It is known that \begin{enumerate} \item[{\rm(a)}] $S(\btheta)$ is copositive. \item[{\rm(b)}] $S(\0)$ is the Horn matrix. In particular, it is an exceptional extremal matrix in $\cop_5$. \item[{\rm(c)}] If $\1^T\btheta= \pi$, then $S(\btheta)$ is positive semidefinite of rank at most $2$. \item[{\rm(d)}] If $\btheta>0$ and $\1^T\btheta< \pi$, then $S(\btheta)$ is an exceptional extremal matrix, called a Hildebrand matrix. \item[{\rm(e)}] If $\btheta\ne \0$ has at least one zero entry and $\1^T\btheta< \pi$, then $S(\btheta)$ is an exceptional $\nn$-irreducible matrix. \end{enumerate} For (a)--(b), (e), see page 1611 in \cite{DickinsonDuerGijbenHildebrand2013a}. For (d) see \cite{Hildebrand2012}. 
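Property (c) can be checked numerically: when the entries of $\btheta$ sum to $\pi$, $S(\btheta)$ admits a rank-two factorization $vv^T+ww^T$ (made explicit in the proof of (c) below), hence is positive semidefinite. A stdlib-only sketch, with an illustrative choice of $\btheta$:

```python
# Verify that S(theta) = v v^T + w w^T when sum(theta) = pi, which shows
# S(theta) is positive semidefinite of rank at most 2 (property (c)).
import math

def S(t):
    t1, t2, t3, t4, t5 = t
    c = math.cos
    return [
        [1.0,        -c(t1),      c(t1 + t2),  c(t4 + t5), -c(t5)],
        [-c(t1),      1.0,       -c(t2),       c(t2 + t3),  c(t5 + t1)],
        [c(t1 + t2), -c(t2),      1.0,        -c(t3),       c(t3 + t4)],
        [c(t4 + t5),  c(t2 + t3), -c(t3),      1.0,        -c(t4)],
        [-c(t5),      c(t5 + t1),  c(t3 + t4), -c(t4),      1.0],
    ]

theta = [0.3, 0.5, 0.7, 0.9, math.pi - 2.4]   # nonnegative, sums to pi
v = [1.0, -math.cos(theta[0]), math.cos(theta[0] + theta[1]),
     math.cos(theta[3] + theta[4]), -math.cos(theta[4])]
w = [0.0, math.sin(theta[0]), -math.sin(theta[0] + theta[1]),
     math.sin(theta[3] + theta[4]), -math.sin(theta[4])]
M = S(theta)
max_err = max(abs(M[i][j] - (v[i] * v[j] + w[i] * w[j]))
              for i in range(5) for j in range(5))
```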
For (c), note that in the case that $\1^T\btheta=\pi$, \begin{multline*}S(\btheta)=\\ \small \left(\begin {array}{ccccc} 1&-\cos({\theta_1})&\cos({\theta_1+\theta_2})&\cos({\theta_4+\theta_5})&-\cos({\theta_5})\\ \noalign{\medskip}-\cos({\theta_1})&1&-\cos({\theta_2})&-\cos({\theta_4+\theta_5+\theta_1})&\cos({\theta_5+\theta_1})\\ \noalign{\medskip}\cos({\theta_1+\theta_2})&-\cos({\theta_2})&1& \cos({\theta_4+\theta_5+\theta_1+\theta_2})&-\cos({\theta_5+\theta_1+\theta_2})\\ \noalign{\medskip}\cos({\theta_4+\theta_5})&-\cos({\theta_4+\theta_5+\theta_1})&\cos({\theta_4+\theta_5+\theta_1+\theta_2})&1&-\cos({\theta_4})\\ \noalign{\medskip}-\cos({\theta_5})&\cos({\theta_5+\theta_1})&-\cos({\theta_5+\theta_1+\theta_2})&-\cos({\theta_4})&1\end {array} \right)=\\ \small \left(\begin{array}{c} 1 \\ -\cos(\theta_1) \\ \cos(\theta_1+\theta_2) \\ \cos(\theta_4+\theta_5) \\ -\cos(\theta_5) \end{array} \right)\left(\begin{array}{c} 1 \\ -\cos(\theta_1) \\ \cos(\theta_1+\theta_2) \\ \cos(\theta_4+\theta_5) \\ -\cos(\theta_5) \end{array} \right)^T +\left(\begin{array}{c} 0 \\ \sin(\theta_1) \\ -\sin(\theta_1+\theta_2) \\ \sin(\theta_4+\theta_5) \\ -\sin(\theta_5) \end{array} \right)\left(\begin{array}{c} 0 \\ \sin(\theta_1) \\ -\sin(\theta_1+\theta_2) \\ \sin(\theta_4+\theta_5) \\ -\sin(\theta_5) \end{array} \right)^T.\qquad\qquad\qquad\qquad\end{multline*} (See page 1674 in \cite{DickinsonDuerGijbenHildebrand2013b}.) Finally, we mention that according to \cite[Corollary 5.8]{DickinsonDuerGijbenHildebrand2013a}, if $A\in \cop_5$ with diagonal $\1$ is not SPN, then there exist $\btheta\in \R^5_+$, $\1^T\btheta< \pi$ and a permutation matrix $P$ such that $P^TAP\ge S(\btheta)$. We can now prove the special case $n=5$ of \cite[Theorem 6.4]{Shaked2016}. \begin{theorem}\label{thm:T5} $T_5$ is SPN. \end{theorem} \begin{proof} Let $A\in \cop_5$ have $G(A)=T_5$. We may assume that $\diag (A) =\1$. Let $i$ be a vertex of degree $2$ in $G(A)$. 
If the two off-diagonal entries in row $i$ are both positive, then $A$ is a $4\times 4$ copositive matrix bordered by a nonnegative row and column, and is therefore SPN. If the two off-diagonal entries in row $i$ are both negative, then $A/A[i]$ is a $4\times 4$ copositive matrix, and therefore $A/A[i]$ is SPN, and so is $A$. Thus it remains to consider the case that $A$ has the following pattern, up to permutation of rows and columns: \begin{equation}A=\left(\begin{array}{ccccc} 1 & 0 & 0 & - & + \\ 0 & 1 & 0 & - & + \\ 0 & 0 &1 & + & -\\ - & - & + & 1 & - \\ +& + & - &- & 1 \end{array}\right)\label{eq:G(A)=T5}\end{equation} Suppose on the contrary that $A$ is not SPN. By \cite[Corollary 5.8]{DickinsonDuerGijbenHildebrand2013a} (and since $\diag(A)=\1$), $P^TAP\ge S(\btheta)$ for some $\btheta\in \R^5_+$ such that \begin{equation}\sum_{i=1}^5\theta_i<\pi, \label{eq:1^Ttheta<pi} \end{equation} and some permutation matrix $P$. Let $S=S(\btheta)$. Since $P^TAP\ge S$, $S$ has at most three positive entries above the diagonal. The matrix $S$ cannot have a row $i$ with no positive entry, because otherwise it would be SPN (since the $4\times 4$ copositive matrix $S/S[i]$ would be SPN). Thus $S$ has exactly three positive entries above the diagonal, and by the pattern of $A$, these have to be in the same positions as the positive entries of $P^TAP$. By considering $P^TAP$ instead of $A$, we may assume that $A\ge S$. Let $B$ be defined by setting $b_{ij}=s_{ij}$ whenever $\{i,j\}$ is an edge in $G(A)$, and $b_{ij}=a_{ij}=0$ otherwise. Then $A\ge B\ge S$, so $B$ is also copositive and not SPN. We assume therefore that $A=B$. That is, $A\ge S$ is not SPN, and $a_{ij}=s_{ij}$ for every edge $ij$ of $G(A)$, and show that in this case there exists $\btheta'\in \R^5_{+}$ such that $\1^T\btheta'=\pi$ and $A\ge S(\btheta')$. Since such $S(\btheta')$ is positive semidefinite, this contradicts the assumption that $A$ is not SPN. 
We first note that \begin{equation}\theta_i\le \pi/2 \text{ for every } 1\le i\le 5 .\label{eq:each<=pi/2}\end{equation} For if, say, $\theta_5>\pi/2$, then $\sum_{i=1}^4\theta_i<\pi/2$, and we would have $\cos(\theta_i+\theta_{i\plmod 1})>0$ for $i=1,2, 3$, in addition to $-\cos(\theta_5)>0$, which would mean that $S$ has at least four positive entries above the diagonal, contradicting the assumption. So $s_{i,i\plmod 1}\le 0$ for every $i=1, \dots, 5$, and the three positive entries are all of the form $s_{i,i\plmod 2}=\cos(\theta_i+\theta_{i\plmod 1})$. There exist $j\ne k$ such that $s_{j,j\plmod 2}\le 0$ and $s_{k,k\plmod 2}\le 0$, that is, $\theta_i+\theta_{i\plmod 1}\ge \pi/2$, for $i\in \{j, k\}$. Since $\sum_{i\in \{j, j\plmod 1\}\cup \{k, k\plmod 1\}}\theta_i<\pi$, we must have $\{j, j\plmod 1\}\cap \{k, k\plmod 1\}\ne \emptyset$. By further permuting rows and columns we may assume that \begin{equation} \theta_4+\theta_{5}\ge\pi/2 \mbox{ and } \theta_5+\theta_{1}\ge\pi/2. \label{eq:4+5,5+1}\end{equation} As the sum of all entries of $\btheta$ is less than $\pi$, (\ref{eq:4+5,5+1}) implies that $\sum_{i=1}^3\theta_i<\pi/2$ and $\sum_{i=2}^4 \theta_i<\pi/2$. So the entries $s_{i,i\plmod 2}=\cos(\theta_i+\theta_{i\plmod 1})$, $i=1,2, 3$, in positions $(1,3)$, $(2,4)$ and $(3,5)$, are the three positive entries of $S$. The entries $s_{i,i\plmod 1}=-\cos(\theta_i)$, $i=1, \dots ,4$, are negative. Moreover, $s_{15}=-\cos(\theta_5)$, $s_{14}=\cos(\theta_4+\theta_{5})$ and $s_{25}=\cos(\theta_5+\theta_{1})$ are all nonpositive, and are the only entries above the diagonal of $S$ that may be zero. There is a $3\times 3$ identity principal submatrix in $A$. None of the entries $(1,3)$, $(2,4)$, or $(3,5)$ is in this principal submatrix, so either $A[1,2,5]=I$ or $A[1,4,5]=I$. In the first case $a_{12}=0>s_{12}$ and in the second $a_{45}=0>s_{45}$. Let $\ell$ be either $1$ or $4$, satisfying $a_{\ell, \ell\plmod 1}=0>s_{\ell, \ell\plmod 1}$. 
By (\ref{eq:4+5,5+1}) \begin{equation}\sum_{i\ne \ell}\theta_i \ge \pi/2. \label{eq:sumnell}\end{equation} Let $\btheta'\in \R^5_+$ have \[\theta'_i=\left\{\begin{array}{ll}\theta_i, \quad & i\ne \ell \\ \pi-\sum_{i\ne \ell} \theta_i\, ,\quad& i=\ell \end{array} \right. . \] Then \begin{equation}\theta_\ell<\theta'_\ell\le\pi/2.\label{eq:theta'ell}\end{equation} (The left inequality follows from (\ref{eq:1^Ttheta<pi}), the right one from (\ref{eq:sumnell}).) Of course, $\theta_i\le \theta'_i$ for every $i$, and $\1^T\btheta'=\pi$. Thus $S(\btheta')$ is positive semidefinite, and $A\ge S(\btheta')$. To assert this latter inequality observe that $a_{\ell,\ell\plmod 1}=0\ge (S(\btheta'))_{\ell, \ell\plmod 1}$ by (\ref{eq:theta'ell}), and for $i\ne \ell $ we have $a_{i,i\plmod 1} \ge (S(\btheta))_{i, i\plmod 1} = (S(\btheta'))_{i, i\plmod 1}$. As $\pi \ge \theta'_i+\theta'_{i\plmod 1}\ge \theta_i+\theta_{i\plmod 1}\ge 0$ for every $i$ and $\cos(t)$ decreases on $[0,\pi]$, $a_{i,i\plmod 2} \ge (S(\btheta))_{i, i\plmod 2} \ge (S(\btheta'))_{i, i\plmod 2}$. \end{proof} Instead of \cite[Theorem 9.2]{Shaked2016}, which states that every $K_{2,n}$ is SPN, we prove: \begin{theorem} If $T_{n+1}$ is SPN, then the complete bipartite graph $K_{2,n}$ is SPN. \end{theorem} The proof is essentially the same as the proof of \cite[Theorem 9.2]{Shaked2016}. \begin{proof} By induction on $n$. For $n\le 2$ this holds since every graph on at most $4$ vertices is SPN. If $n>2$ and $T_{n+1}$ is SPN, then its subgraph $T_{n}$ is SPN, and by the induction hypothesis $K_{2,n-1}$ is SPN. This implies that each proper subgraph of $K_{2,n}$ is SPN. Thus we only need to consider the case that $A\in \cop$ has a connected $G_-(A)$. In this case, there exists a vertex $i$ of degree $2$ in $\mathcal{G}(A)$, which is incident with two negative edges. By \cite[Lemma 3.4]{Shaked2016} $A/A[i]$ is copositive, and since $G(A/A[i])=T_{n+1}$, it is SPN. Thus $A$ is SPN. 
\end{proof} Since we only know at this point that $T_n$, $n\le 5$, is SPN, we can only deduce that $K_{2,n}$, $n\le 4$, is SPN. 
\section{Introduction} \label{sec:intro} Unwanted quasiparticle excitations can degrade the performance of superconducting devices. For example, they can be a limiting factor for superconducting detectors used in astronomy~\cite{Day}, and a source of errors in superconductor-based charge pumps used for metrology~\cite{PekolaRMP}. In superconducting qubits, quasiparticles tunneling across Josephson junctions interact with the qubit degree of freedom, leading to qubit relaxation. The quasiparticle-induced decay rate has been predicted \cite{GC_PRL106} and shown experimentally \cite{Paik} to scale linearly with the density of quasiparticles. In both resonators \cite{Visser} and qubits \cite{Riste} there is evidence for excess, non-equilibrium quasiparticles. While to our knowledge there is no agreed-upon explanation for the origin of these excess quasiparticles at millikelvin temperatures, a number of different approaches have been explored to suppress them, with the aim of improving device performance. These approaches include engineering the spatial profile of the superconducting gap to steer quasiparticles away from Josephson junctions \cite{Aumentado,Sun}, and cooling down a device in a magnetic field in order to generate vortices that trap quasiparticles in their cores~\cite{Nsanzineza,Wang,Taupin}. Here we consider a different way to capture quasiparticles, namely normal-metal traps, a proposal which has been implemented in single-electron turnstiles~\cite{Knowles}, hybrid multilayer coolers~\cite{Pekola2000,Rajauria}, superconducting resonators~\cite{Patel}, and qubits~\cite{Riwar}. Normal-metal quasiparticle traps consist of a normal metal ($N$) in tunnel contact with a superconductor ($S$): quasiparticles that tunnel from the superconductor into the normal metal can either tunnel back or relax to an energy below the superconducting gap, in which case they cannot return to the superconductor. 
A model for the effective quasiparticle trapping rate has been presented theoretically and tested experimentally in Ref.~\cite{Riwar}. In that model, the superconductor is assumed to be described by BCS theory. However, when $N$ and $S$ materials are brought into contact, Cooper pairs can ``leak'' into the normal part; this induces superconducting correlations inside the normal metal and suppresses the superconducting order parameter. These types of phenomena are known as the proximity effect and its inverse, and they have been studied since the 1960s~\cite{Gennes1,Gennes2,McMilan}. Works on the proximity effect have investigated, for example, quasi-one-dimensional $N$-$S$ systems~\cite{Gueron,belzig,Moussy}, $SNS$ junctions~\cite{kup,Zhou,Ivanov,Hammer,Sueur}, $NSN$ configurations~\cite{Kauppila}, and proximity between two different superconductors~\cite{Cherkez}. For our purposes, we note here that in an $NS$ bilayer a minigap in the single-particle density of states develops in both layers and a finite subgap density of states is induced in the superconducting layer~\cite{Fominov}. In this article we focus on the subgap states, since it has been shown that their presence can increase the relaxation rate $1/T_1$ of a superconducting qubit~\cite{Leppakangas}. Our main result is that, due to the competition between such a proximity effect-induced increase in the relaxation rate and the decrease of $1/T_1$ due to the trap's suppression of the quasiparticle density, there is an optimal position for the trap. If the trap is closer to a junction than this optimal position, the relaxation rate increases exponentially over a distance given by the coherence length; for a trap further away than the optimum, the decay rate slowly increases over the much longer ``trapping length'', which is determined by quasiparticle diffusion and the trap's effective trapping rate. 
The paper is organized as follows: in Sec.~\ref{S2} we review qubit relaxation due to quasiparticles and summarize the model of normal-metal quasiparticle trapping. In Sec.~\ref{S3} we use the quasiclassical theory of superconductivity to study first a uniform normal-superconductor bilayer. Then (Sec.~\ref{S4}) we consider a non-uniform system which models quasiparticle traps; we present both numerical self-consistent solutions for the spatial variation of the superconducting order parameter and single-particle density of states as well as approximate analytical expressions for the latter. In Sec.~\ref{S5} we study the qubit decay rate taking into account the proximity effect. We summarize our work in Sec.~\ref{S6}, while Appendices~\ref{app:usadal-non-unif} through \ref{App:Decay_rate} contain a number of derivations and mathematical details. \section{Qubit relaxation due to quasiparticles} \label{S2} Superconducting qubits can store quantum information in a collective degree of freedom, the phase difference $\varphi$ across a Josephson junction. Such a phase difference induces a dissipationless supercurrent carried by Cooper pairs tunneling through the junction. However, the presence of quasiparticles (unpaired electrons) opens an unwanted decay channel: a quasiparticle that tunnels can exchange energy with the qubit and cause its decay. The effects of the qubit-quasiparticle interaction have been studied in detail theoretically in Ref.~\cite{GC_PRB11} -- we summarize here the relevant findings of that work.
Focusing henceforth for simplicity on the case of a single-junction transmon, the quasiparticle contribution to the qubit decay rate, $\Gamma_{10}$, can be written in the standard form of the product of a matrix element and a spectral density, \begin{equation}\label{eq:g10_1} \Gamma_{10} = \left|\langle 1 |\sin\frac{\varphi}{2}|0\rangle \right|^2 S(\omega_{10})\, , \end{equation} where $|0\rangle$ ($|1\rangle$) denotes the ground (excited) state of the qubit, and $\omega_{10}$ is the qubit frequency. The excitation rate $\Gamma_{01}$ is obtained by replacing $\omega_{10} \to - \omega_{10}$. The matrix element can be expressed in terms of the transmon parameters as \begin{equation}\label{eq:trmel} \left|\langle 1 |\sin\frac{\varphi}{2}|0\rangle \right|^2 = \frac{E_C}{\omega_{10}} \simeq \sqrt{\frac{E_C}{8E_J}} \end{equation} with $E_C$ the charging energy and $E_J$ the Josephson energy; this expression is valid in the transmon regime $E_J \gg E_C$. The spectral density takes a simple form under certain conditions that are often satisfied, namely: a hard gap $\Delta_0$ in the superconductor large enough that the qubit cannot break Cooper pairs, $2\Delta_0 > \omega_{10}$; quasiparticles that are ``cold'', meaning that their typical energy $\delta E$ (or effective temperature) above the gap is small compared to the qubit frequency, $\delta E \ll \omega_{10}$. Then we have \begin{equation}\label{eq:S_w_x_qp} S(\omega_{10}) \simeq \frac{8E_J}{\pi} x_\mathrm{qp} \sqrt{\frac{2\Delta_0}{\omega_{10}}} \end{equation} and $S(-\omega_{10}) \ll S(\omega_{10})$. In this expression, the prefactor proportional to $E_J$ accounts for the tunneling probability via the Ambegaokar-Baratoff relation $E_J= \Delta_0 g_T/8g_K$, where $g_T$ is the conductance of the junction and $g_K=e^2/2\pi$ is the conductance quantum.
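As a concrete illustration of \esref{eq:g10_1}-\rref{eq:S_w_x_qp}, the following Python sketch evaluates the decay rate for hypothetical parameter values (chosen only for illustration, with all energies in units of $\Delta_0$ and $\hbar=k_B=1$):

```python
import math

# Hypothetical transmon parameters (illustrative only, not from experiment);
# all energies in units of Delta_0.
Delta_0 = 1.0
E_C = 0.0002      # charging energy
E_J = 0.1         # Josephson energy (transmon regime E_J >> E_C)
x_qp = 1e-6       # normalized quasiparticle density

# Qubit frequency and matrix element: omega_10 ~ sqrt(8 E_J E_C),
# so that E_C / omega_10 = sqrt(E_C / (8 E_J)).
omega_10 = math.sqrt(8.0 * E_J * E_C)
matrix_element_sq = E_C / omega_10

# Spectral density for cold quasiparticles and a hard gap
S = (8.0 * E_J / math.pi) * x_qp * math.sqrt(2.0 * Delta_0 / omega_10)

# Quasiparticle-induced decay rate (in units of Delta_0)
Gamma_10 = matrix_element_sq * S
```

For fixed transmon parameters the rate is linear in $x_\mathrm{qp}$, consistent with the experimentally observed scaling quoted in the introduction.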
The central factor $x_\mathrm{qp}$ is the density of quasiparticles normalized by the Cooper pair density, \begin{equation}\label{eq:x_qp} x_\mathrm{qp} = \frac{2}{\Delta_0} \int_{0}^{\infty} d\epsilon \, n(\epsilon) f(\epsilon)\,, \end{equation} with $f(\epsilon)$ the quasiparticle distribution function and $n(\epsilon)$ the normalized density of states (DoS), which for a BCS superconductor is \begin{equation}\label{eq:nBCS} n(\epsilon) = n_\mathrm{BCS} (\epsilon) \equiv \frac{\epsilon}{\sqrt{\epsilon^2-\Delta_0^2}}\, . \end{equation} The square root factor in \eref{eq:S_w_x_qp} accounts for the final density of states of quasiparticles, after tunneling and absorbing the qubit energy. Equations \rref{eq:g10_1} and \rref{eq:S_w_x_qp} imply that suppressing the quasiparticle density near the junction can prolong the $T_1$ time, given by \begin{equation}\label{eq:T1_def} \frac{1}{T_1} = \Gamma_{10} + \Gamma_{01} \, . \end{equation} A way to suppress $x_\mathrm{qp}$ is by introducing normal-metal islands, in tunnel contact with the superconducting electrodes, which can trap quasiparticles. A model that accounts for the interplay between superconductor-normal island tunneling, energy relaxation in the normal metal, and diffusion in the superconductor was developed theoretically and tested experimentally in \ocite{Riwar}; in the model, the dynamics of the quasiparticle density is governed by a generalized diffusion equation \begin{equation} \label{eq:diffusion} \frac{\partial}{\partial t} x_\text{qp}=D_\text{qp}\nabla^2x_\text{qp}-a(\vec{r})\Gamma_\text{eff}x_\text{qp} + g \, , \end{equation} where $D_\mathrm{qp}$ is the quasiparticle diffusion constant, and the area function $a(\vec{r})$ equals 1 for coordinate $\vec{r}$ in the superconductor-normal metal contact region and 0 elsewhere. The quasiparticle generation rate $g$ phenomenologically accounts for all processes creating quasiparticles.
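To make the interplay of diffusion, trapping, and generation in \eref{eq:diffusion} concrete, the following Python sketch solves a one-dimensional, steady-state version of the equation by finite differences with no-flux boundaries; the geometry and all parameter values are hypothetical, chosen only for illustration:

```python
import numpy as np

# Finite-difference sketch of the 1D steady state of the diffusion equation
# D x'' - a(pos) * Gamma_eff * x + g = 0; all parameters are illustrative.
L, N = 100.0, 400                        # wire length and number of grid points
dx = L / (N - 1)
pos = np.linspace(0.0, L, N)
D_qp, Gamma_eff, g = 18.0, 2.4e-4, 1e-4  # hypothetical diffusion/trap/generation
a = (pos > 60.0).astype(float)           # trap covers the region pos > 60

# Build the linear system A @ x_qp = -g with Neumann (no-flux) boundaries
A = np.zeros((N, N))
for i in range(N):
    if i in (0, N - 1):
        j = 1 if i == 0 else N - 2       # ghost-point reflection at the ends
        A[i, i], A[i, j] = -2.0 * D_qp / dx**2, 2.0 * D_qp / dx**2
    else:
        A[i, i - 1] = A[i, i + 1] = D_qp / dx**2
        A[i, i] = -2.0 * D_qp / dx**2
    A[i, i] -= a[i] * Gamma_eff          # trapping only where a = 1
x_qp = np.linalg.solve(A, -g * np.ones(N))
```

The resulting profile is suppressed towards $g/\Gamma_\text{eff}$ inside the trap region and recovers over the trapping length $\sqrt{D_\text{qp}/\Gamma_\text{eff}}$, the second length scale mentioned in the introduction.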
The effective trapping rate $\Gamma_\text{eff}$ is determined by the balance between tunneling from superconductor to normal metal, the inverse escape process, and the energy relaxation of excitations in the normal metal. Experimentally, relaxation is the bottleneck limiting the effective trapping rate, see \ocite{Riwar}. More recent work has explored how to optimize trap performance by appropriately choosing the trap placement in a qubit~\cite{Hosseinkhani}. All the works summarized so far rely on the hard-gap assumption. As mentioned in the introduction, and as we will explain in more detail in Sec.~\ref{S3}, due to the proximity effect the BCS peak in the density of states broadens and a finite subgap DoS is induced in the superconductor. Therefore, in this paper we want to relax the hard-gap assumption. As a first step, we consider how to generalize the expression for the qubit decay rate, \eref{eq:g10_1}; the appropriate generalization is presented in \ocite{Leppakangas}, and for the case considered here of a single-junction transmon it amounts to a redefinition of the spectral density appearing in \eref{eq:g10_1}: \begin{equation}\label{eq:Sred} S(\omega) = S_\mathrm{t} (\omega) + S_\mathrm{p} (\omega)\, , \end{equation} where we distinguish two contributions, $S_\mathrm{t}$ due to single quasiparticle tunneling and $S_\mathrm{p}$ originating from Cooper pair processes. In terms of the distribution function $f$ they are, for positive frequency $\omega>0$, \begin{eqnarray}\label{eq:S_t} S_\mathrm{t}(\omega)&=&\int_{0}^{\infty}\!d\epsilon \, A(\epsilon,\epsilon+\omega) f(\epsilon)[1-f(\epsilon+\omega)], \\ S_\mathrm{p}(\omega)&=&\int_{0}^{\omega}\!d\epsilon \, \frac{1}{2} A(\epsilon,\omega-\epsilon) [1-f(\epsilon)][1-f(\omega-\epsilon)], \quad \quad \label{eq:S_b} \end{eqnarray} with \begin{equation} \label{eq:A} A(\epsilon,\epsilon')=\frac{16E_J}{\pi\Delta_0}[n(\epsilon)n(\epsilon')+p(\epsilon)p(\epsilon')]\, . 
\end{equation} The density of states $n(\epsilon)$ appearing in this expression does not necessarily take the BCS form. Both $n(\epsilon)$ and the pair amplitude $p(\epsilon)$ can be calculated within a Green's function approach, and in a normal/superconductor bilayer depend on parameters such as film thicknesses and interface resistance, as we explain next in Sec.~\ref{S3}. Here we point out that the combinations of $n$ and $p$ account for both the quasiparticle density of states and the so-called coherence factors, while whether a process involves single quasiparticles or pairs is manifest in the combination of distribution functions: $f(1-f)$ for single quasiparticles, $(1-f)(1-f)$ or $ff$ for pair breaking or recombination processes, respectively. Finally, the spectral density $S$ at negative frequencies is obtained by replacing $f \to 1-f$ in \esref{eq:S_t} and \rref{eq:S_b}. For pair processes, this implies that $S_\text{p}(\omega>0)$ accounts for pair breaking by qubit relaxation, while $S_\text{p}(\omega<0)$ for qubit excitation by quasiparticle recombination. \section{Proximity effect in thin films} \label{S3} The goal of this section is to arrive at expressions for the functions $n$ and $p$ in \eref{eq:A} that take into account the proximity effect between the normal-metal trap and the qubit superconducting electrodes. These expressions will then be used in Sec.~\ref{S5} to estimate the influence of the proximity effect on the qubit lifetime. The calculations are based on the quasiclassical approach to superconductivity, which we briefly discuss in Appendix~\ref{app:usadal-non-unif} and which is presented in more detail in a number of reviews and textbooks~\cite{Rammer,Chandrasekhar,belzig2,Kopnin}. In this formalism, the properties of a superconductor can be encoded in the pairing angle $\theta_S(\epsilon, r)$ which, for disordered superconductors, obeys an equation known as the Usadel equation~\cite{Usadel}.
Once $\theta_S$ is obtained, one can calculate quantities of interest such as the density of states and the pairing amplitude, \begin{eqnarray} \label{eq:DOS} n(\epsilon,r) & = & \mathrm{Re}[\cos \theta_S(\epsilon,r)], \\ p(\epsilon,r) & = & \mathrm{Im}[\sin \theta_S(\epsilon,r)]. \label{eq:p_def} \end{eqnarray} The Usadel equation must be supplemented by the self-consistent equation for the order parameter $\Delta(r)$, which, assuming thermal equilibrium at temperature $T$, reads \begin{equation} \label{eq:op_sc} \Delta(T,r)=\frac{\nu_{0\mathrm{S}}\lambda}{2} \int_{-\omega_D}^{\omega_D}\!d\epsilon \, \tanh\left(\frac{\epsilon}{2T}\right) p(\epsilon,r) \, , \end{equation} where $\nu_{0\mathrm{S}}$ is the density of states per spin at the Fermi energy in the normal state of the superconductor, and $\lambda$ is the effective coupling strength of the attractive electron-electron interaction responsible for superconductivity. This formalism is applicable to non-uniform superconductors -- this enables us to study the behavior of the superconductor near the edge of a trap. However, we first consider proximity in a uniform bilayer. \subsection{Uniform NS bilayers} \label{sec:unifbi} In a uniform bilayer a superconducting film of thickness $d_S$ is fully covered by a normal metal of thickness $d_N$, with both thicknesses smaller than the superconducting coherence length at zero temperature $\xi$. This implies that spatial variations across the film thicknesses can be neglected.
Moreover, because the system is uniform in the plane of the films, the pairing angle is independent of position and the Usadel equations take the form (see \ocite{Fominov} and Appendix~\ref{app:usadal-non-unif}) \begin{eqnarray} \label{eq:uni_usadelS}% i\epsilon\sin\theta_{S}(\epsilon) & + & \Delta(T)\cos\theta_S(\epsilon) \\ &=& \frac{1}{\tau_S}\sin[\theta_{S}(\epsilon)-\theta_{N}(\epsilon)]\, , \nonumber \\ \label{eq:uni_usadelN}% i\epsilon\sin\theta_{N}(\epsilon) & = & \frac{1}{\tau_N}\sin[\theta_{N}(\epsilon)-\theta_{S}(\epsilon)]\, , \end{eqnarray} where we have introduced a pairing angle $\theta_N$ for the normal layer and $\Delta(T)$ must be calculated self-consistently using \eref{eq:op_sc}. The times $\tau_{i}= 2e^2\nu_i d_i R_{\mathrm{int}}A$ ($i=S,\,N$) account for the interface resistance times area product $R_\mathrm{int} A$ and the density of states at the Fermi level $\nu_i$ of the two films. Typically, $\tau_S \approx \tau_N$, and the dimensionless parameter $\tau_S \Delta$ can be used to characterize the strength of the coupling between the two layers. In the limit $\tau_S\Delta \to \infty$, to leading order we can neglect the right hand sides of \esref{eq:uni_usadelS} and \rref{eq:uni_usadelN}, so that the two layers are decoupled; in this limit the solutions to the Usadel equations are \begin{eqnarray}\label{eq:thetaBCS} \theta_S(\epsilon) &=& \theta_{BCS}(\epsilon)\equiv \arctan \frac{i\Delta}{\epsilon} \, , \\ \theta_N(\epsilon) &=& 0 \, . \end{eqnarray} For high interface resistance, such that $\tau_S\Delta \gg 1$ but finite, a weak-coupling regime is possible. On the other hand, for a good contact between $N$ and $S$ [or sufficiently close to the critical temperature, where $\Delta(T) \to 0$] the coupling can be strong, $\tau_S \Delta \ll 1$. In this section we focus on the weak-coupling case $\tau_S\Delta \gg 1$ with $\tau_N \sim \tau_S$; some considerations on the strong-coupling one can be found in Appendix~\ref{App:Str_coup}.
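The weak-coupling formulas discussed below can be checked by solving \esref{eq:uni_usadelS} and \rref{eq:uni_usadelN} numerically at fixed energy. The Python sketch below is only illustrative (it is not the solver used for our figures, $\Delta$ is not determined self-consistently, and energies are in units of $\Delta$): it applies Newton's method to the four real equations obtained by splitting $\theta_S$ and $\theta_N$ into real and imaginary parts.

```python
import numpy as np

# Hand-rolled Newton solver for the uniform-bilayer Usadel equations;
# illustrative parameters, energies in units of Delta.
tau_S, tau_N, Delta = 50.0, 50.0, 1.0

def residual(v, eps):
    # v = [Re theta_S, Im theta_S, Re theta_N, Im theta_N]
    tS, tN = v[0] + 1j * v[1], v[2] + 1j * v[3]
    eqS = 1j * eps * np.sin(tS) + Delta * np.cos(tS) - np.sin(tS - tN) / tau_S
    eqN = 1j * eps * np.sin(tN) - np.sin(tN - tS) / tau_N
    return np.array([eqS.real, eqS.imag, eqN.real, eqN.imag])

def solve_theta(eps, guess):
    v = np.array(guess, float)
    for _ in range(50):
        F = residual(v, eps)
        if np.abs(F).max() < 1e-12:
            break
        J = np.empty((4, 4))
        for j in range(4):               # finite-difference Jacobian
            dv = np.zeros(4)
            dv[j] = 1e-7
            J[:, j] = (residual(v + dv, eps) - F) / 1e-7
        v = v - np.linalg.solve(J, F)
    return v[0] + 1j * v[1]

# Subgap energy; the retarded prescription eps + i0 is mimicked by a tiny shift,
# and the initial guess close to theta_BCS selects the physical branch.
theta_S = solve_theta(0.5 + 1e-6j, [np.pi / 2, 0.55, 0.0, 0.05])
n_sub = np.cos(theta_S).real             # small but finite subgap DoS
```

Starting from a guess close to $\theta_{BCS}$ selects the retarded (physical) branch; the resulting subgap DoS is of order $1/\tau_S\Delta$, consistent with the weak-coupling expressions below.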
In the normal film, the main consequence of the contact with the superconductor is the opening of a so-called minigap in its density of states. The minigap energy $E_g$ is always small compared to the gap in the bulk superconductor, and in the weak-coupling regime is given by \begin{equation} E_g \simeq \frac{1}{\tau_N}\, , \end{equation} as already shown in the seminal work of McMillan~\cite{McMilan} and more recently rederived in \ocite{Fominov} within the quasiclassical formalism. In the superconductor, above the minigap a small but finite sub-gap density of states is induced, of the form~\cite{McMilan,Fominov} \begin{equation} \label{eq:nu_old} n(\epsilon)\simeq n_>(\epsilon) \equiv \frac{1}{\tau_S\Delta} \mathrm{Re}\left[\frac{\epsilon}{\sqrt{\epsilon^2-1/\tau_N^2}}\right]. \end{equation} This expression is valid below the gap, $\epsilon \ll \Delta$, but it fails close to the minigap, as noted in \ocite{Fominov}. Indeed, as detailed in Appendix~\ref{app:unifweak} we find the validity condition \begin{equation} \label{eq:Uni_validityCond} \epsilon - \frac{1}{\tau_N} \gg \frac{1}{\tau_N} \mathrm{max}\left\{ \frac{1}{\left(\tau_S \Delta\right)^{2/3}}, \frac{1}{\tau_N \Delta} \right\}. \end{equation} Moreover, the position of the minigap is more accurately given by \begin{equation}\label{eq:minigap_WeaCoup} \epsilon_g \simeq \frac{1}{\tau_N} \left[1 - \frac32\frac{1}{\left(\tau_S \Delta\right)^{2/3}}-\frac{1}{\tau_N\Delta}\right], \end{equation} and just above it the density of states has a square root threshold behavior, \begin{equation}\label{eq:WeaCoup_DOS} n(\epsilon) \simeq n_t(\epsilon)\equiv \frac{1}{\left(\tau_S\Delta\right)^{1/3}}\sqrt{\frac23 \tau_N \left(\epsilon - \epsilon_g \right)}, \end{equation} an expression valid for \begin{equation} \tau_N (\epsilon-\epsilon_g) \ll (\tau_S\Delta)^{-2/3}.
\end{equation} In the inset of Fig.~\ref{fig:unif_weak_DOS_vs_E} we compare \esref{eq:nu_old} and \rref{eq:WeaCoup_DOS} to the density of states obtained by numerically solving the Usadel equations \rref{eq:uni_usadelS} and \rref{eq:uni_usadelN}. \begin{figure}[t] \begin{center} \includegraphics[width=0.48\textwidth]{DOS_vs_E_weak} \end{center} \caption{(Color online) Density of states in a superconducting layer weakly coupled to a normal one, $\tau_S\Delta_0 = 50$. Solid lines (blue) are calculated by numerically solving the Usadel equations, \esref{eq:uni_usadelS} and \rref{eq:uni_usadelN}, and substituting the result into \eref{eq:DOS}. Dot-dashed line: BCS DoS, \eref{eq:nBCS}. Dashed lines: approximate analytical formulas: $n_{Dy}$ of \eref{eq:Dynes}, $n_t$ of \eref{eq:WeaCoup_DOS}, and $n_>$ of \eref{eq:nu_old}. The inset zooms to energies around the minigap.} \label{fig:unif_weak_DOS_vs_E} \end{figure} We now turn our attention to energies well above the minigap, $\epsilon \gg \epsilon_g$. In this energy range a broadening of the BCS peak was qualitatively predicted~\cite{McMilan} and displayed in numerical calculations~\cite{Fominov}; to our knowledge, however, no analytical formula has been presented in the literature for the case of bilayers. Interestingly, we find that the density of states has the well-known form proposed by Dynes \textit{et al}.~\cite{Dynes} to fit tunneling measurements: \begin{equation}\label{eq:Dynes} n(\epsilon) \simeq n_{Dy}(\epsilon) \equiv \mathrm{Re}\left[\frac{\epsilon + i/\tau_S}{\sqrt{(\epsilon + i/\tau_S)^2-\Delta^2}}\right]. \end{equation} A similar result was found for the case of a short $S$ wire between two $N$ leads~\cite{Kauppila}.
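For reference, \eref{eq:Dynes} is simple to evaluate numerically; in the Python sketch below (illustrative parameters, energies in units of $\Delta$) the principal branch of the complex square root automatically implements the retarded prescription, since $\mathrm{Im}[(\epsilon+i/\tau_S)^2-\Delta^2]>0$ for $\epsilon>0$:

```python
import numpy as np

# Dynes-like density of states with broadening parameter 1/tau_S;
# illustrative values, energies in units of Delta.
tau_S, Delta = 50.0, 1.0

def n_dynes(eps):
    z = eps + 1j / tau_S                      # epsilon + i/tau_S
    return (z / np.sqrt(z * z - Delta**2)).real
```

Far above the gap this reproduces the BCS value, while below the gap it gives a DoS of order $1/\tau_S\Delta$, in agreement with \eref{eq:nu_old} in the overlap region.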
We obtain the above formula from the following approximate expression for the pairing angle \begin{equation}\label{eq:thetaDynes} \theta_S (\epsilon) \simeq \theta_{Dy}(\epsilon) \equiv \arctan \frac{i\Delta}{\epsilon+i/\tau_S} \end{equation} [deviations from these formulas can arise for $|\epsilon/\Delta-1|\lesssim 1/(\tau_N\Delta)^2$ when $\sqrt{\tau_S \Delta} \gtrsim \tau_N \Delta$, see Appendix~\ref{app:unifweak}]. In the main panel of Fig.~\ref{fig:unif_weak_DOS_vs_E} we plot \eref{eq:Dynes} along with the result of a numerical calculation of the density of states. Using $\theta_{Dy}$ of \eref{eq:thetaDynes} in the self-consistent equation \rref{eq:op_sc} we also recover McMillan's result for the suppression of the zero-temperature order parameter, \begin{equation}\label{eq:D_NS} \Delta_{NS} \simeq \Delta_0 \sqrt{1-\frac{2}{\tau_S\Delta_0}}\, , \end{equation} with $\Delta_0$ the bulk value of the order parameter. Note that \eref{eq:nu_old} and \eref{eq:Dynes} agree at leading order in the overlap region $\epsilon_g \ll \epsilon \ll \Delta$, as they both approximately take the constant value $1/\tau_S\Delta$ there; a crossover energy between the two expressions can be identified with the geometric average $\sqrt{\Delta/\tau_N}$ of the gap and the minigap. This crossover energy is, for typical parameters, smaller than the qubit frequency. Therefore, we can in general use the Dynes-like formulas as a starting point to evaluate the density of states in a non-homogeneous system, which we consider next. \subsection{Proximity effect near a trap edge} \label{S4} A normal-metal trap in general covers only part of a superconducting electrode [cf. \eref{eq:diffusion}], in order to limit losses in the normal metal that could otherwise shorten the qubit lifetime. Typically, traps have lateral dimension of the order of $10~\mu$m or more~\cite{Riwar}, while the thicknesses $d_S$ and $d_N$ of superconducting and normal materials are in the range of tens of nanometers.
These sizes should be compared to the coherence length $\xi$, which for disordered aluminum films typically used to fabricate qubits is of the order of 200~nm. Therefore both the normal and superconducting films are thin compared to $\xi$, while the lateral dimensions of the trap are much wider than $\xi$. We can therefore effectively model the system near the trap edge as being composed of a superconducting film occupying the whole $x$-$y$ plane and a normal metal in the half plane $x>0$, see Fig.~\ref{fig:systems}. \begin{figure}[t] \begin{center} \includegraphics[width=0.41\textwidth]{Systems} \end{center} \caption{(Color online) A non-uniform NS bilayer: a superconducting film of thickness $d_S$ (bottom) is partially covered by a normal metal layer (thickness $d_N$) occupying the region $x>0$. We use this system to model the vicinity of a normal-metal quasiparticle trap (see text).} \label{fig:systems} \end{figure} To study the proximity effect near such an edge, we must allow for spatially dependent pairing angles.
Due to translational symmetry in the $y$ direction, they are functions of coordinate $x$ only; they satisfy Usadel equations that generalize \esref{eq:uni_usadelS} and \rref{eq:uni_usadelN} by including a diffusion term (see Appendix~\ref{app:usadal-non-unif} for the derivation): \begin{equation} \label{eq:non_uni-usadalS}\begin{split} \frac{D_S}{2}\frac{\partial^2\theta_S(\epsilon,x)}{\partial x^2} &+ i\epsilon\sin\theta_{S}(\epsilon,x) + \Delta(x)\cos\theta_S(\epsilon,x) \\ & = \frac{1}{\tau_S}\sin[\theta_S(\epsilon,x)-\theta_N(\epsilon,x)]H(x)\, , \end{split}\end{equation} where $D_S$ is the diffusion constant for electrons in the normal state of $S$ and $H(x)$ is the step function [$H(x)=1$ for $x>0$ and 0 otherwise], and for $x>0$ \begin{equation} \begin{split} \label{eq:non_uni-usadalN} \frac{D_N}{2}\frac{\partial^2\theta_N(\epsilon,x)}{\partial x^2} &+i\epsilon\sin\theta_{N}(\epsilon,x) \\ & = \frac{1}{\tau_N}\sin[\theta_N(\epsilon,x)-\theta_S(\epsilon,x)], \end{split}\end{equation} with $D_N$ the diffusion constant for electrons in $N$. As before, the superconducting order parameter $\Delta(x)$ is to be found self-consistently using \eref{eq:op_sc}. To avoid any confusion, we recall~\cite{Riwar} that while $D_\mathrm{qp}$ in \eref{eq:diffusion} is proportional to $D_S$ in \eref{eq:non_uni-usadalS}, the former phenomenologically takes into account the dependence on energy of the distribution function in the superconductor -- information that is lost in considering the density $x_\mathrm{qp}$ -- and this usually results in $D_\mathrm{qp} \ll D_S$. \begin{figure}[t] \begin{center} \includegraphics[width=0.48\textwidth]{OP_Num_vs_SemAn_vs_Step} \end{center} \caption{(Color online) Solid line (blue): normalized order parameter $\Delta(x)/\Delta_0$ in the non-uniform NS bilayer depicted in Fig.~\ref{fig:systems} as a function of normalized distance $x/\xi$ from the trap edge.
Dot-dashed line (black): non-self-consistent, step-like approximation, \eref{eq:OP_step}, which we use for analytical calculations (see text). Dashed line (red): ``first iteration'' obtained by substituting the pairing angles obtained in the step-like approximation, \esref{eq:Ana_theta_L} and \rref{eq:Ana_theta_R}, into \eref{eq:SemiAna_theta} and the latter into \eref{eq:op_sc}.} \label{fig:non_uni_sc_op} \end{figure} In general, the system of Usadel plus self-consistent equations must be solved numerically. In Fig.~\ref{fig:non_uni_sc_op}, we plot with the solid line the self-consistent order parameter for such a solution, obtained following the procedure described in Appendix~\ref{app:num}. Far from the trap edge the solution must approach either the BCS one for $x\to-\infty$, or that for the uniform $NS$ bilayer for $x\to\infty$. In other words, indicating with $\theta_{Su}(\epsilon)$ the pairing angle in the $S$ component of a uniform $NS$ bilayer, the solution $\theta_S(\epsilon,x)$ for the non-uniform case interpolates between $\theta_{BCS}$ of \eref{eq:thetaBCS} and $\theta_{Su}$. Similarly, in the weak-coupling regime the order parameter $\Delta(x)$ interpolates between $\Delta_0$ and $\Delta_{NS}$ of \eref{eq:D_NS} as $x$ goes from $-\infty$ to $+\infty$; the difference between the two values of the order parameter is small, so we look for an approximate (not self-consistent) solution to the Usadel equations \rref{eq:non_uni-usadalS} and \rref{eq:non_uni-usadalN} in which $\Delta(x)$ is assumed to take the form (see dot-dashed line in Fig.~\ref{fig:non_uni_sc_op}) \begin{equation}\label{eq:OP_step} \Delta_s(x) = \begin{cases} \Delta_{0}, & x < 0\, , \\ \Delta_{\mathrm{NS}}, & x \geq 0\, . \end{cases} \end{equation} Moreover, for energies large compared to the minigap, $\epsilon \gg \epsilon_g$, we can neglect $\theta_N$ in comparison to $\theta_S$ at leading order in $1/\tau_N\Delta_0 \ll 1$.
Hence we can approximate $\sin[\theta_S-\theta_N] \approx \sin\theta_S$, and at this order \eref{eq:non_uni-usadalS} decouples from \eref{eq:non_uni-usadalN}. With these approximations, the solution for $\theta_S$ is (cf. Ref.~\cite{belzig}) \begin{align} \label{eq:SemiAna_theta} \theta_S(\epsilon,x) & = \theta_L(\epsilon,x)H(-x) + \theta_R(\epsilon,x)H(x)\, , \\ \theta_L(\epsilon,x) & = \theta_{BCS}(\epsilon) \label{eq:SemiAna_theta_L} \\ & - 4\arctan\left\{e^{\frac{x}{\xi}\sqrt{2\alpha_1(\epsilon)}} \tan\left[\frac{\theta_{\mathrm{BCS}}(\epsilon)-\theta_0(\epsilon)}{4}\right]\right\}, \nonumber \\ \theta_R(\epsilon,x) & = \theta_{Su}(\epsilon) \label{eq:SemiAna_theta_R} \\ & - 4\arctan\left\{e^{-\frac{x}{\xi}\sqrt{2\alpha_2(\epsilon)}}\tan\left[\frac{\theta_{Su}(\epsilon)-\theta_0(\epsilon)}{4}\right]\right\}. \nonumber \end{align} Here, we define the coherence length as $\xi=\sqrt{D_S/\Delta_0}$, introduce the dimensionless functions $\alpha_1(\epsilon)=\sqrt{\Delta_0^2-\epsilon^2}/\Delta_0$ and $\alpha_2(\epsilon)=\sqrt{\Delta_{NS}^2-(\epsilon+\frac{i}{\tau_S})^2}/\Delta_0$, and $\theta_0(\epsilon)$ is the (unknown) value of $\theta_S$ at the trap edge $x=0$. By construction this expression for $\theta_S$ is continuous at $x=0$, but it should also be continuously differentiable. Equating the left and right derivatives at the edge gives us a condition that implicitly defines $\theta_0$: \begin{equation}\label{eq:SemiAna_theta0} \begin{split} &\sqrt{\alpha_1(\epsilon)}\frac{\tan\left[\frac{\theta_{BCS}(\epsilon)-\theta_0(\epsilon)}{4}\right]}{1+\tan^2\left[\frac{\theta_{BCS}(\epsilon)-\theta_0(\epsilon)}{4}\right]} + \\ &\sqrt{\alpha_2(\epsilon)}\frac{\tan\left[\frac{\theta_{Su}(\epsilon)-\theta_0(\epsilon)}{4}\right]}{1+\tan^2\left[\frac{\theta_{Su}(\epsilon)-\theta_0(\epsilon)}{4}\right]} = 0\, . \end{split} \end{equation} In the weak-coupling regime we are considering, we have $\alpha_1 \simeq \alpha_2$ so long as $|\epsilon-\Delta_0| \gg 1/\tau_S$. 
Therefore, except in a narrow energy region near $\Delta_0$, \eref{eq:SemiAna_theta0} has the approximate solution \begin{align} \label{eq:theta_0_fin} \theta_0\simeq\frac{1}{2}(\theta_{BCS} + \theta_{Su}). \end{align} Finally, in the energy range where our approximations apply (energy above the minigap and not too close to $\Delta_0$), $\theta_{Su}$ is well approximated by $\theta_{Dy}$ of \eref{eq:thetaDynes}, which in the same energy range is close to $\theta_{BCS}$. We can therefore linearize \esref{eq:SemiAna_theta_L} and \rref{eq:SemiAna_theta_R} to arrive at \begin{align} \label{eq:Ana_theta_L} \theta_L(\epsilon,x) & \simeq \theta_{BCS}(\epsilon) - \frac{1}{2}e^{\frac{x}{\xi}\sqrt{2\alpha_1(\epsilon)}}\left[\theta_{BCS}(\epsilon)-\theta_{Su}(\epsilon)\right],\\ \label{eq:Ana_theta_R} \theta_R(\epsilon,x) & \simeq \theta_{Su}(\epsilon) - \frac{1}{2}e^{-\frac{x}{\xi}\sqrt{2\alpha_2(\epsilon)}}\left[\theta_{Su}(\epsilon)-\theta_{BCS}(\epsilon)\right]. \end{align} \begin{figure*}[!tb] \begin{center} \includegraphics[width=\textwidth]{SemAn_vs_Num_DOS_10panels} \end{center} \caption{(Color online) Density of states in the superconducting film of Fig.~\ref{fig:systems}, calculated for $\tau_S\Delta_0=100$ at various distances from the trap edge. The DoS takes the bilayer form far from the edge in the normal-metal covered region ($x/\xi=10$) and approaches the BCS expression exponentially fast as $x/\xi$ becomes more negative. 
We find excellent agreement between self-consistent numerical results (blue solid lines) and semi-analytical ones (red dashed lines); see text for details.} \label{fig:SemiAn_VS_Num_DOS} \end{figure*} In Fig.~\ref{fig:SemiAn_VS_Num_DOS} we compare the density of states obtained from a self-consistent numerical solution of the Usadel equations (\ref{eq:non_uni-usadalS})-(\ref{eq:non_uni-usadalN}) to an approximate semi-analytic formula which we arrive at by substituting Eqs.~(\ref{eq:Ana_theta_L})-(\ref{eq:Ana_theta_R}) [with $\theta_{Su}(\epsilon)$ found by numerically solving Eqs.~(\ref{eq:uni_usadelS})-(\ref{eq:uni_usadelN}) -- or equivalently Eq.~(\ref{eq:X_full})] into \eref{eq:SemiAna_theta} and the latter into Eq.~(\ref{eq:DOS}). Our approximate formulas accurately capture the dependence of the DoS on the distance from the trap edge. While almost no deviations from the fully numerical calculation can be seen on this scale, there are in fact differences in the region near the bulk gap, as expected -- see Appendix~\ref{App:DOS_evolution}. In the next section we will be interested in the spatial evolution of the normalized density of states and pair amplitude away from the normal-metal trap.
In order to find analytical formulas for these quantities, we further approximate \eref{eq:Ana_theta_L} by using the Dynes expression, \eref{eq:thetaDynes}, for $\theta_{Su}$ and obtain for $x<0$ at leading order (see Appendix~\ref{App:DOS_evolution} for details) \begin{eqnarray}\label{eq:n_vs_x} n(\epsilon,x) &\simeq& e^{-\sqrt{2}\frac{|x|}{\xi}\left(1-\frac{\epsilon^2}{\Delta_0^2}\right)^{1/4}}\frac{1}{2}\frac{1}{\tau_S\Delta_0}\frac{\Delta_0^3}{\left(\Delta_0^2-\epsilon^2\right)^{3/2}}, \quad \\ \label{eq:p_vs_x} p(\epsilon,x) &\simeq& e^{-\sqrt{2}\frac{|x|}{\xi}\left(1-\frac{\epsilon^2}{\Delta_0^2}\right)^{1/4}}\frac{1}{2}\frac{1}{\tau_S\Delta_0}\frac{\epsilon\Delta_0^2}{\left(\Delta_0^2-\epsilon^2\right)^{3/2}}, \end{eqnarray} for $\Delta_0-\epsilon\gg 1/\tau_S$ and $\epsilon \gg \epsilon_g$, and \begin{eqnarray} \label{eq:n_above} n(\epsilon,x) &\simeq& \frac{\epsilon}{\sqrt{\epsilon^2-\Delta_0^2}} -e^{-\frac{|x|}{\xi}\left(\frac{\epsilon^2}{\Delta_0^2}-1\right)^{1/4}} \frac{1}{\sqrt{2}}\frac{1}{\tau_S\Delta_0} \\ &\times&\frac{\Delta_0^3}{\left(\epsilon^2-\Delta_0^2\right)^{3/2}}\cos\left[\frac{|x|}{\xi}\left(\frac{\epsilon^2}{\Delta_0^2}-1\right)^{1/4}-\frac{\pi}{4}\right], \nonumber \\\label{eq:p_above} p(\epsilon,x) &\simeq& \frac{\Delta_0}{\sqrt{\epsilon^2-\Delta_0^2}} -e^{-\frac{|x|}{\xi}\left(\frac{\epsilon^2}{\Delta_0^2}-1\right)^{1/4}} \frac{1}{\sqrt{2}}\frac{1}{\tau_S\Delta_0}\\ &\times&\frac{\epsilon \Delta_0^2}{\left(\epsilon^2-\Delta_0^2\right)^{3/2}}\cos\left[\frac{|x|}{\xi}\left(\frac{\epsilon^2}{\Delta_0^2}-1\right)^{1/4}-\frac{\pi}{4}\right], \nonumber \end{eqnarray} for $\epsilon-\Delta_0 \gg 1/\tau_S$. Note that for energies above the gap the corrections to the BCS formulas are always small by construction. 
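The spatial decay in \esref{eq:n_vs_x} and \rref{eq:p_vs_x} is easy to tabulate; the following Python sketch (illustrative parameters, energies in units of $\Delta_0$ and distances in units of $\xi$) evaluates the subgap DoS on the trap-free side $x<0$:

```python
import math

# Leading-order subgap DoS vs. distance from the trap edge (x < 0 region);
# illustrative parameters, energies in units of Delta_0, distances in xi.
tau_S, Delta0 = 100.0, 1.0

def n_subgap(eps, x_abs_over_xi):
    """n(eps, x) for x < 0, with x_abs_over_xi = |x|/xi."""
    decay = math.exp(-math.sqrt(2.0) * x_abs_over_xi
                     * (1.0 - (eps / Delta0) ** 2) ** 0.25)
    return (decay * 0.5 / (tau_S * Delta0)
            * Delta0**3 / (Delta0**2 - eps**2) ** 1.5)
```

At the edge and for $\epsilon\ll\Delta_0$ this reduces to one half of the uniform-bilayer value $1/\tau_S\Delta_0$, and one coherence length away it is already suppressed by roughly $e^{-\sqrt{2}}\approx 0.24$.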
Moreover, both above and below the gap the corrections are small in $1/\tau_S\Delta_0$ and decay exponentially with distance over an energy-dependent length scale which is of the order of the coherence length $\xi$ away from the gap, but longer than $\xi$ close to the gap. We now have all the ingredients needed to estimate the quasiparticle-induced transition rates for a qubit with a trap, which is the focus of the next section. \section{Qubit relaxation with a trap near the junction} \label{S5} As discussed in Sec.~\ref{S2}, the qubit decay rate due to quasiparticle tunneling is proportional to the spectral density $S(\omega)$, see \eref{eq:g10_1}. The spectral density is determined by the quasiparticle distribution function $f$, the density of states $n$, and the pair amplitude $p$, see \esref{eq:Sred}-\rref{eq:A}. We have shown in the previous Section that near a trap $n$ and $p$ become position-dependent. In the next subsection we study how this dependence affects the qubit decay rate, assuming that quasiparticles are everywhere in thermal equilibrium, so that the distribution function is uniform in space. This assumption is clearly not realistic, since it leads to an increase in the quasiparticle density approaching the trap, but it will enable us to show that the changes in the spectral density due to the proximity effect do not significantly harm the qubit if the trap is sufficiently far from the junction. In contrast, in Sec.~\ref{sec:effmu} we will account in a phenomenological way for the spatially dependent suppression of the quasiparticle density caused by the trap. In this more realistic scenario, we will find an optimal position for the trap, which balances between the density suppression and the enhancement of the subgap density of states, two effects that have opposite influences on the qubit relaxation rate.
Throughout this section, we assume that the qubit has reflection symmetry with respect to the junction, as in experiments~\cite{Riwar}; this means that when a trap is mentioned, it should be understood as two identical traps placed at the same distance from the junction. \subsection{Thermal equilibrium} \label{sec:TherEqu} The assumption of thermal equilibrium means that the distribution function has the Fermi-Dirac form, \begin{equation}\label{eq:feq} f^{\text{eq}}(\epsilon) = \frac{1}{e^{\epsilon/T} + 1}, \end{equation} with $T$ the quasiparticle temperature. It then follows from \esref{eq:Sred}-\rref{eq:S_b} that the spectral density obeys the detailed balance relation \begin{equation}\label{eq:detbal} S^{\text{eq}}(-\omega) = e^{-\omega/T}S^{\text{eq}}(\omega)\, . \end{equation} We assume that the quasiparticles are ``cold'', $T\ll \omega_{10}$, and therefore we can neglect the qubit excitation rate in comparison with the decay rate, since $\Gamma_{01}^{\text{eq}}/\Gamma_{10}^{\text{eq}} = e^{-\omega_{10}/T} \ll 1$. In the presence of a trap, since $n$ and $p$ at the junction position depend on its distance $x$ from the trap, the quantity $A$ defined in \eref{eq:A} is also a function of $x$ and so is the spectral density. An approximate expression for $S^\text{eq}(\omega,x)$ can be obtained in the relevant regime $\epsilon_g \ll T \ll \omega \ll \Delta_0$. In practice, since the minigap energy $\epsilon_g$ is much smaller than the temperature $T$, we can set the former to zero. Then for the quasiparticle tunneling part of the spectral density, we can identify three contributions (see Appendix~\ref{App:Decay_rate} for details on the derivation of the expressions discussed here): \begin{equation}\label{eq:St_eq} S^\text{eq}_\text{t}(\omega,x) = S_{aa}^\text{eq}(\omega,x) + S_{ba}^\text{eq}(\omega,x)+ S_{bb}^\text{eq}(\omega,x).
\end{equation} The first contribution accounts for transitions in which the initial quasiparticle energy is above the gap -- then the final energy is also above the gap; this term is approximately independent of position [cf. \eref{eq:S_w_x_qp}], \begin{equation}\label{eq:Saa} S_{aa}^\text{eq}(\omega,x) \simeq \frac{8E_J}{\pi}x_\mathrm{qp}^\text{eq}\sqrt{\frac{2\Delta_0}{\omega}}, \end{equation} where $x_\mathrm{qp}^\text{eq} = \sqrt{2\pi T/\Delta_0} e^{-\Delta_0/T}$ coincides with the equilibrium value of the quasiparticle density in the absence of the trap. A spatial dependence in principle arises from the correction terms in \esref{eq:n_above} and \rref{eq:p_above}, but their contributions can be neglected in comparison with the other terms in $S^\text{eq}_\text{t}$ which we now discuss. The second term on the right-hand side of \eref{eq:St_eq} originates from transitions in which a quasiparticle initially below the gap absorbs the qubit energy and is excited above the gap energy: \begin{equation}\label{eq:Sba} S_{ba}^\text{eq}(\omega,x) \simeq \frac{8E_J}{\pi}x_\mathrm{qp}^\text{eq}\sqrt{\frac{2\Delta_0}{\omega}}\frac{1}{2\tau_S\Delta_0} e^{-\sqrt{2}\frac{x}{\xi}\left(\frac{2\omega}{\Delta_0}\right)^\frac14}\frac{\Delta_0}{\omega}e^{\omega/T}. \end{equation} The small factor $1/\tau_S\Delta_0$ and the factor that decays exponentially with distance account for the smallness of the initial density of states. In contrast, the final factor is large because the initial occupation probability is exponentially larger at lower energies. Thus at sufficiently low temperature this term can become larger than $S_{aa}$ of \eref{eq:Saa}. The last term in \eref{eq:St_eq} arises from transitions with both initial and final quasiparticle energy below the gap, \begin{equation}\label{eq:Sbb} S_{bb}^\text{eq}(\omega,x) \simeq \frac{8E_J}{\pi}\frac{1}{\left(\tau_S\Delta_0\right)^2}e^{-2\sqrt{2}\frac{x}{\xi}}2\ln(2) \frac{T}{\Delta_0}\, .
\end{equation} Here the small factor $1/\tau_S\Delta_0$ is squared, and the exponential decay with distance is faster than in $S_{ba}^\text{eq}$ of \eref{eq:Sba}, because both initial and final density of states are small. However, the temperature dependence is much weaker: $S_{bb}^\text{eq}$ vanishes linearly with $T$ rather than exponentially, as $S_{ba}^\text{eq}$ does. Therefore, despite the small prefactors, this term can dominate at low temperatures. In addition to the single quasiparticle tunneling, pair events can take place. In particular, since the density of states is finite (albeit small) down to the minigap energy $\epsilon_g$, so long as $\omega_{10}> 2\epsilon_g$ a pair breaking process is possible, in which the qubit relaxes by breaking a Cooper pair and exciting two quasiparticles above the minigap (but well below the gap). From \eref{eq:S_b} the spectral density for such a process is (see Appendix~\ref{App:Decay_rate}) \begin{equation}\label{eq:Sp_eq} S_\text{p}^\text{eq} (\omega,x)= \frac{8E_J}{\pi} \frac{1}{\left(\tau_S\Delta_0\right)^2}e^{-2\sqrt{2}\frac{x}{\xi}}\left[\frac{\omega}{\Delta_0} - 2\ln(2) \frac{T}{\Delta_0}\right]. \end{equation} The spectral density does not vanish even at $T=0$, as there is no need for thermally excited quasiparticles to be present; in fact, the spectral density decreases linearly with increasing $T$ because the increased occupation of the final states suppresses this process. Interestingly, this linear in temperature term cancels with $S_{bb}$, \eref{eq:Sbb}; moreover, while $S_{ba}$ in \eref{eq:Sba} can be dominant in the limits of sufficiently small temperature and large distance, its contribution to $S_\text{t}^\text{eq}$ is negligible in the parameter range we are interested in (cf. 
Fig.~\ref{fig:rates_TherEqui}), so that we have approximately \begin{equation}\label{eq:Sthapp} S^\text{eq}(\omega,x) \approx \frac{8E_J}{\pi} \left[x_\mathrm{qp}^\text{eq} \sqrt{\frac{2\Delta_0}{\omega}}+\frac{1}{\left(\tau_S\Delta_0\right)^2}\frac{\omega}{\Delta_0}e^{-2\sqrt{2}\frac{x}{\xi}}\right]. \end{equation} \begin{figure}[t!] \includegraphics[width=0.48\textwidth]{Rates_TherEqui_new} \caption{(Color online) Qubit relaxation rate as a function of temperature. We assumed typical transmon parameters for the qubit (see \textit{e.g.} Ref.~\cite{Paik}): $\Delta_0=46\,$GHz, $\omega_{10}=6\,$GHz, $E_J=16\,$GHz and $E_C=290\,$MHz; and weak proximity effect, $\tau_S\Delta_0=10^3$. The solid line (red) shows the total relaxation rate, while the other lines show the contributions from the different processes discussed in the text.} \label{fig:rates_TherEqui} \end{figure} Assuming that the trap is next to the junction, $x=0$, in Fig.~\ref{fig:rates_TherEqui} we plot, as a function of temperature, the qubit decay rate $\Gamma^\mathrm{eq}$ obtained by substituting \esref{eq:trmel} and \rref{eq:Sthapp} into \eref{eq:g10_1}, as well as the contributions from the processes discussed above (above gap to above gap, $aa$; below gap to above gap, $ba$; and the sum of below gap to below gap, $bb$, with pair, $\text{p}$). At ``high'' temperature, above about 120~mK but still below the qubit frequency, the dominant contribution comes from the position-independent $aa$ term. In contrast, at low temperature there is a temperature-independent plateau in $\Gamma_{10}$ originating from the sum of $bb$ and pair-breaking processes. This plateau shows that the trap can increase the decay rate exponentially in comparison with the no-trap rate, which coincides with the $aa$ term. However, the plateau is quickly suppressed by moving the trap away from the junction: for each coherence-length increase in trap-junction distance, the plateau decreases by a factor $e^{2\sqrt{2}} \approx 17$.
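For a quick numerical illustration of \eref{eq:Sthapp}, the following sketch (Python; $\hbar=k_B=1$ with energies in GHz, the parameters of Fig.~\ref{fig:rates_TherEqui} as defaults, and $x$ measured in units of $\xi$; the function name is ours) reproduces both the low-temperature plateau and its suppression by $e^{2\sqrt{2}}$ per coherence length:

```python
import math

def S_eq(omega, x, T, Delta0=46.0, EJ=16.0, tauS_Delta0=1e3, xi=1.0):
    """Approximate equilibrium spectral density, Eq. (Sthapp).

    Energies (omega, T, Delta0, EJ) in GHz with hbar = kB = 1;
    x and xi in the same (arbitrary) length units."""
    # equilibrium quasiparticle density in the absence of the trap
    x_qp = math.sqrt(2 * math.pi * T / Delta0) * math.exp(-Delta0 / T)
    above_gap = x_qp * math.sqrt(2 * Delta0 / omega)          # aa term
    subgap = (omega / Delta0) / tauS_Delta0**2 \
        * math.exp(-2 * math.sqrt(2) * x / xi)                # bb + pair plateau
    return (8 * EJ / math.pi) * (above_gap + subgap)
```

At temperatures well below the crossover the plateau term dominates, and moving the trap by one coherence length reduces it by $e^{2\sqrt{2}}\approx 17$.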
With the parameters of Fig.~\ref{fig:rates_TherEqui}, this means that for $x=4\xi$ the low-temperature decay rate would be of order $10^{-2}\,$Hz. Therefore, even though the trap adversely affects the qubit, the limitation imposed on the decay rate quickly becomes negligible as the distance to the junction increases. The fact that the trap can only harm the qubit rather than improve its coherence is a consequence of the thermal equilibrium assumption. Next, we relax this assumption to find to what extent the trap can be beneficial. \subsection{Suppressed quasiparticle density} \label{sec:effmu} A trap can be beneficial to a qubit primarily by suppressing the quasiparticle density at the junction~\cite{Hosseinkhani}, as discussed in Sec.~\ref{S2}. Within the phenomenological diffusion model of \eref{eq:diffusion}, the typical length scale over which such a suppression takes place is given by the trapping length $\lambda_\text{tr} = \sqrt{D_\mathrm{qp}/\Gamma_\text{eff}}$; this length scale is of order 100~$\mu$m~\cite{Riwar,Hosseinkhani}, much longer than the coherence length $\xi$. As for the strength of the proximity effect, based on the experimental parameters of Ref.~\cite{Riwar}, we estimate it to be $\tau_S\Delta_0 \sim 10^3$-$10^4$. The large separation of length scales, together with the weakness of the normal trap-superconductor coupling, $\tau_S\Delta_0 \gg 1$, makes it possible to use \eref{eq:diffusion} to calculate the spatial profile of the density, while the modifications introduced by the proximity effect can be treated as corrections. Below we will consider a realistic device geometry when calculating the position-dependent density, but first we discuss how to incorporate such a non-equilibrium quasiparticle configuration into the evaluation of the qubit transition rates.
When we neglect the proximity effect, the density of states takes the BCS form, \eref{eq:nBCS}, and the quasiparticle density defined in \eref{eq:x_qp} can depend on position only through the distribution function $f$. Such dependence could arise, for example, due to a temperature profile. However, at low temperatures it is in general more appropriate to model non-equilibrium quasiparticles by introducing an effective chemical potential $\tilde\mu$~\cite{Owen} (which we measure from the Fermi energy). The reason is that recombination processes, which are needed for chemical equilibrium, are slower than the scattering processes responsible for thermalization~\cite{Scalapino}. Such a phenomenological non-equilibrium approach has already been considered in the qubit setting~\cite{Ansari}. Here, to capture the spatial profile of the density, we assume the distribution function $f$ to have the form \begin{equation}\label{eq:fnoneq} f(\epsilon)=\frac{1}{e^{(\epsilon-\tilde\mu)/\tilde{T}}+1}\, , \end{equation} where the effective chemical potential is a function of position, $\tilde\mu=\tilde\mu(x)$, while the effective temperature $\tilde{T}$ is homogeneous and does not necessarily coincide with the phonon bath temperature. Indeed, typical quasiparticle densities in the absence of traps are in the range $x_\mathrm{qp} \sim 10^{-7}$-$10^{-5}$, corresponding to effective temperatures (at $\tilde\mu=0$) from $\sim 145\,$mK to $\sim 200\,$mK, much higher than both the usual fridge temperature (10-20~mK) and the typical qubit temperature which is of order 35-60~mK~\cite{Vool,Jin}, as estimated from the excited state population. In the following we will present results for $\tilde{T}/\Delta_0$ in the range $0.01$ to $0.05$, corresponding to approximately 20~mK to 110~mK in aluminum; for a given effective temperature, the chemical potential can then be calculated by inverting \eref{eq:x_qp} [with $n(\epsilon)$ of \eref{eq:nBCS} and $f(\epsilon)$ of \eref{eq:fnoneq}].
So long as $e^{(\Delta_0 - \tilde\mu)/\tilde{T}}\gg 1$, the integration in \eref{eq:x_qp} gives approximately $x_\mathrm{qp} \simeq \sqrt{2\pi\tilde{T}/\Delta_0}e^{(\tilde\mu-\Delta_0)/\tilde{T}}$ and therefore we find \begin{equation}\label{eq:mu} \tilde\mu(x) = \Delta_0 + \tilde{T} \ln\left[\sqrt{\frac{\Delta_0}{2\pi\tilde{T}}}x_\mathrm{qp}(x)\right]. \end{equation} The assumption made above gives a restriction on the range of allowed effective temperatures, $2\pi\tilde{T}/\Delta_0 \gg x_\mathrm{qp}^2$, which is however not relevant in practice since usually we have $x_\mathrm{qp} < 10^{-4}$~\cite{Wang} and $2\pi\tilde{T}/\Delta_0> 10^{-2}$ (since $\tilde{T}$ should be at least comparable to the fridge temperature). This restriction also implies $\tilde\mu < \Delta_0$. We will assume that in general the quasiparticle density $x_\mathrm{qp}(x)$ is larger than the thermal equilibrium value at temperature $\tilde{T}$, so that $\tilde\mu > 0$. Note that for a given $x_\mathrm{qp}$, $\tilde\mu$ is a decreasing function of $\tilde{T}$, while for a fixed $\tilde{T}$ it is an increasing function of $x_\mathrm{qp}$. With the approach described above, given the quasiparticle effective temperature $\tilde{T}$ and the density profile $x_\mathrm{qp}(x)$, one can calculate the effective chemical potential $\tilde\mu(x)$ using \eref{eq:mu} and therefore obtain an expression for the non-equilibrium distribution function, \eref{eq:fnoneq}. Once the distribution function is known, we can evaluate the spectral density of \esref{eq:Sred}-\rref{eq:S_b}, which we denote hereinafter by $\tilde{S}$ as a reminder of its dependence on the non-equilibrium parameters $\tilde{T}$ and $\tilde\mu$ (and hence on the junction-trap distance; in this section we drop the variable $x$ as an explicit argument of the spectral density for notational compactness).
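A minimal sketch of this inversion (Python; function names are ours, and energies are measured in units of $\Delta_0$) implements \eref{eq:mu} together with the approximate relation it inverts:

```python
import math

def mu_eff(x_qp, T, Delta0=1.0):
    """Effective chemical potential, Eq. (mu); valid for 2*pi*T/Delta0 >> x_qp**2."""
    return Delta0 + T * math.log(math.sqrt(Delta0 / (2 * math.pi * T)) * x_qp)

def x_qp_approx(mu, T, Delta0=1.0):
    """Approximate density, valid for exp((Delta0 - mu)/T) >> 1 (see text above)."""
    return math.sqrt(2 * math.pi * T / Delta0) * math.exp((mu - Delta0) / T)
```

For instance, $x_\mathrm{qp}=10^{-6}$ at $\tilde{T}/\Delta_0=0.019$ gives $\tilde\mu/\Delta_0\approx 0.76$ (illustrative numbers), consistent with the restrictions $0<\tilde\mu<\Delta_0$ discussed above.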
Similar to the thermal equilibrium case, in the single quasiparticle tunneling contribution $\tilde{S}_\text{t}$ we distinguish three terms: $\tilde{S}_\text{t}=\tilde{S}_{aa}+\tilde{S}_{ba}+\tilde{S}_{bb}$. For the first two terms on the right-hand side we find, as discussed in Appendix~\ref{app:supp}, that they are proportional to $x_\mathrm{qp}$, as in thermal equilibrium: \begin{equation} \label{eq:tSaa} \tilde{S}_{aa}(\omega)\simeq \frac{8E_J}{\pi}x_\mathrm{qp}(x)\sqrt{\frac{2\Delta_0}{\omega}} \end{equation} and \begin{equation} \label{eq:tSba}\tilde{S}_{ba}(\omega) \simeq \frac{8E_J}{\pi}x_\mathrm{qp}(x)\sqrt{\frac{2\Delta_0}{\omega}}\frac{1}{2\tau_S\Delta_0} e^{-\sqrt{2}\frac{x}{\xi}\left(\frac{2\omega}{\Delta_0}\right)^\frac14}\frac{\Delta_0}{\omega}e^{\omega/\tilde{T}}. \end{equation} Here we assume that $\omega>0$ and $\Delta_0 -\tilde\mu-\omega \gg \tilde{T}$; validity conditions for the approximations employed are discussed in more detail in Appendix~\ref{app:supp}. For the term $\tilde{S}_{bb}$ we have different regimes depending on the ratio between $\omega$ and $\tilde\mu$: \begin{equation}\label{eq:tSbb}\begin{split} &\tilde{S}_{bb} (\omega) \simeq \frac{8E_J}{\pi}\frac{1}{(\tau_S\Delta_0)^2}\times\\ &\left\{ \begin{array}{ll} e^{-2\sqrt{2}\frac{x}{\xi}} \frac{2\tilde{T}}{\Delta_0} \ln\left(\frac{1+e^{\tilde\mu/\tilde{T}}}{1+e^{(\tilde\mu-\omega)/\tilde{T}}}\right), & \tilde\mu \lesssim \omega , \\ 4e^{-2\sqrt{2}\frac{x}{\xi}\left(1-\frac{\tilde\mu^2}{\Delta_0^2}\right)^\frac14} \frac{\Delta_0^3\left(\Delta_0^2+\tilde\mu^2\right)}{\left(\Delta_0^2-\tilde\mu^2\right)^{3/2}\left(\Delta_0+\tilde\mu\right)^{3/2}} \times \\ \left(\frac{1}{\sqrt{\Delta_0-\tilde\mu-\omega}} - \frac{1}{\sqrt{\Delta_0-\tilde\mu}} \right), & \tilde\mu \gg \omega . \end{array} \right. \end{split}\end{equation} For this term the similarity with thermal equilibrium is recovered only for $\tilde\mu \ll \tilde{T}$.
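The partial cancellation between $\tilde{S}_{bb}$ and $\tilde{S}_\text{p}$ discussed below can be checked numerically; the following sketch (Python; the overall prefactor $(8E_J/\pi)(\tau_S\Delta_0)^{-2}$ and the common factor $e^{-2\sqrt{2}x/\xi}$ are omitted, and energies are in units of $\Delta_0$) implements the $\tilde\mu\lesssim\omega$ branch of \eref{eq:tSbb} and the pair-process expression \eref{eq:tSp}:

```python
import math

def S_bb(omega, mu, T):
    """mu <~ omega branch of Eq. (tSbb), omitting the overall prefactor
    (8*EJ/pi)/(tauS*Delta0)**2 and the factor exp(-2*sqrt(2)*x/xi)."""
    return 2 * T * math.log((1 + math.exp(mu / T))
                            / (1 + math.exp((mu - omega) / T)))

def S_pair(omega, mu, T):
    """Pair-process spectral density, Eq. (tSp), same omitted factors."""
    return T / (1 - math.exp((2 * mu - omega) / T)) * (
        2 * math.log((1 + math.exp((omega - mu) / T))
                     / (1 + math.exp(-mu / T))) - omega / T)
```

For $\omega-2\tilde\mu\gg\tilde{T}$ the sum of the two terms approaches $\omega$ (in units of $\Delta_0$), i.e. the thermal-equilibrium result.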
This is the case also for the pair process contribution $\tilde{S}_\text{p}$: \begin{equation}\label{eq:tSp}\begin{split} \tilde{S}_\text{p}(\omega) \simeq \frac{8E_J}{\pi} \frac{1}{\left(\tau_S\Delta_0\right)^2}e^{-2\sqrt{2}\frac{x}{\xi}} \frac{\tilde{T}}{\Delta_0} \frac{1}{1-e^{(2\tilde\mu-\omega)/\tilde{T}}}\times \\ \left[2\ln\frac{1+e^{(\omega-\tilde\mu)/\tilde{T}}}{1+e^{-\tilde\mu/\tilde{T}}}-\frac{\omega}{\tilde{T}}\right]. \end{split}\end{equation} However, a partial cancellation between $\tilde{S}_{bb}$ and $\tilde{S}_\text{p}$ takes place so long as $\omega-2\tilde\mu \gg \tilde{T}$, in which case we find \begin{equation} \tilde{S}_{bb}(\omega) + \tilde{S}_\text{p}(\omega) \approx \frac{8E_J}{\pi}\frac{1}{(\tau_S\Delta_0)^2}\frac{\omega}{\Delta_0}e^{-2\sqrt{2}\frac{x}{\xi}}\, , \end{equation} as in thermal equilibrium [compare to the second term in \eref{eq:Sthapp}]. Turning now to the spectral density at negative frequencies, we note that a relation similar to \eref{eq:detbal}, \begin{equation} \tilde{S}_\text{t} (-\omega) = e^{-\omega/\tilde{T}} \tilde{S}_\text{t}(\omega)\, , \end{equation} follows from \esref{eq:S_t} and \rref{eq:fnoneq}. Since we consider $\omega \gg \tilde{T}$, we can neglect the qubit excitation due to single quasiparticle tunneling. In contrast, for pair processes we find, from \eref{eq:S_b}, \begin{equation}\label{eq:tSpneg} \tilde{S}_\text{p} (-\omega) = e^{(2\tilde\mu-\omega)/\tilde{T}} \tilde{S}_\text{p}(\omega)\, . \end{equation} Therefore the rate of qubit excitation induced by quasiparticle recombination can become exponentially larger than qubit relaxation by Cooper pair breaking if $2\tilde\mu-\omega \gg \tilde{T}$. We next apply these results to a model of an actual qubit. \subsubsection{An example} \begin{figure}[t] \begin{center} \includegraphics[width=0.48\textwidth]{trap_transmon.png} \end{center} \caption{(Color online) Diagram for the right half of the transmon qubit considered here. 
Except for the position of the trap, it is the same design studied in Refs.~\cite{Riwar,Hosseinkhani}.} \label{fig:trap_transmon} \end{figure} As a concrete example, we consider the qubit geometry depicted in Fig.~\ref{fig:trap_transmon}, where a trap with length $d$ is placed a distance $x$ away from the Josephson junction. Similar geometries have been used experimentally to measure quasiparticle recombination, trapping by vortices~\cite{Wang} and by normal-metal traps~\cite{Riwar}, and theoretically to study how to optimize trap performance~\cite{Hosseinkhani}. Here the only difference is that we allow the long trap, $d>l$, to be close to the junction, $0\le x \le l$, so that the role of the proximity effect can be evaluated. To find the quasiparticle density $x_\mathrm{qp}(x)$ at the Josephson junction, we proceed as in \ocite{Hosseinkhani} and treat each segment of the device (except the pad with side $L_\text{pad}$) as one-dimensional. Since we are interested in the steady-state density, we set $\partial x_\mathrm{qp}/\partial t=0$ in \eref{eq:diffusion}. Solving that equation for each segment of the device, and requiring continuity of the density and current conservation at the points where different segments meet, we find \begin{align} \label{eq:x_qp_diffusion} x_\mathrm{qp}(x) = & \frac{g}{\Gamma_{\mathrm{eff}}}\bigg\{ 1 + \frac{1}{\sinh (d/\lambda_\mathrm{tr})}\bigg[ \frac{A_R}{W\lambda_\mathrm{tr}} + \cosh \left(\frac{d}{\lambda_\mathrm{tr}}\right)\frac{x}{\lambda_\mathrm{tr}} \nonumber \\ &+ \cosh\left(\frac{x+d-l}{\lambda_\mathrm{tr}}\right)\frac{A_c}{W\lambda_\mathrm{tr}} \bigg] + \frac{1}{2}\left(\frac{x}{\lambda_\mathrm{tr}}\right)^2 \bigg\} . \end{align} Here $A_{R}=W(L+l-x-d) + L_\text{pad}^2$ is the uncovered area to the right of the trap and $A_c=2W_cL_c$ is the area of the gap capacitor.
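As a sketch, \eref{eq:x_qp_diffusion} can be evaluated directly (Python; geometry and rates are the experimentally motivated values of Refs.~\cite{Wang,Riwar,Hosseinkhani} also used below):

```python
import math

# geometry (micrometers) and rates (Hz), cf. Refs. [Wang, Riwar, Hosseinkhani]
L, l, W = 1000.0, 60.0, 12.0          # long wire, junction lead, wire width
Lc, Wc, Lpad = 200.0, 20.0, 80.0      # gap capacitor and pad dimensions
d, lam = 234.0, 86.2                  # trap length and trapping length
g, Gamma_eff = 1e-4, 2.42e5           # generation and effective trapping rates

def x_qp(x):
    """Steady-state quasiparticle density at the junction, Eq. (x_qp_diffusion),
    for trap-junction distance x (micrometers)."""
    A_R = W * (L + l - x - d) + Lpad**2   # uncovered area to the right of the trap
    A_c = 2 * Wc * Lc                     # gap-capacitor area
    return g / Gamma_eff * (
        1 + (A_R / (W * lam)
             + math.cosh(d / lam) * x / lam
             + math.cosh((x + d - l) / lam) * A_c / (W * lam)) / math.sinh(d / lam)
        + 0.5 * (x / lam)**2)
```

The density grows monotonically with $x$ on the scale $\lambda_\mathrm{tr}$, confirming that the suppression is strongest when the trap is closest to the junction.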
Equation~\rref{eq:x_qp_diffusion} makes it clear that the closer the trap is to the junction, the more the quasiparticle density is suppressed, and that significant changes in the density take place over the length scale given by $\lambda_\mathrm{tr}$. \begin{figure}[!tb] \begin{center} \includegraphics[width=0.48\textwidth]{chemical_potential0} \end{center} \caption{(Color online) The normalized effective chemical potential $\tilde\mu/\Delta_0$, see \eref{eq:mu}, as a function of the normalized effective temperature $\tilde{T}/\Delta_0$ for two positions of the trap: $x=0$ (dashed line) and $x=l$ (solid). The upper effective temperature scale is given for aluminum.} \label{fig:mu_vs_x} \end{figure} We can now proceed as outlined above: namely, we first calculate the effective chemical potential $\tilde\mu$ for different effective temperatures $\tilde{T}$. In Fig.~\ref{fig:mu_vs_x} we show the results of such calculations for two positions of the trap, $x=0$ and $x=l$; hereinafter we use the same realistic parameters for the qubit ($L=1\,$mm, $l=60\,\mu$m, $W=12\,\mu$m, $L_c=200\,\mu$m, $W_c=20\,\mu$m, $L_\text{pad}=80\,\mu$m) and for the trap ($d=234\,\mu$m, $\lambda_\text{tr}=86.2\,\mu$m) as in Ref.~\cite{Hosseinkhani}, cf. Refs.~\cite{Wang,Riwar}. We also use the experimentally determined values $g=10^{-4}\,$Hz~\cite{Wang} and $\Gamma_\mathrm{eff} = 2.42\times10^{5}\,$Hz~\cite{Riwar}. As expected, the effective chemical potential decreases with increasing effective temperature, and is larger when the trap is further away. Next, we calculate the qubit transition rates using the chemical potential found in this way.
We perform the calculation in two ways: we substitute the chemical potential into \eref{eq:fnoneq} for the distribution function and the latter into the definitions of the spectral functions, \esref{eq:S_t} and \rref{eq:S_b}; in those equations, we use the semi-analytic results for the pairing angle to determine the density of states and the pair amplitude, see \eref{eq:Ana_theta_L} and the text below \eref{eq:Ana_theta_R}, and perform the final integration over energy numerically. In a second approach, we use our approximate analytical formulas for the spectral functions, see \esref{eq:tSaa} to \rref{eq:tSpneg} (in deriving these formulas additional approximations were introduced, so the results are less accurate). The rates so obtained are shown in Fig.~\ref{fig:decay_NonEqu} for an effective temperature $\tilde{T}/\Delta_0 = 0.019$ ($\sim 40\,$mK in Al). In the left panel we distinguish the contributions to $1/T_1$ due to tunneling-induced relaxation ($\tilde\Gamma_{10,\mathrm{t}}$) and excitation ($\tilde\Gamma_{01,\mathrm{t}}$) and pair process excitation ($\tilde\Gamma_{01,\mathrm{p}}$); the pair process relaxation rate is much smaller than the excitation rate [cf. \eref{eq:tSpneg}] and not visible on this scale. In the right panel we plot the total rate $1/T_1$, which is dominated by the tunneling-induced relaxation; the total rate is a non-monotonic function of trap-junction distance $x$, due to the competition between processes with initial quasiparticle energy below the gap, whose contributions to the rate decay exponentially with distance over a length scale of the order of the coherence length [see \esref{eq:tSba} and \rref{eq:tSbb}], and processes with above-gap initial energy, with contribution slowly increasing with $x_\mathrm{qp}$ over the much longer length scale $\lambda_\text{tr}$. 
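The competition just described can be captured by a toy model (Python; the subgap weight and the value $\xi=0.2\,\mu$m are purely illustrative, not fitted to the full calculation): a slowly growing above-gap contribution, mimicking $x_\mathrm{qp}(x)$, plus a subgap contribution decaying over $\xi$.

```python
import math

XI, LAM = 0.2, 86.2      # coherence and trapping lengths (micrometers), illustrative

def total_rate(x, w_sub=100.0):
    """Toy 1/T1 versus trap-junction distance x: an above-gap term growing
    over LAM plus a subgap term of illustrative weight w_sub decaying over XI."""
    return (1 + x / LAM) + w_sub * math.exp(-2 * math.sqrt(2) * x / XI)

# brute-force search for the minimum on a grid over [0, 5] micrometers
x_opt = min((i * 1e-3 for i in range(5001)), key=total_rate)
```

With these numbers the minimum falls at a distance of a few coherence lengths, in qualitative agreement with the right panel of Fig.~\ref{fig:decay_NonEqu}.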
The minimum in this curve thus gives the optimal position $x_o$ for the trap: for $x>x_o$, the density slowly increases, so one is not taking full advantage of the trap, but at $x<x_o$ the subgap density of states quickly increases and negates the benefit of further density suppression. Therefore we conclude that placing a trap at a distance $x_o<x<\lambda_\text{tr}$ represents the best choice. \begin{figure}[!tb] \begin{center} \includegraphics[width=0.48\textwidth]{Rates_NonEqu} \end{center} \caption{(Color online) Left: tunneling (t) and pair (p) contributions to the qubit decay rate $1/T_1$ as a function of trap-junction distance $x$ (the pair decay rate $\tilde\Gamma_{10,\mathrm{p}}$ is too small to be visible). The effective temperature is $\tilde{T}/\Delta_0 = 0.019$ (approximately 40~mK in Al), other parameters are as in Fig.~\ref{fig:rates_TherEqui}. The solid lines have been obtained by numerically calculating the integrals determining the spectral functions, while the dashed lines are our approximate analytical findings (see text for more details). Right: Total qubit decay rate, showing a minimum for a distance of a few coherence lengths.} \label{fig:decay_NonEqu} \end{figure} In Fig.~\ref{fig:X_optm_vs_T} we further explore the dependence of the optimal position $x_o$ on parameters such as the effective temperature $\tilde{T}$ and the strength of the proximity effect $\tau_S\Delta_0$ (recall that the larger this parameter, the weaker the proximity effect). Clearly, the stronger the proximity effect, the further a trap should be placed. As the effective temperature increases, on the other hand, the optimal position decreases: as already noted, for a given density the higher the effective temperature, the smaller the effective chemical potential, and this reduces the importance of the subgap states, since their occupation decreases.
Based on this figure, we conclude that when the distance is over 20 coherence lengths the proximity effect can be safely neglected; in aluminum such a distance is of the order of a few microns, which is still much less than the trapping length. \begin{figure}[t!] \begin{center} \includegraphics[width=0.48\textwidth]{X_optm_vs_T} \end{center} \caption{(Color online) Normalized optimal trap-junction distance $x_o/\xi$ as a function of the normalized effective temperature $\tilde{T}/\Delta_0$ (the upper scale gives the corresponding temperature for aluminum). Solid (dashed) lines are obtained by numerically finding the position of the minimum in curves such as the solid (dashed) one in the right panel of Fig.~\ref{fig:decay_NonEqu}.} \label{fig:X_optm_vs_T} \end{figure} \section{Summary} \label{S6} In this work we investigate the proximity effect between a normal-metal quasiparticle trap and the superconducting electrode of a qubit. On the one hand, a trap can prolong the relaxation time of the qubit by suppressing the quasiparticle density. On the other hand, the proximity effect induces subgap states which can shorten the relaxation time. To quantify the competition between these two phenomena, we start by considering a uniform superconductor-normal metal bilayer; at relevant energies, the density of states takes the Dynes form, \eref{eq:Dynes}, with the broadening determined by the interface resistance, the superconducting film thickness, and its density of states at the Fermi energy. We then study how such broadening decays away from a trap edge, see \esref{eq:n_vs_x} to \rref{eq:p_above}. With these results, we can evaluate the qubit decay rate as a function of the distance between the trap and the junction; we take into account the suppression of the quasiparticle density by introducing a distribution function which depends on two parameters, an effective temperature and a distance-dependent effective chemical potential, cf. \eref{eq:fnoneq}.
Within this approach, we find that the competition between proximity effect and density suppression leads to an optimal placement for the trap, see Figs.~\ref{fig:decay_NonEqu} and \ref{fig:X_optm_vs_T}. The qubit relaxation rate increases exponentially, over a length scale of the order of the coherence length, when the trap is moved closer to the junction than this optimum, while the increase of the rate when moving the trap farther away is much slower, taking place over the much longer trapping length. Therefore, a trap should be placed at least as far from the junction as the optimum position, but no significant penalty is paid for distances up to the trapping length. While we focused here on a transmon qubit, our findings may prove useful in designing traps for other systems as well. For example, quasiparticle poisoning could be a significant hurdle for nanowire-based realizations of Majorana qubits~\cite{loss}; our findings indicate that a normal-metal trap placed close to the ends of the nanowire could be detrimental, as the small minigap is not sufficient to protect zero-energy states from being thermally excited into the subgap states induced by the trap. Finally, our results on the proximity effect could also help in interpreting tunneling density-of-states experiments such as those reported in Refs.~\cite{Moussy,Rod}. \acknowledgments We gratefully acknowledge useful discussions with M. Ansari, W. Belzig, L. Glazman, I. Khaymovich, H. Pothier, and R. Riwar. This work was supported in part by the European Union under Research Executive Agency grant agreement No. CIG-618258.
\section{Introduction} \IEEEPARstart{D}{ecoherence} in superconducting qubits can be caused by dielectric loss generated on many surfaces in and around the environment of the qubit. For example, contamination layers and native oxides present on semiconductor and metallization surfaces will exhibit participation based on the amount of energy induced by the electric fields produced by the qubit system. Traditionally, finite element method (FEM) based models of the transmon qubit architectures are employed to estimate surface participation from interfaces between the substrate and metallization (SM), the free surface of the substrate (SA) and the top surface of the metallization (MA) \cite{Wenner11}, \cite{Sandberg12}, \cite{Wang15}, \cite{Calusine18}. However, the large difference between the thicknesses of such layers and the length scales associated with the overall qubit or resonator design makes accurate calculation of surface participation in these domains difficult. One method to circumvent this issue involves calculating the electric field density on the surfaces of interest and determining surface participation by modifying the appropriate components of the electric field based on the difference in dielectric constants between the interfacial layer and substrate \cite{Wenner11} - \cite{Gambetta16}. The values obtained with this approach can be highly dependent on the discretization scheme used to tessellate the various structures due to the singular behavior of the electric fields near the corners and edges of conductors. Hybrid schemes that can involve power law approximations have also been proposed \cite{Wenner11}, \cite{Wang15} to account for divergences in the electric field distributions. A more robust method to calculate surface participation can be implemented based on a two-dimensional, analytical formulation of the electric field distributions generated by metallization features on dielectric substrates \cite{Wen69}, \cite{Gillick93}. 
This approach, which assumes that the metallization is composed of two sheets and is perfectly conducting, incorporates the singular nature of the electric fields \cite{Jackson75}, which scale as $r^{-0.5}$, where $r$ is the distance from the metallization edges. It will be shown that a closed-form solution of the electric field energy can be generated that does not diverge and is applicable to both resonator and capacitor designs. \begin{figure}[htbp!] \centering \includegraphics[width=0.35\textwidth]{Fig1.pdf} \caption{(a) Cross-sectional schematic of a two-dimensional paddle metallization deposited on a semi-infinite substrate. (b) Transformed geometry through conformal mapping from the $x-y$ plane to the $\xi-\eta$ plane using (\ref{eq:eqcon1}). The x on the bottom dotted line denotes the point corresponding to $-\infty$ and $+\infty$ along the $x$-axis from Fig. \ref{fig:padconform}a.} \label{fig:padconform} \end{figure} \section{Coplanar capacitors} \label{section:paddles} Let us first analyze the shunting capacitors associated with a transmon qubit \cite{Gambetta16}, \cite{Koch07}, \cite{Rigetti12}. A cross-sectional schematic of the geometry is shown in Fig. \ref{fig:padconform}, where we assume a semi-infinite substrate upon which two perfect-electric-conductor (PEC) metallization features with zero thickness reside. This approach ignores the effects of finite metallization thickness and substrate recess due to etching, which can impact overall capacitance \cite{Yang98} and loss \cite{Bruno15} in resonators. Under a quasi-static approximation, an electrostatic condition, in which opposite potentials $+\phi_0$ and $-\phi_0$ exist on the features, can be used to calculate the electric field energy. This scenario is analogous to the odd mode generated along a coplanar stripline design. Based on conformal mapping \cite{Wen69}, \cite{Gao08}, the structure in Fig.
\ref{fig:padconform}a can be transformed from the complex $z$-plane $(z = x + i y)$ to the $w$-plane $(w = \xi + i \eta)$ into a parallel-plate capacitor (Fig. \ref{fig:padconform}b): \begin{equation} \label{eq:eqcon1} w = \int{\frac{dz}{\sqrt{(z^2-a^2)(z^2-b^2)}}} \end{equation} where $a$ is half of the distance between the metallization features and $b$ is half of the distance between the outer edges of the metallization features. On the $w$-plane, the metallization width is equal to $\frac{K'(k)}{b}$ with $K'(k) = K(k')$ referring to the complementary complete elliptic integral of the first kind, $k' = \sqrt{1-(\frac{a}{b})^2}$. The form of the electric field in Fig. \ref{fig:padconform}b is aligned parallel to the $\xi$ axis and is equal to the difference in potential ($2\phi_0$) divided by the distance between the metallic features $\frac{2 K(k)}{b}$ where $k = \frac{a}{b}$: \begin{equation}\label{eq:efieldw} E_\xi - i E_\eta = -\frac{\phi_0 b}{K(k)} \end{equation} which can also be written in the form of a complex potential: \begin{equation}\label{eq:phiw} \phi(\xi) = \frac{\phi_0 b}{K(k)} \xi \end{equation} To transform this electric field back into the $z$-plane, we can use the following relation \cite{Gillick93} \begin{equation}\label{eq:phider} E_x - i E_y = -\frac{\partial \phi}{\partial z} = -\frac{\partial \phi(\xi)}{\partial \xi} \frac{\partial \xi}{\partial w} \frac{\partial w}{\partial z} \end{equation} to arrive at: \begin{equation}\label{eq:exiey} E_x - i E_y = -\frac{\phi_0 b}{K(k)}\frac{1}{\sqrt{(z^2-a^2)(z^2-b^2)}} \end{equation} \section{Energy} \label{section:energy} To calculate the electric field energy residing in a two-dimensional volume within the coplanar design (Fig. \ref{fig:padconform}a), we note that the electric field is merely the negative gradient of the potential $\phi(z)$.
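As a numerical sanity check of (\ref{eq:exiey}) (a sketch only; the geometry values are illustrative and $K$ is computed via the arithmetic-geometric mean rather than a library routine):

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean of a and b."""
    while abs(a - b) > tol * a:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

def K(k):
    """Complete elliptic integral of the first kind, K(k) = pi / (2 * agm(1, k'))."""
    return math.pi / (2 * agm(1.0, math.sqrt(1.0 - k * k)))

def E_field(x, y, a=10e-6, b=70e-6, phi0=1.0):
    """E_x - i*E_y of Eq. (exiey) for half-gap a and half-width b (meters), phi0 (volts)."""
    z = complex(x, y)
    return -phi0 * b / K(a / b) / ((z * z - a * a) * (z * z - b * b)) ** 0.5
```

On the substrate surface between the electrodes the field is purely along $x$ and grows without bound as the edges $x=\pm a$ are approached, consistent with the $r^{-0.5}$ scaling noted in the introduction.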
Therefore, a solution to the volume integral \begin{equation} \label{eq:green1} \int_{V} {|\vec{E}|^2 dV} = \int_{V} {\nabla \phi \cdot \nabla \phi dV} \end {equation} can be simplified to a surface integral using Green's first identity if $\phi$ satisfies Laplace's equation $\nabla^2 \phi = 0$ \begin{equation} \label{eq:green2} \int_{V} {\nabla \phi \cdot \nabla \phi dV} = -\oint_{S_i}{\phi \vec{E} \cdot \hat{n} dS} \end{equation} where $\hat{n}$ represents the outward unit normal vector along the surface. We can use (\ref{eq:exiey}) to determine not only the electric field along this contour but also the potential since $\phi(z_2) - \phi(z_1) = -\int_{z_1}^{z_2}{\vec{E} \cdot dl}$ where $l$ corresponds to the path difference between arbitrary points $z_1$ and $z_2$ (see Appendix \ref{section:appcpc}). By transforming the calculation of the electric field energy into an integration over a closed contour $S_i$ which does not need to contain the singularities at the edges of the metallization ($x = \pm a, \pm b$), (\ref{eq:green2}) will converge. Thus, the electric field energy within an arbitrary volume with relative dielectric constant $\epsilon_r$ can be represented by: \begin{equation}\label{eq:udef} U_i = -\frac{\epsilon_0 \epsilon_r}{2} \oint_{S_i}{\phi \vec{E} \cdot \hat{n} dS} \end{equation} where the subscript $i$ refers to a particular domain residing within the substrate. For example, $U_{SM}$ and $U_{SA}$ refer to the energy in regions extending a thickness $\delta$ below the $SM$ and $SA$ interfaces, respectively, $U_{sub}$ refers to that contained within a thickness $\delta$ extending below the entire substrate top surface and $U_{tot}$ the energy of the system. Note that for two-dimensional domains, the line integral in (\ref{eq:udef}) is evaluated about a clockwise loop. \begin{figure}[htbp!] 
\centering \includegraphics[width=0.35\textwidth]{Fig2.pdf} \caption{(a) Calculation of electric field energy, U, within a rectangular region between the capacitor paddles. The contribution corresponding to the y-axis (labelled 1) is zero because the potential is zero and the contribution along the x-axis (labelled 2) is also zero because the component of the electric field normal to the surface is zero. Only surfaces 3 and 4 must be evaluated using (\ref{eq:udef}). (b) Calculation of U for a rectangular region directly under one paddle, corresponding to half of $U_{SM}$, where all four surfaces contribute to the integral.} \label{fig:surfint} \end{figure} Figs. \ref{fig:surfint}a and \ref{fig:surfint}b depict the methodology for calculating $U$ in a rectangular region within the substrate between the capacitor paddles and directly underneath a paddle, respectively. For the surface of a rectangular region aligned with the $x$ and $y$ axes, $\vec{E} \cdot \hat{n}$ simply refers to the corresponding component of $\vec{E}$ normal to that surface. The boundary conditions of $\phi$ allow us to further simplify the integral by noting that the potential is constant under the metallization ($\phi(x,0) = +\phi_{0}$ for $a \leq x \leq b$ and $-\phi_{0}$ for $-b \leq x \leq -a$), the potential field is antisymmetric with respect to $x$ $(\phi(0,y) = 0)$ and is zero at $\infty$. These boundary conditions also allow us to directly calculate the total electric field energy, $U_{tot}$, of the system within Fig.
\ref{fig:padconform}a by defining two volumes: one that encompasses the entire semi-infinite substrate and one enclosing the semi-infinite vacuum, and noting that the only contribution to (\ref{eq:udef}) comes from the surfaces adjacent to the electrodes: \begin{equation}\label{eq:utotdef} U_{tot} = \epsilon_0 \left( \epsilon_{sub} + 1 \right) \left(\phi_0 \right)^2 \frac{K(k')}{K(k)} \end{equation} Because the units in (\ref{eq:utotdef}) are Joules / meter due to the two-dimensional analysis of the electric fields, the total energy can be determined by multiplying $U_{tot}$ by the length of the capacitor, $l_0$. In the case of interdigitated capacitor paddles, $l_0$ represents an effective length based on the aspect ratio of the paddles. Although the paddle corners are neglected in the calculation of (\ref{eq:utotdef}), their contribution to $U_{tot}$, which is proportional to the overall capacitance, is estimated to be less than $2 \%$ \cite{Alley70}. \begin{figure}[htbp!] \centering \includegraphics[width=0.4\textwidth]{Fig3.pdf} \caption{(a) Comparison of electric field energy per unit length within a volume of thickness $\delta$ from the top surface of a dielectric substrate $\left(U_{sub}\right)$ or only under the substrate-to-metal regions $\left(U_{SM}\right)$ normalized by the total electric field energy $\left(U_{tot}\right)$, as calculated by FEM and by the analytical model (\ref{eq:udef}) and (\ref{eq:utotdef}). The paddle dimensions are $a$ = 10 $\mu$m and $b$ = 70 $\mu$m and substrate relative dielectric constant is 11.45. (b) Ratio of $U_{SM}$ to $U_{sub}$ from the analytical model and from FEM simulations. } \label{fig:uivsutot} \end{figure} Fig. \ref{fig:uivsutot}a depicts $U_{sub}$ and $U_{SM}$ normalized by $U_{tot}$ as calculated by the surface integral model using Mathematica (Wolfram Research, Inc., Champaign, IL, USA) and by FEM using HFSS (Ansys, Inc. 
Canonsburg, PA, USA), where the paddle dimensions are $a$ = 10 $\mu$m and $b$ = 70 $\mu$m and the substrate relative dielectric constant is 11.45. The two methods diverge in their calculation of $U_{sub} / U_{tot}$ for values of $\delta$ less than 0.1 $\mu$m but both approach $\epsilon_{sub} / \left( \epsilon_{sub}+1 \right) \approx 0.92$ at large $\delta$. A similar discrepancy in $U_{SM} / U_{tot}$ is observed in Fig. \ref{fig:uivsutot}a between the two methods for small values of $\delta$. As can be seen in Fig. \ref{fig:uivsutot}b, the ratio of $U_{SM}$ to $U_{sub}$ is predicted to be 0.5 for $\delta \leq 1 \mu$m by the analytical model and increases slightly for larger values of $\delta$, with the remainder corresponding to the electric field energy under the free surface of the substrate, $U_{SA}$. It is clear that for thin volumes the analytical model gives a better representation of the electric field energy than the finite element method. The reasons for the observed differences between the two models involve the singularities in the electric field intensity at the metallization corners, and the difficulty in accurately capturing their effects when discretizing domains in which large disparities between $\delta$ and the lateral dimensions of the paddles exist \cite{Wenner11}, \cite{Wang15}, \cite{Gambetta16}. These complications are exacerbated in regions that are adjacent to such singularities and possess thicknesses less than 10 nm, both of which are relevant to the calculation of surface participation. \section{Participation} \label{section:partbc} The approach described in Section \ref{section:energy} is applicable to calculate the electric field energy in any prescribed volume.
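The limiting value quoted above follows directly from (\ref{eq:utotdef}), which is straightforward to evaluate with standard special-function libraries. A minimal Python sketch (parameters as in Fig. \ref{fig:uivsutot}; SciPy's \texttt{ellipk} and \texttt{ellipkm1} take the parameter $m = k^2$, so $K'(k)$ is \texttt{ellipkm1(k**2)}):

```python
from scipy.special import ellipk, ellipkm1

eps0 = 8.8541878128e-12    # F/m, vacuum permittivity
eps_sub = 11.45            # substrate relative dielectric constant (from the text)
a, b = 10e-6, 70e-6        # m, paddle dimensions (from the text)
phi0 = 1.0                 # V; cancels out of all participation ratios
k = a / b

# Eq. (utotdef): total electric-field energy per unit length, in J/m.
# K'(k) = K(k') is computed as ellipkm1(k**2) = ellipk(1 - k**2).
U_tot = eps0 * (eps_sub + 1) * phi0**2 * ellipkm1(k**2) / ellipk(k**2)

# fraction of the energy residing in the substrate at large depth
print(eps_sub / (eps_sub + 1))  # ~0.92, the large-delta limit quoted above
```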
However, to calculate the volume participation due to thin layers with dielectric constants that differ from that of the substrate, we can utilize the matching of boundary conditions at the interface between the contamination layer and the substrate \cite{Wenner11},\cite{Sandberg12},\cite{Gambetta16},\cite{Dial16}: \begin{eqnarray} E_x^{c} &=& E_x^{i} \label{eq:eqn1} \\ \epsilon_{c} E_y^{c} &=& \epsilon_{i} E_y^{i} \label{eq:eqn2} \end{eqnarray} where the superscript $c$ refers to the hypothetical contamination layer and $i$ the actual dielectric material present in the simulation (e.g., silicon for the substrate surfaces or vacuum for the free surfaces). Within a region close to the top surface of the substrate $(y \ll a)$, the electrostatics dictate that underneath the electrodes $E_y^{c} \approx \frac{\epsilon_i}{\epsilon_c} E_y^{i}$ represents the dominant contribution to the electric field whereas $E_x^{c} \approx E_x^{i}$ dominates in the regions without metallization. In this way, we can approximate the participation in contamination layers below the paddle metallization $(U_{SM})$, above the paddle metallization $(U_{MA})$ and along the free substrate surface $(U_{SA})$ in the following manner: \begin{eqnarray} P_{SM} & \approx & \left( \frac{\epsilon_{sub}}{\epsilon_{c:SM}} \right)^2 \frac{U_{SM}^c}{U_{tot}} \label{eq:psm0} \\ P_{SA} & \approx & \frac{U_{SA}^c}{U_{tot}} \label{eq:psaeq0} \\ P_{MA} & \approx & \left( \frac{1}{\epsilon_{c:MA}} \right)^2 \frac{U_{SM}^c}{U_{tot}} \label{eq:pma0} \end{eqnarray} where $U_i^c$ refers to the surface integral in (\ref{eq:udef}) over the corresponding volumes with relative dielectric constant $\epsilon_{c:i}$. Note that it is assumed that the contribution of the electric field energy within the contamination layers is much smaller than that of the entire system.
From (\ref{eq:udef}), we can also express $U_i^c$ as $\left( U_i \epsilon_{c:i} \right) / \epsilon_r$ so that (\ref{eq:psm0}) to (\ref{eq:pma0}) can be simplified to form: \begin{eqnarray} P_{SM} & \approx & \left( \frac{\epsilon_{sub}}{\epsilon_{c:SM}} \right) \frac{U_{SM}}{U_{tot}} \label{eq:psm} \\ P_{SA} & \approx & \left( \frac{\epsilon_{c:SA}}{\epsilon_{sub}} \right) \frac{U_{SA}}{U_{tot}} \label{eq:psaeq} \\ P_{MA} & \approx & \left( \frac{1}{\epsilon_{c:MA} \epsilon_{sub}} \right) \frac{U_{SM}}{U_{tot}} \label{eq:pma} \end{eqnarray} where the following relation, $U_{MA} = U_{SM} / \epsilon_{sub}$, was incorporated into (\ref{eq:pma}) due to the symmetry of the electric field distributions above and below the metallization. Because we can infer that $U_{SM} \approx U_{SA}$ for values of $\delta / a < 0.1$ from Fig. \ref{fig:uivsutot}b, we can establish approximate ratios of the various surface participation components. \begin{eqnarray} \frac{P_{SM}}{P_{SA}} & \approx & \frac{ \left( \epsilon_{sub} \right)^2}{\epsilon_{c:SM} \epsilon_{c:SA}} \label{eq:psmpsa} \\ \frac{P_{MA}}{P_{SM}} & \approx & \frac{ \epsilon_{c:SM}}{\epsilon_{c:MA} \left( \epsilon_{sub} \right)^2} \label{eq:pmasm} \end{eqnarray} Equations (\ref{eq:psmpsa}) and (\ref{eq:pmasm}) demonstrate that the surface participation components can be treated as linearly dependent for small values of $\delta / a$ \cite{Wang15}, \cite{Calusine18}. \begin{figure}[htbp!] \centering \includegraphics[width=0.45\textwidth]{Fig4.pdf} \caption{Comparison of substrate-to-metal (SM), substrate-to-air (SA) and metal-to-air (MA) participation values as a function of contamination layer thickness, $\delta$, all with relative dielectric constant $\epsilon_c = 5.0$. The symbols correspond to values calculated numerically using (\ref{eq:psm}) to (\ref{eq:pma}) and the dashed lines the approximate formulation (\ref{eq:psa}) combined with (\ref{eq:psmpsa}), (\ref{eq:pmasm}). 
The paddle dimensions are $a$ = 10 $\mu$m and $b$ = 70 $\mu$m and substrate relative dielectric constant is 11.45.} \label{fig:piall} \end{figure} The symbols depicted in Fig. \ref{fig:piall} represent the participation values calculated numerically using the surface integral approach (\ref{eq:psm}) to (\ref{eq:pma}) as a function of contamination layer thickness, $\delta$, for a paddle design with $a$ = 10 $\mu$m and $b$ = 70 $\mu$m, $\epsilon_{sub} = 11.45$ and equal relative dielectric constants, $\epsilon_c = 5.0$, among all of the contamination layers. It is clear that the SM participation represents the largest contribution and that the MA participation is over two orders of magnitude less: $(1/\epsilon_{sub})^2$ from (\ref{eq:pmasm}). The finding of small $P_{MA}$ values relative to $P_{SM}$ or $P_{SA}$ is consistent with the calculations of \cite{Wenner11} and \cite{Wang15}. However, the ratio of $P_{SM}$ to $P_{SA}$ of approximately 5.2, $ (\epsilon_{sub} / \epsilon_c)^2$ from (\ref{eq:psmpsa}), differs from \cite{Wenner11} and \cite{Wang15} primarily due to the values of relative dielectric constants assumed among the various simulations. Although the true compositions of the surface contamination layers are not known, a value for $\epsilon_c$ of 5.0 would be representative of organic residue or silicon oxide, which we believe to remain on the top surface of silicon substrates. Other simulations have used $\epsilon_c$ values of 10.0 which were equal to the corresponding substrate dielectric constants \cite{Wenner11}, \cite{Wang15}. From (\ref{eq:psm}) and (\ref{eq:psaeq}), we see that identical values of $\epsilon_c$ and $\epsilon_{sub}$ should produce ratios of $P_{SM}$ to $P_{SA}$ of approximately 1. \section{Participation approximation} \label{section:approx} These results can be compared to a much simpler representation based on surface participation as follows. 
We can directly integrate (\ref{eq:exiey}) with respect to $x$ at a specific value of $y$, restricting its evaluation to the region $y \ll a$. As derived in Appendix \ref{section:appapprox}, $r_i (y / a)$ within a contamination layer with relative dielectric constant $\epsilon_c$ and underneath the SA interface can be represented as a logarithmic function with respect to the ratio $y/a$: \begin{eqnarray} & & r_{SA}^c \left(\frac{y}{a} \right) = \frac{\epsilon_c}{(\epsilon_{sub} + 1)} \frac{1}{2a(1-k)K'(k) K(k)} \nonumber \\ & & \cdot \left\{ \ln \left[ 4 \left( \frac{1-k}{1+k} \right) \right] - \frac{k \ln(k)}{(1+k)} - \ln \left( \frac{y}{a} \right) \right \} \label{eq:rsa} \end{eqnarray} It is straightforward to integrate this equation with respect to $y$ to derive an approximate formulation of the surface participation within a volume of thickness $\delta$ from the top surface of the substrate: \begin{eqnarray} P_{SA} \left(\frac{\delta}{a} \right) \approx \int_0^\delta r_{SA}^c \left(\frac{y}{a} \right) dy \nonumber \\ = \frac{\epsilon_c}{(\epsilon_{sub} + 1)}\frac{1}{2(1-k)K'(k) K(k)} \cdot \nonumber \\ \left( \frac{\delta}{a} \right) \left\{ \ln \left[ 4 \left( \frac{1-k}{1+k} \right) \right] - \frac{k \ln(k)}{(1+k)}+1 - \ln \left( \frac{\delta}{a} \right) \right \} \label{eq:psa} \end{eqnarray} Participation under the various surfaces can then be calculated using the same matching procedure with respect to the electric fields with and without a contamination layer.
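Equation (\ref{eq:psa}) is simple to code. The sketch below (Python; the parameter values are the hypothetical ones used elsewhere in the text, and the SM value is obtained from the ratio (\ref{eq:psmpsa})) also illustrates the weak dependence on $k$:

```python
import numpy as np
from scipy.special import ellipk, ellipkm1

def p_sa(delta_over_a, k, eps_c=5.0, eps_sub=11.45):
    """Approximate SA participation, Eq. (psa); valid for delta/a << 1."""
    K, Kp = ellipk(k**2), ellipkm1(k**2)   # K(k) and K'(k) = K(k')
    prefactor = eps_c / ((eps_sub + 1) * 2 * (1 - k) * Kp * K)
    bracket = (np.log(4 * (1 - k) / (1 + k))
               - k * np.log(k) / (1 + k)
               + 1 - np.log(delta_over_a))
    return prefactor * delta_over_a * bracket

# SM participation via the ratio (psmpsa): P_SM / P_SA = (eps_sub / eps_c)**2
p_sm = (11.45 / 5.0)**2 * p_sa(2e-4, k=1/7)   # e.g. delta = 2 nm, a = 10 um
```

Because of the $-\ln(\delta/a)$ factor, the result is only weakly sensitive to $k$ at fixed $\delta/a$.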
For example, participation associated with the SM interface is calculated by combining (\ref{eq:psmpsa}) with (\ref{eq:psa}): \begin{eqnarray} \label{eq:psmapp} P_{SM} \left(\frac{\delta}{a} \right) \approx \frac{\epsilon_{sub}^2}{\epsilon_c \left( \epsilon_{sub}+1 \right)} \frac{1}{2(1-k)K'(k) K(k)} \cdot \nonumber \\ \left( \frac{\delta}{a} \right) \left\{ \ln \left[ 4 \left( \frac{1-k}{1+k} \right) \right] - \frac{k \ln(k)}{(1+k)}+1 - \ln \left( \frac{\delta}{a} \right) \right \} \end{eqnarray} As seen in Fig. \ref{fig:piall}, the dashed lines corresponding to (\ref{eq:psa}) combined with (\ref{eq:psmpsa}) and (\ref{eq:pmasm}) closely match the numerically calculated values using the surface integral formulation. Because these curves appear linear in Fig. \ref{fig:piall}, a power law fitting can be performed with an exponent value of approximately 0.88, also arrived at in \cite{Gao08} for CPW structures. However, as shown in (\ref{eq:rsa}) and (\ref{eq:psa}), participation is represented by the integral of a function with a logarithmic dependence on $y / a$ which accounts for the deviation from an exponent of 1 and is not due to effects of finite thickness of the metallization \cite{Gao08}. In fact, the value of this exponent will vary depending on the range in $\delta / a$ considered in such a fitting. \begin{figure}[htbp!] \centering \includegraphics[width=0.45\textwidth]{Fig5.pdf} \caption{Calculation of substrate-to-air (SA) participation values as a function of the ratio of contamination layer thickness, $\delta$, to $a$ with relative dielectric constant $\epsilon_c = 5.0$.} \label{fig:psavsb} \end{figure} Based on (\ref{eq:psa}), universal curves of participation can now be generated for a given value of $k$ based on the capacitor design dimensions. Fig. \ref{fig:psavsb} depicts SA participation as a function of the dimensionless ratio of the contamination layer thickness to the paddle spacing, $\delta / a$, for several values of $b$. 
As $k$ decreases from 1/3, corresponding to equal paddle width and spacing values, to 1/7, where the paddle widths are three times as large as their spacing, $P_{SA}$ only decreases by approximately 38\%, demonstrating a weak dependence on $k$. As shown in Fig. \ref{fig:psavsb}, the approximate formulation provides a close match (less than 0.5\% difference for $\delta / a$ = 0.1) to the participation values calculated by numerical surface integration. \section{Coplanar waveguides} \label{section:cpw} The metallization associated with a CPW design, located on the top surface of a dielectric substrate, is inverted relative to that of Fig. \ref{fig:padconform}a. As shown in Fig. \ref{fig:figcpw}a, a potential of $+\phi_0$ is applied to the center conductor, which resides between $x = -a$ and $x = a$, and the groundplane (at zero potential) spans the distance from $b$ to $\infty$ and from $-b$ to $-\infty$. \begin{figure}[htbp!] \centering \includegraphics[width=0.35\textwidth]{Fig6.pdf} \caption{(a) Cross-sectional schematic of a two-dimensional CPW metallization deposited on a semi-infinite substrate. (b) Transformed geometry through conformal mapping from the $x - y$ plane to the $\xi-\eta$ plane using (\ref{eq:eqcon1}). The x on the bottom metallization denotes the point corresponding to $-\infty$ and $+\infty$ along the $x$-axis from Fig.
\ref{fig:figcpw}a.} \label{fig:figcpw} \end{figure} Although the same conformal mapping from the $z$-plane to the $w$-plane (\ref{eq:eqcon1}) can be applied, the electric field corresponding to the even mode of the CPW is aligned parallel to the $\eta$ axis, taking the form: \begin{equation} \label{eq:ecpww} E_\xi - i E_\eta = -i \frac{\phi_0 b}{K'(k)} \end{equation} with the following complex potential distribution: \begin{equation} \label{eq:phicpww} \phi(\eta) = \phi_0 \left[1- \frac{b}{K'(k)} \eta \right] \end{equation} The transformed electric field distribution in the $z$-plane can be represented as: \begin{eqnarray} E_x - i E_y & = & -\frac{\partial \phi \left( \eta \right)}{\partial \eta} \frac{\partial \eta}{\partial w} \frac{\partial w}{\partial z} \nonumber \\ & = & -i \frac{\phi_0 b}{K'(k)}\frac{1}{\sqrt{(z^2-a^2)(z^2-b^2)}} \label{eq:exieycpw} \end{eqnarray} The same procedure for calculating the electric field energy through surface integration can be employed, where the electric field distribution and corresponding potential come from (\ref{eq:exieycpw}). Note that the total energy per unit length of the CPW has a different value than in the case of the coplanar capacitors (\ref{eq:utotdef}): \begin{equation}\label{eq:utotcpw} U_{tot}^{CPW} = \epsilon_0 \left( \epsilon_{sub} + 1 \right) \left(\phi_0 \right)^2 \frac{K(k)}{K(k')} \end{equation} Participation values for a CPW structure possess the same form as those shown in (\ref{eq:psm}) to (\ref{eq:pma}), where the surface integral is performed over the appropriate regions (e.g., $ a \leq |x| \leq b $ for SA). It is of interest to note that the analytical approximation (\ref{eq:psa}) holds for both coplanar capacitors and waveguides, as shown in Appendix \ref{section:appapprox}.
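The two total-energy expressions can be compared directly; since (\ref{eq:utotcpw}) simply inverts the factor $K(k')/K(k)$ of (\ref{eq:utotdef}), their product is independent of $k$. A minimal Python sketch (material parameters from the text, arbitrary $\phi_0$):

```python
from scipy.special import ellipk, ellipkm1

eps0, eps_sub, phi0 = 8.8541878128e-12, 11.45, 1.0  # F/m, -, V (phi0 arbitrary)

def u_cap(k):
    """Eq. (utotdef): coplanar-capacitor energy per unit length, J/m."""
    return eps0 * (eps_sub + 1) * phi0**2 * ellipkm1(k**2) / ellipk(k**2)

def u_cpw(k):
    """Eq. (utotcpw): CPW even-mode energy per unit length, J/m."""
    return eps0 * (eps_sub + 1) * phi0**2 * ellipk(k**2) / ellipkm1(k**2)

# for a narrow center conductor (k = a/b < 1/sqrt(2)), K(k) < K'(k), so the
# CPW stores less energy per unit length than the capacitor at the same phi0
print(u_cap(1/7), u_cpw(1/7))
```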
\section{Comparison} It is instructive to compare the method of calculating surface participation presented in Section \ref{section:approx} to those simulated using different approaches in the previous literature. Fig. \ref{fig:comp} depicts calculations of the SM surface participation based on the approximate formulation (\ref{eq:psmapp}) on the horizontal axis, and the corresponding values based on various untrenched, coplanar designs extracted from \cite{Wenner11} and \cite{Wang15} on the vertical axis. The latter reference contains a survey of $P_{SM}$ values that were calculated for CPW resonators and qubits based on geometries reported in \cite{Chang13}, \cite{Barends13} and \cite{Chow12}. For these simulations, both \cite{Wenner11} and \cite{Wang15} assume equal values for the material and geometric parameters ($\delta$ = 3 nm, $\epsilon_{c:SM}$ = 10) and $\epsilon_{sub}$ = 10 or 11.7 for structures fabricated on sapphire or silicon substrates, respectively. Qubits with interdigitated capacitors were modeled using a finger width of $2a$ and gap between fingers of $b-a$. Likewise, CPW resonators possessed centerline conductor widths of $2a$ and gaps of $b-a$. As shown in Fig. \ref{fig:comp}, the calculated values from (\ref{eq:psmapp}) replicate those reported in the prior literature, despite the assumptions of a two-dimensional design and metallization features with zero thickness. There is a good match between (\ref{eq:psmapp}) and the approach of \cite{Wenner11}, which employs a complex combination of FEM simulations and modifications to the magnitude of the electric fields near the metallization corners, and the method of \cite{Wang15}, which stitches together multiple FEM models at different length scales. \begin{figure}[htbp!]
\centering \includegraphics[width=0.45\textwidth]{Fig7.pdf} \caption{Comparison of calculated surface participation values, $P_{SM}$, using (\ref{eq:psmapp}) to those predicted in \cite{Wenner11}, \cite{Wang15} for various CPW resonator and qubit designs. The latter reference contains geometries listed in \cite{Chang13},\cite{Barends13},\cite{Chow12}. The dashed line corresponds to a ratio of 1.} \label{fig:comp} \end{figure} \section{Application to a transmon qubit} Simulations of qubit architectures often require the combination of electric field distributions conducted over a large scale, comparable in size to the substrate, with finer calculations local to the junction region. Because of the large disparity in length scales associated with these two regimes, it is often impractical to use one simulation to account for all effects, and schemes that stitch together two different FEM models may be required \cite{Wang15}. However, such approaches are still susceptible to errors arising from singularities in the electric field near metallization corners as well as those emanating from overlapping two different solutions over an arbitrarily defined transition region. By using the analytic models presented in the previous sections, we can avoid these issues and employ the quasi-static approximation to match the electric potential, for example, at the intersection of the shunting capacitor paddles and the junction leads. The overall surface participation represents a linear superposition of the effects from these regions. For transmon qubit designs that incorporate a large shunting capacitance (such as those in \cite{Gambetta16}), the large-scale electric field distributions will be dominated by these coplanar capacitors, which can be interdigitated or monolithic in shape. A model geometry is depicted in the top-down schematic of Fig. \ref{fig:qtotab}a, which contains the qubit junction, leads and the edges of the adjacent capacitor paddles.
In this example, the geometry of the junction leads is assumed to be composed of 6 individual coplanar sections, each with a width of $2a_i$, length $l_i$, and distance from the centerline of the leads to the grounding plane $b_i$. A symmetric design with respect to the junction position is considered, consisting of a 2 $\mu$m wide, 7 $\mu$m long section (\#1 and \#6), a 0.5 $\mu$m wide, 2 $\mu$m long section (\#2 and \#5) and a 0.1 $\mu$m wide, 1 $\mu$m long section (\#3 and \#4) adjacent to the junction. We can use the results of Sections \ref{section:energy} to \ref{section:cpw} to calculate the corresponding surface participation. Let us assume that SM participation represents the dominant surface loss mechanism \cite{Wisbey10}, where a 2 nm thick contamination layer with relative dielectric constant $\epsilon_c = 5.0$ and loss tangent of $4.8 \cdot 10^{-4}$ (see Appendix \ref{section:applosstan}) resides between the metallization and the silicon substrate. In the quasi-static approximation shown in Fig. \ref{fig:qtotab}a, a positive and negative potential of magnitude $\phi_0$ exists on either capacitor paddle, similar to that shown in Fig. \ref{fig:padconform}a. The value of $\phi_0$ does not need to be known to determine surface participation since $\phi_0$ cancels from the numerator and denominator in (\ref{eq:psm}) to (\ref{eq:pma}) for the analytical calculation or (\ref{eq:psmapp}) for the analytical approximation. The corresponding quality factor, Q, is equal to the inverse of the product of the SM participation and the loss tangent of the contamination layer, tan$(\delta_{SM})$. The dotted lines in Fig. \ref{fig:qtotab}b depict the Q values predicted as a function of paddle gap, $2a_0$, for two different values of $k_0$ based solely on SM participation. \begin{figure}[htbp!]
\centering \includegraphics[width=0.45\textwidth]{Fig8.pdf} \caption{(a) Top-down schematic of the portion of a qubit design containing the junction, leads and adjacent capacitor paddle edges. The coplanar paddles are assumed to be at an electrical potential of $+\phi_0$ and $-\phi_0$, respectively, with a gap size of $2a_0$. The junction leads are approximated by 6 rectangular segments, each with width $2a_i$, distance between the centerline of the leads and the edges of the grounding plane (not shown) $\pm b_i$ and length $l_i$. Calculation of the surface participation of the leads is accomplished through a piecewise integration along the individual segments using the CPW approximation (Fig. \ref{fig:figcpw}a), where the cross-section is aligned with the green solid line. (b) Calculated Q values due to SM participation (tan$\left(\delta_{SM}\right) = 4.8 \cdot 10^{-4}$), junction lead participation (tan$\left(\delta_{lead}\right) = 4.8 \cdot 10^{-3}$) and overall Q based on these contributions as a function of capacitor paddle gap ($2a_0$) for two different $k_0$ values. The contamination layer thickness is constant ($\delta = 2$ nm with a relative dielectric constant $\epsilon_c = 5.0$).} \label{fig:qtotab} \end{figure} The procedure outlined in Section \ref{section:cpw}, corresponding to a CPW design, can be extended to determine the contribution of the qubit junction leads to the overall participation. The green, solid vertical line in Fig. \ref{fig:qtotab}a corresponds to the cross-sectional plane represented in Fig. \ref{fig:figcpw}a where $2 a$ refers to the width of one section of the junction leads and $\pm b$ is the distance from the centerline of the leads to the grounding planes on either side of the qubit. 
If we again assume that SM participation dominates surface loss, the surface participation associated with junction leads that vary in width along their length can be calculated by summing contributions from each individual section: \begin{eqnarray} \label{eq:ptotleads} P_{lead}^{SM} & \approx & \frac{\sum_{i=1}^n P_{i}^{SM} U_i l_i}{U_{tot} l_0} \nonumber \\ & = & \frac{ \epsilon_{0} \left( \epsilon_{sub}+1 \right) \left( \phi_0 \right)^2}{\frac{1}{2} C_q \left( 2 \phi_0 \right)^2} \sum_{i=1}^{n} P_{i}^{SM} \frac{K(k_i)}{K'(k_i)} l_i \nonumber \\ & \approx & \frac{\epsilon_{0} \left( \epsilon_{sub}+1 \right)}{2 C_q} K(0) \sum_{i=1}^{n} \frac{P_{i}^{SM} l_i}{K'(k_i)} \end{eqnarray} where the $i^{th}$ section possesses a length, $l_i$, width, $2 a_i$, and $k_i$ represents the ratio of the lead width to the distance between the edges of the grounding plane, $2 b_i = 2 b$. $U_i$ corresponds to the total energy per unit length as determined from (\ref{eq:utotcpw}) and $P_i^{SM}$ can be calculated by the approximate formulation (\ref{eq:psmapp}). In the limit of small $k$, $K(0)$ approaches $\frac{\pi}{2}$ and can be moved outside of the summation in the numerator of (\ref{eq:ptotleads}). The value of $2 \phi_0$ in the denominator arises from the fact that the capacitor paddles are at opposite polarities of magnitude $\phi_0$. If we assume a contamination layer in the vicinity of the leads with the same thickness and relative dielectric constant but larger loss tangent, tan$\left(\delta_{lead}\right)$ of $4.8 \cdot 10^{-3}$, which is more representative of lift-off Al metallization (see Appendix \ref{section:applosstan}), then the Q value calculated only due to SM participation of the leads is $18.8 \cdot 10^{6}$. This value is plotted in Fig. 
\ref{fig:qtotab}b along with the total Q, calculated from the sum of the reciprocal Q values due to SM capacitor paddle and SM lead participation, indicating that surface loss near the junction has a minor impact on the overall Q values in transmon qubit designs, particularly those that incorporate smaller shunting capacitor dimensions. \section{Conclusion} In summary, we present a two-dimensional, analytical formulation for electric field energy within prescribed volumes, based on a surface integration of the potential and electric field, which provides better accuracy at capturing the effects of the singular behavior of the electric fields near the metallization edges than finite element method models. This approach has been applied to the calculation of participation at specific regions within coplanar capacitor and CPW designs, from which simple, closed-form expressions have been generated that approximate the participation of thin contamination layers. The resulting participation values can be used to combine the effects of global electric field distributions from large-scale features with those local to the junction environment to arrive at overall quality factors of qubit designs. \appendices \section{Derivation of analytic formulae for electric field and potential in coplanar designs}\label{section:appcpc} In the following appendix, we give the explicit forms of the electric potential and corresponding electric field distributions necessary to calculate the electric field energy within an arbitrary rectangular volume according to (\ref{eq:udef}). For a two-dimensional representation of a coplanar capacitor geometry, where the metallization, spanning from $a$ to $b$ and from -$a$ to -$b$, is assumed to be infinitely thin, as shown in Fig.
\ref{fig:padconform}a, the potential takes the form $(y > 0)$: \begin{equation}\label{eq:phieqn} \phi (z) = \frac{\phi_0}{K(k)} \Re \left\{ F\left[ \sin^{-1} \left( \frac{z}{a} \right) , k \right] \right\} \end{equation} where $\Re$ refers to the real part of the expression, $z = x + i y$ and $F$ is the incomplete elliptic integral of the first kind. Fig. \ref{fig:phiwdashed} depicts the potential distribution, as normalized by the applied potential $\pm \phi_0$, through the underlying dielectric substrate. \begin{figure}[htbp!] \centering \includegraphics[width=0.45\textwidth]{Fig9.pdf} \caption{Normalized electric potential $\phi(z) / \phi_0$ within the dielectric substrate as a function of position as calculated by (\ref{eq:phieqn}) for coplanar capacitor paddles where $a$ = 10 $\mu$m and $b$ = 70 $\mu$m. Dashed vertical lines denote the position of the paddles located at the top surface of the substrate. } \label{fig:phiwdashed} \end{figure} The $x$ and $y$ components of the electric field are directly derived from (\ref{eq:exiey}). However, their signs are affected by which branch cut is used in the complex $z$-plane. For cases where $x^2 \leq y^2 + \left( a^2 + b^2 \right) / 2$: \begin{eqnarray} & & E_x (z) = - \frac{\phi_0 b}{K(k)} \Re \left[ \frac{1}{\sqrt{(z^2-a^2)(z^2-b^2)}} \right] \nonumber \\ & & E_y (z) = \frac{\phi_0 b}{K(k)} \Im \left[ \frac{1}{\sqrt{(z^2-a^2)(z^2-b^2)}} \right]\label{eq:esmallx} \end{eqnarray} where $\Im$ refers to the imaginary part of the expression, or \begin{eqnarray} & & E_x (z) = \frac{\phi_0 b}{K(k)} \Re \left[ \frac{1}{\sqrt{(z^2-a^2)(z^2-b^2)}} \right] \nonumber \\ & & E_y (z) = -\frac{\phi_0 b}{K(k)} \Im \left[ \frac{1}{\sqrt{(z^2-a^2)(z^2-b^2)}} \right]\label{eq:ebigx} \end{eqnarray} when $x^2 > y^2 + \left( a^2 + b^2 \right) / 2$. For a CPW (Fig.
\ref{fig:figcpw}a), where the center conductor is at a potential of $\phi_0$, the potential distribution takes a different form: \begin{equation}\label{eq:phicpw} \phi (z) = \phi_0 \left(1-\frac{1}{K'(k)} \Im \left\{ F\left[ \sin^{-1} \left( \frac{z}{a} \right) , k \right] \right\} \right) \end{equation} which is displayed in Fig. \ref{fig:cpwpot}. \begin{figure}[htbp!] \centering \includegraphics[width=0.45\textwidth]{Fig10.pdf} \caption{Normalized electric potential $\phi(z) / \phi_0$ within the dielectric substrate as a function of position as calculated by (\ref{eq:phicpw}) for a coplanar waveguide with $a$ = 10 $\mu$m and $b$ = 70 $\mu$m. Dashed vertical lines denote the position of the metallization edges located at the top surface of the substrate. } \label{fig:cpwpot} \end{figure} The orthogonal components of the electric field can be represented by: \begin{eqnarray} & & E_x (z) = \frac{\phi_0 b}{K'(k)} \Im \left[ \frac{1}{\sqrt{(z^2-a^2)(z^2-b^2)}} \right] \nonumber \\ & & E_y (z) = \frac{\phi_0 b}{K'(k)} \Re \left[ \frac{1}{\sqrt{(z^2-a^2)(z^2-b^2)}} \right]\label{eq:ecpwsmallx} \end{eqnarray} for $x^2 \leq y^2 + \left( a^2 + b^2 \right) / 2$ or \begin{eqnarray} & & E_x (z) = -\frac{\phi_0 b}{K'(k)} \Im \left[ \frac{1}{\sqrt{(z^2-a^2)(z^2-b^2)}} \right] \nonumber \\ & & E_y (z) = -\frac{\phi_0 b}{K'(k)} \Re \left[ \frac{1}{\sqrt{(z^2-a^2)(z^2-b^2)}} \right]\label{eq:ecpwbigx} \end{eqnarray} when $x^2 > y^2 + \left( a^2 + b^2 \right) / 2$. \section{Derivation of surface participation approximation formula}\label{section:appapprox} The previous appendix describes the calculation of electric field energy within a prescribed rectilinear region (\ref{eq:udef}), which holds for all volumes within a dielectric substrate but must be evaluated numerically. An analytical approximation can be generated for shallow depths relative to the in-plane dimensions of the metallization features.
Let us consider the description of the complex electric field (\ref{eq:exiey}) and determine the square of its norm, $|\vec{E}|^2$, as a function of position: \begin{eqnarray} |\vec{E}|^2 = \Re[\vec{E}]^2 + \Im[\vec{E}]^2 = \left(\frac{\phi_{0} b}{K(k)} \right)^2 \cdot \nonumber \\ \frac{1} {\sqrt{(x+a+iy)(x+a-iy)(x-a+iy)(x-a-iy)}} \cdot \nonumber \\ \frac{1}{\sqrt{(x+b+iy)(x+b-iy)(x-b+iy)(x-b-iy)}} \label{eq:appb1} \end{eqnarray} which, in the case of $0 < y \ll a$ exhibits finite, local maxima near $x = \pm a$ and $x = \pm b$. Equation (\ref{eq:appb1}) can be approximated using the auxiliary functions: \begin{equation}\label{eq:thetaa} \theta_a (x,y) = \left[\frac{\phi_{0} b}{K(k)}\right]^2\frac{1}{(b^2-x^2)(x+a)}\frac{1}{\sqrt{(x-a)^2+y^2}} \end{equation} near $x = a$ and: \begin{equation}\label{eq:thetab} \theta_b (x,y) = \left[\frac{\phi_{0} b}{K(k)}\right]^2\frac{1}{(x^2-a^2)(x+b)}\frac{1}{\sqrt{(x-b)^2+y^2}} \end{equation} near $x = b$. Let $\Theta$ refer to the integration of (\ref{eq:thetaa}) as a function of $x$ from $0$ to $a$ and (\ref{eq:thetab}) from $x = b$ to $\infty$: \begin{equation} \label{eq:psidef} \Theta(y) = \left[ \int_0^a \theta_a (x,y) dx + \int_b^\infty \theta_b (x,y) dx \right] \end{equation} corresponding to the SA regions. 
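The integrals in (\ref{eq:psidef}) are also straightforward to evaluate numerically; the sketch below (illustrative geometry, SciPy quadrature) does so directly from (\ref{eq:thetaa}) and (\ref{eq:thetab}).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk

a, b = 10e-6, 70e-6          # illustrative paddle edges [m]
phi0 = 1.0                   # applied potential [V]
k = a / b
pref = (phi0 * b / ellipk(k**2))**2   # [phi0 * b / K(k)]^2

def theta_a(x, y):
    return pref / ((b**2 - x**2) * (x + a) * np.hypot(x - a, y))

def theta_b(x, y):
    return pref / ((x**2 - a**2) * (x + b) * np.hypot(x - b, y))

def Theta(y):
    # The integrands are sharply peaked near x = a and x = b for small y,
    # so allow plenty of adaptive subdivisions and split the outer range.
    Ia, _ = quad(theta_a, 0.0, a, args=(y,), limit=200)
    Ib1, _ = quad(theta_b, b, 2 * b, args=(y,), limit=200)
    Ib2, _ = quad(theta_b, 2 * b, np.inf, args=(y,), limit=200)
    return Ia + Ib1 + Ib2
```

For $y \ll a$, $\Theta$ grows like $-\ln(y/a)$ with coefficient $[\phi_0/K(k)]^2 / [2a(1-k)]$, consistent with the closed form that follows.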
In the limit of $y \ll a$, (\ref{eq:psidef}) can be simplified to form: \begin{eqnarray} & & \Theta \left(\frac{y}{a} \right) \approx \left[ \frac{\phi_{0}}{K(k)} \right]^2 \frac{1}{[2a(1-(a/b)^2)]} \nonumber \\ & & \cdot \left\{ \left(1+ \frac{a}{b} \right) \left( \ln \left[ 4 \left( \frac{b-a}{b+a} \right) \right] -\ln \left[ \frac{y}{a} \right] \right) + \frac{a}{b} \ln \left[ \frac{b}{a} \right] \right\} \nonumber \\ & &= \left[ \frac{\phi_{0}}{K(k)} \right]^2 \frac{1}{2 a (1-k)} \nonumber \\ & & \cdot \left\{ \ln \left[ 4 \left( \frac{1-k}{1+k} \right) \right] - \ln \left[ \frac{y}{a} \right] - \frac{k \ln(k)}{1+k} \right\} \label{eq:saenergy} \end{eqnarray} Note that for small values of $y$, an integration of (\ref{eq:appb1}) over SA is equal to that over the SM surface. Equation (\ref{eq:saenergy}) allows us to represent surface participation at a specific depth, $y$ within a contamination layer with relative dielectric constant, $\epsilon_c$ as: \begin{equation} \label{eq:rsadef} r_{SA} \left( \frac{y}{a} \right) = \frac{\epsilon_0 \epsilon_c}{U_{tot}} \Theta \left( \frac{y}{a} \right) \end{equation} which can be simplified, using (\ref{eq:utotdef}), to form: \begin{equation}\label{eq:apprsa} r_{SA}\left(\frac{y}{a}\right) \approx \frac{\epsilon_c}{(\epsilon_{sub} + 1)} \left[C_1 + C_2 \ln \left(\frac{y}{a} \right) \right] \frac{1}{2 a K(k') K(k)} \end{equation} where \begin{eqnarray} C_1 & = & \frac{1}{(1-k)} \left\{ \ln \left[ 4 \left( \frac{1-k}{1+k} \right) \right] - \frac{k \ln(k)}{1+k} \right\} \nonumber \\ C_2 & = & - \frac{1}{(1-k)} \label{eq:constants} \end{eqnarray} The corresponding SM and MA surface participation values are dictated by the boundary conditions of the electric field, and have the following form when all of the contamination layers possess identical relative dielectric constants, $\epsilon_c$: \begin{eqnarray} r_{SM}\left(\frac{y}{a}\right) & \approx & \left(\frac{\epsilon_{sub}}{\epsilon_c}\right)^2 r_{SA}\left(\frac{y}{a}\right) 
\label{eq:apprsm} \\ r_{MA}\left(\frac{y}{a}\right) & \approx & \left(\frac{1}{\epsilon_c}\right)^2 r_{SA}\left(\frac{y}{a}\right) \label{eq:apprma} \end{eqnarray} Participation within the entire volume of a contamination layer of thickness $\delta$ below the metallization layer can be approximated by integrating these equations with respect to $y$: \begin{equation}\label{eq:apppi} P_i \left(\frac{\delta}{a}\right) = \int_0^\delta r_i \left( \frac{y}{a} \right) dy \end{equation} to arrive at the expression in (\ref{eq:psa}) for $P_{SA}$. Note that for a CPW structure, $K(k)$ must be replaced by $K'(k)$ in the denominator of the expressions for $\theta_a$ and $\theta_b$ (\ref{eq:thetaa}) and (\ref{eq:thetab}). However, using $U_{tot}^{CPW}$ from (\ref{eq:utotcpw}) in the denominator of (\ref{eq:rsadef}) results in exactly the same representation of $r_{SA}, r_{SM}$ and $r_{MA}$ as shown in (\ref{eq:apprsa}), (\ref{eq:apprsm}) and (\ref{eq:apprma}), respectively. \section{Estimation of dielectric loss tangents}\label{section:applosstan} The determination of loss tangent values for different surfaces was performed by comparing experimental Q values from qubits whose capacitor paddles were composed of sputter deposited Nb metallization with lift-off Al junction leads \cite{Gambetta16} or of completely lift-off Al metallization \cite{Chang13}. SM participation values were calculated using the analytical approximation of (\ref{eq:psmapp}), where the paddles possessed an interdigitated comb structure with equal linewidths and gaps of 1 or 5 $\mu$m for the Nb-based qubits and 5 or 30 $\mu$m for the Al qubits. As shown in Fig. \ref{fig:tandcalc}, if we assume that SM participation is the dominant mode of surface loss, then a loss tangent of $4.8 \cdot 10^{-4}$ produces Q values similar to that of the Nb qubits whereas a loss tangent of $4.8 \cdot 10^{-3}$ more closely resembles Q values associated with the Al qubits. \begin{figure}[htbp!] 
\centering \includegraphics[width=0.45\textwidth]{Fig11.pdf} \caption{Comparison of experimentally determined Q values with calculated surface participation values $P_{SM}$ for Nb-based qubits with Al-based junctions \cite{Gambetta16} and Al-based qubits \cite{Chang13}. The solid line corresponds to a loss tangent of tan($\delta_{SM}$) = $4.8 \cdot 10^{-4}$ and the dashed line to tan($\delta_{SM}$) = $4.8 \cdot 10^{-3}$ where it is assumed that SM surface participation is the dominant surface loss mechanism. } \label{fig:tandcalc} \end{figure}
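As a worked example of the approximation formulas of Appendix \ref{section:appapprox}, the layer-integrated participation can be computed in closed form, since $\int_0^\delta \ln(y/a)\,dy = \delta[\ln(\delta/a) - 1]$. The geometry and layer parameters in the sketch below are illustrative only.

```python
import numpy as np
from scipy.special import ellipk

# Illustrative geometry and contamination-layer parameters
a, b = 10e-6, 70e-6           # paddle edges [m]
eps_sub, eps_c = 11.45, 5.0   # substrate and layer dielectric constants
delta = 3e-9                  # layer thickness [m]

k = a / b
Kk = ellipk(k**2)             # K(k); SciPy's ellipk takes m = k**2
Kkp = ellipk(1 - k**2)        # K'(k) = K(k')

C1 = (np.log(4 * (1 - k) / (1 + k)) - k * np.log(k) / (1 + k)) / (1 - k)
C2 = -1.0 / (1 - k)

def r_SA(y):
    """Depth-resolved SA participation, per the approximation (eq:apprsa)."""
    return (eps_c / (eps_sub + 1)) * (C1 + C2 * np.log(y / a)) / (2 * a * Kkp * Kk)

# Closed-form layer integral of r_SA from 0 to delta (eq:apppi)
P_SA = delta * (eps_c / (eps_sub + 1)) * (C1 + C2 * (np.log(delta / a) - 1)) \
       / (2 * a * Kkp * Kk)

# SM and MA follow from the boundary-condition scalings
P_SM = (eps_sub / eps_c)**2 * P_SA
P_MA = (1.0 / eps_c)**2 * P_SA
```

The logarithmic divergence of $r_{SA}$ at $y = 0$ is integrable, so the closed form agrees with direct quadrature of $r_{SA}$ over the layer.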
\section{Introduction and motivation} The purpose of this paper is to study general relativistic effects of space-time curvature on fermionic spins, via an analog gravity black hole setup in the context of superfluid $\text{He}^3$-A. In particular, we will focus on such relativistic effects on fermionic quasiparticles in the latter system, which are effectively massless in the background of an analog gravity metric. These quasiparticles move with finite speed, which makes this analysis interesting, and we believe that this should have experimental significance. The motivation for this work comes from the fact that such analyses (in real black hole scenarios) involving curved space-times are difficult to envisage, especially in the context of massless fermions, which would then move with the speed of light. The analog gravity picture on the other hand offers a somewhat simpler situation to consider, from which useful physical insights can be gleaned. The relativistic computations here are carried out in coordinates that are locally flat along a geodesic trajectory. These are called Fermi normal coordinates. In general, spinning particles do not follow geodesic trajectories, but the deviations from the latter are known to be small. Since our computations are relevant for effectively massless fermions, we use null Fermi coordinates, and interpret the results as those that will be seen by a null observer who moves along a (null) geodesic trajectory, and performs a measurement on the massless fermions. In this work, we analyze the interaction Lagrangian for massless fermions in the background of an analog metric in null Fermi coordinates, and obtain an interaction term (the curvature coupling) via an {\it effective} magnetic field for these fermions that arises out of analog gravity effects. Standard analysis in quantum mechanics then implies interesting effects that should arise due to this coupling.
In order to obtain numerical estimates of our results, we use the uncertainty relation and an energy condition as applicable to low energy quasiparticles in $\text{He}^3$-A. The discussion above provides the motivation and methodology of the present work. Similar computations on effective magnetic field interactions with massive fermions involving Fermi normal coordinates are well known in the literature. However, to the best of our knowledge, massless fermions have not been analysed in a similar framework, as this would usually not have much physical relevance. We claim however that such analysis becomes important and interesting in analog gravity, in the context of effectively massless fermionic quasiparticles. This is the main idea developed in this paper. In the rest of this paper, we will analyse analog gravity coupling to massless fermionic quasiparticles in the context of an acoustic black hole. In the next section, we review the necessary background material, to set up the problem. Next, in section 3, we will discuss a possible scenario in which such a black hole can be realized in the laboratory. In section 4, we will first review the known physics of curvature coupling to gravity for massive particles, to set the notation and conventions. Next, we construct the relevant formulas for such couplings for massless particles, which involves determining the null Fermi coordinates. Section 5 deals with curvature couplings of massless fermions that travel in radial null geodesics, in the context of the black hole described in section 3. In section 6, we present similar results for null circular geodesics. In section 7, we provide numerical values that should be useful in an experimental setup. This is done by various estimates of relevant physical quantities. Section 8 concludes this work with a summary of the results and a limitation of our analysis.
\section{Overview and setup} Our interest in this paper is in analog gravity, which is known to mimic general relativistic effects, and is a useful tool in probing effects of gravity in table-top laboratory experiments. We will first briefly review this connection. The general theory of relativity (GR) \cite{Weinberg},\cite{Wald},\cite{Hartle},\cite{7} is the most successful theory of gravity to date. Several experimental tests of GR have been performed over the last century, and validation of theoretical predictions in all cases has put the theory on a very firm footing. An interesting arena of study has been the effect of gravity on (classical) spins. In this context, the geodetic (or de Sitter) effect and the Lense-Thirring frame dragging effect are well known, and the recent gravity probe B experiment \cite{GPB} confirms GR predictions on the same to a very high degree of accuracy. It is by now an established fact that one can mimic GR effects in condensed matter systems, the topic now being known as analog gravity. The importance of this subject lies in the fact that analog systems are relatively easier to probe than space-times with curvature, and may provide an easier route to detect gravitational effects in the laboratory, the ultimate aim being to understand possible quantum effects of gravity, via this indirect route. The analog mapping of gravitational effects in condensed matter systems originated from the celebrated work of Unruh in 1981 \cite{1},\cite{2}, where it was shown that the motion of elastic perturbations (like sound waves) in the background of moving normal liquids obeys the same equations as those of relativistic massless scalar fields moving in a $(3+1)$-dimensional curved space-time, the curvature being determined by an analog metric.
It was then realized that mimicking gravity with normal liquids might be somewhat difficult, as such liquids will contain dissipative effects that might destroy quantum effects related to the event horizon of the associated black hole. Therefore, in order to probe such quantum effects, one should more appropriately look at superfluids. It was then found \cite{3} that low-energy Bogoliubov quasiparticles in moving superfluid $\text{He}^3 $-A obey relativistic equations similar to those found by Unruh. The literature on analog gravity is rather large, and the subject continues to attract great interest in the community. Various effects like Hawking radiation etc. have been extensively investigated in this context, via several pioneering works. For a relatively recent overview, we refer the reader to the excellent review \cite{analog1}. For more recent works, see, e.g., \cite{analog2},\cite{analog3},\cite{analog4},\cite{analog5},\cite{analog6},\cite{analog7},\cite{ParthaDa}. In this paper, we consider massless fermions in static metrics, and compute the interaction of gravity with the fermion spin, in null Fermi coordinates. We believe that this is novel, and complements the results in the existing literature. The motivation for this work largely comes from considerations of analog gravity, and we consider a particular analog black hole setup involving superfluid $\text{He}^3 $-A. In textbook examples, when one considers spin, one usually refers to the vector spin of a gyroscope, or to the expectation value of a quantum spin state. However, coupling of gravity to (say) Dirac fermions assumes importance in the study of the hitherto less known relation between gravity and quantum mechanics. Such studies are also abundant in the literature. For example, the energy spectrum of the Hydrogen atom in a curved background was worked out in detail by Parker \cite{11},\cite{Parker2},\cite{12}, about three decades ago, and several similar works have since followed.
These are particularly important in the study of possible Lorentz and CPT violating gravitational interactions. In a previous paper \cite{13}, we studied massive Dirac fermions in the background of a generic static space-time. It is well known (see, e.g., \cite{8},\cite{9}) that in such a situation, the interaction Lagrangian in a suitable limit assumes the form ${\vec B}\cdot{\vec S}$, where ${\vec B}$ is an {\it effective} magnetic field introduced by gravitational interactions. This computation was done for observers in both radial and circular geodesics (in Fermi normal coordinates \cite{6}; see below), and it was shown in \cite{13} that the effective magnetic field can be large, within experimentally measurable ranges for a class of static backgrounds. The role of the observer is important in this context, and needs careful explanation. It is known (see, e.g., \cite{YeeBander} and references therein) that in general objects with spin do not follow geodesic paths (although the difference between the actual path of such objects and the corresponding geodesics is small), and the same is expected to hold for quantum spins as well. For massive fermions, one has to therefore imagine a hypothetical timelike observer who moves along a geodesic, and does measurements on a fermionic system in her reference frame. This observer has to set up a locally flat coordinate system all along the geodesic, and such a system is constructed by the prescription of Manasse and Misner \cite{6}, and is called the Fermi normal coordinate system. For example, in \cite{11}, it was assumed that the Hydrogen atom as a whole moves on a geodesic to a good degree of approximation, and the Dirac equation of the electron was solved in the Fermi normal system corresponding to a Schwarzschild background, yielding corrections due to gravity in the energy spectrum. Our computations in this paper will involve massless (Weyl) fermions in $\text{He}^3 $-A, and the issue of the observer is even more subtle.
Indeed, it is difficult to visualize a null observer (moving at the speed of light) in any physical sense, and although null Fermi coordinates can be set up, in general the issue might seem to be purely of mathematical interest. However, there is one situation in which this might make physical sense, and this is in the context of analog gravity mentioned above. As we discuss in detail, in this situation, the speed of light is replaced by the supercritical quasiparticle speed, and it is not difficult to imagine a null observer in analog gravity making a measurement on a massless fermionic system. Although we are unable to offer a precise experimental setup, we believe that this might be an interesting issue to pursue, and it gives rise to some novel effects in analog gravity. We proceed with this understanding. In the context of analog gravity in $\text{He}^3$-A, the quasi-particle spectrum, expanded near some special points on the Fermi surface, reduces to that of charged, massless fermions, with the degrees of freedom propagating on a curved background. It is this physical fact that motivates our analysis in the present paper. In particular, we will consider the Weyl Lagrangian in the background of the curved metric that the quasiparticles see, and determine the curvature coupling of the quasiparticles with the background metric. At this point, let us mention some known facts that we will remember throughout this work. We will follow the discussion of Volovik \cite{VolovikBookHelium} (see section 4.3.2 of that book). In the present situation, we deal with particles and (fermionic) quasiparticles. Here, following \cite{VolovikBookHelium}, we can define two distinct types of observers: the ``external'' observer, who deals with the realm of particles, and the ``inner'' observer, who deals with quasiparticles.
While the former is not affected by the analog metric, the latter lives in an effective curved space-time that is the result of this metric, and free quasiparticles will move in roughly geodesic paths of this metric, according to the inner observer. For the analog metric that determines the dynamics of massless fermionic quasiparticles in $\text{He}^3 $-A, it is interesting to understand how the (charged) Weyl fermionic quasiparticles couple to the curvature of the analog metric. Studying this aspect of analog gravity is the purpose of this paper and, as we will see later, it might be of relevance to analog gravity experiments. To the best of our knowledge, such an analysis has not been performed previously in the literature. Once such a framework is set up, we can ask the following question: suppose we consider an external observer (in the sense of the last paragraph) moving with speed $c$ (the analog of the speed of light; see below) in $\text{He}^3 $-A, along a radial or circular path, mimicking the geodesic trajectory of the inner observer. The inner observer analyzes the system in her locally flat coordinate system along the geodesic. So, if the external observer performs an experiment on the Bogoliubov quasiparticles, we expect her to make observations similar to those of the inner one. If there is any novel physics that is recorded by the latter, the same should show up in the observations of our external observer as well.
To put the above discussion in perspective, we note that if the background fluid motion is radial and spherically symmetric, the corresponding effective metric is expressed as: \begin{equation} ds^2=-\left(c^2-v^2(r)\right)dt^2 \mp 2 v(r) dr dt + dr^2 + r^2 d\Omega^2 \label{a} \end{equation} where $ v(r)$ is the velocity of the fluid, and $c$ is the analog of the speed of light.\footnote{We denote the usual speed of light by $c_L = 3 \times 10^8~{\rm m/s}$.} The negative and positive signs above stand for $v(r)>0$ (fluid moves away from center) and $v(r)<0$ (fluid moves towards the center), respectively. Also, $d\Omega^2$ is the standard metric on the two-sphere of unit radius. If the fluid moves towards the center with increasing velocity and the perturbations move within the fluid with a constant speed $c$, the fluid velocity, for a finite radius $ r=r_h $, may equal $c$. This radius will then mark the position of an event horizon, because any perturbation moving inside $r_h$ can never come out. Therefore, an analog black hole is formed, which Unruh named a sonic black hole, and it is expected that many important properties of black holes can be studied experimentally in the laboratory, via this analogy. In particular, we will understand the effect of curvature on massless quasiparticles in superfluid backgrounds, which assumes relevance following our discussion in the last paragraph. \section{Analog gravity and a possible scenario of a 2-D black hole formation} In this section, we will briefly review the formalism of analog gravity that will be used in the rest of the paper. This section is review material, and we will be brief here. The necessary details can be found in \cite{2},\cite{3}, \cite{VolovikBookHelium},\cite{5} and references therein. We begin with a brief overview of fermionic quasiparticles in $\text{He}^3 $-A.
\subsection{Analog gravity in $\text{He}^3 $-A : overview}\label{overview} It is well known that the energy spectrum of fermionic quasiparticles in $\text{He}^3 $-A is given by \begin{equation} E\left({\vec p}\right) = \pm \sqrt{v_F^2\left(p-p_F\right)^2 + \frac{\Delta_v^2}{p_F^2}\left({\hat l} \times {\vec p}\right)^2} \label{disp} \end{equation} Here, ${\hat l}$ is a unit vector that is directed along the spontaneous angular momentum of the Cooper pairs in the $\text{He}^3 $-A condensate, $p_F$ is the Fermi momentum, and given in terms of the Fermi velocity $v_F$ as $p_F = v_F m^*$, with $m^*$ being an effective mass, of the order of the mass of the $\text{He}^3$ atom. $\Delta_v$ is called the gap amplitude. Zeros of the energy $E({\vec p})=0$ occur at ${\hat l} \times {\vec p} = 0$ and $p=p_F$, which translates into ${\vec p} = ep_F{\hat l}$, with $e=\pm 1$. The quasiparticle energy thus vanishes at ${\vec p} = e{\vec A}$, with an effective gauge field ${\vec A} = p_F {\hat l}$. Low energy excitations can be obtained by expanding the dispersion relation in Eq.(\ref{disp}) in powers of $\left({\vec p} - e{\vec A}\right)$. It is not difficult to see that doing this, one obtains (see, e.g \cite{3}) in terms of $c = \Delta_v/p_F$, close to ${\vec p} = e{\vec A}$, \begin{eqnarray} &&g^{\mu\nu}\left(p_{\mu} - eA_{\mu}\right)\left(p_{\nu} - eA_{\nu}\right) =0~;~~g^{00}=-1,~\nonumber\\ &&g^{0i}=0,~g^{ij}= v_F^2l^il^j + c^2\left(\delta^{ij} - l^il^j\right) \label{met} \end{eqnarray} The above metric components and the vector potential were specified in terms of the coordinates $\left(x^0,{\vec x}\right)$, where $x^0$ is the ``Newtonian'' time $t$, and ${\vec x}$ are Cartesian spatial coordinates at rest with respect to the superfluid. However, for reasons that we describe now, we will be more interested in a situation where the background superfluid is in motion, as it can give rise to a possible analog black hole. 
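The expansion leading to Eq.(\ref{met}) is easy to verify numerically: close to the node ${\vec p} = e p_F {\hat l}$, the exact spectrum (\ref{disp}) should approach the anisotropic conical form $E^2 \approx v_F^2(p_z - e p_F)^2 + c^2(p_x^2 + p_y^2)$ for ${\hat l} = {\hat z}$. The parameter values in the sketch below are illustrative orders of magnitude only, not measured He$^3$-A data.

```python
import numpy as np

# Illustrative He3-A-like parameters (orders of magnitude only)
vF = 55.0                    # Fermi velocity [m/s]
m_star = 5.0e-27             # effective mass [kg]
pF = vF * m_star             # Fermi momentum
Delta_v = 1e-3 * vF * pF     # gap amplitude, small compared to vF * pF
c = Delta_v / pF             # transverse "speed of light" c = Delta_v / pF
e = +1

def E_exact(px, py, pz):
    """Quasiparticle energy (eq. disp) with l-hat along z, so |l x p|^2 = px^2 + py^2."""
    p = np.sqrt(px**2 + py**2 + pz**2)
    return np.sqrt(vF**2 * (p - pF)**2 + (Delta_v / pF)**2 * (px**2 + py**2))

def E_cone(px, py, pz):
    """Linearized 'relativistic' spectrum near the node p = e * pF * z-hat."""
    return np.sqrt(vF**2 * (pz - e * pF)**2 + c**2 * (px**2 + py**2))
```

The two expressions agree up to corrections of order $|{\vec p} - e p_F {\hat l}|/p_F$, which is the content of Eq.(\ref{met}).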
\subsection{Analog black hole in $\text{He}^3 $-A}\label{bh} We will be interested in an analog black hole in the context of $\text{He}^3 $-A. In fact, a two-dimensional analog black hole can be realized in the laboratory if we consider the geometry described by Volovik in Fig.1(a) of his paper \cite{5} (see Fig.\ref{fg1}), which is also a generalization of the original draining bathtub geometry discussed in \cite{4}. \begin{figure}[h] \centering \includegraphics[height=5 cm,width= 8.5 cm]{Image1.pdf} \caption{Formation of a 2-D black hole in draining bathtub geometry. This is inspired by Fig.1(a) of \cite{5}.} \label{fg1} \end{figure} In this figure, a 2-D thin film of $\text{He}^3$-A, which forms the background of the system, is moving towards the orifice at the center where it goes into the third (vertical) dimension. But it is not directly connected to the wall of the container. The $\text{He}^3$-A film is moving over a thin layer of superfluid $ \text{He}^4 $ as shown. The reason for introducing the superfluid $ \text{He}^4 $ film is that the direct motion of $\text{He}^3$-A with respect to the wall of the container gives rise to some undesirable effects. Namely, the interaction of $\text{He}^3$-A with the walls produces Cherenkov radiation of quasiparticles, which collapses the superfluidity, and as a result the `supercritical' motion becomes unstable \cite{5}. But the supercritical velocity ($c\sim 3$ cm/s) of $\text{He}^3$-A is much smaller than the Landau velocity for quasiparticle radiation in superfluid $ \text{He}^4 $ (about 50 cm/s). So the $ \text{He}^4 $ layer is not excited even when $\text{He}^3$-A moves with velocity greater than $c$. This makes the super-critical motion of the $\text{He}^3$-A film with respect to the $ \text{He}^4 $ layer stable. To describe the geometry, it is convenient to choose a cylindrical coordinate system. The plane of the $ \text{He}^3 $ film is described by the coordinates $r$ and $\phi$.
The $z$ axis is set to be normal to the film. If the thickness of the film remains constant throughout, the velocity of flow of superfluid $\text{He}^3$-A towards the center will be given as $v(r)=-\frac{c r_h}{r}$. As it reaches the event horizon, $r=r_h$, its velocity equals $c$. Below $r_h$, it is greater than $c$ and above, less than $c$. As a result, the quasiparticles can never come out of the region $r< r_h$, forming a black hole type scenario. \subsection{The analog black hole metric} With the above discussion, it is now possible to write down the metric that is seen by Bogoliubov quasiparticles in $\text{He}^3 $-A. Clearly, the formula in Eq.(\ref{met}) of subsection (\ref{overview}) needs to be modified when the background superfluid $\text{He}^3 $-A is in motion. This is obtained by a simple coordinate transformation of the coordinates used in Eq.(\ref{met}), as $\left(t,{\vec x}\right) \to \left(t, {\vec x} - {\vec v}t\right)$. One chooses a coordinate system in which ${\hat l}$ (also the anisotropy axis of the velocity field $v$) points in the direction of ${\hat z}$, and after writing the energy dispersion relation of Eq.(\ref{disp}) in the new coordinate system, it can be readily verified \cite{5} that the energy spectrum for the low-energy Bogoliubov fermionic quasiparticles yields \begin{equation} (E-\mathbf{p}\cdot \mathbf{v})^2 = c^2 (p^2_x + p^2_y) + v^2_F (p_z -e p_F)^2 \label{b} \end{equation} where $e= \pm 1$. The velocity of the quasiparticles along the film ($c\sim 3$ cm/s) is much smaller than the velocity normal to the film ($v_F \sim 55$ m/s). So the degree of anisotropy of the velocity is large. Again, the velocity of the background superfluid vacuum ($v$) is purely 2-dimensional outside the orifice, i.e., it moves in the $ (r,\phi) $-plane only.
Considering such a velocity field, the energy spectrum of the Bogoliubov quasiparticles in Eq.(\ref{b}) can be recast as the effective motion of a charged, massless relativistic particle in a $ (3+1) $-dimensional curved space-time with the following form of the metric \begin{eqnarray} ds^2 = - \left(c^2 - v^2(r)\right) dt^2 + 2 v(r) drdt + dr^2 + r^2 d\phi^2 + \frac{c^2}{v^2_F} dz^2 \label{c} \end{eqnarray} where we have denoted ${\vec v} = \left(v_x, v_y\right)$. The above line element also shows that the $ g_{00} $ component of the metric changes sign as $ v(r) $ becomes greater than $ c $ inside $ r=r_h $ confirming the formation of a black hole. We mention in passing that one can, in this case, make a coordinate transformation \begin{equation} t = \tau + \int \frac{v(r)dr}{c^2 - v(r)^2} \end{equation} This equation is integrable (for the draining profile $v(r) = -c r_h/r$ of the previous subsection, the integral evaluates to $-\frac{r_h}{2c}\ln\left|r^2 - r_h^2\right|$ up to an additive constant) and, in terms of the coordinate $\tau$, yields the metric \begin{equation} ds^2 = -\left(c^2 - v^2\right)d\tau^2 + \frac{c^2}{c^2 - v^2}dr^2 + r^2d\phi^2 + \frac{c^2}{v_F^2}dz^2 \label{metfinal} \end{equation} The metrics in Eq.(\ref{c}) or (\ref{metfinal}) give equivalent results. We will use the form in Eq.(\ref{c}) in what follows. As mentioned in the introduction, the quasiparticles see this metric, and their dynamics is governed by the same. Since these are analogous to massless charged fermions, it is pertinent to ask what a null observer in the background of the metric of Eq.(\ref{c}) would say about the fermions. In the remainder of this paper, we study massless fermionic quasiparticles in the background of the geometry of Eq.(\ref{metfinal}) (equivalently Eq.(\ref{c})). In the next section, we present the formalism to deal with this problem. \section{Curvature coupling of quasiparticles : formalism} Let us imagine that the superfluid excitations, known as Bogoliubov quasiparticles, move along geodesics in an effectively curved space-time.
Since these quasiparticles are excitations of the superfluid $\text{He}^3 $-A vacuum, they are just the dressed $ \text{He}^3 $ atoms having Bogoliubov spin, and are fermionic in nature. As a result, they will exhibit the characteristic signatures of their spin while moving along (nearly) geodesic trajectories, by coupling to the intrinsic curvature of the space-time metric that they see. As mentioned before, we have in mind an observer who makes a measurement on the fermions, in coordinates that are locally flat all along a given geodesic. To set the notations and conventions, let us first discuss in brief the known, simpler formalism of computing curvature couplings for massive fermions, and then we will apply it to our specific problem. \subsection{Curvature coupling of massive fermions} Fermions in curved space-time are a well-researched topic. In this paper, we are interested in the Fermi normal coordinates, i.e., a coordinate system that is locally flat at each point of space-time through which the spinor travels. An extensive discussion and construction of Fermi normal coordinates for timelike geodesics can be found in \cite{6} by Manasse and Misner. Let us briefly review the construction of \cite{6}, to set the stage. Consider a set of four orthogonal vectors which satisfy the following two relations along a timelike geodesic of a massive particle \cite{6} \begin{equation} \hat{e}_\alpha \cdot \hat{e}_\beta = \eta_{\alpha \beta} ~~ , ~~~~ \nabla_{\nu'}(\hat{e}^{\mu'}_\alpha) \hat{e}^{\nu'}_0 = 0 \label{d} \end{equation} where $ \nabla $ denotes covariant derivative, $ \eta_{\alpha \beta} $ is the usual Minkowski metric with signature $ (-,+,+,+) $ and $ \hat{e}_0 $ represents the tangent vector to the timelike geodesic. The primed indices refer to the components of the vectors in the original coordinate system of the metric, and the unprimed indices refer to the corresponding components in Fermi normal coordinates.
The structures $ \hat{e}^{\mu'}_\alpha,\hat{e}^{\nu'}_\beta... $ define the different elements of the coordinate transformation matrix from general coordinates to Fermi normal coordinates. Therefore, once the above tetrad is set as the basis of the Fermi normal coordinate system, we can in principle compute the components of every tensor in this locally flat system. For the Riemann curvature tensor, these components are \begin{equation} R_{\alpha \beta \gamma \delta} = \hat{e}^{\mu'}_\alpha \hat{e}^{\nu'}_\beta \hat{e}^{\lambda'}_\gamma \hat{e}^{\sigma'}_\delta R_{\mu' \nu' \lambda' \sigma'} \label{e} \end{equation} The metric close to the geodesic $ G $ then takes the following form, up to second order in the coordinates \cite{6,7} \begin{eqnarray} g_{00} &=& -1 - R_{0l0m}\vert_G ~ x^l x^m, ~~ g_{0i} = -\frac{2}{3} R_{0lim}\vert_G ~ x^l x^m \nonumber\\ g_{ij} &=& \delta_{ij} - \frac{1}{3} R_{iljm}\vert_G ~ x^l x^m \label{f} \end{eqnarray} where the Latin indices $ i,j,k,... $ take the values 1, 2 and 3. Here, the observer's time dependence enters the metric only through the curvature tensor components, as they are evaluated at a particular proper time along the geodesic $ G $. After obtaining such a coordinate system, we can study the covariant Dirac Lagrangian given by \begin{equation} \mathcal{L} = \sqrt{-g}(i \bar{\psi}\gamma^\alpha \mathcal{D}_\alpha \psi - m \bar{\psi}\psi) \label{g} \end{equation} where $ \gamma^\alpha $ are the ordinary Dirac matrices.
The definition of $ \mathcal{D}_\alpha $ (covariant derivative) in Eq.(\ref{g}) is given by \begin{equation} \mathcal{D}_\alpha \equiv (\partial_\alpha -\frac{i}{4}\omega_{\beta \gamma \alpha} \sigma^{\beta \gamma}) \label{h} \end{equation} where the spin connection $ (\omega_{\beta \gamma \alpha}) $ and $ \sigma^{\beta \gamma} $ are given, respectively, by \begin{eqnarray} \omega_{\beta \gamma \alpha} &=& \hat{e}_{\beta \mu'}(\partial_\alpha \hat{e}^{\mu'} _\gamma + \Gamma^{\mu'} _{\nu' \rho'} \hat{e}^{\nu'} _\gamma \hat{e}^{\rho'} _\alpha),\nonumber\\ \sigma^{\beta \gamma} &=& \frac{i}{2}[\gamma^\beta , \gamma^\gamma] \label{i} \end{eqnarray} In the above expressions, $ \Gamma^{\mu'} _{\nu' \rho'} $ are the Christoffel symbols and $ \hat{e}^{\mu'} _\alpha $ denotes, as stated before, the coefficient of the transformation matrix connecting the curved and flat space-times. If the expression of $ \mathcal{D}_\alpha $ is put in the Lagrangian, i.e., Eq.(\ref{g}), the corresponding term coming from the spin connection involves an interaction Lagrangian of the form $ \bar{\psi}\gamma^\alpha \gamma^5 b_\alpha \psi $ \cite{8,9}. The four vector $ b^\alpha $ can be written in the following form : \begin{eqnarray} b^\sigma &=& \frac{1}{4}\epsilon ^{\alpha \beta \gamma \sigma}\hat{e}_{\beta \mu'}(\partial_\alpha \hat{e}^{\mu'}_\gamma + \Gamma^{\mu'} _{\nu' \rho'} \hat{e}^{\nu'}_\gamma \hat{e}^{\rho'}_\alpha) \nonumber \\ &\equiv& \frac{1}{4}\epsilon ^{\alpha \beta \gamma \sigma}\hat{e}_{\beta \mu'} \partial_\alpha \hat{e}^{\mu'}_\gamma \label{j} \end{eqnarray} where $ \gamma^5 = i\gamma^0 \gamma^1 \gamma^2 \gamma^3 $. In the corresponding Hamiltonian, this interaction term can be made to look like an effective interaction energy of the form $ -\vec{b} \cdot \vec{s} $ in the non-relativistic limit \cite{10}. Here, both $ \vec{b} $ and $ \vec{s} $ represent normal 3-dimensional vectors as $ b_0 $ is identically zero according to Eq.(\ref{j}).
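The algebraic ingredients of Eqs.(\ref{h})--(\ref{j}) are easy to check explicitly. The sketch below builds Dirac matrices for the mostly-plus signature used here by multiplying the standard $(+,-,-,-)$ Dirac-representation matrices by $i$ — this particular representation is our own choice for illustration, not fixed by the text — and verifies the Clifford algebra together with the properties of $\sigma^{\beta\gamma}$ and $\gamma^5$.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def block(A, B, C, D):
    return np.block([[A, B], [C, D]])

# Standard Dirac-representation matrices for (+,-,-,-), times i, so that
# {gamma^mu, gamma^nu} = 2 eta^{mu nu} with eta = diag(-1, +1, +1, +1)
gamma = [1j * block(I2, Z2, Z2, -I2)] + \
        [1j * block(Z2, s, -s, Z2) for s in (sx, sy, sz)]
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def sigma(b, g):
    """sigma^{beta gamma} = (i/2) [gamma^beta, gamma^gamma]"""
    return 0.5j * (gamma[b] @ gamma[g] - gamma[g] @ gamma[b])

gamma5 = 1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]
```

With this convention $\gamma^5$ squares to the identity and anticommutes with every $\gamma^\mu$, and $\sigma^{\beta\gamma}$ is antisymmetric in its indices, as used in the text.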
Now, to proceed further in this analysis, we need to find out the expressions of the effective magnetic field components ($ b^\sigma $) in Fermi normal coordinates. The forms of the metric components in these coordinates are already given in Eq.(\ref{f}) and the corresponding expressions of the vierbeins are \cite{11,12} \begin{eqnarray} &&\hat{e}^{\mu'} _0 = \delta^{\mu'}_0 -\frac{1}{2}R^{\mu'}{}_{l0m}\vert_G ~ x^l x^m, \nonumber\\ &&\hat{e}^{\mu'} _i = \delta^{\mu'}_i -\frac{1}{6}R^{\mu'}{}_{lim}\vert_G ~ x^l x^m \label{k} \end{eqnarray} where $ i,l,m,... $ run over spatial indices only. Therefore, we shall first set up the tetrad basis of Eq.(\ref{d}) and calculate the components of the Riemann tensor in these coordinates using Eq.(\ref{e}). Then we put the expressions of the vierbeins of Eq.(\ref{k}) into Eq.(\ref{j}) to obtain the desired result in Fermi normal coordinates. The corresponding forms of $ b_\sigma $ come out to be \begin{eqnarray} b_0 &=& -\frac{1}{4}\epsilon_{0ijk}\left(\frac{2}{3}R^{jik}{}_m\big\rvert_G +\frac{1}{6}R^{jki}{}_m\big\rvert_G\right) ~ x^m, \nonumber \\ b_i &=& \frac{1}{4}\epsilon_{0ijk}\left(\frac{1}{3}R^0{}_{m}{}^{jk}\big\rvert_G - \frac{1}{3}R^{0k}{}_m{}^j \big\rvert_G \right)x^m \label{l} \end{eqnarray} The above expression of $ b_0 $ is such that it vanishes identically, as stated before. After determining $ R_{\alpha \beta \gamma \delta} $ in Fermi normal coordinates from Eq.(\ref{e}), we can straightforwardly find out the effective magnetic field ($ \vec{b} $) due to gravitational effects in the non-relativistic limit. Although the term `magnetic field' is used here for its $ -\vec{b}\cdot \vec{s} $ type contribution to the Hamiltonian, it is different from any intrinsic magnetic field that may be present in the system. Moreover, it is to be noted that this magnetic field is measured in the vicinity of the geodesic: exactly on the geodesic the $ x^l $ vanish, forcing $ \vec{b} $ to vanish identically.
\subsection{Curvature coupling of massless particles and Null Fermi coordinates} The entire analysis described in the previous subsection is done with the basic assumption that the moving particle is massive and thus the corresponding geodesic is timelike. But in analog gravity of $ \text{He}^3$-A, the quasiparticles are in principle massless fermions moving in a curved space-time as described earlier. Therefore, it is more useful to consider a null observer, and we need to reformulate the above analysis for null geodesics, and find the modified expressions of curvature couplings. The first question that arises in this regard is how to define the notion of Fermi normal coordinates for null geodesics. The construction here is somewhat subtle, and has been recently addressed in \cite{16}. The technical subtlety here is that since the tangent to a null geodesic is a null vector, the corresponding set of four vectors which act as the basis of null Fermi coordinates cannot be orthonormal. Following the construction of \cite{16}, let us define four pseudo-orthonormal vectors satisfying the same two relations as given in Eq.(\ref{d}) along a null geodesic $ \mathcal{N} $ \begin{equation} \hat{E}_A \cdot \hat{E}_B = \eta_{AB} ~~ , ~~~~ \nabla_{\nu'}(\hat{E}^{\mu'}_A) \hat{E}^{\nu'}_+ = 0 \label{w} \end{equation} where $ \hat{E}_+ $ is tangent to the null geodesic and $ \eta_{AB} $ is still the flat Minkowski metric but expressed in a new $ E^A $-basis. The matrix form of $ \eta_{AB} $ in this new basis and the corresponding line element along $ \mathcal{N} $ are given by \begin{equation} \eta_{AB} = \left( {\begin{array}{cccc} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array} } \right) ,~ ds^2 \vert_\mathcal{N} = 2 E^+ E^- + \delta_{ab} E^a E^b \label{etaform} \end{equation} where each $ A,B,... $ takes the values $ (+,-,2,3) $ and each $ a,b,... $ takes $ (2,3) $. 
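The matrix form of $ \eta_{AB} $ in Eq.(\ref{etaform}) is simply the Minkowski metric rewritten in the null basis $ E^{\pm} = (E^1 \mp E^0)/\sqrt{2} $ — the same combination that defines the $ \tilde{\sigma}^{\pm} $ later. A short numpy sketch (not part of the original text) verifying $ L^T \eta_{AB} L = {\rm Diag}(-1,1,1,1) $ for this change of basis:

```python
import numpy as np

eta_mink = np.diag([-1.0, 1.0, 1.0, 1.0])
s2 = 1/np.sqrt(2)

# rows: E^+ = (E^1 - E^0)/sqrt(2), E^- = (E^1 + E^0)/sqrt(2), E^2, E^3
L = np.array([[-s2, s2, 0, 0],
              [ s2, s2, 0, 0],
              [  0,  0, 1, 0],
              [  0,  0, 0, 1]])

eta_AB = np.array([[0., 1, 0, 0],
                   [1., 0, 0, 0],
                   [0., 0, 1, 0],
                   [0., 0, 0, 1]])

# eta_{mu nu} = L^A_mu eta_{AB} L^B_nu recovers the Minkowski form
assert np.allclose(L.T @ eta_AB @ L, eta_mink)
```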
The corresponding Fermi coordinates of a point $ x $ are denoted as $ (x^A)=(x^+, x^-, x^a) $ and their definition is given in \cite{16}. Once again, the quantities $ \hat{E}^{\mu'}_A $ represent the different elements of the basis transformation matrix from the actual $ x^{\mu'} $ coordinate system to the Fermi coordinate system $ x^A $. Along $ \mathcal{N} $, the two coordinate systems are related by \begin{equation} \frac{\partial{x}^A}{\partial{x}^{\mu'}} \biggr \rvert_\mathcal{N} = \hat{E}^A_{\mu'}~, ~~~ \frac{\partial{x}^{\mu'}}{\partial{x}^A} \biggr \rvert_\mathcal{N} = \hat{E}^{\mu'}_A \label{x} \end{equation} The components of Riemann curvature tensor in Fermi coordinates are evaluated from the equation which is similar to Eq.(\ref{e}), and are given by \begin{equation} R_{ABCD} = \hat{E}^{\mu'}_A \hat{E}^{\nu'}_B \hat{E}^{\lambda'}_C \hat{E}^{\sigma'}_D R_{\mu' \nu' \lambda' \sigma'} \label{y} \end{equation} The components of the metric tensor in the vicinity of the geodesic $ \mathcal{N} $, up to second order, can be shown to be given by \begin{eqnarray} g_{++} &=& -R_{+ \bar{c} + \bar{d}} ~ \big \rvert_\mathcal{N} ~~ x^{\bar{c}} x^{\bar{d}} ~, ~~ g_{--} = -\frac{1}{3} R_{- \bar{c} - \bar{d}} ~~ \big \rvert_\mathcal{N} ~ x^{\bar{c}} x^{\bar{d}} ~,\nonumber\\ g_{+-} &=& 1 -\frac{2}{3} R_{+ \bar{c} - \bar{d}} ~ \big \rvert_\mathcal{N} ~~ x^{\bar{c}} x^{\bar{d}} ~, ~~ g_{ab} = \delta_{ab} -\frac{1}{3} R_{a \bar{c} b \bar{d}} ~ \big \rvert_\mathcal{N} ~~ x^{\bar{c}} x^{\bar{d}} ~~ ,\nonumber\\ g_{+a} &=& -\frac{2}{3} R_{+ \bar{c} a \bar{d}} ~ \big \rvert_\mathcal{N} ~~ x^{\bar{c}} x^{\bar{d}} ~, ~~ g_{-a} = -\frac{1}{3} R_{- \bar{c} a \bar{d}} ~ \big \rvert_\mathcal{N} ~~ x^{\bar{c}} x^{\bar{d}} \label{z} \end{eqnarray} where $ (\bar{a})= (-,a) $. 
The covariant Dirac Lagrangian for massless fermions is \begin{equation} \mathcal{L} = i \sqrt{-g} \bar{\psi}\gamma^A \mathcal{D}_A \psi \label{aa} \end{equation} where $ \mathcal{D}_A $ is given by Eq.(\ref{h}) (apart from a term involving an effective gauge field), with $ \alpha $'s replaced by $ A $'s, and the corresponding expressions of spin connection and $ \sigma^{AB} $ are also similar to Eq.(\ref{i}) : \begin{eqnarray} \omega_{BCA} &=& \hat{E}_{B \mu'}(\partial_A \hat{E}^{\mu'} _C + \Gamma^{\mu'} _{\nu' \rho'} \hat{E}^{\nu'} _C \hat{E}^{\rho'} _A),\nonumber\\ \sigma^{BA} &=& \frac{i}{2}[\gamma^B , \gamma^A] \label{ab} \end{eqnarray} We note here that there is an extra term in the covariant derivative, involving the effective gauge field, as follows from the discussion of subsection 3.1 (see \cite{VolovikWeyl}). Inclusion of this additional term makes the expressions cumbersome, and for the moment we will work with the terms of Eq.(\ref{h}) purely for ease of presentation, and the term involving the gauge field will be introduced later, following Eq.(\ref{fullCovDer}). Here, we will have to be careful in defining $ \gamma^A $. Unlike the previous case where each $ \gamma^\alpha $ represents one of the standard Dirac matrices, the forms of $ \gamma^A $'s, in this case are different. Note that the Lagrangian in flat space-time for massless fermions can be decomposed into two parts \begin{eqnarray} \mathcal{L'} &=& i \bar{\psi}\gamma^{\mu} \partial_{\mu} \psi \nonumber \\ &=& i u_{-}^\dagger \sigma^{\mu} \partial_{\mu} u_{-} + i u_{+}^\dagger \bar{\sigma}^{\mu} \partial_{\mu} u_{+} \label{ac} \end{eqnarray} where $ \sigma^{\mu} = (1, \sigma^i ) $, $ \bar{\sigma}^{\mu} = (1, -\sigma^i ) $ with $ \sigma^i $'s being the Pauli matrices and $\psi = \left(u_+,u_-\right)^T$. 
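The split in Eq.(\ref{ac}) can be verified explicitly, assuming the chiral (Weyl) representation $ \gamma^0 = \left(\begin{smallmatrix} 0 & 1 \\ 1 & 0 \end{smallmatrix}\right) $, $ \gamma^i = \left(\begin{smallmatrix} 0 & \sigma^i \\ -\sigma^i & 0 \end{smallmatrix}\right) $ (a representation choice, not stated in the text): $ \gamma^0\gamma^\mu $ is block diagonal, with $ \bar{\sigma}^\mu $ acting on $ u_+ $ and $ \sigma^\mu $ on $ u_- $. A numpy sketch:

```python
import numpy as np

I2 = np.eye(2)
pauli = [np.array([[0., 1], [1, 0]]),
         np.array([[0., -1j], [1j, 0]]),
         np.array([[1., 0], [0, -1]])]
sig = [I2] + pauli                    # sigma^mu     = (1,  sigma^i)
sigbar = [I2] + [-m for m in pauli]   # sigma-bar^mu = (1, -sigma^i)

Z = np.zeros((2, 2))
g0 = np.block([[Z, I2], [I2, Z]])                       # gamma^0, chiral rep
gam = [g0] + [np.block([[Z, m], [-m, Z]]) for m in pauli]  # gamma^i

for mu in range(4):
    blk = g0 @ gam[mu]                # psi-bar gamma^mu = psi^dag (g0 gam^mu)
    assert np.allclose(blk[:2, :2], sigbar[mu])   # upper block acts on u_+
    assert np.allclose(blk[2:, 2:], sig[mu])      # lower block acts on u_-
    assert np.allclose(blk[:2, 2:], 0) and np.allclose(blk[2:, :2], 0)
```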
In the case of a massive fermion, $ u_{+} $ and $ u_{-} $ cannot be separated completely, but we can describe a massless fermion by $ u_{+} $ or $ u_{-} $ alone with the respective equation of motion given by \begin{equation} i \bar{\sigma}^{\mu} \partial_{\mu} u_{+} = 0,~~\text{or} ~~ i \sigma^{\mu} \partial_{\mu} u_{-} = 0 \label{ad} \end{equation} These equations are the well-known Weyl equations for massless fermions, and involve Pauli matrices. Now, let us apply this analysis to the covariant Dirac Lagrangian for massless fermions expressed in null Fermi coordinates, Eq.(\ref{aa}) \begin{eqnarray} \mathcal{L} = i \sqrt{-g} ~ u_{-}^\dagger \tilde{\sigma}^A \mathcal{D}_A u_{-} + i \sqrt{-g} ~ u_{+}^\dagger \bar{\tilde{\sigma}}^A \mathcal{D}_A u_{+} ~\left(\equiv {\mathcal L}_1 + {\mathcal L}_2\right) \label{ae} \end{eqnarray} The corresponding Weyl equations for $ u_{+} $ or $ u_{-} $ will be, respectively, \begin{equation} i \bar{\tilde{\sigma}}^A \mathcal{D}_A u_{+} = 0,~~\text{or} ~~ i \tilde{\sigma}^A \mathcal{D}_A u_{-} = 0 \label{af} \end{equation} Let us compare the second expressions of equations (\ref{ad}) and (\ref{af}). These expressions are similar, with $ \partial_{\mu} $ replaced by $ \mathcal{D}_A $ and $ \sigma^{\mu} $ replaced by $ \tilde{\sigma}^A $. The difference between $ \sigma^{\mu} $ and $ \tilde{\sigma}^A $ is easy to understand. In Eq.(\ref{ad}), $ \sigma^{\mu} $ consists of just the identity and the Pauli matrices, with the background flat metric given by ${\rm Diag}\left(-1,1,1,1\right)$. The forms of the $ \tilde{\sigma}^A $ in Eq.(\ref{af}) are different from the Pauli matrices. The reason for this is that $ \tilde{\sigma}^A $ is expressed in terms of null Fermi coordinates, and if we follow the definition (\ref{w}) of pseudo-orthonormal Fermi frames with $ \hat{E}^+ $ and $ \hat{E}^- $ being null vectors, the corresponding background locally flat metric along a null geodesic takes the form given in the first expression of Eq.(\ref{etaform}).
Therefore, the transformation relations from $ \eta_{\mu \nu} \longrightarrow \eta_{AB} $ have to be applied to $ \sigma^{\mu} $ to obtain the corresponding expressions of $ \tilde{\sigma}^A $ in the new coordinate system. The forms of $ \tilde{\sigma}^A $, after this transformation, are given as $ \tilde{\sigma}^A = (\tilde{\sigma}^{+}, \tilde{\sigma}^{-}, \tilde{\sigma}^{2}, \tilde{\sigma}^{3} ) $, where we have defined \begin{eqnarray} \tilde{\sigma}^{+} = -\frac{1}{\sqrt{2}} \sigma^0 + \frac{1}{\sqrt{2}} \sigma^1 = \frac{1}{\sqrt{2}} \left( {\begin{array}{cc} -1 & 1 \\ 1 & -1 \end{array} } \right)~, \nonumber\\ \tilde{\sigma}^{-} = \frac{1}{\sqrt{2}} \sigma^0 + \frac{1}{\sqrt{2}} \sigma^1 = \frac{1}{\sqrt{2}} \left( {\begin{array}{cc} 1 & 1 \\ 1 & 1 \end{array} } \right) ~ ,\nonumber\\ \tilde{\sigma}^{2} = \sigma^{2} = \left( {\begin{array}{cc} 0 & -i \\ i & 0 \end{array} } \right) ,~ \tilde{\sigma}^{3} = \sigma^{3} = \left( {\begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} } \right)~~ \end{eqnarray} Since the forms of $\tilde{\sigma}^A$ are changed from the usual Pauli matrices, so are those of the corresponding $ (4 \times 4) $ $ \gamma^A $ matrices, and as a result, they do not exactly resemble the Dirac matrices. In particular, we will use the following forms of the $\gamma^A$ matrices : \begin{eqnarray} \gamma^+ = \begin{pmatrix} 0 & {\tilde \sigma}^+ \\ -{\tilde \sigma}^- & 0\end{pmatrix},~~~ \gamma^- = \begin{pmatrix} 0 & {\tilde \sigma}^- \\ -{\tilde \sigma}^+ & 0\end{pmatrix}~,~~ \gamma^a = \begin{pmatrix} 0 & {\tilde \sigma}^a \\ -{\tilde \sigma}^a & 0\end{pmatrix}~~(a=2,3) \end{eqnarray} With the new definitions and expressions of $ \tilde{\sigma}^A $, we are now in a position to define the curvature coupling of massless fermions expressed in null Fermi coordinates.
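One can check that the $ \gamma^A $ defined above close the expected Clifford algebra with respect to $ \eta_{AB} $ of Eq.(\ref{etaform}): in the sign convention implied by this block construction, $ \{\gamma^A, \gamma^B\} = -2\eta^{AB}\,\mathbb{1}_4 $ (here $ \eta^{AB} $ has the same matrix form as $ \eta_{AB} $, since the matrix in Eq.(\ref{etaform}) is its own inverse). A numpy sketch:

```python
import numpy as np

s0 = np.eye(2)
s1 = np.array([[0., 1], [1, 0]])
s2 = np.array([[0., -1j], [1j, 0]])
s3 = np.array([[1., 0], [0, -1]])

sp_ = (-s0 + s1)/np.sqrt(2)    # sigma-tilde^+
sm_ = ( s0 + s1)/np.sqrt(2)    # sigma-tilde^-
Z = np.zeros((2, 2))

def gamma4(top, bot):
    # gamma^A = [[0, top], [bot, 0]], matching the block forms above
    return np.block([[Z, top], [bot, Z]])

gam = [gamma4(sp_, -sm_),      # gamma^+
       gamma4(sm_, -sp_),      # gamma^-
       gamma4(s2, -s2),        # gamma^2
       gamma4(s3, -s3)]        # gamma^3

eta = np.array([[0., 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

for a in range(4):
    for b in range(4):
        anti = gam[a] @ gam[b] + gam[b] @ gam[a]
        assert np.allclose(anti, -2*eta[a, b]*np.eye(4))
```

Note in particular that $ (\gamma^+)^2 = (\gamma^-)^2 = 0 $, as befits a null basis.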
The expressions of the vierbeins analogous to Eq.(\ref{k}) are \begin{eqnarray} &~&\hat{E}^{A} _{+} = \delta^{A}_{+} -\frac{1}{2}R^{A}{}_{\bar{c} {+} \bar{d}} ~ \vert_{\mathcal{N}} ~ x^{\bar{c}} x^{\bar{d}},~\nonumber\\ &~&\hat{E}^{A} _{-} = \delta^{A}_{-} -\frac{1}{6}R^{A}{}_{\bar{c} {-} \bar{d}} ~ \vert_{\mathcal{N}} ~ x^{\bar{c}} x^{\bar{d}},~\nonumber\\ &~&\hat{E}^{A} _{a} = \delta^{A}_{a} -\frac{1}{6}R^{A}{}_{\bar{c} {a} \bar{d}} ~ \vert_{\mathcal{N}} ~ x^{\bar{c}} x^{\bar{d}} \label{ag} \end{eqnarray} The corresponding expressions of the affine connections in Fermi coordinates are \begin{eqnarray} \Gamma^A{}_{B +} ~ \vert_{\mathcal{N}} = R^{A}{}_{B \bar{a} +} \vert_{\mathcal{N}} ~ x^{\bar{a}},~~~ \Gamma^A{}_{\bar{b} \bar{c}} ~ \vert_{\mathcal{N}} = -\frac{1}{3} \left( R^{A}{}_{\bar{b} \bar{c} \bar{d}} + R^{A}{}_{\bar{c} \bar{b} \bar{d}} \right) \vert_{\mathcal{N}} ~ x^{\bar{d}} \label{ah} \end{eqnarray} As in the previous section, if we expand the first term of the Lagrangian (\ref{ae}) in Fermi coordinates by using the expressions of Eq.(\ref{z}), Eq.(\ref{ag}) and Eq.(\ref{ah}), it takes the following form \begin{equation} \mathcal{L}_1 = \sqrt{-g} ~ u_-^{\dagger} \left( i \tilde{\sigma}^A \partial_A + b^A \tilde{\sigma}_A + i a^A \tilde{\sigma}_A \right) u_- \label{Lag1} \end{equation} The third term, which is anti-hermitian, vanishes when its conjugate part is added to the Lagrangian. Therefore, the only interaction term that survives is the second one, which is hermitian.
The expressions of the components of this gravitational coupling term ($ b^A $) come out to be \begin{eqnarray} b^{+} &=& \frac{1}{4} \epsilon^{1ab} \left[ \frac{1}{6} \left( R_{+ \bar{m} a b} - R_{- a b \bar{m}} \right) + \frac{1}{3} \left( R_{+ a b \bar{m}} + R_{- b a \bar{m}} \right) + \frac{1}{2} \left( R_{- \bar{m} a b} + R_{+ b a \bar{m}} \right) \right] x^{\bar{m}}\nonumber\\ b^{-} &=& -\frac{1}{4} \epsilon^{1ab} \left[ \frac{1}{6} R_{- \bar{m} a b} + \frac{7}{6} R_{+ \bar{m} a b} + \frac{1}{3} \left( R_{+ a b \bar{m}} + R_{- a b \bar{m}} \right) + \frac{1}{2} \left( R_{- b a \bar{m}} + R_{+ b a \bar{m}} \right) \right] x^{\bar{m}}\nonumber\\ b^{c} &=& \frac{1}{4} \epsilon^{1ac} \left[ \frac{7}{6} R_{- a \bar{m} +} - R_{+ a + \bar{m}} + \frac{1}{6} R_{- a - \bar{m}} + \frac{1}{3} R_{+ a - \bar{m}} + \frac{1}{2} \left( R_{+ - a \bar{m}} + R_{- - a \bar{m}} \right) \right] x^{\bar{m}}~~~~~~~ \label{ai} \end{eqnarray} where again $ (\bar{a})=(-,a) $ and $ a,b,c,... $ take values $(2, 3)$. The above expressions were evaluated for the form of the covariant derivative given in Eq.(\ref{h}). 
Including the vector potential term, the full covariant derivative is given by \cite{VolovikWeyl} : \begin{equation} \mathcal{D}_A \equiv \partial_A - \frac{i}{4} \omega_{BDA} \sigma^{BD} - i \tilde{A}_A \label{fullCovDer} \end{equation} where, $ \tilde{A}_A = A_A + \chi_A $, with the expressions of $ \chi_A $ and $A_A$ being \begin{eqnarray} \chi_A = \frac{1}{8} \epsilon^{\lambda' \gamma' \mu' \nu'} E_{A \lambda'} ~ E^B_{\gamma'} \left( \partial _{\mu'} E_{B \nu'} - \partial _{\nu'} E_{B \mu'} \right)~,~~ A_A = \left(0,0,0,p_F \right) \end{eqnarray} The corresponding expressions of $ \chi^A $ in Fermi coordinates are evaluated to be \begin{eqnarray}\label{Ki} \chi^{+} &=& 0~, \nonumber\\ \chi^{-} &=& \frac{1}{4} \left( \frac{1}{3} R_{+32\bar{m}} + \frac{2}{3} R_{+\bar{m}23} - \frac{1}{3} R_{+23\bar{m}} \right) x^{\bar{m}}~, \nonumber\\ \chi^{2} &=& \frac{1}{4} \left( \frac{2}{3} R_{+\bar{m}3-} - \frac{1}{3} R_{+3-\bar{m}} + \frac{1}{3} R_{+-3\bar{m}} \right) x^{\bar{m}}~, \nonumber\\ \chi^{3} &=& \frac{1}{4} \left( -\frac{2}{3} R_{+\bar{m}2-} + \frac{1}{3} R_{+2-\bar{m}} - \frac{1}{3} R_{+-2\bar{m}} \right) x^{\bar{m}} \end{eqnarray} The total magnetic field including the gauge field term is now given by \begin{equation} B^A = b^A + \chi^A + A^A \label{Bfinal} \end{equation} with the form of $b^A$ given in Eq.(\ref{ai}). Note that the last term in Eq.(\ref{Bfinal}) is a constant term, and we will ignore this in our analysis. In what follows, we will focus on the first two terms of Eq.(\ref{Bfinal}) and in the next section, we proceed to evaluate the components of $B^A $ for both radial and circular null geodesics in the background of analog gravity and study its characteristic features in some details. 
\section{Massless fermionic quasiparticles in radial null geodesics} We start with the metric of Eq.(\ref{c}) which we reproduce here for convenience : \begin{eqnarray} ds^2 =-\left(c^2 - v^2(r)\right) dt^2 + 2 v(r) drdt + dr^2 + r^2 d\phi^2 + \frac{c^2}{v^2_F} dz^2 \nonumber \end{eqnarray} For null geodesics in this space-time, the normalization of the four-velocity yields \begin{equation} \dot{r}^2 + 2 v(r) \dot{t} \dot{r} + r^2 \dot{\phi}^2 - \left(c^2-v^2(r)\right)\dot{t}^2 + \frac{c^2}{v_F^2}{\dot z}^2 = 0 \label{o} \end{equation} where over-dots denote derivatives with respect to an affine parameter along the null geodesic. For timelike geodesics, a standard choice of this affine parameter is the proper time. But for a null geodesic, the affine parameter cannot be the proper time. Instead, normal coordinate time or radial distance may be considered as the affine parameter, if they satisfy the geodesic equation of the form \begin{equation} \nabla_{\nu'} (u^{\mu'}) u^{\nu'} = 0 \label{aj} \end{equation} where $ u^{\mu'} $ is the tangent vector to the null geodesic under consideration. Here, by radial null geodesics, we mean the set of null geodesics for which $ \dot{\phi} = \frac{d\phi}{d\lambda} = 0 $, with $ \lambda $ being the affine parameter.
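Setting $ \dot{\phi} = \dot{z} = 0 $ in Eq.(\ref{o}) and dividing by $ \dot{t}^2 $ gives a quadratic for $ x \equiv dr/dt $ with roots $ dr/dt = -v(r) \pm c $; with $ v(r) = -cr_h/r $ the branch $ dr/dt = -c - v $ vanishes at $ r = r_h $, consistent with $ r_h $ being the analog horizon. A sympy sketch of this step (not in the original text):

```python
import sympy as sp

c = sp.symbols('c', positive=True)
v, x = sp.symbols('v x')          # x stands for dr/dt along the radial ray

# radial null condition: x^2 + 2 v x - (c^2 - v^2) = 0
roots = sp.solve(sp.Eq(x**2 + 2*v*x - (c**2 - v**2), 0), x)
assert set(roots) == {c - v, -c - v}

# with v(r) = -c r_h / r the branch dr/dt = -c - v vanishes at r = r_h
r, rh = sp.symbols('r r_h', positive=True)
branch = (-c - v).subs(v, -c*rh/r)
assert sp.simplify(branch.subs(r, rh)) == 0
```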
Therefore, for radial null geodesics outside the orifice, Eq.(\ref{o}) becomes \begin{equation} \dot{r}^2 + 2 v(r) \dot{t} \dot{r} - \left(c^2-v^2(r)\right)\dot{t}^2 = 0 \label{ak} \end{equation} With condition (\ref{ak}) in mind, we can find out the pseudo-orthonormal Fermi tetrad basis for the analog metric along a null radial geodesic as : \begin{eqnarray} \hat{E}^{\mu'}_+ &=& \left( \frac{c+v(r)}{c^2-v(r)^2} , 1 , 0 , 0 \right)\nonumber\\ \hat{E}^{\mu'}_- &=& \left( \frac{v(r)-c}{2 c^2} - \frac{kc + kv(r)}{2 \left( c^2-v^2(r) \right)} , \frac{c^2-v(r)^2}{2c^2} - \frac{k}{2} , \frac{k}{r} , 0 \right)\nonumber\\ \hat{E}^{\mu'}_2 &=& \left( -\frac{kc + kv(r)}{c^2-v^2(r)} , -k , \frac{1}{r} , 0 \right)\nonumber\\ \hat{E}^{\mu'}_3 &=& \left( 0 , 0 , 0 , \frac{v_z}{c} \right) \label{t} \end{eqnarray} where $ k $ is a constant. The tangent vector to the geodesic, $ u^{\mu'} $ or $ \hat{E}^{\mu'}_+ $, takes the form $ \hat{E}^{\mu'}_+ = (\dot{t},\dot{r},0,0) $ for a general affine parameter $ \lambda $. In the present case, we have set $ \dot{r} = 1 $, i.e., we have explicitly chosen $ r $ as the affine parameter along the geodesic $ \mathcal{N} $. This choice simplifies the calculation and satisfies the required geodesic equation, Eq.(\ref{aj}). Now the above choice of tetrad has to satisfy the required conditions of Eq.(\ref{w}). Let us rewrite the first condition of Eq.(\ref{w}) and analyze it using the tetrad defined above. From the condition $\hat{E}_A \cdot \hat{E}_B = \eta_{AB}$, we obtain \begin{eqnarray} && \hat{E}_{-} \cdot \hat{E}_{-} = \eta_{--} = 0 ~~~~ ({\rm for} ~~ A=B=-) \nonumber \\ && \Rightarrow ~ g_{\mu' \nu'} \hat{E}^{\mu'}_{-} \hat{E}^{\nu'}_{-} = 0 \Rightarrow ~ k(k-1) = 0 \end{eqnarray} So only two values of the constant $ k $ satisfy the required conditions for $ \hat{E}_{-} $, i.e., $ k $ takes two values, $0$ and $1$. All other components of the tetrad automatically satisfy the required conditions of (\ref{w}).
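The quoted inner products can be verified directly (a sympy sketch, not part of the original text; $ v $ is treated as an algebraic symbol since no derivatives of $ v $ enter these contractions, and the $ z $-leg $ \hat{E}^{\mu'}_3 $ is omitted — its norm is $ v_z^2/v_F^2 $, i.e., unity for $ v_z = v_F $):

```python
import sympy as sp

c, r = sp.symbols('c r', positive=True)
v, k = sp.symbols('v k')

# (t, r, phi) block of the analog metric of Eq.(c)
g = sp.Matrix([[-(c**2 - v**2), v, 0],
               [v, 1, 0],
               [0, 0, r**2]])

Ep = sp.Matrix([(c + v)/(c**2 - v**2), 1, 0])
Em = sp.Matrix([(v - c)/(2*c**2) - (k*c + k*v)/(2*(c**2 - v**2)),
                (c**2 - v**2)/(2*c**2) - k/2,
                k/r])
E2 = sp.Matrix([-(k*c + k*v)/(c**2 - v**2), -k, 1/r])

dot = lambda A, B: sp.simplify((A.T*g*B)[0])

assert dot(Ep, Ep) == 0                            # E_+ is null
assert sp.simplify(dot(Ep, Em) - 1) == 0           # eta_{+-} = 1
assert sp.simplify(dot(E2, E2) - 1) == 0           # eta_{22} = 1
assert sp.simplify(dot(Em, Em) - k*(k - 1)) == 0   # forces k = 0 or k = 1
```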
Therefore, we have two different sets of tetrads, with $ k = 0 $ and $ k = 1 $, which can be chosen as the basis of the null Fermi frame. Having obtained the Fermi tetrad basis, we can readily find out the components of the Riemann curvature tensor in null Fermi coordinates using Eqs. (\ref{y}) and (\ref{t}). Then we use Eqs.(\ref{ai}) and (\ref{Ki}) to calculate the components of the effective magnetic field due to curvature coupling and the corresponding expressions are given by \begin{eqnarray} B^{+} &=& B^{-} = B^2 = 0 \nonumber\\ B^3 &=& \frac{k r (4 h-k y) v'(r)^2+v(r) \left(k r (4 h-k y) v''(r)+(k (-4 h+2 k y-y)+y) v'(r)\right)}{24 c^2 r}\nonumber\\ &~& \label{u} \end{eqnarray} where $ h $, $ y $, $ z $ represent observer's coordinates or Fermi coordinates. The above expression of $ B^3 $ has been evaluated for a general $ v(r) $. But in case of the draining bathtub type geometry discussed earlier, the specific form of $ v(r) $ happens to be $ v(r) = -\frac{cr_h}{r} $. So if we put this form of $ v(r) $ in Eq.(\ref{u}), we obtain the following expression of $ B^3 $ : \begin{equation} B^3 = \frac{r_h^2 \left[ 16 h k + \left(-1 + k -5 k^2 \right) y \right]}{24 r^4} \label{v} \end{equation} The expressions of $ B^3 $ for $ k=0 $ and $ k=1 $ are given by \begin{equation} B^3 = -\frac{r_h^2 y}{24 r^4} ~~(k=0) ~ ,~B^3 = \frac{r_h^2 (16 h-5 y)}{24 r^4} ~~(k=1) \label{B3onetwo} \end{equation} We need to analyze this result in more detail. Eq.(\ref{v}) tells us that the effective magnetic field component ($ B^3 $) diverges at $ r = 0 $. Since $ r = r_h $ represents the position of the event horizon of the analog black hole and we are particularly interested in the phenomena occurring outside $ r_h $, the effective magnetic field is always finite in this region. From Eq.(\ref{v}) we see that the effective magnetic field falls off as $ r^{-4} $ as a function of the radial distance.
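The substitution leading from Eq.(\ref{u}) to Eq.(\ref{v}), and the two limiting cases of Eq.(\ref{B3onetwo}), can be checked symbolically (a sympy sketch):

```python
import sympy as sp

c, r, rh, k, h, y = sp.symbols('c r r_h k h y')
v = -c*rh/r                               # draining-bathtub profile
vp, vpp = sp.diff(v, r), sp.diff(v, r, 2)

# Eq.(u) with v(r) = -c r_h / r inserted
B3 = (k*r*(4*h - k*y)*vp**2
      + v*(k*r*(4*h - k*y)*vpp + (k*(-4*h + 2*k*y - y) + y)*vp))/(24*c**2*r)

target = rh**2*(16*h*k + (-1 + k - 5*k**2)*y)/(24*r**4)   # Eq.(v)
assert sp.simplify(B3 - target) == 0
assert sp.simplify(B3.subs(k, 0) + rh**2*y/(24*r**4)) == 0             # k = 0
assert sp.simplify(B3.subs(k, 1) - rh**2*(16*h - 5*y)/(24*r**4)) == 0  # k = 1
```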
\section{Massless fermionic quasiparticles in circular null geodesic} We will now compute the curvature coupling of fermionic quasiparticles in circular null geodesics. This is of course a special case, as we discuss. For such geodesics, it can be checked from Eq.(\ref{c}) that the only allowed value of the radial coordinate is $r = \sqrt{2}r_h$, for $v(r) = -cr_h/r$ as in the previous section. This is the analog of the photon sphere in GR \cite{Virbhadra}, and a null observer in a circular geodesic is uniquely located at this value of $r$. This is to be kept in mind in the analysis that follows. Similar to the radial case, first we need to set up the pseudo-orthonormal Fermi frame for a circular null geodesic $ \mathcal{G} $. Then we find out components of the Riemann tensor in Fermi coordinates and use it to calculate the effective magnetic field. By `circular null geodesic' we mean the family of null geodesics for which $ r $ is constant, i.e., $ \dot{r} = \ddot{r} = 0 $. The corresponding Fermi frame for a circular null geodesic $ \mathcal{G} $ is found to be \begin{eqnarray} \hat{E}^{\mu'}_+ &=& \left( \frac{1}{\sqrt{c^2-v(r)^2}} , 0 , \frac{1}{r} , 0 \right) \nonumber\\ \hat{E}^{\mu'}_- &=& \left(\frac{r \phi ^2 v(r) v'(r)}{2 c^2 \sqrt{c^2-v(r)^2}}+\frac{\phi v(r) }{c^2} -\frac{1}{2 \sqrt{c^2-v(r)^2}}, \right. \nonumber\\ &~&\left. -\frac{ r\phi v(r) v'(r)}{c^2}, \frac{ \phi ^2 v(r) v'(r)}{2 c^2 }+\frac{1}{2 r},0\right)\nonumber\\ \hat{E}^{\mu'}_2 &=& \left( -\frac{\phi }{c} + \frac{v(r)}{c \sqrt{c^2-v(r)^2}}, \frac{\sqrt{c^2-v(r)^2}}{c} , -\frac{\phi \sqrt{c^2-v(r)^2}}{c r} , 0 \right)\nonumber\\ \hat{E}^{\mu'}_3 &=& \left( 0 , 0 , 0 , \frac{v_z}{c} \right) \label{al} \end{eqnarray} with the condition $rv(r)v'(r) + c^2 - v(r)^2 = 0$. It has to be remembered that the above expressions in Eq.(\ref{al}) have to be evaluated at $r = \sqrt{2}r_h$ and we have denoted $ v'(r) = \frac{\partial v(r)}{\partial r} $.
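Both the photon-ring radius $ r = \sqrt{2}r_h $ and the null character of $ \hat{E}_+ $ can be confirmed symbolically (a sympy sketch; the $ z $-part of the metric is irrelevant here since $ \hat{E}^{\mu'}_+ $ has no $ z $-component):

```python
import sympy as sp

c, r, rh = sp.symbols('c r r_h', positive=True)
v = -c*rh/r

# circular-null condition quoted below Eq.(al): r v v' + c^2 - v^2 = 0
cond = r*v*sp.diff(v, r) + c**2 - v**2
sol = sp.solve(sp.Eq(cond, 0), r)
assert sol == [sp.sqrt(2)*rh]          # the analog photon ring

# E_+ of Eq.(al) is null with respect to the metric of Eq.(c)
g = sp.Matrix([[-(c**2 - v**2), v, 0, 0],
               [v, 1, 0, 0],
               [0, 0, r**2, 0],
               [0, 0, 0, 1]])          # z-z entry immaterial here
Ep = sp.Matrix([1/sp.sqrt(c**2 - v**2), 0, 1/r, 0])
assert sp.simplify((Ep.T*g*Ep)[0]) == 0
```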
It is to be noted that the tetrad components depend explicitly on $\phi$. This might seem surprising, given that the analog metric that we start with is rotationally symmetric, but is due to the fact that the tetrad must satisfy the pseudo-orthonormal and parallel transport conditions given in Eq.(\ref{w}), along the null geodesic. Since the first two vectors of the tetrad basis, $\hat{E}^+$ and $\hat{E}^{-}$, are null, for a circular geodesic they must depend explicitly on the affine parameter, which in this case is chosen to be the arc length $\sqrt{2} r_h \phi$ along the circular geodesic. A similar dependence can also be seen for a timelike circular geodesic, where the $\phi$ dependence enters through the phases of harmonic functions \cite{13} rather than explicitly, the reason being that there the tetrad basis is made of the timelike tangent vector and three spacelike vectors. The corresponding expressions of the components of the effective magnetic field in this case are \begin{eqnarray} B^{+} &=& B^{-} = B^2 = 0 \nonumber\\ B^3 &=& \frac{8 \sqrt{2} h \phi \left(\phi ^2+4\right)- \left(\phi ^4-96\right)y}{384 r_h^2} \label{an} \end{eqnarray} \section{Numerical Estimates} It now remains to provide numerical estimates of the $B^A$ that we have evaluated. In order to do this, we will take recourse to various approximations that we now discuss. We note that the dimension of $B^A$ for both radial and circular geodesics is an inverse length (contrary to usual magnetic fields that come in dimensions of $1/L^2$). In order to convert $B^A$ into a quantity having dimensions of a magnetic field (Gauss or Tesla), we will need to divide it by the Bohr magneton, expressed in GeV per Tesla (or GeV per Gauss) \cite{Bluhm}. Doing this, it can be checked that a magnetic field of $10^{-12}$ Gauss translates to a value of $B^A \sim 10^{-29}~{\rm GeV}$. This is the limit of measurability of the magnetic field, as of now.
In the analysis that follows, we will use energy units only, for convenience. Let us first consider the case of radial geodesics with $k=0$, i.e., the first expression given in Eq.(\ref{B3onetwo}). Since this has dimension $L^{-1}$, we convert it to energy units by multiplying with $\hbar c_L$, where $c_L$ is the speed of light $(=3 \times 10^8~{\rm m/s})$. Thus we have $B^3 (k=0) = - r_h^2 y\hbar c_L/(24 r^4)$. In order to get an estimate for the coordinate $y$, we use the uncertainty relation. Remembering that the quasiparticles are dressed Helium-3 atoms of mass $m^*$, moving with speed $c = 3 {\rm~cm/s}$, we have $y \sim \hbar/p = \hbar / (m^*c)$. Plugging this in, we have in electron-volts, \begin{equation} |B^3 (k=0)| = \frac{1}{24}\frac{r_h^2\hbar^2c_L}{r^4 m^* c\times q_e}~~{\rm eV} \label{Beq} \end{equation} where $q_e$ is the electron charge. We now use the fact that $m^*$ is 3 times the mass of a Helium-3 atom, which is given by $3.016$ atomic mass units. A numerical estimate of $|B^3 (k=0)|$ is obtained by putting in this value of $m^*$. Now we note that the theory of low energy quasiparticles is valid for \begin{equation} E \ll \frac{\Delta_v^2}{p_Fv_F}~, {\rm i.e.}~~ E \ll 10^{-10}~{\rm eV} \label{Eeq} \end{equation} Since $B^A$ appears in the interaction energy term in the Lagrangian of Eq.(\ref{Lag1}), as an estimate we equate $B^3 (k=0) \sim 10^{-10}~{\rm eV}$, to obtain $r = 0.067\sqrt{r_h}$ (in metres). For $E \ll 10^{-10} {\rm eV}$, we therefore require $r \gg 0.067\sqrt{r_h}$. A typical value of $r_h \sim 1~\mu {\rm m}$ will thus ensure that our results are valid for $r \gg 0.0067~{\rm cm}$. Hence on a radial geodesic, at say $r \sim 1~{\rm cm}$, our results should be robust, and at this radial distance, we have $B^3 (k=0) \sim 10^{-28}~{\rm GeV}$. Smaller values of the radial distance may push up this value to $\sim 10^{-27} - 10^{-26}~{\rm GeV}$, while respecting the energy condition. This discussion was for $k=0$.
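The radial $ k=0 $ numbers quoted above can be reproduced directly (a numeric sketch in SI units; the inputs are the ones stated in the text, with CODATA values for $ \hbar $, the atomic mass unit and $ q_e $):

```python
hbar = 1.054571817e-34    # J s
cL = 3.0e8                # m/s, speed of light (value used in the text)
amu = 1.66053907e-27      # kg, atomic mass unit
qe = 1.602176634e-19      # C, electron charge
mstar = 3*3.016*amu       # dressed quasiparticle mass, 3 x (He-3 mass)
cs = 0.03                 # m/s, analog "speed of light" c

y = hbar/(mstar*cs)       # transverse coordinate from the uncertainty estimate
A = hbar*cL*y/(24*qe)     # |B^3(k=0)| = A * r_h^2 / r^4, in eV (r, r_h in m)

coef = (A/1e-10)**0.25    # |B^3| reaches 1e-10 eV at r = coef * sqrt(r_h)
assert abs(coef - 0.067) < 0.005          # validity needs r >> 0.067 sqrt(r_h)

B3 = A*(1e-6)**2/(1e-2)**4                # r_h = 1 micron, r = 1 cm, in eV
assert 1e-28 < B3*1e-9 < 3e-28            # ~ 1e-28 GeV, as quoted
```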
For $k=1$, the analysis of the second expression of Eq.(\ref{B3onetwo}) is qualitatively similar, and yields similar numerical estimates. We will now turn to circular geodesics. An analysis similar to the one outlined above shows that in this case, setting $y=0$ in Eq.(\ref{an}) implies that $B^3 = (1/r_h^2)(5.6 \times 10^{-15}\phi + 1.4 \times 10^{-15} \phi^3)~{\rm eV}$. If we now set a typical value of $\phi = \pi$, then in order to satisfy $E \ll 10^{-10} {\rm eV}$, we require $r_h \gg 2.4~{\rm cm}$. As an estimate, if we set $r_h = 10~{\rm cm}$, we obtain $B^3 \sim 10^{-21}~{\rm GeV}$. Our numerical analysis above establishes the fact that the effective magnetic field that is seen by an inner observer in superfluid $\text{He}^3$-A is within bounds reachable by present experiments, i.e., it is not vanishingly small. Hence, such an observer should measure effects that are known in quantum mechanics regarding the interaction of spins with such magnetic fields. In this case, however, the effective magnetic field is non-uniform. For radial null geodesics, this falls off as the fourth power of the radial distance, while for circular null geodesics, it is explicitly dependent on the angular variable. From Eq.(\ref{B3onetwo}), using the condition $r \gg 0.067\sqrt{r_h}$, it is seen that for relatively large values of $r$ (compared to $r_h$), $B^3$ varies slowly. Hence, if we approximate $B^3$ by a uniform (average) value, for such large $r$, one should expect the inner observer to see oscillations between a spin up and a spin down state of the massless fermionic quasiparticles when the system evolves from a general spin state. A similar analysis holds for null circular geodesics, for small values of the angular coordinate. The external observer (moving at $3$ cm per sec) along a radial coordinate or moving at a fixed radius, however, perceives these quasiparticles as dressed Helium-3 atoms.
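Similarly, the circular-geodesic estimates quoted above follow from Eq.(\ref{an}) with $ y=0 $ and $ h \sim \hbar/(m^* c) $ from the uncertainty relation (a numeric sketch; the coefficients $ 5.6\times10^{-15} $ and $ 1.4\times10^{-15} $ are recovered to within a few percent, with $ r_h $ in metres):

```python
import math

hbar = 1.054571817e-34    # J s
cL = 3.0e8                # m/s
amu = 1.66053907e-27      # kg
qe = 1.602176634e-19      # C
mstar = 3*3.016*amu       # kg, dressed quasiparticle mass
cs = 0.03                 # m/s, analog "speed of light"
h = hbar/(mstar*cs)       # Fermi coordinate h ~ hbar/(m* c), as for y before

to_eV = hbar*cL/qe        # converts an inverse length to eV
c1 = (8*math.sqrt(2)*4/384)*h*to_eV   # coefficient of phi / r_h^2, eV m^2
c3 = (8*math.sqrt(2)/384)*h*to_eV     # coefficient of phi^3 / r_h^2, eV m^2
assert abs(c1/5.6e-15 - 1) < 0.05 and abs(c3/1.4e-15 - 1) < 0.05

phi = math.pi
rh_min = math.sqrt((c1*phi + c3*phi**3)/1e-10)
assert abs(rh_min - 0.024) < 0.002         # validity needs r_h >> 2.4 cm

B3_GeV = (c1*phi + c3*phi**3)/0.1**2*1e-9  # r_h = 10 cm, in GeV
assert 1e-21 < B3_GeV < 1e-20              # ~ 1e-21 GeV, as quoted
```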
It would therefore seem that such an observer is likely to see the spin of the quasiparticles also oscillate between an up-spin state and a down-spin state. In fact, this can be quantified further. The time difference between two events for the external observer $\Delta t$ is related to that for the inner observer $\Delta\tau$ by \cite{VolovikBookHelium} $\Delta t = \Delta\tau/\sqrt{1-r_h^2/r^2}$. Hence, the characteristic frequency of oscillation (assuming a uniform magnetic field) for the external observer is dilated by a factor of $\sqrt{1-r_h^2/r^2}$, as compared to the inner observer. For null radial geodesics, for small values of $r_h/r$, $\Delta t \sim \Delta\tau$. This is relevant for us, as we have already seen that the energy condition demands that here, $r \gg 0.067\sqrt{r_h}$, and that the effective magnetic field can be approximated to a constant for small values of $r_h/r$. For circular geodesics, since $r = \sqrt{2}r_h$, the dilation factor is $\sqrt{2}$. Possible experimental signatures of this might be an interesting issue to pursue, modulo a limitation of our analysis that we point out in the next section. \section{Discussions, conclusions and limitations} In this paper, we started with a $ \text{He}^3 $-superfluid system where the vacuum excitations are Bogoliubov fermions, which are dressed $ \text{He}^3 $ atoms and see an effective curved space-time with moving superfluid vacuum in the background. We first established the Fermi coordinates along a null radial as well as a null circular geodesic, and calculated the components of the Riemann curvature tensor in these coordinates. Having obtained the curvature tensor, we determined an effective magnetic field due to curvature coupling. The whole analysis was done in the analog black hole draining bath-tub geometry of fig.(\ref{fg1}) discussed by Volovik in \cite{5}.
As we have mentioned in the beginning of this paper, spinning particles do not exactly follow geodesic trajectories, but the difference of the latter from their actual paths is small. We can however envisage an internal null observer (feeling the analog metric), on such a geodesic trajectory, who makes a measurement on the fermionic system. This internal null observer however moves with a finite speed ($\sim 3~{\rm cm/s}$) and sees the non-trivial effect of curvature coupling to the fermionic quasiparticles. It is well known (see, e.g., \cite{Sakurai}) that spin-half fermions interacting with a constant external magnetic field oscillate between the up and down states as they evolve from a general spin state. The frequency of this oscillation is proportional to the Bohr magneton. In this case, we obtain an effective interaction term that is similar to the former, and if we approximate the magnetic field to a uniform value assuming that it varies slowly, similar effects of oscillation between the up and down spin states should result from a standard quantum mechanical analysis. It is not difficult to identify an external observer who moves in a circular or radial trajectory in the given geometry, and coincides with the internal observer. Our analysis would imply that this external observer should see the same effects on the spin state of the dressed Helium-3 atom that she perceives, and this might be a measurable phenomenon in futuristic experiments. Before ending this paper, we should point out that the analysis that we have presented here is limited by the fact that it is applicable only to two special classes of geodesics, i.e., radial or circular. A generic geodesic path may be neither of these. However, this last case is difficult to analyse analytically, and we leave a study of such a situation for a future publication. \begin{center} {\bf Acknowledgements} \end{center} \noindent It is a pleasure to thank K. Bhattacharya and V. Subhrahmanyam for useful discussions.
\bibliographystyle{utphys}
\section{Introduction} Effective interactions between nano-/macromolecular bodies in aqueous solutions can broadly be decomposed into two equally important contributions: The static or equilibrium forces, and the dynamic forces, arising when the system is driven out of equilibrium, such as when the bodies are in relative motion \cite{Israela,Butt}. While the former originate in the disjoining pressure due to direct and/or solvent-mediated surface forces, operating primarily at the nanoscale, the latter depend on dynamic molecular processes in the solvent, and also on (slow) hydrodynamic stresses as the intervening solvent is drained/sheared from the liquid film separating the interacting surfaces. The dynamic forces can thus be relevant over a much wider range of nano-/microscale separations \cite{Brenner,Butt,Israela,Dhont}. The extended Derjaguin-Landau-Verwey-Overbeek theory of colloidal stability identifies three types of static surface interactions \cite{Israela,Butt}: The electrostatic interactions, depending on the specific nature of mobile and fixed molecular charges \cite{Elec}, the ubiquitous van der Waals (vdW) interactions, depending on the dielectric response of molecular materials \cite{Woods}, and solvent-mediated interactions, stemming from the hydrophobic and/or hydration forces between solvent-exposed surfaces \cite{Hydration}. The dynamic forces are, on the other hand, much more difficult to classify unequivocally. Some of the equilibrium forces, as is the case with, for instance, the vdW interaction itself, can display an inherent dynamic component \cite{Kardar,Mkrtchian}. Others may exhibit no equilibrium counterpart at all, as is the case with the hydrodynamic interactions, which have a significant impact on the dynamic properties (e.g., spatiotemporal correlations) of colloids in bulk \cite{Brenner,Butt,Israela,Dhont} or strongly confined fluids \cite{Keyser,Diamant}. 
Recent advances in surface-force techniques, such as the surface forces apparatus (SFA) and the atomic force microscope (AFM) \cite{Israela,Butt}, have enabled high-precision determination of both static and dynamic forces acting between contact surfaces across an intervening layer of simple or complex fluid (for recent reviews of related techniques and applications, see Refs. \cite{Butt-rev,Israelachvili_ROPP,Korea,Haviland,Ellis3,Neto,Bocquet2010,Wang2017}). Dynamic SFA usually incorporates two apposed, molecularly smooth, flattened or curved surfaces of relatively large radii of curvature, with one of the surfaces driven in controlled three-dimensional (linear/oscillating) motion \cite{Israelachvili_ROPP}. This allows for measuring various (generally frequency-dependent) rheological properties of thin fluid films and gives direct access to shear/compressional forces exerted on the bounding surfaces by the intervening fluid over a wide range of surface separations and velocities/frequencies of the imposed surface motion \cite{Granick1991b,Klein1998,Klein2007,CottinBizonne2008,Bureau2010,Leroy2012,Steinberger2008}. Dynamic AFM has, on the other hand, emerged as an important tool for probing the local response of hard or soft material interfaces in liquid media \cite{Butt-rev,Haviland,Wang2017}. In colloidal-probe AFM \cite{Butt-rev}, a relatively large, cantilever-mounted colloidal particle oscillates in proximity to an interface, with the power spectrum of the oscillations providing information on the hydrodynamic/viscoelastic properties of the surrounding liquid and the probe-surface interactions. The effects of oscillatory external forcing as well as thermal noise in dynamic AFM, involving colloidal probes or flat microlevers, have been analyzed on various levels of approximation \cite{Sader3,Alcaraz,Benmouna,Butt-rev,Clarke,Siria2009}. In very recent works, Maali et al. 
have first analyzed the hydrodynamics of a vibrating microsphere \cite{Maali1} and then generalized the methodology to a thermally driven vibrating sphere yielding a thermal-noise AFM probe, where the sub-nanometer thermal motion of the sphere, coupled to a spring and dashpot mathematical model, reveals an (elasto)hydrodynamic coupling between the sphere and a vicinal, hard (mica) or soft (air-bubble), substrate in water \cite{Maali2}. While shear/drainage thin-film flows, caused by small-amplitude oscillatory or stochastic surface forcing \cite{Israelachvili_ROPP,Korea,Butt-rev,Haviland,CottinBizonne2008,Leroy2012,Steinberger2008,Wang2017,Maali2,Maali1,Butt-rev,Alcaraz,Benmouna,Clarke,Sader3,Siria2009}, have been a common motif in the SFA/AFM contexts, other techniques for generating such flow patterns have been developed based on quartz crystal resonators (QCRs) to probe near-surface fluid properties \cite{Butt}, such as boundary slippage effects \cite{Ellis3,Neto,Bocquet2010}. QCRs are used (also in combination with the dynamic SFA \cite{Berg2003}) as vibrating fluid substrates, driven at their resonance frequency to produce unsteady thin-film flows at high frequencies and shear rates (see the review in Ref. \cite{Neto}). In the aforementioned contexts, it is important to analyze first the well-defined limits and only then proceed to more advanced models to account for the various couplings and feedbacks \cite{Grier}. It is also important to realize that forced motion is in general incompatible with the assumptions of weak acceleration, requiring one to account for finite compressibility effects, which makes the underlying hydrodynamic problem more difficult to tackle \cite{Netz}. Motivated by these advances, we formulate a general framework for surface-driven hydrodynamic interactions across a compressible and viscous fluid film, mechanically driven, in transverse/longitudinal directions, at one of its two rigid boundaries using an arbitrary external forcing. 
We focus on a fluid film with plane-parallel bounding surfaces, a geometry that is more amenable to systematic calculations. Our primary interest is in the fluctuational behavior of shear/compressional hydrodynamic stresses generated across the film and on the boundaries, when the surface forcing is stochastic and of arbitrary (thermal or non-thermal) spectral density. The concrete example of temporally uncorrelated (white-noise) forcing will be analyzed numerically in detail. We show that the compressible, viscous hydrodynamic coupling between the mobile (forced) and the fixed bounding surfaces of the film leads to a complicated dependence of the same-plate and cross-plate stress correlation functions on the inter-surface separation (film thickness). This includes decaying power-laws with universal exponents and {\em non-decaying} stress variances on the fixed plate due to the acoustic resonances originating in the compressional modes. This problem in some sense represents an inverse one with respect to the recently analyzed case of thermal fluctuating hydrodynamics \cite{LL} between two rigid plates, geared towards elucidating the possible role of the {\em hydrodynamic Casimir-like effects} (in analogy with other examples of non-equilibrium fluctuation-induced forces involving fluctuating classical fields \cite{Jones,Chan, Monahan, Bartolo,Dean,Ajay1, Antezza, Kruger, Kirkpatrick13, Kirkpatrick14, Kafri-Kardar}). The hydrodynamic Casimir-like phenomenon was in fact shown to exist only in its indirect ``secondary" form \cite{Monahan}: While the average stress on either of the bounding surfaces is zero, its fluctuations indicate long-range correlations, as a result of near-equilibrium thermal fluctuations in the confined fluid film. 
The forced fluctuations of one of the bounding plates of a confined fluid layer analyzed below are rather different from the hydrodynamic Casimir-effect phenomenology, and are more closely related to the so-called {\em Bjerknes interactions} in driven acoustic resonators \cite{Bruus2,Leighton}. Contrary to the Bjerknes interactions though, the forcing in our case is not due to a volume-distributed external acoustic field, but is rather exerted on one (or both) of the rigid bounding plates. We introduce our framework in Section \ref{sec:formalism} and calculate the relevant hydrodynamic response/correlations for an arbitrary stochastic surface forcing in Section \ref{sec:response}. The numerical results are given for the special case of white-noise forcing in Sections \ref{sec:shear_corr} and \ref{sec:norm_corr}, followed by the conclusions in Section \ref{sec:conclusion}. \section{Model and Formalism} \label{sec:formalism} \subsection{Model geometry and physical description} \label{sec:model} Let us consider a classical, compressible, viscous fluid film confined between two rigid, parallel plates of infinite extent in the $x-y$ coordinate plane at vertical locations $z=0$ and $z=h>0$; see Fig. \ref{fig:schematic}. The upper plate is kept at rest, while an external surface force {\em per unit area}, $\mathbf{f}=\mathbf{f}(t)$, drives the lower plate in arbitrary, translational, rigid-body movements around its reference plane at $z=0$. As a consequence, the mobile plate exhibits a time-dependent {\em surface velocity}, $\mathbf{u}(t)$, which is to be determined consistently and concurrently with the fluid velocity, density and pressure fields within the film, i.e., $\mathbf{v}=\mathbf{v}(\mathbf{r}; t)$, $\rho=\rho(\mathbf{r}; t)$, and $p=p(\mathbf{r}; t)$, respectively. 
In its general aspects, the present model is used to capture the elemental features of typical surface-driven flows in standard surface-force experiments \cite{Israelachvili_ROPP,Korea,Butt-rev,Haviland,CottinBizonne2008,Leroy2012,Steinberger2008,Wang2017,Maali2,Maali1,Alcaraz,Benmouna,Clarke,Sader3,Siria2009,Ellis3,Neto,Bocquet2010,Granick1991b,Klein1998,Klein2007,Bureau2010,Berg2003,Grier}, but it is also designed as a first-step model to facilitate an unequivocal elucidation of the basic physics of the problem using direct analytical calculations. It is nevertheless useful to detail the simplifying assumptions involved. First, we note that while flattened/planar contact surfaces, as in our model, have been used in dynamic SFA/QCRs (see, e.g., Refs. \cite{Butt,Israela,Granick1991b,Klein1998,Berg2003,Israelachvili_ROPP,Bureau2010}), and also in dynamic AFM with wide flat microlevers \cite{Siria2009}, experiments often rely on cross-cylindrical or sphere-plane geometries. The radii of curvature in these applications are however very large (around a few mm/cm in SFA and tens of $\mu$m in AFM) as compared to the film thickness (varied in the sub-nm to $\mu$m range); see, e.g., Refs. \cite{Israelachvili_ROPP,Granick1991b,Klein1998,Klein2007,CottinBizonne2008,Steinberger2008,Bureau2010,Korea,Butt-rev,Haviland,Wang2017,Alcaraz,Benmouna,Maali1,Maali2,Leroy2012}. For such weakly curved surfaces, the Derjaguin approximation \cite{Israela,Butt} can be used to predict the interaction forces between curved boundaries based merely on the results obtained in the plane-parallel geometry, or vice versa. Secondly, the amplitude of displacements (especially in the $z$ direction) of the mobile plate is assumed here to be much smaller than the film thickness or, equivalently, $|\mathbf{u}(t)|$ is taken to be sufficiently small \cite{Note_small_parameter}. 
This is in fact the typical situation also for the perpendicular (compressional), oscillatory or noise-driven, surface motions utilized in dynamic SFA/AFM experiments \cite{Israelachvili_ROPP,Korea,Butt-rev,Haviland,CottinBizonne2008,Leroy2012,Steinberger2008,Wang2017,Maali2,Maali1,Alcaraz,Benmouna,Clarke,Sader3,Siria2009}. It enables one to assume that the inter-plate separation is fixed on the leading order and also allows for a linearized treatment of the full Navier-Stokes equations \cite{Alcaraz,Benmouna,Clarke,Sader3} by setting $\mathbf{v} = \mathbf{v}^{(1)}$, $p = p_0+p^{(1)}$ and $\rho = \rho_0+\rho^{(1)}$, where the superscript $(1)$ denotes the first-order fluctuations around the rest values $\mathbf{v}=\mathbf{0}$, $p=p_0$ and $\rho =\rho_0$. Thirdly, we neglect possible boundary slippage effects \cite{Ellis3,Neto,Bocquet2010,CottinBizonne2008,Maali1,Maali2,Siria2009,Steinberger2008} by taking no-slip boundaries with $ \mathbf{v}(x, y, z=0; t) = \mathbf{u}(t)$ and $ \mathbf{v}(x, y, z=h; t) = \mathbf{0}$. Finally, we ignore local temperature variations and heat transfer processes in the film \cite{Note_T} (to be considered elsewhere \cite{Chris2017}), the (nonlinear) viscous dissipation \cite{Klein2007}, and the relaxation effects that can formally be accounted for by taking frequency-dependent viscosities \cite{LL}. \begin{figure}[t!] \begin{center} \includegraphics[width=7cm]{schematic_v6.pdf} \caption{Sideview of a plane-parallel film of a compressible, viscous fluid, driven at its lower boundary ($z=0$) with an external forcing, giving the uniform surface velocity $\mathbf{u}(t)$. } \label{fig:schematic} \end{center} \end{figure} \subsection{Linearized fluid-film hydrodynamics} \label{sec:linhydro} The surface-driven flow problem described above is governed by the following set of equations to the first order in field fluctuations \cite{LL} \begin{align} &\!\!\!\!\eta \nabla^2 \mathbf{v}^{(1)}\! +\! \left( \frac{\eta}{3} + \zeta\right)\! 
\nabla\left(\!\nabla\cdot \mathbf{v}^{(1)}\!\right)\! -\! \nabla p^{(1)}\!-\! \rho_0\partial_t \mathbf{v}^{(1)} \! = 0, \label{eq:lns1}\\ &\quad\,\, \partial_t\rho^{(1)}+ \rho_0\nabla \cdot \mathbf{v}^{(1)} {} = 0, \qquad \,\, p^{(1)} = c_0^2\rho^{(1)}, \label{eq:prho} \end{align} where $ c_0$ is the isothermal speed of sound \cite{Note_typo}. These equations are supplemented by the no-slip boundary conditions $\mathbf{v}^{(1)} (x,y, z=0; t) = \mathbf{u}(t)$, $\mathbf{v}^{(1)} (x, y, z=h; t) = 0$. The first-order hydrodynamic stress tensor is \begin{equation} \label{eq:sigma} \sigma_{jk}^{(1)} \!= \eta \big[\nabla_j v_k^{(1)} + \nabla_k v_j^{(1)}\big] - \delta_{jk}\!\left[\left(\frac{2\eta}{3} - \zeta\right) \!\nabla_l v_l^{(1)} +c_0^2 \rho^{(1)}\right], \end{equation} with $j, k, l=x, y, z$ denoting the Cartesian components. We drop the superscript $(1)$ for notational simplicity; thus, $\mathbf{v}(\mathbf{r}; t)$, $\rho(\mathbf{r}; t)$, and $p(\mathbf{r}; t)$ hereafter denote {\em only} the first-order field fluctuations around the given stationary values. Due to the one-dimensional nature of the flow, we drop the variables $x$ and $y$, and use the notation $\mathbf{v} = (\mathbf{v_{\!_\parallel}}, v_z)$, $\mathbf{v_{\!_\parallel}} = (v_x, v_y)$ to simplify Eqs. (\ref{eq:lns1}) and (\ref{eq:prho}) as \begin{align} &\rho_0 \partial_t \mathbf{v_{\!_\parallel}} = \eta \partial_z^2 \mathbf{v_{\!_\parallel}}, \label{eq:linvx}\\ &\rho_0 \partial_t v_z = -c_0^2 \partial_z \rho + \left(\frac{4 \eta}{3} + \zeta\right) \partial_z^2 v_z, \label{eq:linvz}\\ &\partial_t \rho + \rho_0 \partial_z v_z=0. \label{eq:linrho} \end{align} The different components of the velocity field are thus decoupled, as can be seen by combining Eqs. (\ref{eq:linvz}) and (\ref{eq:linrho}) to obtain the standard, attenuated wave equation \begin{equation} \label{eq:zwave} \partial^2_t v_z = c_0^2 \partial_z^2 v_z + \nu_{\!_\perp} {\partial_z^2 \partial_t v_z}. 
\end{equation} Due to the symmetry upon interchanging $v_x$ and $ v_y$, we restrict our discussions of $\mathbf{v_{\!_\parallel}}$ only to its $v_x$ component. The relevant non-zero components of the stress tensor include only the transverse and longitudinal ones \begin{align} \label{eq:sigmax_0} \sigma_{xz}(z;t)&= \rho_0\nu_{\!_\parallel} \partial_z v_x\\ \label{eq:sigmaz_0} \sigma_{zz}(z;t)&= \rho_0 \nu_{\!_\perp} \partial_z v_z - c_0^2 \rho (z;t), \end{align} where $\nu_{\!_\parallel} = \eta/\rho_0$ and $\nu_{\!_\perp} = \left(4\eta/3+\zeta\right)/\rho_0$ are the corresponding transverse (shear) and longitudinal (compressional) kinematic viscosities, respectively. Since our formulation is linear, the solutions reported below can linearly be superposed to study the case with {\em both} lower and upper plates undergoing small-amplitude displacements. This is a straightforward generalization, which we shall not discuss any further. \subsection{Equation of motion for the mobile surface} \label{subsec:EOM} Since the lower (mobile) plate is driven by the external force per unit area, $\mathbf{f}=\mathbf{f}(t)$, causing it to move with the velocity $\mathbf{u}=\mathbf{u}(t)$, we can write Newton's second law for its motion in the frequency (Fourier) domain as \begin{equation} \label{eq:eom} - {\mathrm{i}} m \omega \widetilde{u}_j(\omega) = \widetilde{f}_j(\omega) - \widetilde{\sigma}_{jz}\big(z=0; \omega\big) n_z,\quad j=x, z, \end{equation} where $m$ is the plate mass {\em per unit area}, and $\widetilde{f}_j$ and $\widetilde{u}_j$ are the frequency-domain components of $\mathbf{f}$ and $\mathbf{u}$, respectively. Here, $n_z=-1$ is the $z$ component of the unit vector along the inward normal to the mobile plate, making $-\widetilde{\sigma}_{jz}\left(z=0; \omega\right) n_z$ the force component per unit area acting on the plate due to hydrodynamic stresses \cite{LL}. We solve Eqs. 
(\ref{eq:linvz}) and (\ref{eq:linrho}) with the required boundary conditions to find the solutions for the fluid velocity/density fields in terms of the surface velocity $\mathbf{u}(t)$, which can itself be determined as a function of $\mathbf{f}(t)$ by inserting those solutions into the stress term in Eq. (\ref{eq:eom}). This gives the desired final forms of the fluid velocity/density fields as functions of the external forcing $\mathbf{f}(t)$. Our formulation can be implemented with any surface forcing model of either deterministic or stochastic origin. Stochastic forcing is particularly relevant to the thermal-noise AFM probe \cite{Maali2}, resulting, e.g., from Brownian fluctuations in the setup or the ambient fluid. We shall adopt a Gaussian-distributed stochastic forcing with mean $\langle \widetilde{f}_j(\omega) \rangle = 0$ and two-point correlation function $\langle \widetilde{f}_j(\omega) \widetilde{f}_k(\omega') \rangle = 4\pi {\widetilde {\mathcal G}_j}(\omega) \delta_{jk}\delta(\omega+\omega')$, where ${\widetilde {\mathcal G}_j}(\omega)$ are the real-valued and positive forcing spectral densities. In our numerical analysis later, we shall adopt the {\em white-noise Ansatz} with ${\widetilde {\mathcal G}_j}(\omega) = {\mathcal G}_j$ (of dimension $[{\textrm{pressure}}]^2\cdot [{\textrm{time}}]$) taken as constants. Note that the {\em zero mean} taken for the surface forcing implies that the hydrodynamic stresses acting on the plates due to the fluctuations in the film will be zero on average, $\langle \sigma_{xz}(z;t)\rangle = \langle\sigma_{zz}(z;t) \rangle = 0$, where the brackets $\langle\cdots\rangle$ denote ensemble averaging over various realizations of the external forcing. Yet, the variance and correlation functions (or correlators) of instantaneous stresses can be finite, characterizing the measurable fluctuation-induced forces mediated by hydrodynamic correlations between the plates on the leading order. 
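As a numerical aside, the white-noise Ansatz is easy to illustrate: the minimal sketch below (sample sizes and the forcing amplitude are illustrative assumptions, not values from the text) draws Gaussian white-noise forcing realizations and checks that the discrete spectral density is flat.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 4096, 200      # samples per realization, number of realizations
sigma = 1.0           # forcing amplitude (arbitrary units, an assumption)

# Gaussian white-noise forcing f(t): zero mean, delta-correlated in time
f = rng.normal(0.0, sigma, size=(M, N))

# Discrete spectral density averaged over realizations; for white noise,
# <|f~_k|^2>/N should be flat in k and equal to sigma^2
ft = np.fft.fft(f, axis=1)
spec = np.mean(np.abs(ft) ** 2, axis=0) / N

print(spec.mean(), spec.std())
```

The flat spectrum is the discrete counterpart of taking ${\widetilde {\mathcal G}_j}(\omega)$ constant in the continuum correlation function above.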
Although these forces resemble the secondary hydrodynamic Casimir-like forces arising from near-equilibrium thermal fluctuations \cite{Monahan}, it is important to note that the stochastic forcing here can generally be non-thermal, in which case it can produce far-from-equilibrium stress fluctuations and correlations in the fluid film. To evaluate these correlations, we first give the solutions to the velocity and density fields in terms of the stress response functions of the fluid film. \section{Response to surface forcing} \label{sec:response} \subsection{Velocity and density fields} \label{subsec:vel_dens_fields} The velocity and density field fluctuations can be obtained by transforming Eqs.~\eqref{eq:linvx} and \eqref{eq:zwave} to the frequency domain. The governing equations for transverse and longitudinal modes read, respectively, as \begin{align} \label{eq:vxeq} &\partial_z^2 \widetilde{v}_x (z;\omega) = -\alpha^2(\omega)\,\widetilde{v}_x (z;\omega), \\ &\partial_z^2 \widetilde{v}_z (z;\omega) = -\kappa^2(\omega)\,\widetilde{v}_z(z;\omega), \label{eq:vzeq} \end{align} where $\alpha^2(\omega) = {\mathrm{i}}{\omega}/{\nu_{\!_\parallel}}$ and $\kappa^2(\omega) = {\omega^2}/{\left(c_0^2 - {\mathrm{i}} \omega \nu_{\!_\perp}\right)}$. Here, the inverses of $\alpha$ and $\kappa$ give the frequency-dependent (screening) length-scales associated with the shear and compressional modes, respectively. There will be two equivalent sets of solutions for $\alpha$ and $\kappa$, fulfilling the relations $\alpha^\ast(\omega)=\pm\alpha(-\omega)$, $\kappa^\ast(\omega)=\pm\kappa(-\omega)$. 
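These relations can be checked numerically. The sketch below (the water-like parameter values and the driving frequency are our own illustrative assumptions) evaluates $\alpha(\omega)$ and $\kappa(\omega)$ with a branch choice for which the relations hold with the plus signs:

```python
import numpy as np

def alpha(w, nu_par):
    # alpha^2 = i*w/nu_par; the principal complex square root
    return np.sqrt(1j * w / nu_par)

def kappa(w, nu_perp, c0):
    # kappa^2 = w^2/(c0^2 - i*w*nu_perp); branch with kappa*(w) = kappa(-w)
    return np.abs(w) / np.sqrt(c0**2 - 1j * w * nu_perp)

# illustrative, water-like values (SI units): shear/compressional kinematic
# viscosities and the speed of sound
nu_par, nu_perp, c0 = 1.0e-6, 4.0e-6, 1.5e3
w = 2 * np.pi * 1.0e6   # an assumed 1 MHz driving frequency

a, k = alpha(w, nu_par), kappa(w, nu_perp, c0)
# reality conditions: alpha*(w) = alpha(-w) and kappa*(w) = kappa(-w)
print(np.conj(a) - alpha(-w, nu_par), np.conj(k) - kappa(-w, nu_perp, c0))
```

With these branches, both differences vanish to machine precision, so the time-domain fields built from these modes are real-valued.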
One can conveniently choose the solutions satisfying these relations with plus signs, giving real (R) and imaginary (I) parts \begin{align} \label{eq:alphaRI} &\alpha_{\mathrm{R}}(\omega) = \alpha_{\mathrm{I}}(\omega)\operatorname{sgn}(\omega) = \pm \sqrt{\frac{|\omega|}{2 \nu_{\!_\parallel}}},\\ &\kappa_{\mathrm{R}}(\omega) = \pm\frac{|\omega|}{\sqrt{2}} \sqrt{\frac{\sqrt{c_0^4 + \omega^2 \nu_{\!_\perp}^2}+c_0^2}{c_0^4 + \omega^2 \nu_{\!_\perp}^2}}, \label{eq:kappaR}\\ &\kappa_{\mathrm{I}}(\omega) = \pm\frac{\omega}{\sqrt{2}} \sqrt{\frac{\sqrt{c_0^4 + \omega^2 \nu_{\!_\perp}^2}-c_0^2}{c_0^4 + \omega^2 \nu_{\!_\perp}^2}}, \label{eq:kappaI} \end{align} where $\operatorname{sgn}(\cdot)$ is the sign function. Now, solving Eqs.~(\ref{eq:vxeq}) and~(\ref{eq:vzeq}) with the required boundary conditions gives \begin{align} v_x (z;t) &= \int \frac{\mathrm{d}\omega}{2\pi}\,e^{-{\mathrm{i}}\omega t}\,\frac{\sin\left[\alpha(\omega) (h-z)\right]}{\sin\left[\alpha(\omega) h \right]}\,\widetilde{u}_x(\omega),\label{eq:vxsol}\\ v_z (z;t) &= \int \frac{\mathrm{d}\omega}{2\pi}\,e^{-{\mathrm{i}}\omega t}\,\frac{\sin\left[\kappa(\omega) (h-z)\right]}{\sin\left[\kappa(\omega) h\right]}\,\widetilde{u}_z(\omega). \label{eq:vzsol} \end{align} The density field fluctuations can be obtained using Eqs.~\eqref{eq:linrho} and \eqref{eq:vzsol}, yielding \begin{align} \label{eq:rhointegral} \rho(z;t) &= \rho_0\!\int \frac{\mathrm{d}\omega}{2\pi}\,e^{-{\mathrm{i}}\omega t} \left[\frac{{\mathrm{i}}\kappa(\omega)}{\omega}\right] \frac{\cos\left[\kappa(\omega) (h-z)\right]}{\sin\left[\kappa(\omega) h\right]}\,\widetilde{u}_z(\omega). 
\end{align} It should be noted that the homogeneous parts of the solutions (in the time domain) are discarded as they depend on initial conditions, being irrelevant in the long-time stationary state to be studied here. \subsection{Response and correlation functions} \label{subsec:response_functions} Plugging Eqs. \eqref{eq:vxsol}-\eqref{eq:rhointegral} into Eqs. \eqref{eq:sigmax_0} and \eqref{eq:sigmaz_0}, we can write the two components of the surface stress tensor, $\widetilde{\sigma}_{jz}(z=0; \omega)$, in terms of the surface velocity components $\widetilde{u}_j(\omega)$ for $j=x, z$. Inserting the results into Eq. (\ref{eq:eom}), we find $\widetilde{u}_j(\omega)$ in terms of the surface forcing components, $\widetilde{f}_j(\omega)$ and, then, using Eqs. \eqref{eq:vxsol} and \eqref{eq:vzsol}, find the velocity field components $v_j (z;t)$ in terms of $\widetilde{f}_j(\omega)$ as \begin{align} v_j (z;t) &= \int \frac{\mathrm{d}\omega}{2\pi}\,e^{-{\mathrm{i}}\omega t}\, {\widetilde R}_j(z; \omega)\widetilde{f}_j(\omega), \label{eq:vxsol1} \end{align} where the {\em velocity response functions} are given by \begin{widetext} \begin{align} {\widetilde R}_x(z; \omega) &= \frac{\sin{[\alpha(\omega)(h-z)]}}{\rho_0\nu_{\!_\parallel} \alpha(\omega) \left( \cos{[\alpha(\omega)h]} - \frac{m}{\rho_0}\alpha(\omega) \sin{[\alpha(\omega)h]}\right)}, \label{eq:vxsol2}\\ {\widetilde R}_z(z; \omega) &= \frac{\sin{[\kappa(\omega)(h-z)]}}{\rho_0\nu_{\!_\perp} \left( 1 + {\mathrm{i}}\frac{c_0^2}{\nu_{\!_\perp}\omega}\right)\kappa(\omega) \left( \cos{[\kappa(\omega)h]} - \frac{m}{\rho_0}\kappa(\omega) \sin{[\kappa(\omega)h]}\right)}. \label{eq:vzsol2} \end{align} \end{widetext} These play the role of the Oseen tensor components in the considered geometry. 
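For numerical evaluation, these response functions can be transcribed directly using complex trigonometric functions; the sketch below is our own illustration of Eqs. \eqref{eq:vxsol2} and \eqref{eq:vzsol2} (all parameter values are hypothetical, water-like numbers, not taken from the text).

```python
import numpy as np

def R_x(z, w, h, m, rho0, nu_par):
    """Transverse velocity response function (Eq. vxsol2 sketch)."""
    a = np.sqrt(1j * w / nu_par)                  # alpha(w)
    den = rho0 * nu_par * a * (np.cos(a * h) - (m / rho0) * a * np.sin(a * h))
    return np.sin(a * (h - z)) / den

def R_z(z, w, h, m, rho0, nu_perp, c0):
    """Longitudinal velocity response function (Eq. vzsol2 sketch)."""
    k = np.abs(w) / np.sqrt(c0**2 - 1j * w * nu_perp)   # kappa(w)
    pref = rho0 * nu_perp * (1 + 1j * c0**2 / (nu_perp * w))
    den = pref * k * (np.cos(k * h) - (m / rho0) * k * np.sin(k * h))
    return np.sin(k * (h - z)) / den

# illustrative parameters: a 1 micron film of a water-like fluid
w, h, m, rho0 = 1.0e5, 1.0e-6, 1.0e-2, 1.0e3
nu_par, nu_perp, c0 = 1.0e-6, 4.0e-6, 1.5e3

# no-slip at the fixed upper plate: both responses vanish at z = h
print(R_x(h, w, h, m, rho0, nu_par), R_z(h, w, h, m, rho0, nu_perp, c0))
```

The $\sin[\,\cdot\,(h-z)]$ numerators enforce the no-slip condition on the fixed plate by construction, which provides a simple sanity check on any implementation.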
We can now express the stress tensor components in terms of the external forcing as \begin{align} \label{eq:sigmaxf1} \sigma_{jz}(z;t)&=\int\frac{\mathrm{d}\omega}{2\pi}\,e^{-{\mathrm{i}}\omega t} \widetilde{\chi}_{jz}(z; \omega)\widetilde{f}_{j}(\omega), \end{align} where the {\em stress response functions} (analogous to the pressure vector for the Oseen problem \cite{Dhont}) are given by \begin{align} \label{eq:sigmaxf} \widetilde{\chi}_{xz}(z; \omega) & = \rho_0\nu_{\!_\parallel} \partial_z {\widetilde R}_x(z; \omega),\\ \label{eq:sigmazf} \widetilde{\chi}_{zz}(z; \omega) & = \rho_0\nu_{\!_\perp} \left( 1 + {\mathrm{i}} \frac{c_0^2}{\nu_{\!_\perp}\omega}\right) \partial_z {\widetilde R}_z(z; \omega). \end{align} Equations \eqref{eq:sigmaxf1}-\eqref{eq:sigmazf} can be used to evaluate the desired two-point correlators of the stresses across the fluid film ($0\leq z, z'\leq h$) defined as \begin{equation} \label{eq:csheardef_0} {\mathcal C}_{jz} (z,z'; t-t') = \langle \sigma_{jz}(z;t)\sigma_{jz}(z';t') \rangle, \end{equation} for $ j=x,z$, where the time homogeneity of the correlators in the stationary state is also explicitly indicated. The average in Eq. \eqref{eq:csheardef_0} can be evaluated (Section \ref{subsec:EOM}), yielding the transverse/longitudinal stress correlators in the frequency domain in terms of the stress response functions \eqref{eq:sigmaxf} and \eqref{eq:sigmazf} as \begin{equation} \label{eq:sigmaxcorr} {\widetilde{\mathcal C}}_{jz} (z,z'; \omega) = 2{\widetilde {\mathcal G}_j}(\omega)\widetilde{\chi}_{jz}(z; \omega) \widetilde{\chi}_{jz}(z'; -\omega). 
\end{equation} Other relevant quantities include the two-point correlators of density and pressure fluctuations expressed as \begin{align} \label{eq:density_corr_def} &{\mathcal C}_{\rho\rho} (z,z'; t-t') = \langle \rho(z;t)\rho(z';t') \rangle, \\ \label{eq:pressure_corr_def} &{\mathcal C}_{pp} (z,z'; t-t') = \langle p(z;t)p(z';t') \rangle. \end{align} These quantities are related through ${\mathcal C}_{pp} = c_0^4\,{\mathcal C}_{\rho\rho}$ (see Eq. \eqref{eq:prho}). In the present context, the frequency-domain forms (or, the spectral densities) of density correlator, ${\widetilde{\mathcal C}}_{\rho\rho} (z,z'; \omega)$, and the corresponding pressure correlator, ${\widetilde{\mathcal C}}_{pp} (z,z'; \omega)$, are found to be related directly to that of the {\em longitudinal} stress correlator as \begin{equation} \label{eq:density_corr} {\widetilde{\mathcal C}}_{\rho\rho} (z,z'; \omega) = c_0^{-4}\, {\widetilde{\mathcal C}}_{pp} (z,z'; \omega) =\frac{{\widetilde{\mathcal C}}_{zz} (z,z'; \omega)}{c_0^4+\nu_{\!_\perp}^2\omega^2}. \end{equation} The two-point density correlator is of special interest in the context of thermal (near-equilibrium) hydrodynamic fluctuations in bulk fluids, in which case one uses its frequency/wavevector representation to obtain the spectral density of density fluctuations \cite{Boon-Yip,Berne-Pecora}. This latter quantity can be measured through inelastic (polarized) light scattering methods, enabling experimental determination of hydrodynamic-fluctuation spectra as well as various thermodynamic quantities and transport coefficients of bulk fluids. Our formulation directly relates the spectral density of hydrodynamic stress fluctuations on the film boundaries to the spectral density of fluid density fluctuations within the film. As such, it suggests an alternative method to probe the density fluctuations through the measurements of surface forces in confined fluids, where standard light scattering methods may be less suitable. 
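In practice, Eq. \eqref{eq:density_corr} is a simple pointwise conversion at each frequency; a minimal helper function (a sketch with hypothetical, dimensionless argument values) reads:

```python
def density_pressure_spectra(C_zz, w, c0, nu_perp):
    """Convert the longitudinal stress correlator at frequency w into the
    density and pressure spectral densities (Eq. density_corr sketch)."""
    C_rho = C_zz / (c0**4 + nu_perp**2 * w**2)
    C_p = c0**4 * C_rho   # p = c0^2 * rho  =>  C_pp = c0^4 * C_rho_rho
    return C_rho, C_p

# hypothetical, dimensionless numbers for illustration only
C_rho, C_p = density_pressure_spectra(C_zz=2.0, w=3.0, c0=1.5, nu_perp=0.1)
```

Given any implementation of ${\widetilde{\mathcal C}}_{zz}$, this yields the corresponding density- and pressure-fluctuation spectra without further computation.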
\subsection{Dimensionless representation} \label{subsec:dimless} In our later treatment of shear ($\parallel$) and compressional ($\perp$) modes in dimensionless units, we shall make use of the rescaled variables \begin{equation}\label{eq:omegasdef} z_{\!_{\parallel,\perp}} = \frac{z}{\nu_{\!_{\parallel,\perp}}/c_0},\quad \tau_{\!_{\parallel,\perp}} = \frac{t}{\nu_{\!_{\parallel,\perp}}/c_0^2} \quad \mathrm{and} \quad \omega_{\!_{\parallel,\perp}} = \frac{\omega}{c_0^2/\nu_{\!_{\parallel,\perp}}}, \end{equation} as well as the rescaled inter-plate separation (rescaled film thickness) and the dimensionless mass per unit area of the mobile plate, respectively, as \begin{equation} h_{\!_{\parallel,\perp}}=\frac{h}{\nu_{\!_{\parallel,\perp}}/c_0}\qquad\mathrm{and}\qquad\gamma_{\!_{\parallel,\perp}} = \frac{m}{\rho_0 \nu_{\!_{\parallel,\perp}}/c_0}. \end{equation} Using the definitions in Eqs. \eqref{eq:alphaRI}-\eqref{eq:kappaI}, the parameter $\alpha(\omega)$ can be rescaled as ${\nu_{\!_\parallel}}\alpha(\omega)/{c_0}\rightarrow \xi(\omega_{\!_\parallel})$, where \begin{equation} \xi(\omega_{\!_\parallel}) =\pm\sqrt{\frac{|\omega_{\!_\parallel}|}{2}}\left(1+{\mathrm{i}}\operatorname{sgn}(\omega_{\!_\parallel})\right), \label{eq:xidef} \end{equation} while $\kappa(\omega)$ can be rescaled as $ \nu_{\!_\perp}\kappa(\omega)/c_0\rightarrow \ell(\omegan) = {\ell_\mathrm{R}}(\omegan) + {\mathrm{i}}{\ell_\mathrm{I}}(\omegan)$, with real and imaginary parts \begin{align} {\ell_\mathrm{R}} (\omegan) = \pm\frac{|\omegan|}{\sqrt{2}} \sqrt{\frac{\sqrt{1+\omegan^2}+1}{1+\omegan^2}}, \label{eq:ellRdef}\\ {\ell_\mathrm{I}} (\omegan) = \pm\frac{\omegan}{\sqrt{2}} \sqrt{\frac{\sqrt{1+\omegan^2}-1}{1+\omegan^2}}. 
\label{eq:ellIdef} \end{align} The spectral densities of external surface forcing are rescaled as ${\widetilde {\mathcal G}_x}(\omega)/{\mathcal G}_x\rightarrow \widetilde {\mathcal G}_{\!_\parallel}(\omega_{\!_\parallel})$ and ${\widetilde {\mathcal G}_z}(\omega)/{\mathcal G}_z\rightarrow \widetilde {\mathcal G}_{\!_\perp}(\omegan)$. The transverse and longitudinal stress response functions (\ref{eq:sigmaxf}) and (\ref{eq:sigmazf}) are dimensionless and can be re-expressed in rescaled coordinates immediately as $\widetilde{\chi}_{xz} (z; \omega)\rightarrow {\wchi_{\!_\parallel}} (z_{\!_\parallel}; \omega_{\!_\parallel})$ and $\widetilde{\chi}_{zz} (z; \omega)\rightarrow {\wchi_{\!_\perp}} (z_{\!_\perp}; \omegan)$ with explicit forms given by \begin{align} \label{eq:chi_s_omega_final} &{\wchi_{\!_\parallel}} (z_{\!_\parallel}; \omega_{\!_\parallel}) = - \frac{\cos[\xi(\omega_{\!_\parallel})(h_{\!_\parallel}-z_{\!_\parallel})]}{ \cos[\xi(\omega_{\!_\parallel})h_{\!_\parallel}] - \gamma_{\!_\parallel} \xi(\omega_{\!_\parallel}) \sin[\xi(\omega_{\!_\parallel})h_{\!_\parallel}] }, \\ \label{eq:chi_n_omega_final} &{\wchi_{\!_\perp}} (z_{\!_\perp}; \omegan) = - \frac{\cos[\ell(\omegan)(h_{\!_\perp}-z_{\!_\perp})]} {\cos[\ell(\omegan)h_{\!_\perp}] - \gamma_{\!_\perp} \ell(\omegan) \sin[\ell(\omegan)h_{\!_\perp}]}. \end{align} The transverse and longitudinal stress correlators in the time domain, Eq. 
\eqref{eq:csheardef_0}, are rescaled by the characteristic stress variances $2{\mathcal G}_x c_0^2 / \nu_{\!_\parallel}$ and $2{\mathcal G}_z c_0^2 / \nu_{\!_\perp}$ as \begin{align} \label{eq:cshear_def} \frac{{\mathcal C}_{xz} (z,z'; t-t')}{2{\mathcal G}_x c_0^2 / \nu_{\!_\parallel}}&\rightarrow \mathcal{C}_{\!_\parallel} (z_{\!_\parallel},z_{\!_\parallel}'; \Delta \tau_{\!_\parallel}), \\ \frac{{\mathcal C}_{zz} (z,z'; t-t')}{2{\mathcal G}_z c_0^2 / \nu_{\!_\perp}}&\rightarrow \mathcal{C}_{\!_\perp} (z_{\!_\perp},z_{\!_\perp}'; \Delta \tau_{\!_\perp}), \label{eq:cnorm_def} \end{align} where we have defined $\Delta \tau_{\!_\perp} = \tau_{\!_\perp} - \tau_{\!_\perp}'$ and $\Delta\tau_{\!_\parallel} = \tau_{\!_\parallel}-\tau_{\!_\parallel}'$. These time-dependent (real-valued) stress correlators are to be calculated from the Fourier-transform relations \begin{align} \label{eq:cshearintegral} &\mathcal{C}_{\!_\parallel}(z_{\!_\parallel}, z_{\!_\parallel}'; \Delta \tau_{\!_\parallel}) = \int \frac{\mathrm{d}\omega_{\!_\parallel}}{2\pi}\, e^{-{\mathrm{i}}\omega_{\!_\parallel} \Delta \tau_{\!_\parallel}}{\widetilde{\mathcal{C}}}_{\!_\parallel}(z_{\!_\parallel}, z_{\!_\parallel}'; \omega_{\!_\parallel}),\\ \label{eq:cnormintegral} &\mathcal{C}_{\!_\perp}(z_{\!_\perp}, z_{\!_\perp}'; \Delta \tau_{\!_\perp}) = \int \frac{\mathrm{d}\omegan}{2\pi}\, e^{-{\mathrm{i}}\omegan \Delta \tau_{\!_\perp}}{\widetilde{\mathcal{C}}}_{\!_\perp}(z_{\!_\perp},z_{\!_\perp}'; \omegan), \end{align} and also by making use of the frequency-domain expressions for ${\widetilde{\mathcal{C}}}_{\!_\parallel}(z_{\!_\parallel}, z_{\!_\parallel}'; \omega_{\!_\parallel})$ and ${\widetilde{\mathcal{C}}}_{\!_\perp}(z_{\!_\perp},z_{\!_\perp}'; \omegan)$, themselves following from the dimensionless form of Eq. 
\eqref{eq:sigmaxcorr} as \begin{align} \label{eq:wcshear_nondim} &{\widetilde{\mathcal{C}}}_{\!_\parallel}(z_{\!_\parallel}, z_{\!_\parallel}'; \omega_{\!_\parallel}) = \widetilde {\mathcal G}_{\!_\parallel}(\omega_{\!_\parallel}){\wchi_{\!_\parallel}}(z_{\!_\parallel}; \omega_{\!_\parallel}) {\wchi_{\!_\parallel}}(z_{\!_\parallel}'; -\omega_{\!_\parallel}),\\ \label{eq:wcnorm_nondim} &{\widetilde{\mathcal{C}}}_{\!_\perp}(z_{\!_\perp},z_{\!_\perp}'; \omegan) = \widetilde {\mathcal G}_{\!_\perp}(\omegan){\wchi_{\!_\perp}}(z_{\!_\perp}; \omegan) {\wchi_{\!_\perp}}(z_{\!_\perp}'; -\omegan). \end{align} Since $\xi(-\omega_{\!_\parallel}) = \xi^\ast(\omega_{\!_\parallel})$ and $\ell(-\omegan)=\ell^\ast(\omegan)$ (Section \ref{subsec:vel_dens_fields}) and, hence, ${\wchi_{\!_\parallel}}(z_{\!_\parallel}; -\omega_{\!_\parallel}) = {\wchi_{\!_\parallel}}^\ast(z_{\!_\parallel}; \omega_{\!_\parallel})$ and ${\wchi_{\!_\perp}}(z_{\!_\perp}; -\omegan) = {\wchi_{\!_\perp}}^\ast(z_{\!_\perp}; \omegan)$, the stress correlators are found to be symmetric w.r.t. interchanging their spatial (first and second) arguments and concurrently reversing the sign of their frequency/time (third) argument (or, equivalently, taking their complex conjugates); i.e., ${\widetilde{\mathcal{C}}}_{\!_\parallel}(z_{\!_\parallel}, z_{\!_\parallel}'; \omega_{\!_\parallel}) = {\widetilde{\mathcal{C}}}_{\!_\parallel}(z_{\!_\parallel}', z_{\!_\parallel}; -\omega_{\!_\parallel}) = {\widetilde{\mathcal{C}}}_{\!_\parallel}^\ast(z_{\!_\parallel}', z_{\!_\parallel}; \omega_{\!_\parallel})$, and ${\widetilde{\mathcal{C}}}_{\!_\perp}(z_{\!_\perp},z_{\!_\perp}'; \omegan) = {\widetilde{\mathcal{C}}}_{\!_\perp}(z_{\!_\perp}',z_{\!_\perp}; -\omegan) = {\widetilde{\mathcal{C}}}_{\!_\perp}^\ast(z_{\!_\perp}',z_{\!_\perp}; \omegan)$.
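These symmetry and reality properties can be checked numerically. The following sketch evaluates the transverse response function of Eq. \eqref{eq:chi_s_omega_final}; since the shear dispersion $\xi(\omega_{\!_\parallel})$ is defined earlier in the paper and not reproduced in this section, the diffusive Ansatz $\xi(\omega)=\sqrt{{\mathrm{i}}\omega}$ is used purely as an illustrative stand-in (it does satisfy $\xi(-\omega)=\xi^\ast(\omega)$ on the real axis):

```python
import numpy as np

# Transverse response function of Eq. (chi_s_omega_final); the dispersion
# xi(w) = sqrt(i*w) is an illustrative assumption, not the paper's definition.
def xi(w):
    return np.sqrt(1j * w)          # satisfies xi(-w) = conj(xi(w)) for real w

def chi_par(z, w, h, gamma):
    x = xi(w)
    return -np.cos(x * (h - z)) / (np.cos(x * h) - gamma * x * np.sin(x * h))

h, gamma = 5.0, 2.0
w, z, zp = 0.37, 1.2, 3.4           # arbitrary real frequency and positions

# chi(z; -w) = chi(z; w)^* ...
assert np.isclose(chi_par(z, -w, h, gamma), np.conj(chi_par(z, w, h, gamma)))

# ... hence C(z, z'; w) = C(z', z; -w) = C(z', z; w)^*  (unit forcing density)
def C(a, b, s):
    return chi_par(a, s * w, h, gamma) * chi_par(b, -s * w, h, gamma)

assert np.isclose(C(z, zp, +1), np.conj(C(zp, z, +1)))
assert np.isclose(C(z, zp, +1), C(zp, z, -1))

# Same-point correlator: a real, positive local variance.
var = C(z, z, +1)
assert abs(var.imag) < 1e-12 and var.real > 0
```

The assertions mirror the relations stated above and hold for any real frequency, independently of the particular dispersion assumed.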
Formally similar symmetry relations hold in the time domain for $\mathcal{C}_{\!_\parallel}(z_{\!_\parallel}, z_{\!_\parallel}'; \Delta \tau_{\!_\parallel})$ and $\mathcal{C}_{\!_\perp}(z_{\!_\perp}, z_{\!_\perp}'; \Delta \tau_{\!_\perp})$, and also for the density and pressure correlators as implied by Eq. \eqref{eq:density_corr}. It should also be noted that the {\em same-point} correlators, ${\widetilde{\mathcal{C}}}_{\!_\parallel}(z_{\!_\parallel}, z_{\!_\parallel}; \omega_{\!_\parallel})$ and ${\widetilde{\mathcal{C}}}_{\!_\perp}(z_{\!_\perp},z_{\!_\perp}; \omegan)$, are nothing but the local {\em variances} of stress fluctuations at a given frequency. They are evidently real-valued and positive and, as such, ensure the stability of our linear-fluctuation analysis. For the sake of concreteness in our numerical analysis below, we set the forcing spectral densities equal to one, $\widetilde {\mathcal G}_{\!_\parallel}(\omega_{\!_\parallel})= \widetilde {\mathcal G}_{\!_\perp}(\omegan)=1$, equivalent to adopting the {\em white-noise Ansatz} for the surface forcing as noted before. \section{Transverse stress correlators} \label{sec:shear_corr} We start our analysis by focusing first on the transverse hydrodynamic stresses produced by the white-noise external surface forcing.
The two-point correlator in the frequency domain, ${\widetilde{\mathcal{C}}}_{\!_\parallel}(z_{\!_\parallel}, z_{\!_\parallel}'; \omega_{\!_\parallel})$, can be shown to be only a function of the redefined dimensionless frequency and coordinate variables $\omega_{\!_\parallel}h_{\!_\parallel}^2$ and $z_{\!_\parallel}/h_{\!_\parallel}$ and the rescaled parameter $h_{\!_\parallel}/\gamma_{\!_\parallel}$. By setting $z_{\!_\parallel}, z_{\!_\parallel}'=\{0, h_{\!_\parallel}\}$, we obtain the {\em same-plate} and {\em cross-plate} correlators, which we shall refer to as {\em self-} and {\em cross-correlators}, respectively, as \begin{align} &{\widetilde{\mathcal{C}}}_{\!_\parallel}(0, 0; \omega_{\!_\parallel})=\vert{\wchi_{\!_\parallel}}(0; \omega_{\!_\parallel})\vert^2,\,\,\, {\widetilde{\mathcal{C}}}_{\!_\parallel}(h_{\!_\parallel}, h_{\!_\parallel}; \omega_{\!_\parallel})= \vert{\wchi_{\!_\parallel}}(h_{\!_\parallel}; \omega_{\!_\parallel})\vert^2, \nonumber \\ \label{eq:sameplate_shear_w_cross} &{\widetilde{\mathcal{C}}}_{\!_\parallel} (0,h_{\!_\parallel}; \omega_{\!_\parallel}) = {\widetilde{\mathcal{C}}}_{\!_\parallel}^\ast (h_{\!_\parallel}, 0; \omega_{\!_\parallel}) ={\wchi_{\!_\parallel}}(0; \omega_{\!_\parallel}){\wchi_{\!_\parallel}}^\ast(h_{\!_\parallel}; \omega_{\!_\parallel}). \end{align} In Fig. \ref{fig:spectra_shear}, we show the self-correlators, ${\widetilde{\mathcal{C}}}_{\!_\parallel}(0, 0; \omega_{\!_\parallel})$ and ${\widetilde{\mathcal{C}}}_{\!_\parallel}(h_{\!_\parallel}, h_{\!_\parallel}; \omega_{\!_\parallel})$, and the real part of the cross-correlator, $\operatorname{Re} \, {\widetilde{\mathcal{C}}}_{\!_\parallel} (0,h_{\!_\parallel}; \omega_{\!_\parallel})$ (being the only part that contributes to the cross-correlator in the time domain) as functions of $\omega_{\!_\parallel}h_{\!_\parallel}^2$ for $h_{\!_\parallel}/\gamma_{\!_\parallel}=5$. 
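The qualitative behavior shown in Fig. \ref{fig:spectra_shear} can be reproduced in a few lines. The sketch below is a minimal illustration assuming unit forcing spectral density and the diffusive stand-in dispersion $\xi(\omega)=\sqrt{{\mathrm{i}}\omega}$ (the actual $\xi$ is defined earlier in the paper); it evaluates the self- and cross-correlators of Eq. \eqref{eq:sameplate_shear_w_cross} for $h_{\!_\parallel}/\gamma_{\!_\parallel}=5$:

```python
import numpy as np

# Self- and cross-correlators of Eq. (sameplate_shear_w_cross) on a real
# frequency grid; xi(w) = sqrt(i*w) is an illustrative stand-in dispersion.
def chi_par(z, w, h, gamma):
    x = np.sqrt(1j * w)
    return -np.cos(x * (h - z)) / (np.cos(x * h) - gamma * x * np.sin(x * h))

h = 5.0
gamma = h / 5.0                            # fixes h/gamma = 5
w = np.linspace(1e-8, 2.0 / h**2, 400)     # grid with w * h^2 in (0, 2]

C00 = np.abs(chi_par(0.0, w, h, gamma))**2              # self, mobile plate
Chh = np.abs(chi_par(h, w, h, gamma))**2                # self, fixed plate
C0h = (chi_par(0.0, w, h, gamma)
       * np.conj(chi_par(h, w, h, gamma))).real         # Re cross-correlator

# All three equal one at zero frequency, decrease with frequency, and obey
# Re C(0,h) <= C(h,h) <= C(0,0) across the grid.
assert np.allclose([C00[0], Chh[0], C0h[0]], 1.0, atol=1e-3)
assert np.all(np.diff(C00) <= 1e-12) and np.all(np.diff(Chh) <= 1e-12)
assert np.all(C0h <= Chh + 1e-12) and np.all(Chh <= C00 + 1e-12)
```

With this stand-in dispersion the three spectra start from one at zero frequency and decay monotonically while preserving their ordering, consistent with the figure.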
As seen, the three quantities decrease rapidly and monotonically with frequency from their maximum value of one at zero frequency, fulfilling the inequalities $\operatorname{Re} \, {\widetilde{\mathcal{C}}}_{\!_\parallel} (0,h_{\!_\parallel}; \omega_{\!_\parallel})\leq{\widetilde{\mathcal{C}}}_{\!_\parallel}(h_{\!_\parallel}, h_{\!_\parallel}; \omega_{\!_\parallel})\leq {\widetilde{\mathcal{C}}}_{\!_\parallel}(0, 0; \omega_{\!_\parallel})$ across the frequency domain. \begin{figure}[t!] \begin{center} \includegraphics[width=5.65cm]{Mpcstildemixgamma2-eps-converted-to.pdf} \caption{The self- and cross-correlators of the transverse stresses acting on the plates as functions of the redefined dimensionless frequency, $\omega_{\!_\parallel}h_{\!_\parallel}^2$, for fixed $h_{\!_\parallel}/\gamma_{\!_\parallel}=5$. } \label{fig:spectra_shear} \end{center} \end{figure} \begin{figure}[t!] \begin{center} \includegraphics[width=6.cm]{plotprszzMix-eps-converted-to.pdf} \caption{Profiles of the rescaled, equal-time, transverse stress correlators $h_{\!_\parallel}^2\mathcal{C}_{\!_\parallel} (z_{\!_\parallel},z_{\!_\parallel})$ and $h_{\!_\parallel}^2\mathcal{C}_{\!_\parallel} (0,z_{\!_\parallel})$ across the fluid film ($0\leq z_{\!_\parallel}\leq h_{\!_\parallel}$) for fixed $h_{\!_\parallel}/\gamma_{\!_\parallel}=5$. } \label{fig:stress_profiles_shear} \end{center} \end{figure} \begin{figure*}[t!]
\centering \includegraphics[width=5.65cm]{cshhh-eps-converted-to.pdf} \hspace{-5mm} (a) \includegraphics[width=5.75cm]{cs00h-eps-converted-to.pdf} \hspace{-5mm} (b) \includegraphics[width=5.65cm]{cs0hh-eps-converted-to.pdf} \hspace{-5mm} (c) \caption{Log-log plots of equal-time transverse stress (self- and cross-) correlators $\mathcal{C}_{\!_\parallel}(h_{\!_\parallel}, h_{\!_\parallel})$, $\Delta\mathcal{C}_{\!_\parallel}(0, 0)$ and $\mathcal{C}_{\!_\parallel}(0, h_{\!_\parallel})$, in panels (a), (b) and (c), respectively, as functions of the dimensionless inter-plate separation, $h_{\!_\parallel}$, for $\gamma_{\!_\parallel}=2, 10, 30$ and $100$. \label{fig:equaltimeCs} } \end{figure*} These behaviors can be understood by noting that the characteristic frequencies of the transverse modes occur at the poles of the transverse response function ${\wchi_{\!_\parallel}}$ in the complex frequency ($\varsigma$) plane. These poles are found to fall onto the lower-half imaginary axis, reflecting the diffusive nature of the shear modes in the hydrodynamic regime (Appendix \ref{app:poles}). The singular part of the transverse response function in the proximity of a given pole $\varsigma_{\!_\parallel}^{(n)}$ (see Eq. \eqref{eq:sol_roots_shear_app2}) behaves as ${\wchi_{\!_\parallel}}\sim 1/(\varsigma-\varsigma_{\!_\parallel}^{(n)})$, so that, along the real-valued frequency axis $\omega_{\!_\parallel} = \operatorname{Re}\,{\varsigma}$, $\vert{\wchi_{\!_\parallel}}\vert^2\sim 1/\big(\omega_{\!_\parallel}^2+[\operatorname{Im}\,\varsigma_{\!_\parallel}^{(n)}]^2\big)$. Since all of the poles are purely imaginary, $\vert{\wchi_{\!_\parallel}}\vert^2$ takes a Lorentzian form peaked around $\omega_{\!_\parallel}=0$, explaining the monotonically decreasing behaviors in Fig. \ref{fig:spectra_shear}. \subsection{Equal-time transverse stress correlators} \label{subsec:shear_corr} The equal-time transverse correlators are obtained by setting $\Delta \tau_{\!_\parallel}=0$ in Eq.
\eqref{eq:cshearintegral}, in which case we drop the time (third) argument by redefining our notation as $\mathcal{C}_{\!_\parallel} (z_{\!_\parallel},z_{\!_\parallel}') \equiv\mathcal{C}_{\!_\parallel} (z_{\!_\parallel},z_{\!_\parallel}'; \Delta \tau_{\!_\parallel}=0)$. Hence, \begin{align} \label{eq:shear_equal_time_redef} \mathcal{C}_{\!_\parallel} (z_{\!_\parallel},z_{\!_\parallel}') = \int \frac{\mathrm{d}\omega_{\!_\parallel}}{2\pi}\,{\wchi_{\!_\parallel}}(z_{\!_\parallel}; \omega_{\!_\parallel}){\wchi_{\!_\parallel}}^\ast(z_{\!_\parallel}'; \omega_{\!_\parallel}). \end{align} When rescaled with the inter-plate separation as $h_{\!_\parallel}^2 \mathcal{C}_{\!_\parallel} (z_{\!_\parallel},z_{\!_\parallel})$ and $h_{\!_\parallel}^2 \mathcal{C}_{\!_\parallel} (0,z_{\!_\parallel})$, these correlators are functions of $z_{\!_\parallel}/h_{\!_\parallel}$ and $h_{\!_\parallel}/\gamma_{\!_\parallel}$ only. As seen in Fig. \ref{fig:stress_profiles_shear}, both of these correlators vary within a relatively small range of values across the film. Although this may naively suggest an approximate power-law behavior $\mathcal{C}_{\!_\parallel} \sim h_{\!_\parallel}^{-2}$ at any point across the film, we find a more diverse set of power-law behaviors for the self- and cross-correlators of the stresses acting on the plates. \begin{figure}[t!] \begin{center} \includegraphics[width=5.75cm]{csuniversalplots-eps-converted-to.pdf} \caption{The rescaled universal forms of the three equal-time transverse stress correlators, as indicated on the graph (see also Figs. \ref{fig:equaltimeCs}a-c), as functions of $h_{\!_\parallel}/\gamma_{\!_\parallel}$, demonstrating the crossover between the two power-law regimes found at small and large values of $h_{\!_\parallel}/\gamma_{\!_\parallel}$ in each case.
} \label{fig:stress_profiles_univ} \end{center} \end{figure} It should be noted that the frequency integrals discussed above converge sufficiently rapidly and, therefore, remain finite. For numerical expediency, and as a check on the self-consistency of the continuum approach used here (see Appendix \ref{app:cutoff} for details), it is convenient to introduce a high-frequency cutoff $\omega_{\!_\parallel}^\infty$, which (as discussed in the appendix) can conveniently be taken as $\omega_{\!_\parallel}^\infty=1$. In general, the numerical outcomes for $\mathcal{C}_{\!_\parallel}(h_{\!_\parallel}, h_{\!_\parallel})$ and $\mathcal{C}_{\!_\parallel}(0, h_{\!_\parallel})$ remain unaffected by the precise choice of the frequency cutoff, when the latter is sufficiently large. The self-correlator on the {\em mobile} (lower) plate, $\mathcal{C}_{\!_\parallel}(0, 0)$, however, tends to a constant predominantly determined by the forcing spectral density as the inter-plate separation is increased. To eliminate such spurious effects, we subtract this limiting value, which represents the self-interaction of the mobile plate in the bulk, by defining the {\em excess} correlator as $\Delta\mathcal{C}_{\!_\parallel}(0, 0) = \mathcal{C}_{\!_\parallel}(0, 0) - \lim_{h_{\!_\parallel}\rightarrow\infty} \mathcal{C}_{\!_\parallel}(0, 0)$. In all cases, our numerical results remain independent of the choice of the frequency cutoff, when the latter is large enough, provided that it is kept within the regime consistent with the continuum hydrodynamic description (Appendix \ref{app:cutoff}). \begin{figure*}[t!]
\centering \includegraphics[width=5.8cm]{logMpcntilde00gamma2b-eps-converted-to.pdf} \hspace{-5mm} (a) \includegraphics[width=5.55cm]{Mpcntilde00gamma2b-eps-converted-to.pdf} \hspace{-5mm} (b) \includegraphics[width=5.85cm]{Mpcntilde0hgamma2b-eps-converted-to.pdf} \hspace{-5mm} (c) \caption{Log-log plot of the longitudinal stress self-correlator ${\widetilde{\mathcal{C}}}_{\!_\perp}(h_{\!_\perp},h_{\!_\perp}; \omegan)$ in panel (a), and linear plots of the excess longitudinal stress (self- and cross-) correlators $\Delta {\widetilde{\mathcal{C}}}_{\!_\perp}(0,0; \omegan)$ and $\operatorname{Re}\,{\widetilde{\mathcal{C}}}_{\!_\perp}(0,h_{\!_\perp}; \omegan)$ in panels (b) and (c), respectively, as functions of the dimensionless frequency, $\omegan$, for fixed $\gamma_{\!_\perp}=2$ and $h_{\!_\perp}=10, 30$ and $50$ as indicated on the graphs. \label{fig:Cn_freq} } \end{figure*} As the log-log plots in Fig. \ref{fig:equaltimeCs} show, the stress correlators fall off rapidly as power-laws of the rescaled inter-plate separation, $h_{\!_\parallel}$, in all cases (panels a to c), when $h_{\!_\parallel}$ is sufficiently large. In fact, one can discern distinct power-law regimes for both small and large values of $h_{\!_\parallel}$ as $\gamma_{\!_\parallel}$ is varied. As noted before, the key parameter here is the ratio $h_{\!_\parallel}/\gamma_{\!_\parallel}$. We find the power-law behaviors as \begin{align} \label{eq:powerlaw_s_hh} \mathcal{C}_{\!_\parallel}(h_{\!_\parallel}, h_{\!_\parallel})\sim\left\{\begin{array}{l l} h_{\!_\parallel}^{-2} & \quad h_{\!_\parallel}/\gamma_{\!_\parallel}\gg 1, \\ h_{\!_\parallel}^{-1} & \quad h_{\!_\parallel}/\gamma_{\!_\parallel}\ll 1, \end{array}\right. 
\end{align} \begin{align} \label{eq:powerlaw_s_00} \Delta\mathcal{C}_{\!_\parallel}(0, 0)\sim\left\{\begin{array}{l l} h_{\!_\parallel}^{-4} & \quad h_{\!_\parallel}/\gamma_{\!_\parallel}\gg 1, \\ h_{\!_\parallel}^{-1} & \quad h_{\!_\parallel}/\gamma_{\!_\parallel}\ll 1, \end{array}\right. \end{align} \begin{align} \label{eq:powerlaw_s_0h} \mathcal{C}_{\!_\parallel}(0, h_{\!_\parallel})\sim\left\{\begin{array}{l l} h_{\!_\parallel}^{-3} & \quad h_{\!_\parallel}/\gamma_{\!_\parallel}\gg 1, \\ h_{\!_\parallel}^{-1} & \quad h_{\!_\parallel}/\gamma_{\!_\parallel}\ll 1. \end{array}\right. \end{align} Because of the cumbersome form of the integrand in Eq. \eqref{eq:cshearintegral} (involving factors with implicit and complex-valued dependence on $\omega_{\!_\parallel}$), we were not able to derive the scaling forms analytically. They have rather been confirmed numerically, with the reported {\em universal} exponents being accurate within a margin of error $<5\%$. These power-laws reflect the more fundamental scale-invariant forms admitted in general by each of the above transverse stress correlators in terms of their two main parameters as ${\mathcal C}_{\!_\parallel} = h_{\!_\parallel}^{-2}{\mathcal F}\left({\gamma_{\!_\parallel}}/{h_{\!_\parallel}}\right)$, where ${\mathcal C}_{\!_\parallel}$ stands for either of the quantities $\mathcal{C}_{\!_\parallel}(h_{\!_\parallel}, h_{\!_\parallel})$, $\Delta\mathcal{C}_{\!_\parallel}(0, 0)$ or $\mathcal{C}_{\!_\parallel}(0, h_{\!_\parallel})$, and ${\mathcal F}(\cdot)$ is the corresponding {\em universal} function. For these three cases, the universal functions can be calculated numerically for a wide range (several decades) of their arguments with the results, shown in the log-log plot of Fig. 
\ref{fig:stress_profiles_univ}, clearly demonstrating the crossover between two distinct power-law regimes: These regimes appear as straight lines for small and large $h_{\!_\parallel}/\gamma_{\!_\parallel}$ for each one of the three plotted curves. The numerical values of the slopes match the corresponding exponents given in Eqs. \eqref{eq:powerlaw_s_hh}-\eqref{eq:powerlaw_s_0h}, shifted up by two due to the $h_{\!_\parallel}^{-2}$ prefactor in the scale-invariant form. \section{Longitudinal stress correlators} \label{sec:norm_corr} The longitudinal stress correlators show distinct features as compared with their transverse counterparts. As seen from the frequency-domain plots in panels a to c in Fig. \ref{fig:Cn_freq}, the longitudinal self- and cross-correlators exhibit well-developed peaks, representing acoustic resonances due to the compressional modes excited by the external surface forcing in the fluid film. These modes are associated with the poles of the corresponding response functions in the complex frequency plane (see Appendix \ref{app:poles}). The peaks observed along the real-frequency axis, $\omegan$, in Fig. \ref{fig:Cn_freq} are indeed produced by the first few compression poles (for instance, the four peaks seen for $h_{\!_\perp}=50$, blue curve, in ${\widetilde{\mathcal{C}}}_{\!_\perp}(h_{\!_\perp},h_{\!_\perp}; \omegan)$, panel a, coincide with the loci of the first four poles with $0\leq n \leq 3$). The number of peaks and their heights grow and their loci shift to smaller frequencies as the inter-plate separation, $h_{\!_\perp}$, is increased (and/or as $\gamma_{\!_\perp}$ is increased). At a given inter-plate separation, the longitudinal stress correlators take their largest values at the first peak, which gives the dominant contribution to the frequency integrals.
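These resonance features are straightforward to reproduce numerically. In the sketch below, the compressional dispersion $\ell(\omegan)$ of Eq. \eqref{eq:ellIdef} (defined earlier in the paper and not reproduced here) is replaced by the hypothetical damped-sound form $\ell(\omega)=\omega/\sqrt{1-{\mathrm{i}}\omega}$, purely for illustration; the fixed-plate self-correlator is evaluated for $h_{\!_\perp}=50$ and $\gamma_{\!_\perp}=2$:

```python
import numpy as np

# Longitudinal response of Eq. (chi_n_omega_final); the damped-sound form
# l(w) = w / sqrt(1 - i*w) is a hypothetical stand-in for Eq. (ellIdef).
def chi_perp(z, w, h, gamma):
    l = w / np.sqrt(1.0 - 1j * w)
    return -np.cos(l * (h - z)) / (np.cos(l * h) - gamma * l * np.sin(l * h))

h, gamma = 50.0, 2.0
w = np.linspace(1e-6, 1.0, 20000)            # up to the cutoff w_inf = 1
Chh = np.abs(chi_perp(h, w, h, gamma))**2    # self-correlator, fixed plate

# Acoustic resonances push the spectrum far above its zero-frequency value
# of one (contrast the monotonically decaying transverse case), and the
# dominant (first) peak sits at a low frequency of order 1/h.
assert np.isclose(Chh[0], 1.0, atol=1e-3)
assert Chh.max() > 10.0
assert w[np.argmax(Chh)] < 10.0 / h
```

Even with this crude stand-in dispersion, the spectrum develops a sequence of underdamped acoustic peaks whose heights decrease with order, in qualitative agreement with panel a of Fig. \ref{fig:Cn_freq}.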
This signifies the prevalence of low-frequency acoustic resonances at larger inter-plate separations and the major role of the corresponding acoustic modes propagating across the film in intensifying fluctuations and correlations of the longitudinal stresses exerted on the confining plates (the higher-order acoustic modes are more strongly attenuated, their peak heights being suppressed by one or even a few orders of magnitude, as seen in panel a). \begin{figure}[t!] \begin{center} \includegraphics[width=5.5cm]{plotprnzzMix-eps-converted-to.pdf} \caption{Profiles of the equal-time longitudinal stress correlators $\mathcal{C}_{\!_\perp} (z_{\!_\perp},z_{\!_\perp})$ and $\mathcal{C}_{\!_\perp} (0,z_{\!_\perp})$ across the fluid film ($0\leq z_{\!_\perp}\leq h_{\!_\perp}$) for fixed $h_{\!_\perp}=50$ and $\gamma_{\!_\perp}=2$. } \label{fig:stress_profiles_norm} \end{center} \end{figure} \begin{figure*}[t!] \centering \includegraphics[width=5.65cm]{Mpcnhhgamma2103-eps-converted-to.pdf} \hspace{-5mm} (a) \includegraphics[width=5.65cm]{Mpcn00gamma2103-eps-converted-to.pdf} \hspace{-5mm} (b) \includegraphics[width=5.65cm]{Mpcn0hgamma2103-eps-converted-to.pdf} \hspace{-5mm} (c) \caption{Log-log plots of equal-time longitudinal stress (self- and cross-) correlators $\mathcal{C}_{\!_\perp}(h_{\!_\perp}, h_{\!_\perp})$, $\Delta\mathcal{C}_{\!_\perp}(0, 0)$ and $\mathcal{C}_{\!_\perp}(0, h_{\!_\perp})$, in panels (a), (b) and (c), respectively, as functions of the dimensionless inter-plate separation, $h_{\!_\perp}$, for $\gamma_{\!_\perp}=2, 10, 30$ and $100$.
\label{fig:equaltimeCn} } \end{figure*} Although such acoustic resonances strengthen the longitudinal stress correlator on the upper (fixed) plate, ${\widetilde{\mathcal{C}}}_{\!_\perp}(h_{\!_\perp},h_{\!_\perp}; \omegan)$, which thus remains consistently above its zero-frequency value of one for a wide range of frequencies, sound absorption becomes gradually dominant (as the imaginary parts of the compression poles become large; Appendix \ref{app:poles}), causing the stress correlator to fall off to zero at sufficiently high frequencies (for $h_{\!_\perp}=50$ in panel a, this occurs on approach to, but well before, the chosen upper frequency cutoff of $\omegan^\infty=1$; see Appendix \ref{app:cutoff}). In the case of $\Delta{\widetilde{\mathcal{C}}}_{\!_\perp}(0,0; \omegan)$ and ${\widetilde{\mathcal{C}}}_{\!_\perp}(0,h_{\!_\perp}; \omegan)$ in panels b and c in Fig. \ref{fig:Cn_freq}, the correlators can take both positive and negative values. The negative values of these quantities represent out-of-phase (or anti-) correlations occurring in certain intervals along the real-frequency axis. \subsection{Equal-time longitudinal stress correlators} \label{subsec:normal_corr} The equal-time longitudinal stress correlators defined through Eq. \eqref{eq:cnormintegral}, and denoted more compactly as $\mathcal{C}_{\!_\perp} (z_{\!_\perp},z_{\!_\perp}') \equiv\mathcal{C}_{\!_\perp} (z_{\!_\perp},z_{\!_\perp}'; \Delta \tau_{\!_\perp}=0)$, can be evaluated from \begin{align} \label{eq:norm_equal_time_redef} \mathcal{C}_{\!_\perp} (z_{\!_\perp},z_{\!_\perp}') = \int \frac{\mathrm{d}\omegan}{2\pi}\,{\wchi_{\!_\perp}}(z_{\!_\perp}; \omegan){\wchi_{\!_\perp}}^\ast(z_{\!_\perp}'; \omegan). \end{align} Unlike their transverse counterparts, these correlators do not in general admit scale-invariant forms. In Fig.
\ref{fig:stress_profiles_norm}, we show $\mathcal{C}_{\!_\perp} (z_{\!_\perp},z_{\!_\perp})$ and $\mathcal{C}_{\!_\perp} (0,z_{\!_\perp})$ as functions of $z_{\!_\perp}/h_{\!_\perp}$ for fixed $h_{\!_\perp}=50$ and $\gamma_{\!_\perp}=2$. It is interesting to note that while the different-point ($z_{\!_\perp}\neq z_{\!_\perp}'$) correlator levels off rapidly to a limiting value smaller than the reference value of $\mathcal{C}_{\!_\perp} (0,0)$, the same-point correlator of longitudinal stresses increases almost linearly as one moves away from the lower (mobile) plate toward the upper (fixed) plate, where it takes its largest value. This indicates stronger hydrodynamic stress fluctuations closer to the fixed plate and pronounced, {\em non-decaying}, r.m.s. values for the longitudinal stresses acting on it even at relatively large separations. This non-decaying behavior is more clearly seen from the plot in panel a in Fig. \ref{fig:equaltimeCn}. Here, the equal-time longitudinal correlator, $\mathcal{C}_{\!_\perp}(h_{\!_\perp},h_{\!_\perp})$, is shown to increase to a saturated maximum level, depending on the dimensionless mass parameter $\gamma_{\!_\perp}$. The reason for this behavior is that, as $h_{\!_\perp}$ is gradually increased (starting from its minimum base value of $h_{\!_\perp}=1$), the characteristic frequency corresponding to the first compressional mode decreases and falls below the cutoff frequency of $\omegan^\infty=1$, making it realizable and relevant in the hydrodynamic domain; this manifests itself as a gradual increase in $\mathcal{C}_{\!_\perp}(h_{\!_\perp},h_{\!_\perp})$. As $h_{\!_\perp}$ is further increased, the first characteristic frequency (whose locus over the real-frequency axis scales as $h_{\!_\perp}^{-1}$, as is characteristic of acoustic modes) decreases further toward the low-frequency regions, where sound attenuation is subdominant.
As such, the corresponding acoustic resonance leads to a strongly propagating acoustic mode across the fluid film, creating a pronounced first peak in the stress-correlator plots in the frequency domain (panel a in Fig. \ref{fig:Cn_freq}), bringing $\mathcal{C}_{\!_\perp} (h_{\!_\perp},h_{\!_\perp})$ up to a saturation level (panel a in Fig. \ref{fig:equaltimeCn}; see also panel b in Fig. \ref{fig:cutoff_n} in Appendix \ref{app:cutoff}). The crossover to the saturation regime in $\mathcal{C}_{\!_\perp} (h_{\!_\perp},h_{\!_\perp})$ occurs roughly at $h_{\!_\perp}\sim \gamma_{\!_\perp}$. In the case of the longitudinal stress correlators $\Delta\mathcal{C}_{\!_\perp}(0, 0)$ and $\mathcal{C}_{\!_\perp}(0, h_{\!_\perp})$, shown in panels b and c, and in analogy with the power-law behaviors of the transverse correlators, we find power laws in the regime of large inter-plate separations, $h_{\!_\perp}/\gamma_{\!_\perp}\gg 1$, with universal exponents as \begin{align} &\Delta\mathcal{C}_{\!_\perp}(0, 0)\sim h_{\!_\perp}^{-3/2}, \\ &\mathcal{C}_{\!_\perp}(0, h_{\!_\perp})\sim h_{\!_\perp}^{-1}. \end{align} The excess stress correlator on the lower (mobile) plate is defined here as $\Delta\mathcal{C}_{\!_\perp}(0, 0) = \mathcal{C}_{\!_\perp}(0, 0) - \lim_{h_{\!_\perp}\rightarrow\infty} \mathcal{C}_{\!_\perp}(0, 0)$. In all cases discussed above, the main contribution to the longitudinal correlators comes from the fluctuations and correlations produced in the film through the pressure (second) term in Eq. \eqref{eq:sigmaz_0} rather than the viscous stress (first) term.
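The equal-time quadratures, Eqs. \eqref{eq:shear_equal_time_redef} and \eqref{eq:norm_equal_time_redef}, are one-dimensional frequency integrals that are easily evaluated numerically. As a cross-check, the sketch below verifies the scale-invariant form $\mathcal{C}_{\!_\parallel}=h_{\!_\parallel}^{-2}\mathcal{F}(\gamma_{\!_\parallel}/h_{\!_\parallel})$ quoted in Section \ref{subsec:shear_corr} for the transverse case, again using the illustrative diffusive stand-in $\xi(\omega)=\sqrt{{\mathrm{i}}\omega}$ in place of the paper's actual shear dispersion:

```python
import numpy as np

# Equal-time transverse self-correlator C(h,h), Eq. (shear_equal_time_redef),
# by direct frequency quadrature (Riemann sum on a fine symmetric grid);
# xi(w) = sqrt(i*w) is an illustrative stand-in for the shear dispersion.
def C_par_hh(h, gamma, n=200001, w_max=1.0):
    w = np.linspace(-w_max, w_max, n)
    x = np.sqrt(1j * w)
    den = np.cos(x * h) - gamma * x * np.sin(x * h)
    integrand = 1.0 / np.abs(den)**2          # |chi(h; w)|^2, unit forcing
    return np.sum(integrand) * (w[1] - w[0]) / (2.0 * np.pi)

# Scale invariance, C(h,h) = h^{-2} F(gamma/h): rescaled values coincide
# for parameter pairs sharing the same ratio h/gamma.
a = 10.0**2 * C_par_hh(h=10.0, gamma=2.0)     # h/gamma = 5
b = 20.0**2 * C_par_hh(h=20.0, gamma=4.0)     # h/gamma = 5
assert abs(a - b) / a < 0.05
```

The integrand decays fast enough that the cutoff $w_{\max}=1$ is immaterial here, mirroring the cutoff-independence noted above for the transverse correlators.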
\section{Discussion and conclusion} \label{sec:conclusion} We have studied hydrodynamic correlations and fluctuation-induced interactions mediated between the no-slip bounding surfaces of a planar film of a compressible and viscous fluid, driven externally at one of its boundaries (lower plate) by a stochastic surface forcing of arbitrary spectral density, while the other (upper) plate is kept fixed. We have developed general analytical results within the linear hydrodynamic scheme and numerically analyzed the outcomes for the special case of a Gaussian white-noise forcing. The stochastic surface forcing leads to fluctuating transverse (shear) and longitudinal (compressional) hydrodynamic stresses within the film and on the bounding surfaces. To bring out the hydrodynamic fluctuation-induced effects more clearly, we conveniently assume that the external forcing has a zero mean; hence, the resulting hydrodynamic stresses also vanish on average, and their two-point correlators embody the hydrodynamic correlation effects. We show that the same-plate (self-) and cross-plate (cross-) correlators of the transverse stress exhibit two distinct regimes of power-law behaviors at small and large inter-plate separations $h$, with different, yet universal, scaling exponents. In the case of longitudinal stress correlators, the power-law dependencies are obtained only for the large-separation behavior of the (excess) self-correlator on the mobile plate, $\Delta\mathcal{C}_{\!_\perp} (0,0)$, and the cross-correlator, $\mathcal{C}_{\!_\perp} (0,h)$, with power-law decays being markedly weaker than those of the transverse correlators. The spectral analysis of the longitudinal stress correlators reveals distinct underlying differences with the transverse ones due to the prevalence of propagating, underdamped, acoustic modes in the confined geometry \cite{Diamant}.
The longitudinal stress self-correlator at the {\em fixed} plate, $\mathcal{C}_{\!_\perp} (h,h)$, thus displays a thoroughly different behavior: It increases with $h$ and saturates at a finite value, representing a constant, longitudinal, r.m.s. stress $\sigma^{\mathrm{rms}}_{\!_{\perp}}= \sqrt{\mathcal{C}_{\!_\perp} (h,h)}$ at sufficiently large $h$. This feature indicates the existence of {\em non-decaying}, longitudinal, hydrodynamic fluctuation forces, acting on the fixed plate. This can be contrasted with, e.g., the transverse stresses on the fixed plate, whose r.m.s. decays with the inter-plate separation as $\sigma^{\mathrm{rms}}_{\!_{\parallel}}\sim h^{-1}$ for large $h$. The non-decaying behavior of $\mathcal{C}_{\!_\perp} (h,h)$ emanates directly from the excitation of the acoustic modes (acoustic resonances) and sets in at the appearance of the first peak in the corresponding spectral representation (Section \ref{sec:norm_corr}). Indeed, one can directly verify that the dominant contribution to $\mathcal{C}_{\!_\perp} (h,h)$ comes from the compressional term in the longitudinal stress (second term in Eq. \eqref{eq:sigmaz_0}). Such long-ranged, sound-mediated, hydrodynamic correlations have also been found in the context of the correlations between Brownian particles (colloids) in strongly confined quasi-one/two-dimensional geometries (see Ref. \cite{Diamant} and references therein; see also Ref. \cite{Keyser} for recent experiments on non-decaying colloid-colloid correlations based on the displacement of the intervening fluid column between the colloids in narrow channels). It should be further noted that our analysis is focused on the stationary-state behavior of the system, implying that the film thickness is traversed by recurring propagations of underdamped sound waves of varying (random) amplitude, continually excited by the longitudinal component of the external forcing applied to the mobile plate. 
This process generates and maintains a finite and stationary (non-decaying) compressional r.m.s. stress on the fixed plate. The power-law relations obtained here for the (generally non-thermal) surface-driven stress correlators are akin to the ``secondary'' hydrodynamic Casimir-like forces, analyzed previously in the context of thermal fluctuations in a fluid film with fixed boundaries \cite{Monahan}. Such non-equilibrium fluctuation-induced, or Casimir-like, effects have been of considerable interest in other soft-matter contexts in recent years \cite{Jones,Chan,Monahan,Bartolo,Dean,Ajay1,Antezza,Kruger,Kirkpatrick13,Kirkpatrick14,Kafri-Kardar}. It is also important to note that the r.m.s. of the fluctuation-induced forces predicted here exhibits a stronger long-ranged character than the standard electromagnetic vdW-Casimir forces \cite{Kardar,Woods}. In the case of the longitudinal stresses on the fixed plate, our results (panel a of Fig. \ref{fig:equaltimeCn}) give a rescaled r.m.s. stress of order one over the range of rescaled separations $h_{\!_\perp}\sim 10-10^2$ for a wide range of values of $\gamma_{\!_\perp}$. Thus, for the parameter values relevant to water (see Appendix \ref{app:cutoff}), we predict a longitudinal r.m.s. stress (pressure) of the order of the r.m.s. surface forcing, $\sigma^{\mathrm{rms}}_{\!_{\perp}}/\sqrt{\sigma_{f}}\sim 1$, over the range of inter-plate separations, or film thicknesses, $h\sim 2.6\times(10-10^2)$~nm. Here, $\sigma_{f}$ stands for the surface forcing variance in the time domain (related to the forcing spectral density as $\sigma_{f}=2{\mathcal G}_z \omega^\infty$; see Eq. \eqref{eq:cnorm_def}, Section \ref{subsec:EOM} and Appendix \ref{app:cutoff}). Being an externally controlled quantity, $\sigma_{f}$ can be adjusted arbitrarily to obtain the experimental resolution required for the verification of our predictions.
The force-measurement precision within the dynamic SFA and AFM techniques can be better than $10$~nN and $10$~pN, respectively \cite{Israela,Butt,Korea,Butt-rev}; hence, in the latter case (of thermal-noise AFM), even the effects due to ambient thermal noise have been detectable \cite{Maali2,Butt-rev,Butt,Haviland,Benmouna,Siria2009,Clarke,Sader3}. Despite its geometric simplicity, our model captures the essential features of typical dynamic SFA/AFM setups \cite{Israelachvili_ROPP,Korea,Butt-rev,Haviland,CottinBizonne2008,Leroy2012,Steinberger2008,Wang2017,Maali2,Maali1,Alcaraz,Benmouna,Clarke,Sader3,Siria2009,Ellis3,Neto,Bocquet2010,Granick1991b,Klein1998,Klein2007,Bureau2010,Berg2003,Grier} (see Section \ref{sec:model} for further details). While these techniques have widely been used in the study of the rheological properties of fluid films, substantial focus has been on their utility in high-precision determination of shear/compressional forces produced on the bounding surfaces confining a fluid film, where one of the surfaces is forced in linear or oscillatory motion. In dynamic SFA, the bounding surfaces are usually taken as two apposed, weakly curved, crossed cylinders (sometimes with large flattened contact areas exposed to the intervening fluid \cite{Butt,Israela,Granick1991b,Klein1998,Berg2003,Israelachvili_ROPP,Bureau2010}), or a sphere and a plane, with the radii of curvature in either case (of the order of a few mm or cm) being much larger than the film thickness (in the range of sub-nm to $\mu$m, or larger) \cite{Israelachvili_ROPP,Granick1991b,Klein1998,Klein2007,CottinBizonne2008,Steinberger2008,Bureau2010,Korea,Leroy2012,Wang2017}.
In dynamic AFM, wide flat microlevers \cite{Siria2009}, or relatively large, cantilever-mounted spheres (of radius up to tens of $\mu$m) \cite{Butt,Korea,Butt-rev,Haviland,Wang2017,Alcaraz,Benmouna,Maali1,Maali2} are used in forced oscillations or in spontaneous (thermal) stochastic motions next to a planar substrate, probing the local surface properties, the hydrodynamic/viscoelastic properties of the surrounding fluid, and also the hydrodynamic interactions mediated between the probe and the substrate. The typical film geometries and modes of surface motion employed in the aforementioned setups therefore appear well-suited for testing our theoretical predictions. The main assumptions made within our analytical approach (e.g., small-amplitude oscillations and a linearized hydrodynamic treatment) are also directly relevant to the mentioned experimental setups and agree with previous modeling approaches \cite{Israelachvili_ROPP,Korea,Butt-rev,Haviland,CottinBizonne2008,Leroy2012,Steinberger2008,Wang2017,Maali2,Maali1,Alcaraz,Benmouna,Clarke,Sader3,Siria2009}. To the best of our knowledge, however, surface-force experiments have so far focused merely on the net force mediated between the fluid film boundaries, as opposed to force fluctuations and correlations. The predicted behaviors for these latter quantities can be examined by scrutinizing the readily available r.m.s. of the experimentally detected forces as a function of the film thickness. Our work thus lays out a self-consistent and systematic hydrodynamic-fluctuations approach, incorporating the often-ignored finite compressibility of the fluid film, which is important for the understanding of sound-mediated effects. It also places the study of hydrodynamic surface forces induced by externally driven thin films within the newly emerging context of non-equilibrium Casimir-type phenomena. 
By connecting the spectral density of hydrodynamic stresses acting on the surface boundaries and the spectral density of fluid density fluctuations in the film (see Eq. \eqref{eq:density_corr}), our analysis also suggests an alternative method to probe the density fluctuations through the measurements of surface forces in confined fluid films, where standard light scattering methods may be less suitable. Finally, we note that our numerical results are given here only for the case of white-noise forcing. Other examples of external forcing (such as colored or $1/f$ noises) can straightforwardly be analyzed using the more general analytical formulas presented (Section \ref{sec:response}). Other possible avenues that can be explored within the present context include the surface curvature effects, the fluid thermal conductivity \cite{Chris2017} and possibly also the role of nonlinear effects such as viscous dissipation \cite{Klein2007}. \section{Acknowledgements} M.M.-A. and S.M. acknowledge funding from the School of Physics, Institute for Research in Fundamental Sciences (IPM), where the research leading to this publication was performed. A.N. acknowledges partial support from Iran Science Elites Federation (ISEF) and the Associateship Scheme of The Abdus Salam International Centre for Theoretical Physics (Trieste, Italy). R.P. gratefully acknowledges support from the ``1000 Talents Program" of China and from the University of Chinese Academy of Sciences (UCAS).
\section{Introduction} The neutrino-plasma coupling in magnetized media is a relevant issue in diverse situations, as near the core of proto-neutron stars, where it is a source of the free energy behind the stalled supernova shock \cite{Bludman}--\cite{Bethe2}. Neutrino-driven wakefields and the neutrino effective charge in magnetized electron-positron plasma \cite{mag1, mag2}, the magnetized Mikheyev-Smirnov-Wolfenstein effect of neutrino flavor conversion \cite{mag3}, spin waves coupled to neutrino beams \cite{Semikoz}, neutrino cosmology and the early universe \cite{mag5}, neutrino emission and collective processes in magnetized plasma, and neutrino-driven nonlinear waves in magnetized plasmas \cite{mag8, mag9} are examples of neutrino-influenced plasma phenomena. The existence of intense neutrino beams in general astrophysical plasma is well documented \cite{Tajima}. The coupling between neutrino flavor oscillations and plasma waves has also been reported \cite{m1}--\cite{h2}. One of the most popular approaches to plasma astrophysics in the presence of magnetic fields is magnetohydrodynamics (MHD), which usually does not account for neutrino species, even in an approximate way. Actually, neutrino studies in a material medium are more frequently pursued within the framework of particle physics, whose language is somewhat removed from that of the majority of the plasma community. This has motivated the creation of neutrino-magnetohydrodynamics (NMHD), where the interaction between neutrinos and electrons is expressed as a coupling between the MHD and neutrino fluids \cite{NMHD}. As a first application, NMHD demonstrated the destabilization of the magnetosonic wave by neutrino beams, yielding a plausible mechanism for type II supernova explosions. However, the magnetosonic wave assumes a very particular geometry, where the wave propagation is perpendicular to the ambient magnetic field. 
Therefore, it is advisable to perform a more general linear stability analysis, allowing for arbitrary orientations. This is the purpose of the present work, namely, the study of the impact of a neutrino beam on the stability of general MHD waves. In the case of an ideally conducting fluid under simplified MHD assumptions, these are the shear Alfv\'en wave and the fast and slow magnetosonic waves. The present work thus removes the orthogonality condition of \cite{NMHD}, obtaining instability growth-rates of simplified and ideal NMHD for arbitrary oblique angles between the wave propagation and the equilibrium magnetic field. Similarly, the instability analysis of general electrostatic perturbations in magnetized electron plus neutrino plasmas in an ionic background was recently carried out \cite{PRD}. It can be justifiably argued that the NMHD model as it stands underestimates other important quantum effects in dense plasmas, such as relativistic degeneracy effects, particle dispersive effects and exchange effects \cite{book}. The basic reason for our choice is that the original quantum magnetohydrodynamics was derived starting from a quantum kinetic model, the non-relativistic Wigner-Maxwell system, not including neutrino coupling \cite{qmhd}. Therefore the insertion of relativistic corrections and extra terms of exchange and quantum dispersion would be {\it ad hoc} in the present state of the art. On the other hand, for very dense white dwarfs, degeneracy comes together with relativistic effects in view of a Fermi momentum $p_F$ of the order of $mc$, where $m$ is the mass of the charge carriers and $c$ the speed of light. Hence, for strongly degenerate, relativistic plasmas a more advanced theory would be necessary from the beginning. This work is organized as follows. Section II reviews the basic equations and validity conditions of NMHD. 
Section III obtains the general linear dispersion relation of the waves, providing a few extra details of the algebra not explicitly shown in \cite{NMHD}. Section IV derives the instability growth-rate in general, discussing the significant particular cases: the fast magnetosonic wave; the slow magnetosonic wave; perpendicular wave propagation (with respect to the ambient magnetic field); and parallel wave propagation. The shear Alfv\'en wave is found to be unaffected by neutrinos. The growth-rate, which turns out to be strong, is estimated for a typical set of type II supernova parameters. Section V is reserved for the conclusions. \section{Neutrino-magnetohydrodynamics model} For completeness, we briefly review the NMHD model derived in \cite{NMHD}, comprising the following set of equations, namely, the continuity equations for the neutrinos, \begin{equation} \frac{\partial n_\nu}{\partial t} + \nabla \cdotp (n_\nu \textbf{u}_\nu) = 0 \,, \label{eq32} \end{equation} and for the MHD fluid, \begin{equation} \frac{\partial \rho_m}{\partial t} + \nabla \cdot (\rho_m \textbf{U}) = 0 \,, \label{eq26} \end{equation} the momentum transport equations for the neutrinos, \begin{equation} \frac{\partial \textbf{p}_\nu}{\partial t} + \textbf{u}_\nu \cdotp \nabla \textbf{p}_\nu = - \frac{\sqrt{2}\,G_F}{m_i} \nabla \rho_m \,, \label{nf} \end{equation} and for the MHD fluid, \begin{equation} \frac{\partial \textbf{U}}{\partial t} + \textbf{U} \cdot \nabla \textbf{U} = - \frac{V_{S}^2 \nabla \rho_m}{\rho_m} + \frac{(\nabla\times\textbf{B}) \times \textbf{B}}{\mu_0 \,\rho_m} + \frac{\textbf{F}_\nu}{m_i} \,, \label{eq34} \end{equation} as well as the dynamo equation modified by the electroweak force, \begin{equation} \frac{\partial\textbf{B}}{\partial t} = \nabla\times\left(\textbf{U}\times\textbf{B} - \frac{\textbf{F}_{\nu}}{e}\right) \,. \label{eq37} \end{equation} Here, $n_\nu$ and $\rho_m$ are resp. the neutrino number density and the plasma mass density, ${\bf u}_\nu$ and ${\bf U}$ resp. 
the neutrino and plasma velocity fields, ${\bf B}$ the magnetic field, $G_F$ the Fermi constant, $m_i$ the ion mass, $V_S$ the adiabatic speed of sound, $\mu_0$ the free space permeability, $e$ the elementary charge and ${\bf F}_\nu$ the neutrino force, \begin{equation} \textbf{F}_\nu = \sqrt{2}\,G_F \left[\textbf{E}_\nu + \left(\textbf{U} - \frac{m_i \nabla\times{\bf B}}{e \mu_0 \rho_m}\right) \times \textbf{B}_\nu \right] \,, \label{eq29} \end{equation} where $\textbf{E}_{\nu}$ and $\textbf{B}_{\nu}$ are effective fields induced by the weak interaction, \begin{eqnarray} \textbf{E}_\nu = \nabla n_{\nu} - \frac{1}{c^2}\frac{\partial}{\partial t}(n_{\nu} \textbf{u}_{\nu}) \,, \quad \textbf{B}_\nu = \frac{1}{c^2}\nabla \times (n_{\nu}\textbf{u}_{\nu}) \,. \label{eq04} \end{eqnarray} Finally, the neutrino relativistic beam momentum is ${\bf p}_\nu = \mathcal{E}_{\nu} \textbf{u}_{\nu}/c^2$, with a neutrino beam energy $\mathcal{E}_{\nu}$. The assumptions behind the NMHD model are the same as those of simplified, ideal MHD, namely, a highly conducting, strongly magnetized medium, and low frequency processes on a scale where electrons and ions are so strongly coupled that they can be faithfully treated as a single fluid. The neutrinos influence the plasma by means of the charged weak current coupling electrons and electron-neutrinos, through the charged bosons $W_{\pm}$. In addition, the displacement current was implicitly neglected in Eq. (\ref{eq34}), supposing wave phase velocities much smaller than $c$ - although such a restriction has no r\^ole in the results of the present work. In conclusion, Eqs. (\ref{eq32})-(\ref{eq37}) form a complete set of 11 equations for 11 variables, namely $n_\nu, \rho_m$ and the components of $\textbf{p}_\nu, \textbf{U}$ and $\textbf{B}$. A more detailed derivation is provided in \cite{NMHD}. For convenience, it is useful to reproduce here Eq. 
(28) of \cite{NMHD}, which collects the conditions of high collisionality and high conductivity of the plasma, supposing a wave with angular frequency $\omega$, \begin{equation} \frac{m_i |\omega|}{m_e \omega_{pe}} \ll \frac{2}{3}\,\frac{\ln\Lambda}{\Lambda} \ll \frac{\omega_{pe}}{|\omega|} \,, \quad \Lambda = \frac{4\pi n_0 \lambda_D^3}{3} \,, \quad \lambda_D = \frac{v_T}{\omega_{pe}} \,, \label{con} \end{equation} where $n_0$ is the equilibrium electron (and ion) number density, $m_e$ is the electron mass, $\omega_{pe} = [n_0 e^2/(m_e \varepsilon_0)]^{1/2}$ is the electron plasma frequency, $v_T = (\kappa_B T_e/m_e)^{1/2}$ is the electron thermal velocity, $\kappa_B$ is the Boltzmann constant and $T_e$ the electron fluid temperature. The validity conditions of NMHD are essentially the same, since the neutrino component is a second order influence. The derivation of Eq. (\ref{con}) assumes the Landau electron-electron collision frequency, and non-degenerate and non-relativistic electrons. More details on the validity conditions of MHD can be found e.g. in \cite{Spitzer, Balescu}. \section{General dispersion relation} Starting from the homogeneous equilibrium \begin{eqnarray} \quad n_{\nu} = n_{\nu 0} \,, \quad \rho_m = \rho_{m0} \,, \quad \textbf{p}_\nu = \textbf{p}_{\nu 0} \,, \quad \textbf{U} = 0 \,, \quad \textbf{B} = \textbf{B}_0 \,, \end{eqnarray} and supposing plane wave perturbations proportional to $\exp[i({\bf k}\cdot{\bf r} - \omega\,t)]$, it is possible to obtain the dispersion relation for small amplitude waves. Here we provide a few more details on the necessary algebra, in comparison with \cite{NMHD}. The idea is to express all perturbations in terms of $\delta{\bf U}$, the first-order plasma fluid velocity correction. 
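Before proceeding, the double inequality (\ref{con}) can be checked numerically. The sketch below is ours; the parameter values ($n_0 = 10^{34}\,$m$^{-3}$, $T_e = 0.1$ MeV, $\omega \approx 10^{13}$ rad/s) anticipate the supernova set adopted later in Section IV.

```python
import math

# Numerical check of the collisionality/conductivity window, Eq. (con).
e, m_e, m_i, eps0 = 1.602176634e-19, 9.1093837015e-31, 1.67262192e-27, 8.8541878128e-12
n0 = 1e34              # equilibrium electron density [m^-3] (Sec. IV parameter set)
kB_Te = 0.1e6 * e      # kappa_B T_e for T_e = 0.1 MeV [J]
omega = 1e13           # wave angular frequency ~ Omega_+ [rad/s] (Sec. IV estimate)

omega_pe = math.sqrt(n0 * e**2 / (m_e * eps0))  # electron plasma frequency
v_T = math.sqrt(kB_Te / m_e)                    # electron thermal velocity
lam_D = v_T / omega_pe                          # Debye length
Lam = 4 * math.pi * n0 * lam_D**3 / 3           # plasma parameter

left = m_i * omega / (m_e * omega_pe)
mid = (2 / 3) * math.log(Lam) / Lam
right = omega_pe / omega
assert left < mid < right   # roughly 3e-3 < 8e-3 < 6e5
```

Note that, with our choice of constants, the first inequality holds only by a factor of a few, while the second holds by about eight orders of magnitude.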
For instance, the linear correction to the neutrino fluid velocity becomes \begin{eqnarray} \delta{\bf u}_\nu &=& \frac{c^2}{\mathcal{E}_{\nu 0}}\left(\delta{\bf p}_\nu - {\bf u}_{\nu 0}\,{\bf u}_{\nu 0}\cdot\delta{\bf p}_\nu/c^2\right) \label{eqax} \\ &=& \frac{\sqrt{2} G_F \rho_{m\,0} c^2}{m_i \mathcal{E}_{\nu 0}\,\omega}\,\frac{\left({\bf k} - {\bf k}\cdot{\bf u}_{\nu 0}\,{\bf u}_{\nu 0}/c^2\right)}{(\omega - {\bf k}\cdot{\bf u}_{\nu 0})}\,\,{\bf k}\cdot\delta{\bf U} \,, \label{eqxx} \end{eqnarray} where ${\bf u}_{\nu 0}$ and $\mathcal{E}_{\nu 0}$ are resp. the equilibrium neutrino beam velocity and energy, viz. ${\bf p}_{\nu 0} = \mathcal{E}_{\nu 0} \textbf{u}_{\nu 0}/c^2$. Equation (\ref{eqax}) can be operationally found using the relation between neutrino momentum and neutrino velocity and the energy-momentum relation $\mathcal{E}_{\nu} = (p_\nu^2 c^2 + m_\nu^2 c^4)^{1/2}$, where the neutrino mass $m_\nu$ is eliminated at the end. The step from Eq. (\ref{eqax}) to Eq. (\ref{eqxx}) is made using the linearized plasma continuity equation (\ref{eq26}) and the linearized neutrino momentum transport equation (\ref{nf}). To proceed, in view of Eq. (\ref{eq29}) the linearized neutrino force becomes $\delta{\bf F}_\nu = \sqrt{2} G_F \delta{\bf E}_\nu$ since the term containing the effective neutrino magnetic field ${\bf B}_\nu$ is of second order. The perturbed effective neutrino electric field $\delta{\bf E}_\nu$ can be found from Eq. (\ref{eq04}), together with the neutrino continuity equation (\ref{eq32}) and Eq. (\ref{eqxx}). The result is \begin{eqnarray} \delta{\bf F}_\nu &=& \frac{2 i G_F^2 n_{\nu 0}\rho_{m0}\,({\bf k}\cdot\delta{\bf U})}{m_i \mathcal{E}_{\nu 0}\,\omega (\omega - {\bf k}\cdot{\bf u}_{\nu 0})^2} \times \nonumber \\ &\times& \Bigl[\Bigl(({\bf k}\cdot{\bf u}_{\nu 0})^2 - c^2 k^2 - \omega ({\bf k}\cdot{\bf u}_{\nu 0}) + \omega^2\Bigr)\,{\bf k} + \omega\,\Bigl(k^2 - \frac{\omega}{c^2}\,{\bf k}\cdot{\bf u}_{\nu 0}\Bigr)\,{\bf u}_{\nu 0}\Bigr] \,. 
\end{eqnarray} As could have been expected, the neutrino force is enhanced for $\omega \approx {\bf k}\cdot{\bf u}_{\nu 0}$, so that the wave resonates with the neutrino beam. The remaining straightforward steps allow us to express the linearized plasma momentum transport equation (\ref{eq34}) in terms of $\delta{\bf U}$ only, \begin{eqnarray} \omega^2\delta\textbf{U} &=& \left(V^2_A + V^2_S + V^2_N \, \Bigl(\frac{c^2k^2 - (\textbf{k}\cdot\textbf{u}_{\nu 0})^2 + \omega ({\bf k}\cdot{\bf u}_{\nu 0}) - \omega^2}{(\omega- \textbf{k}\cdot \textbf{u}_{\nu 0})^2}\Bigr)\right)\!(\textbf{k}\cdot\delta\textbf{U})\textbf{k} \nonumber \\ &+& (\textbf{k} \cdot \textbf{V}_A)\Bigl((\textbf{k} \cdot \textbf{V}_A)\delta\textbf{U} - (\delta\textbf{U}\cdot\textbf{V}_A)\textbf{k} - (\textbf{k}\cdot\delta\textbf{U})\textbf{V}_A\Bigr) \nonumber \\ &-& \frac{\omega\, V_N^2 \Bigl(k^2 - \omega {\bf k}\cdot{\bf u}_{\nu 0}/c^2\Bigr)({\bf k}\cdot\delta{\bf U})\,{\bf u}_{\nu 0}}{(\omega- \textbf{k}\cdot \textbf{u}_{\nu 0})^2} \nonumber \\ &+& \frac{i V_N^2 V_A (\textbf{k}\cdot\delta\textbf{U})}{\Omega_i (\omega- \textbf{k}\cdot \textbf{u}_{\nu 0})^2}\,\Bigl(k^2 - \frac{\omega\, {\bf k}\cdot{\bf u}_{\nu 0}}{c^2}\Bigr)\,{\bf V}_A \times \Bigl({\bf k}\times ({\bf k}\times{\bf u}_{\nu 0})\Bigr) \,, \label{ccc} \end{eqnarray} where the vector Alfv\'en velocity ${\bf V}_A$ and $V_N$ are given by \begin{eqnarray} \textbf{V}_A = \frac{\textbf{B}_0}{(\rho_{m0} \mu_0)^{1/2}} \,, \quad V_N = \left(\frac{2G^2_F \rho_{m0} n_{\nu0}}{m^2_i \mathcal{E}_{\nu 0}}\right)^{1/2} \,, \label{eq53} \end{eqnarray} while $\Omega_i = e B_0/m_i$ is the ion cyclotron frequency. As is apparent, the characteristic neutrino-plasma speed $V_N$ contains both MHD and neutrino variables, emphasizing the mutual coupling. 
The somewhat formidable expression can be considerably simplified for low frequency waves such that $\omega/k \ll c$, allowing one to disregard the terms containing $\omega$ in the numerators of the right-hand side of Eq. (\ref{ccc}), as deduced from appropriate order-of-magnitude estimates. In the same vein, the very last term, proportional to $\Omega_{i}^{-1}$, can be discarded provided $k V_A/\Omega_i \ll c/V_A$, or equivalently $c k/\omega_{pe} \ll \omega_{pe}/\Omega_e$, where $\Omega_e = e B_0/m_e$ is the electron cyclotron frequency. Such a condition tends to be easily satisfied for wavelengths much larger than the plasma skin depth $c/\omega_{pe}$, and for large enough densities so that $\omega_{pe} \gg \Omega_e$. Finally, Eq. (\ref{ccc}) reduces to \begin{eqnarray} \omega^2\delta\textbf{U} &=& \left(V^2_A + V^2_S + V^2_N \frac{(c^2k^2 - (\textbf{k}\cdot\textbf{u}_{\nu 0})^2)}{(\omega- \textbf{k}\cdot \textbf{u}_{\nu 0})^2}\right)\!(\textbf{k}\cdot\delta\textbf{U})\textbf{k} \nonumber \\ &+& (\textbf{k} \cdot \textbf{V}_A)\Bigl((\textbf{k} \cdot \textbf{V}_A)\delta\textbf{U} - (\delta\textbf{U}\cdot\textbf{V}_A)\textbf{k} - (\textbf{k}\cdot\delta\textbf{U})\textbf{V}_A\Bigr) \,, \label{eq51} \end{eqnarray} which is shown in \cite{NMHD}. In \cite{NMHD}, for simplicity it was supposed that ${\bf k}\cdot{\bf V}_A = 0$, which allows one to discard several terms of Eq. (\ref{eq51}). This corresponds to the magnetosonic wave modified by the neutrino component, for which $\delta{\bf U} \parallel {\bf k}$, as seen by inspection. The corresponding instability due to the neutrino beam was then evaluated. Our goal now is to consider the general situation, where the wavevector and the ambient magnetic field have an arbitrary orientation, as shown in Fig. 1. \begin{figure}[h] \begin{center} \includegraphics[width=2.5in]{fig1.eps} \caption{Wave vector and ambient magnetic field.} \label{fig1} \end{center} \end{figure} It turns out that Eq. 
(\ref{eq51}) is formally the same as the one for linear waves in simplified ideal MHD, provided the adiabatic sound speed $V_S$ is replaced by $\tilde V_S(\omega,{\bf k})$ defined by \begin{equation} \widetilde{V}_S^2(\omega,{\bf k}) = V_S^2 + V_N^2 \frac{(c^2k^2 - (\textbf{k}\cdot \textbf{u}_{\nu 0})^2)}{(\omega - \textbf{k} \cdot \textbf{u}_{\nu 0})^2} \,, \end{equation} so that \begin{eqnarray} \omega^2\delta\textbf{U} &=& \left(V^2_A + \tilde{V}^2_S(\omega,{\bf k})\right)\!(\textbf{k}\cdot\delta\textbf{U})\textbf{k} \nonumber \\ &+& (\textbf{k} \cdot \textbf{V}_A)\Bigl((\textbf{k} \cdot \textbf{V}_A)\delta\textbf{U} - (\delta\textbf{U}\cdot\textbf{V}_A)\textbf{k} - (\textbf{k}\cdot\delta\textbf{U})\textbf{V}_A\Bigr) \,, \label{eqq51} \end{eqnarray} which is exactly the same as the well known simplified and ideal MHD system for linear waves, with the replacement $V_S \rightarrow \tilde{V}_S(\omega,{\bf k})$. Hence, the usual procedure applies, as follows. Assuming the geometry of Fig. 1, where, without loss of generality, the $y$-components of ${\bf k}$ and ${\bf V}_A$ are set to zero, the characteristic determinant of the homogeneous system (\ref{eqq51}) for the components of $\delta{\bf U}$ yields \begin{equation} (\omega^2 - k^2\,V_A^{~2}\cos^2\theta)\left[ \omega^4 - k^2 \left(V_A^2+ \widetilde{V}_S^{~2}(\omega,{\bf k})\right)\omega^2 + k^4\,V_A^{~2}\,\widetilde{V}_S^{~2}(\omega,{\bf k})\,\cos^2\theta \right] = 0 \,. \label{sa} \end{equation} As is apparent from the factorization, one root is $\omega = k\,V_A\,\cos\theta$, which is the shear Alfv\'en wave, unaffected by the neutrino beam. This happens because ${\bf k}\cdot\delta{\bf U} = 0$ for the shear Alfv\'en wave, which eliminates the neutrino contribution in Eq. (\ref{eqq51}). The more interesting modes come from the second bracket in Eq. (\ref{sa}), to be discussed in the next Section. 
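The factorization in Eq. (\ref{sa}) can be verified numerically. In the sketch below (ours, with illustrative numbers), $\widetilde{V}_S$ is frozen at a real value, so that the right-hand side of Eq. (\ref{eqq51}) defines a symmetric matrix acting on $\delta{\bf U}$ whose eigenvalues, the allowed $\omega^2$, are exactly the shear Alfv\'en root plus the two magnetosonic roots.

```python
import numpy as np

# Geometry of Fig. 1: B0 along z, wavevector k in the x-z plane at angle theta.
# Illustrative numbers of our own; VS plays the role of tilde-V_S frozen at a real value.
VA, VS, k = 2.0, 1.0, 5.0
theta = np.arcsin(0.6)
kv = k * np.array([np.sin(theta), 0.0, np.cos(theta)])
VAv = np.array([0.0, 0.0, VA])
kVA = kv @ VAv

# Read off the symmetric matrix M, with M dU = omega^2 dU, from Eq. (51):
M = ((VA**2 + VS**2) * np.outer(kv, kv)
     + kVA**2 * np.eye(3)
     - kVA * (np.outer(kv, VAv) + np.outer(VAv, kv)))
w2 = np.sort(np.linalg.eigvalsh(M))   # the three allowed omega^2

# One eigenvalue is the shear Alfven branch...
alfven = k**2 * VA**2 * np.cos(theta)**2
assert np.any(np.isclose(w2, alfven))
# ...and the other two are k^2 V_pm^2, with V_pm as in Eq. (vpm) below:
disc = np.sqrt((VA**2 - VS**2)**2 + 4 * VA**2 * VS**2 * np.sin(theta)**2)
Vpm2 = 0.5 * (VA**2 + VS**2 + np.array([-disc, disc]))
assert np.allclose(np.sort(np.r_[alfven, k**2 * Vpm2]), w2)
```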
\section{Instabilities} Ignoring the shear Alfv\'en wave, the general dispersion relation (\ref{sa}) yields \begin{equation} \label{new} \omega^4 - k^2 (V_A^2+ V_S^{~2})\omega^2 + k^4\,V_A^{~2}\,V_S^{~2}\,\cos^2\theta = \frac{V_N^2 k^2 \left(c^2 k^2 - ({\bf k}\cdot{\bf u}_{\nu 0})^2\right) (\omega^2 - k^2 V_A^2 \cos^{2}\theta)}{(\omega - {\bf k}\cdot{\bf u}_{\nu 0})^2} \,, \end{equation} where the neutrino term was isolated on the right-hand side. Due to the small value of the Fermi constant, the neutrino contribution is always a perturbation, even for the neutrino-beam mode. The natural approach to Eq. (\ref{new}) is then to set \begin{equation} \label{xx} \omega = \Omega + \delta\omega \,, \quad \Omega \gg \delta\omega \,, \quad \Omega = {\bf k}\cdot{\bf u}_{\nu 0} \,, \end{equation} where $\Omega$ is the classical (no neutrinos) solution, \begin{equation} \label{nwew} \Omega^4 - k^2 (V_A^2+ V_S^{~2})\Omega^2 + k^4\,V_A^{~2}\,V_S^{~2}\,\cos^2\theta = 0 \,, \end{equation} and where in Eq. (\ref{xx}) the neutrino-beam mode was selected in order to enhance the neutrino contribution. Therefore, the zeroth-order solution gives the fast (+) and slow (-) magnetosonic waves, \begin{equation} \Omega = \Omega_{\pm} = k V_{\pm} \,, \quad V_\pm = \left[\frac{1}{2}\left(V_A^{~2} + V_S^{~2} \pm \sqrt{(V_A^2 - V_S^2)^2 + 4\, V_A^{~2}\,V_S^{~2}\,\sin^2\theta} \right)\right]^{1/2} \,. \label{vpm} \end{equation} Taking into account Eq. (\ref{new}) and Eq. 
(\ref{xx}) as well as the expression of the unperturbed frequency, we get \begin{eqnarray} (\delta\omega)^3 &=& \frac{\pm V_N^2 \left(c^2 k^2 - ({\bf k}\cdot{\bf u}_{\nu 0})^2\right) \left(V_\pm^2 - V_A^2 \cos^{2}\theta\right) k}{2 V_\pm \, \sqrt{(V_A^2 - V_S^2)^2 + 4\, V_A^{~2}\,V_S^{~2}\,\sin^2\theta} } \nonumber \\ &\approx& \frac{\pm V_N^2 c^2 \left(V_\pm^2 - V_A^2 \cos^{2}\theta\right) k^3}{2 V_\pm \, \, \sqrt{(V_A^2 - V_S^2)^2 + 4\, V_A^{~2}\,V_S^{~2}\,\sin^2\theta}} \,, \end{eqnarray} where in the last step $\Omega = {\bf k}\cdot{\bf u}_{\nu 0}$ and $V_{\pm}^2 \ll c^2$ were used. The unstable root with $\gamma = {\rm Im}(\delta\omega) > 0$ yields the growth-rate \begin{equation} \gamma = \gamma_\pm = \frac{\sqrt{3}\,k}{2^{4/3}} \left(\frac{\Delta c^4 |V_{\pm}^2 - V_A^2 \cos^{2}\theta|}{V_{\pm} \, \sqrt{(V_A^2 - V_S^2)^2 + 4\, V_A^{~2}\,V_S^{~2}\,\sin^2\theta} }\right)^{1/3} \,, \label{gamma} \end{equation} introducing the dimensionless quantity \begin{equation} \Delta = \frac{V_N^2}{c^2} = \frac{2G_F^2n_0n_{\nu 0}}{m_i c^2\mathcal{E}_{\nu 0}} \,, \end{equation} using $\rho_{m0} \approx n_0 m_i$. The parameter $\Delta$ is endemic in neutrino-plasma problems, appearing, e.g., in the neutrino and anti-neutrino effective charges in magnetized plasmas \cite{mag1} and in the expression for the neutrino susceptibility \cite{Silva}. The weak beam condition $\gamma/\Omega \ll 1$ can be worked out as \begin{equation} \frac{\Delta c^4 |V_{\pm}^2 - V_A^2 \cos^{2}\theta|}{V_{\pm}^4 \,\sqrt{(V_A^2 - V_S^2)^2 + 4\, V_A^{~2}\,V_S^{~2}\,\sin^2\theta} } \ll 1 \,, \label{weak} \end{equation} which is independent of the magnitude $k$ of the wavenumber. In the unlikely cases where Eq. (\ref{weak}) is not satisfied, one must go back to the sixth-order polynomial equation (\ref{new}), to be solved numerically. 
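The $\sqrt{3}/2^{4/3}$ prefactor in Eq. (\ref{gamma}) follows from taking the cube root of $(\delta\omega)^3 = A$ with real $A$: the unstable root always has ${\rm Im}\,\delta\omega = (\sqrt{3}/2)\,|A|^{1/3}$, and the extra factor $2^{1/3}$ comes from the $2$ in the denominator of $A$. A quick numerical confirmation, with arbitrary illustrative values of $A$:

```python
import numpy as np

# The unstable root of (delta omega)^3 = A, for real A of either sign,
# has Im(delta omega) = (sqrt(3)/2) |A|^(1/3).
for A in (2.7e-5, -3.1e4):
    roots = np.roots([1.0, 0.0, 0.0, -A])   # solve x^3 - A = 0
    gamma = roots.imag.max()                # growth rate of the unstable branch
    assert np.isclose(gamma, (np.sqrt(3) / 2) * abs(A)**(1 / 3))
```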
The growth-rate (\ref{gamma}) is completely general, in the sense that it is valid for arbitrary geometries of the wave propagation, as long as the weak beam assumption holds, and is the main result of this work. It is interesting to evaluate the instability in the separate fast and slow magnetosonic cases, as well as for wave propagation perpendicular (${\bf k} \perp {\bf V}_A$) and parallel (${\bf k} \parallel {\bf V}_A$) to the magnetic field. \subsection{Destabilization of the fast magnetosonic wave} The choice of the plus sign in Eq. (\ref{gamma}) corresponds to the fast magnetosonic wave, with a growth-rate $\gamma \equiv \gamma_+$. From now on, parameters of type II core-collapse scenarios, like those of supernova SN1987A, will be applied. That event involved neutrino bursts of $10^{58}$ neutrinos with energies of the order of $10-15$ MeV, strong magnetic fields $B_0 \approx 10^6 - 10^8 \,{\rm T}$ and neutrino beam densities $n_{\nu 0}$ between $10^{34} - 10^{37} \, {\rm m}^{-3}$ \cite{Hirata}. In the following estimates, we set ${\cal E}_{\nu 0} = 10 \,{\rm MeV}, n_0 = 10^{34}\,{\rm m}^{-3}$, $n_{\nu 0} = 10^{35} \,{\rm m}^{-3}$, $B_0 = 5 \times 10^7 \,{\rm T}$, and an electron fluid temperature $T_e = 0.1 \, {\rm MeV}$, appropriate for the slightly degenerate and mildly relativistic hydrogen plasma in the center of the proto-neutron star. In addition, we use $G_F=1.45 \times 10^{-62} \, {\rm J\,m^3}$ and $V_S = (\kappa_B T_e/m_i)^{1/2}$. For these parameters, one has $\Delta = 1.75 \times 10^{-33}$, $V_A/c = 3.64 \times 10^{-2}$ and $V_S/c = 1.03 \times 10^{-2}$. We set $k = 10^{6} \,{\rm m}^{-1}$, which is fully consistent with the applicability condition (\ref{con}). Finally, the simplifying assumption of page 6, viz. $c k/\omega_{pe} \ll \omega_{pe}/\Omega_e$, becomes $k \ll 1.2 \times 10^{10}\,{\rm m}^{-1}$, which is obviously satisfied. From Eq. (\ref{gamma}), the result is then shown in Fig. 2, displaying the growth-rate as a function of the orientation angle. 
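The quoted estimates can be reproduced directly from the stated constants. A sketch of ours, using SI values of the fundamental constants and taking $m_i$ as the proton mass:

```python
import math

c, mu0 = 2.99792458e8, 4e-7 * math.pi
e, m_e, m_i, eps0 = 1.602176634e-19, 9.1093837015e-31, 1.67262192e-27, 8.8541878128e-12
G_F = 1.45e-62                       # Fermi constant [J m^3]

n0, n_nu0, B0 = 1e34, 1e35, 5e7      # densities [m^-3] and magnetic field [T]
E_nu0 = 10e6 * e                     # neutrino beam energy, 10 MeV [J]
kB_Te = 0.1e6 * e                    # kappa_B T_e for T_e = 0.1 MeV [J]

Delta = 2 * G_F**2 * n0 * n_nu0 / (m_i * c**2 * E_nu0)
V_A = B0 / math.sqrt(mu0 * n0 * m_i)                      # rho_m0 ~ n0 m_i
V_S = math.sqrt(kB_Te / m_i)
k_max = (n0 * e**2 / (m_e * eps0)) / (c * e * B0 / m_e)   # omega_pe^2/(c Omega_e)

assert abs(Delta / 1.75e-33 - 1) < 0.02    # Delta = 1.75e-33
assert abs(V_A / c / 3.64e-2 - 1) < 0.02   # V_A/c = 3.64e-2
assert abs(V_S / c / 1.03e-2 - 1) < 0.02   # V_S/c = 1.03e-2
assert abs(k_max / 1.2e10 - 1) < 0.05      # bound k << 1.2e10 m^-1
```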
One has a fast instability with the estimate $1/\gamma_+ \approx 10^{-3} \,{\rm s}$, while the characteristic time of supernova explosions is $\sim 1$ second. On the other hand, the weak beam assumption $\gamma_+ \ll \Omega_+$ (equivalent to Eq. (\ref{weak})) is fairly well satisfied, since $\Omega_+ \approx 10^{13} \,{\rm rad/s}$ without much variation as a function of the angle. The conclusion from Fig. 2 is that the instability becomes stronger for more perpendicular waves. One could have even stronger instabilities for a denser plasma, but some of the above calculations, although remaining approximately accurate, would need to be slightly revised in view of stronger degeneracy and relativistic effects. \begin{figure}[h] \begin{center} \includegraphics[angle=0,scale=0.45]{fig2.eps} \caption{Growth-rate of the destabilized fast magnetosonic wave, for the set of parameters described in the text.} \label{fig2} \end{center} \end{figure} \subsection{Destabilization of the slow magnetosonic wave} Setting exactly the same parameters used for the fast magnetosonic wave and using Eq. (\ref{gamma}), one gets the growth-rate shown in Fig. 3 below, which is also such that $1/\gamma_- \approx 10^{-3} \,{\rm s}$. The weak beam condition (\ref{weak}) is satisfied except for $\theta \rightarrow \pi/2 \, {\rm rad}$, where both $\Omega_-$ and $\gamma_-$ go to zero. Contrary to the fast magnetosonic wave, the slow magnetosonic wave becomes more unstable for parallel and anti-parallel propagation, while it stabilizes for perpendicular orientation between ${\bf k}$ and ${\bf B}_0$. \begin{figure}[h] \begin{center} \includegraphics[angle=0,scale=0.45]{fig3.eps} \caption{Growth-rate of the destabilized slow magnetosonic wave, for the set of parameters described in the text.} \label{fig3} \end{center} \end{figure} \subsection{Perpendicular wave propagation (${\bf k} \perp {\bf V}_A$)} It is useful to collect the special cases of Eq. (\ref{gamma}) for noteworthy orientations. 
For instance, when $\textbf{k} \perp \textbf{B}_0$, or $\theta = \pi/2 \, {\rm rad}$, one finds \begin{equation} \gamma_+ = \frac{\sqrt{3} \,\Delta^{1/3} c^{4/3} k}{2^{4/3} (V_A^2 + V_S^2)^{1/6}} \,, \quad \gamma_- = 0 \,. \label{gpl} \end{equation} At this point it is interesting to critically compare with the instability calculations from \cite{NMHD}, where ${\bf k} \perp {\bf B}_0$ from the beginning. There, the growth-rate was found to be \begin{equation} \gamma = \frac{\Delta^{1/2} c^2 k}{\sqrt{V_A^2 + V_S^2}} \,, \label{gam} \end{equation} see Eq. (32) therein, in the case of almost perpendicular neutrino propagation (${\bf k}\cdot{\bf u}_{\nu 0} \approx 0$), which yields the largest instabilities. While Eqs. (\ref{gpl}) for $\gamma_+$ and (\ref{gam}) for $\gamma$ are similar, there are some decisive discrepancies, and effectively $\gamma_+ \gg \gamma$ by many orders of magnitude. This is because of the exceedingly small coupling, entering as $\Delta^{1/3} \sim G_F^{2/3}$ in Eq. (\ref{gpl}) but as $\Delta^{1/2} \sim G_F$ in Eq. (\ref{gam}). What is the origin of the discrepancy? It happens that in \cite{NMHD} the neutrino-beam mode was selected with $\omega = {\bf k}\cdot{\bf u}_{\nu 0} + i \gamma$ and $\gamma \ll \Omega = (V_A^2 + V_S^2)^{1/2} k$, with the wavevector almost perpendicular to the neutrino beam velocity, but the resonance condition ${\bf k}\cdot{\bf u}_{\nu 0} = \Omega$ was not enforced. By definition, the resonance condition enhances the interaction between the wave and the neutrino beam, producing a larger instability. In this context the present findings are more appropriate. \subsection{Parallel wave propagation (${\bf k} \parallel {\bf V}_A$)} When $\textbf{k} \parallel \textbf{B}_0$, or $\theta = 0$, we get \begin{equation} \gamma_+ = 0 \,, \quad \gamma_- = \frac{\sqrt{3} \,\Delta^{1/3} c^{4/3} k}{2^{4/3} V_{S}^{1/3}} \,, \label{zz} \end{equation} where the result assumes $V_A > V_S$. 
Otherwise, if $V_S > V_A$, then $\gamma_+$ is interchanged with $\gamma_-$ in Eq. (\ref{zz}). The case of parallel propagation has two fundamental modes: the pure Alfv\'en wave $\Omega = k V_A$, which is unaffected by the neutrino beam, and the sonic mode $\Omega = k V_S$, which is destabilized according to Eq. (\ref{zz}). The anti-parallel case ($\theta = \pi \,{\rm rad}$) is similar. \section{Conclusion} The linear dispersion relation of simplified and ideal NMHD was examined in detail, together with the validity conditions of the theory. With the additional hypothesis of very subluminal waves ($V_\pm \ll c$) and wavelengths not too small compared to the plasma skin depth, the linear dispersion relation becomes formally the same as for usual simplified and ideal MHD, provided the adiabatic sound speed is replaced by a quantity $\widetilde{V}_S(\omega,{\bf k})$ containing the neutrino beam contribution. Therefore the standard procedure for waves with an arbitrary orientation applies. Due to the small value of the Fermi coupling constant, the neutrino term is nearly always a perturbation, to be treated as a second order effect. Nevertheless, the corresponding instability growth-rate is found to be strong enough to be a candidate for triggering cataclysmic events in supernovae. The central result of the work is the growth-rate in Eq. (\ref{gamma}), valid for arbitrary geometries and considerably extending the results of \cite{NMHD}, which are restricted to perpendicular wave propagation (${\bf k}\cdot{\bf B}_0 = 0$). The particular cases of destabilized fast and slow magnetosonic waves, and of perpendicular and parallel propagation, have been discussed. It would be interesting to relax some of the assumptions behind Eq. (\ref{eq51}), e.g. the hypothesis of very subluminal waves, as well as to introduce non-ideality effects. In this way, even more general (and more complicated) phenomena could be addressed. 
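As a closing numerical note, the gap between the resonant growth-rate (\ref{gpl}) and the non-resonant estimate (\ref{gam}) of \cite{NMHD} can be quantified with the Section IV parameters. A sketch of ours, using the quoted ratios $V_A/c$, $V_S/c$ and the value of $\Delta$:

```python
import math

c, k, Delta = 2.99792458e8, 1e6, 1.75e-33   # Sec. IV values
V_A, V_S = 3.64e-2 * c, 1.03e-2 * c

# Resonant rate of Eq. (gpl) vs. the non-resonant estimate of Eq. (gam):
gamma_plus = (math.sqrt(3) * Delta**(1 / 3) * c**(4 / 3) * k
              / (2**(4 / 3) * (V_A**2 + V_S**2)**(1 / 6)))
gamma_ref = Delta**0.5 * c**2 * k / math.sqrt(V_A**2 + V_S**2)

print(gamma_plus / gamma_ref)        # ratio ~ 2e4: four orders of magnitude
assert gamma_plus > 1e3 * gamma_ref
```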
{\bf Acknowledgments}: F.~H.~ acknowledges the support by Con\-se\-lho Na\-cio\-nal de De\-sen\-vol\-vi\-men\-to Cien\-t\'{\i}\-fi\-co e Tec\-no\-l\'o\-gi\-co (CNPq), and K.~A.~P.~ack\-now\-ledges the support by Coordena\c{c}\~ao de Aperfei\c{c}oamento de Pessoal de N\'{\i}vel Superior (CAPES).
\section*{Abstract} We seek models for the genotype evolution of agricultural animals, animals involved in primary production processes. Classical models for genotype evolution have tended to be very simple so that analytic methods could be employed in their study. Unfortunately these models fail to describe processes in artificially controlled populations, including agricultural livestock. It is particularly important {to describe such processes} in order to make better use of {the} massive genotyping data becoming available. We describe an approach to stochastically modeling the {dynamics} of a biallelic polymorphism {in herds} under conditions of controlled mating {and restriction of herd size from above}. The system of stochastic differential equations that we propose is based on jump diffusion processes to provide an effective platform for Monte Carlo simulation. Our choice of this modeling framework foreshadows the use of semi-analytic tools to complement simulation. {Another reason for adopting the framework is its flexibility in modeling different population management systems.} A feature of the model is the division of the {population} into a {\it main herd} comprised of animals involved in the production process and a {\it replacement herd} of {animals not currently in the production process, typically juvenile animals}. This feature allows for exploring different strategies for adding replacement animals to the main herd without altering the part of the model concerned with the dynamics of the main herd. A discrete-time version of the model has been developed which reflects the typical practice of New Zealand dairy herd management. Our Monte Carlo simulation has demonstrated that an isolated deme whose size is bounded above (by imposition of a fixed size-control requirement) stabilizes at a level below the control limit; this resembles partial extinction, an effect well known in classical models.
{Another interesting feature of the model with a size control rule is its sensitivity to the form of the control. We have found that even changing the rule to a different moment of choice for animal substitution (from the replacement herd to the main one) results in observable variation in the herds' temporal characteristics.} {We demonstrate several simulation results under the condition of Mendelian inheritance and its corresponding rule of summation. We also propose a variant of the model taking into account animal inflows and outflows, providing exchange through an external market.} For future work we consider the cooperative development of an open source platform for such modeling and for {\it in silico} experiments utilizing real genotyping data from the New Zealand dairy cow population. \section*{Introduction} The two seminal papers of G. Hardy \cite{Hardy} and W. Weinberg \cite{Weinberg} on the steady state distribution of alleles were based on the Mendelian law of genetics. The Mendelian law and {these papers} are the cornerstones of practical genetics and strongly influenced later development in the field. One of the main results, called {\it Hardy-Weinberg equilibrium}, states that in a population {satisfying certain conditions} the observed frequencies of the possible genotypes $AA, AB, BB$ at some locus of interest are $p^2, 2pq, q^2$, where $p$ and $q=1-p$ are the proportions of the alleles $A$ and $B$ at the locus. In reality, the conditions needed to ensure Hardy-Weinberg equilibrium often fail to be met. For example in the case of New Zealand dairy cows, the national herd has about $4\times 10^6$ cows distributed in about $1.1\times 10^4$ herds. There were just over 3700 bulls used for insemination in the 2013-2014 season, with under 100 top bulls used to mate 80\% of the whole national herd. Except by chance, the genotype frequencies in the whole population are far from Hardy-Weinberg proportions at any locus, and the proportions vary by herd, region, and locus.
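As a minimal illustrative sketch (in Python, not code from the original papers), the Hardy-Weinberg genotype proportions quoted above follow directly from the allele proportion $p$:

```python
# Hardy-Weinberg genotype frequencies for a biallelic locus.
# Given the proportion p of allele A, the equilibrium genotype
# frequencies are p^2 (AA), 2pq (AB), q^2 (BB) with q = 1 - p.

def hardy_weinberg(p):
    """Return (freq_AA, freq_AB, freq_BB) for allele proportion p."""
    q = 1.0 - p
    return p * p, 2.0 * p * q, q * q

# Example: p = 0.3 gives frequencies 0.09, 0.42, 0.49, summing to 1.
```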
Table~\ref{hwtab} compares the conditions needed to establish Hardy-Weinberg equilibrium with the production situation in the dairy industry in New Zealand and, we suspect, many other countries. In {reality} such steady-state results are not of central interest for animal breeding decisions, as the intent is to change the properties of the production herd in a favored direction. Before considering a more realistic model for such controlled breeding we will set the scene by mentioning some classical dynamic models in genetics. One of the most influential such models was a genetic evolution model proposed by S. Wright \cite{Wright} and R. Fisher \cite{Fisher}. In this model, for a total population of fixed size and binary alleles ($A,B$), the discrete-time dynamics of the relative frequencies of the different types of individuals are considered under the neutrality ({\it equal fitness}) and Markov assumptions. This allows a stochastic dynamic flow with rather good analytical properties. The model has the same probability structure that would result if {\em each of the $N$ individuals of the $(n + 1)^{\mbox{st}}$ generation picked their parents at random}, though of course this cannot literally happen. A prominent phenomenon of the neutral Wright-Fisher model without mutation is ``fixation'', that is, the extinction of all types but one at a finite but random time. Further development of this approach has proceeded by weakening the assumptions used. The two most evident directions are to introduce more types of alleles \cite{Feng} and to consider a variable resampling rate (floating total population size) \cite{Donnelly, Kaj-and-Krone}.
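To make the resampling picture concrete, the following Python sketch (an illustration of the classical neutral model, not of our herd model) performs one generation of Wright-Fisher resampling and iterates it until fixation:

```python
import random

# One generation of the neutral Wright-Fisher model without mutation,
# for a biallelic locus with two_n gene copies in the population.
# Each gene copy of the next generation "picks its parent at random",
# so the new count of allele A is binomially distributed with two_n
# trials and success probability i / two_n.

def wright_fisher_step(i, two_n, rng=random):
    """Given i copies of allele A among two_n gene copies, return the
    count of A in the next generation."""
    p = i / two_n
    return sum(1 for _ in range(two_n) if rng.random() < p)

def run_to_fixation(i, two_n, rng=random):
    """Iterate until allele A is lost (count 0) or fixed (count two_n).
    This happens at a finite but random time with probability one."""
    while 0 < i < two_n:
        i = wright_fisher_step(i, two_n, rng)
    return i
```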
More sophisticated approaches are to introduce a more complicated distribution law (say, a Poissonian one) for the number of offspring of an individual \cite{Feng}, to take into account mutation and/or selection \cite{Steinsaltz}, to introduce a random process for the mortality of individuals \cite{Moran}, or to work with diffusion-like models \cite{Feller}. Good introductions to some of these general topics can be found in \cite{Dawson, PPH} and the first chapter of \cite{Feng}. These models are relatively simple in that analytical results for variables of interest, or asymptotic expansions for them, may be derived within them (see for example \cite{DPP}). But making the obtaining of analytical results a goal inevitably leads to a focus on mathematical tractability when setting up the model. This in turn encourages the avoidance of the complexities of real-life problems, resulting in an oversimplified model. An important point is that classical models typically assume an {\it infinitely large} population size while investigating the dynamics of allele frequencies. While these models have powerful asymptotic analysis techniques available to them, they are not relevant to the situation of a typical farm running a herd of only a few hundred cows. So instead we will concentrate on accounting for the effects of a finite, stochastically driven, herd size. While in this paper we concentrate on the cows on a particular farm, it is important to note that these farm herds are strongly connected by the fact mentioned above that a few tens of top bulls fertilize millions of New Zealand cows. This focus on a small number of male animals is in further contrast to the situation envisioned by the classical population models.
Another difficulty with the quest for analytic solutions is that often they are available only for the evolution of mean values of variables of interest and not for other properties of their distributions, such as variance and shape, knowledge of which will be required in real industry applications. Thus it is necessary to put analytic methods to the side and develop realistic stochastic models allowing Monte-Carlo simulation of all required distributions. The chief tool available to the dairy industry for improvement of the genetic merit of the herd is the ability, via artificial insemination, to choose the parents of the next generation. So our concern is chiefly with genotype dynamics under {\it controlled mating}, often termed {\it artificial breeding} in the dairy industry. Genomic methods are becoming important in sire selection now that statistical genomic prediction methods are available \cite{Harris}. Hand in hand with this animal {evaluation} technology, vast genotyping data sets are increasingly available. The model we develop in this paper is a rather general one in the sense of being capable of a range of adjustments. For example other species could be considered, as might more sophisticated rules of genotype summation, possibly addressing genotype-dependent ({\it genomic}) selection or other departures from the {\it equal fitness} assumption. {Our model simulates the dynamics of alleles at a single locus in each animal of a herd under controlled mating.} The model, in the discrete form given below, incorporates some features typical of New Zealand seasonal dairy herd management practice. This allows the model to be useful for estimating the time necessary to reduce the proportion of unwanted genomic variants in the population to an acceptably low value. It could also be used to estimate the time needed to introduce some desirable genomic variant, for example one that influences milk composition in a beneficial way.
There are a growing number of published genomic discoveries concerning the importance of particular single variants \cite{BernardGrisart}, \cite{Aurelie}, \cite{Morris}. It is obvious that the number of known deleterious genes will increase as knowledge is gained and more is understood about common diseases. Every sire selected for AI carries some deleterious genes. The model developed here may help in risk analysis by running different scenarios to optimize AI strategy in the sense of performance merit versus deleterious-gene carriage. \section*{Methods} Firstly we introduce a general stochastic model for genotype evolution in an isolated herd subject to a maximum herd size. Secondly we present a discrete-time version of this model as well as a generalization which allows a limited inflow of animals from an external source such as the market. The discrete model is then used in a few simulations to illustrate its use, obtaining some interesting results. Finally we outline directions for possible further development. Let us consider a herd as effectively comprising two sub-herds: the main ({\it production}) herd and a {\it replacement} herd. The main herd consists of adult animals providing the productive output of the herd, e.g. cows in milk. The replacement herd includes mostly young animals from birth up to just before going into production. It can also include some (typically small) number of adult animals, each expected to be suitable, when required, to join the main herd as a replacement. We will {(for example in equations \ref{eqn-main} below)} use subscripts $i$ and $j$ to refer to individual members of each herd, but these numbers will refer to a formal position in a herd, much as do the numbers on the shirts of football players in a team. We will use functions of the form $f_i(t)$ to refer to properties of the animal in position $i$ of the herd as a function of time, $t$.
If a maximum herd size is imposed, a new animal can only be introduced as a replacement for a removed animal or if the herd size is below the limit. In the latter case it will take the first unoccupied position. If animal $i$ is replaced at time $t_*$ this will typically cause a discontinuity or `jump' in the function $f_i(t)$ at $t=t_*$. The presence of animal replacements means the stochastic processes in our model will have jumps. Such processes have found much application in financial modeling. It has been found that the best way to formulate these processes is to use the integral form of {stochastic differential equations} with {stochastic} Ito integration (\cite{Oksendal,Applebaum}). Jumps usually arise in financial modeling as the result of a real-world event changing the value of a stock. The jumps in our model will not be of this kind. An analog of our type of model in finance might be one where a portfolio of stocks corresponds to a herd of animals and a jump is caused by replacing one stock in the portfolio by a new stock. Our model {makes} a number of assumptions about {the main and replacement} herds and some parameters of animal movements in and out of the herd. \subsection*{List of assumptions for continuous time model} \begin{enumerate} \item The number of cows in the main herd is initialised to $N_0$ at the initial time $t_{0}=0$ and never exceeds this value subsequently. The corresponding dynamical variable is $N(t)$. \item The number of cows in the replacement herd is initialised to $M_0$ at the initial time $t_{0}=0$ and never exceeds this value subsequently. The corresponding dynamical variable is $M(t)$. \item At the initial time $t_{0}=0$, the ages of cows in the main herd are generated by a customized random number generator. \item The genotype of a progeny follows from that of its parents via a summation rule. In this article we consider only the Mendelian case, expressed by equation \ref{eq:sum-rule}.
\item\label{item-depart} The departure of an animal from the main herd is subject to a Poisson process with a rate parameter $\lambda_D$. This allows uniform accounting for different causes of animal departure ({\it animal fate}). \item\label{item-departure} The departure of an animal from the replacement herd is also subject to a Poisson process, with another rate parameter $\lambda_d$. \item\label{item-movement} The animal movements between sub-herds and departures happen annually (once a year) and are simulated by the following scheme. Using assumptions 1 to 6, simulate this year's set of animals to depart. Then fill the vacancies thus created in the main herd by random choice (variable $\xi$) from members of the replacement herd that have reached the age of $t_{min}$ years, to maintain the predefined size. If this turns out to be impossible due to a lack of heifers of the proper age in the replacement herd, then replace as many as possible, the main herd now taking a smaller size. \item The replacement strategy in the replacement herd is annual (once a year) addition of newborn calves to maintain the predefined size $M$. If this turns out to be impossible due to a lack of newborn calves, the replacement herd remains at this smaller size. \item One bull, or team of bulls, of known genotype sires the whole herd. A new generation appears every year. \item There is no in-flow of animals from outside. \end{enumerate} {Assumptions \ref{item-depart} and \ref{item-departure} are for simplicity and may later be replaced by other descriptions of these animal departure processes more closely reflecting actual herd management practice.} Assumption \ref{item-movement} {somewhat departs from} common practice, which is to have some flow of animals from outside (say, from the market), but we leave this for a subsequent publication considering the modeling of multiple herds. There is some discussion of market influence below; see {\it Herd with a limited inflow of animals}.
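The departure assumptions \ref{item-depart} and \ref{item-departure} can be sketched in Python as follows. The function names and parameter values are our own illustrative choices; we assume exponential waiting times, so that an animal departs within a one-year interval with probability $1-e^{-\lambda}$, and a rate $\lambda$ may be taken as the reciprocal of an estimated mean lifetime:

```python
import math
import random

# Annual departure draw under a Poisson departure process of rate lam:
# within a one-year step, the probability of at least one departure
# event for a given animal is 1 - exp(-lam).  A sketch only; actual
# herd practice may call for a different survival model.

def rate_from_mean_lifetime(mean_years):
    """Poisson rate from an estimated mean lifetime in the herd,
    assuming exponentially distributed waiting times."""
    return 1.0 / mean_years

def departures_this_year(n_animals, lam, rng=random):
    """Return the positions of animals that depart during the year."""
    p_leave = 1.0 - math.exp(-lam)
    return [i for i in range(n_animals) if rng.random() < p_leave]
```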
The parameters $\lambda_D, \lambda_d$ are chosen by estimating the mean of an animal's lifetime in the main and replacement herds respectively, based either on common practice or on detailed analysis of the survival curve. {In New Zealand practice actual mortality as a cause of departure from either herd would be rare, but poor condition might cause a decision to remove an animal from either herd.} Variants of the model with different strategy, distribution and parameter settings are possible but {are outside} the scope of this article. \subsection*{Rule of single genotype sum}\label{sec-rule-of-sum} We represent the genotype of an animal at the locus of interest as a number from $\{-1,0,1\}$, where $-1$ and $1$ stand for the two homozygous genotypes and $0$ for the heterozygous genotype. The mode of inheritance at a single locus is assumed to be {\it Mendelian}, leading to the following {\it rule of summation} (where $P$ gives the probability of each outcome): \begin{eqnarray} \label{eq:sum-rule} (-1)\dot +(-1)&=&-1,\ \ P=1 \nonumber \\ 0\dot +(-1)&=&\begin{cases} -1&P=0.5\\ 0&P=0.5 \end{cases} \nonumber \\ 1\dot +(-1)&=&0,\ \ P=1 \\ 0\dot +0&=&\begin{cases} -1, & P=0.25\\ 0, & P=0.5\\ 1, & P=0.25 \end{cases} \nonumber \\ 1\dot +0&=&\begin{cases} 0, & P=0.5\\ 1, & P=0.5 \end{cases} \nonumber \\ 1\dot +1 &=&1,\ \ P=1 \nonumber \end{eqnarray} Here $\dot +$ is a commutative infix operation giving a random value of a `child' genotype as a random function of the two corresponding parental genotypes. \subsection*{Continuous time model: integral form} {Our goal in this article has been to introduce a very general model with the flexibility to represent a variety of types of managed animal populations.} To express {such a} model in the language of Stochastic Differential Equations, and so have access to that body of theory, we need to represent time in a continuous manner.
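Before doing so, we note that the rule of summation~(\ref{eq:sum-rule}) admits a compact simulation form: each parent transmits one of its two alleles with equal probability. A minimal Python sketch (our own illustrative coding, not code from an existing implementation):

```python
import random

# The rule of summation (equation 1) in the {-1, 0, 1} genotype
# coding.  A homozygote (-1 or 1) always transmits its own allele;
# a heterozygote (0) transmits -1 or 1 with equal probability.
# The child genotype is the mean of the two transmitted alleles,
# which maps the allele sum {-2, 0, 2} back onto {-1, 0, 1}.

def transmit(genotype, rng=random):
    """One allele (coded -1 or 1) passed on by a parent."""
    if genotype == 0:                 # heterozygote: either allele
        return rng.choice((-1, 1))
    return genotype                   # homozygote: only one choice

def genotype_sum(g1, g2, rng=random):
    """The commutative operation of equation (1) applied to the
    parental genotypes g1 and g2."""
    return (transmit(g1, rng) + transmit(g2, rng)) // 2
```

One can check that this reproduces every row of the rule: for example $0\,\dot+\,0$ yields $-1$, $0$, $1$ with probabilities $0.25$, $0.5$, $0.25$, since each heterozygous parent transmits $-1$ or $1$ independently.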
Models with continuous time can come close to the observable herd dynamics but require some modification to assumptions \ref{item-departure} and \ref{item-movement} in the list of model assumptions given above, regarding the random jump times of the departure processes. We will use the index $j$ for the above-mentioned formal position in the main herd and the index $i$ for the same in the replacement one. To develop our stochastic model of two interacting herds we state two elementary evolution processes for every index. The first is the process of the changing genotype value in a position ($j$ or $i$), designated as $D_j(t)$ or $d_i(t)$. The second is the process of the changing animal age in a position, designated as $A_j$ and $a_i$. These processes are responsible for decisions on position characteristics, such as whether or not to replace an animal. The control of animal departure from the herds will be based on additional independent processes $P_{D_j}$ and $P_{d_i}$, in such a way that changes of $D_j(t)$ and (or) $d_i(t)$ occur precisely at the moment of a jump of the appropriate Poisson process $P$.
Accounting for this and based on the previous discussion, we can write down a system of stochastic equations describing the temporal evolution of the ensemble of cows in the main and replacement herds in the following form \begin{equation} \begin{array}{ll}\label{eqn-main} D_j(t) &=\ D_j(0) + \int_0^t \left( -D_j(s-) + d_{\xi_j} (s-) \right) d P_{Dj} (s+A_j(0)) \\ d_i(t) &=\ d_i(0) + \int_0^t \left( -d_i(s-) + f(D_{\eta_i} (s-), S_{\zeta_i}) \right) d P_{d_{i}} (s+a_i(0)) \\ A_j(t) &=\ A_j(0) + t - \int_0^t ( A_j(s-) - a_{\xi_j}(s-) ) dP_{Dj}(s+A_j(0)) \\ a_i(t) &=\ a_i(0) + t - \int_0^t a_i(s-) dP_{d_{i}}(s+a_i(0))\\ \end{array} \end{equation} where $t\in [0,T]$, and the other quantities in equation~(\ref{eqn-main}) are defined as follows \begin{itemize} \item $D_j$ is the value of the allele for a formal $j$-th cow in the main herd; \item $P_{D_j}$ is the Poisson process with a parameter $\lambda_D$ defining the elimination rate for a cow in the main herd; \item $A_j$ is the age of the $j$-th cow of the main herd; \item $d_i$ is the value of the allele for a formal $i$-th cow of the replacement herd; \item $P_{d_i}$ is the Poisson process with a parameter $\lambda_d$ defining the elimination rate for a cow in the replacement herd; \item $a_i$ is the age of the $i$-th cow of the replacement herd; \item $f(\cdot,\cdot)$ is the female calf's genotype as a stochastic function of the parental genotypes. Here $f(\cdot,\cdot)$ is usually given by the rule of summation, equation~\ref{eq:sum-rule}, but other rules are possible. \item $\eta$ is a random variable corresponding to the random choice of a cow from the main herd to be used as a dam for the replacement herd and which subsequently gives birth to a female animal. \item $\{S_k\}$ is a set of values of alleles for sires; \item $\zeta$ is a rule for choosing the sire; it can be a random variable or a deterministic sequence. \end{itemize} We explain the correctness of the system (\ref{eqn-main}) for a position $j$ in the main herd as follows.
A jump of the Poisson process for the first equation occurs at a moment $s$. Due to the definition of the Ito integral, the new value of the locus is formed by arithmetic summation with the terms $-D_j(s-)+d_\xi(s-)$. The first term zeros the current position value and the second one establishes the new value (with a random choice $\xi$). Based on the same Poisson process jump, the third equation in (\ref{eqn-main}) reflects the change at position $j$ of the age variable $A_j$, taking into account that age should increase if there is no jump (the term $t$ in this equation). In a similar way we deal with the replacement herd, except for the fact that the new genotype value at the locus is defined by the summation rule (\ref{eq:sum-rule}). The connection of the replacement herd with the main one is made via the term $D_\eta(s-)$ as an argument of the function $f$ defined above. At this stage of the model construction we assume that all variables and processes are mutually independent{, for example that the loss of a cow from the main herd is not affected by losses in the replacement herd.} Integrals are in Ito's sense; see for example \cite{Gardiner}, p. 84. Integrals over a Poisson process can be defined by taking into account the fact that $P(t) = \widetilde{P}(t)+\lambda t$, where $P(t)$ is the Poisson process with a parameter $\lambda$, and $\widetilde{P}(t)$ is the martingale corresponding to $P(t)$, known as the {\it compensated Poisson process}. An important point to be mentioned is that the proposed model is a rigorously defined system of stochastic differential equations in integral form. One possible alternative approach to the modeling would be to proceed as is done in Evolutionary Game Theory \cite{MSmith}, where one defines a stochastic dynamical flow by a set of local ``game rules'', an approach which is also suitable for Monte-Carlo simulation of system evolution.
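The jump bookkeeping just described for a main-herd position $j$ can be sketched in Python. The function and variable names are hypothetical; only the increment structure $-D_j(s-)+d_{\xi}(s-)$ (and its age counterpart) of system (\ref{eqn-main}) is illustrated:

```python
import random

# Jump update at a single main-herd position j, mirroring the first
# and third equations of the system: when the Poisson process for
# position j jumps, the increment  -D_j(s-) + d_xi(s-)  zeroes the
# old genotype value and installs that of a randomly chosen
# replacement-herd animal xi; the age variable receives the
# analogous increment  a_xi(s-) - A_j(s-).  A sketch of the
# bookkeeping only, with hypothetical names.

def apply_jump(D, A, d, a, j, rng=random):
    """Replace the animal in main-herd position j by a random
    replacement-herd animal; return the chosen index xi."""
    xi = rng.randrange(len(d))
    D[j] += -D[j] + d[xi]          # increment of the first equation
    A[j] += -A[j] + a[xi]          # increment of the third equation
    return xi
```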
Formulation in the form of stochastic differential equations also allows Monte-Carlo simulation but does not restrict itself to this. For example, approximate methods of the so-called weak type \cite{Egorov,Zherelo} exist which allow (admittedly in a rather difficult way) direct estimates of functionals constructed on solutions of stochastic differential equations, such as the functionals for the mean and variance of variables of interest. These direct estimates do not require exhaustive Monte-Carlo simulation to ensure proper statistical quality of the simulation results. Leaving this possibility for future work, we turn to discrete-time methods. So now let us reformulate the model in a discrete form suitable for Monte-Carlo simulation. \subsection*{The model in discrete form }\label{sec-discrete} {Digital computers operate in a discrete world, and so for simulation purposes it is convenient to work with a model in discrete time. Note also that we would commonly lack precise information on the timing of events in a herd but may have this information on a monthly or annual basis. We} discretize the above model at a sequence of fixed time steps $0 = \tau_0 < \tau_1 < \ldots < \tau_L = T$. { We suppose that in each of the $L$ intervals $(\tau_{l-1}, \tau_l]$ the probability of more than one Poisson process jump (control action) within the interval is negligible, or alternatively that multiple jumps can be replaced by a single jump of value equal to the sum of the individual jumps.
} Then we arrive at the following discrete system of model equations: \begin{equation} \begin{array}{ll}\label{eqn-discrete} D_j(t) &= D_j(0) + \sum_{l=1}^L (-D_j(\tau_{l-1})+ d_\xi(\tau_{l-1})) \mathrm{sign} [ \mathcal{P}(\lambda_D A_j(\tau_{l-1})) ] I_{[0,t )}(\tau_{l}) \\ d_i(t) &= d_i(0) + \sum_{l=1}^L \left(-d_i(\tau_{l-1})+ f(D_\eta(\tau_{l-1}), S_\zeta) \right)\mathrm{sign} [ \mathcal{P}(\lambda_d a_i(\tau_{l-1})) ] I_{[0,t )}(\tau_{l}) \\ A_j(t) &= A_j(0) + \sum_{l=1}^L (1 + (a_\xi(\tau_{l-1}) - A_j(\tau_{l-1})) \mathrm{sign} [ \mathcal{P}(\lambda_D A_j(\tau_{l-1})) ])I_{[0,t )}(\tau_{l}) \\ a_i(t) &= a_i(0) + \sum_{l=1}^L (1 - a_i(\tau_{l-1}) \mathrm{sign} [ \mathcal{P}(\lambda_d a_i(\tau_{l-1})) ])I_{[0,t )}(\tau_{l}).\\ \end{array} \end{equation} In the system of equations~(\ref{eqn-discrete}), the $\mathcal{P}(\kappa)$ are all independent Poisson random values, with rate parameters $\kappa$ given by their arguments. \subsection*{Herd with a limited inflow of animals}\label{sec-market} The model described in the previous section can, for a medium herd size, demonstrate a long period of animal deficit due to {an ``extinction effect'' which has been mentioned in the literature and which is} observable in our simulation study (see the discussion of Figure~\ref{fig:4ab} below). Such a situation is not a typical one, as a farmer tends to fill the gap by purchasing animals. To account for such {herd size control policies}, we will modify the model in the following way. We assume that a purchased animal is placed into the replacement herd first. Such a simple assumption nevertheless allows us to incorporate a market inflow by reformulating the meaning of the $\eta$ variable in equation~(\ref{eqn-discrete}) only. Namely, a zero value of this variable will now correspond to the {\em event of a male animal birth}, whereas for a female birth the variable's value is still the index of the dam in the main herd.
Then we can rewrite the equations for the replacement herd in the following simple form \begin{align} \label{eqn-discrete-market} d_i(t) =& d_i(0) + \sum_{l=1}^L \big(-d_i(\tau_{l-1}) +\mathrm{sign}(\eta) f(D_\eta(\tau_{l-1}), S_\zeta) \nonumber\\ & + (1-\mathrm{sign}(\eta)) D_M \big)\mathrm{sign} [ P_d(\lambda_d a_i(\tau_{l-1})) ] I_{[0,t )}(\tau_{l}) \nonumber \\ a_i(t) =& a_i(0) + \sum_{l=1}^L \bigg(1 +\big( (1-\mathrm{sign}(\eta) )a_M \nonumber \\ & - \mathrm{sign}(\eta) a_i(\tau_{l-1})\big) \mathrm{sign} [ P_d(\lambda_d a_i(\tau_{l-1})) ] \bigg) I_{[0,t)}(\tau_{l}). \end{align} where $D_M$ is the random variable defining the distribution of the modeled allele in animals from the market, and $a_M$ is the random variable for the age distribution of cows at the market. As one can see from equations~(\ref{eqn-discrete-market}), when $\eta=0$ (a male is born) the $\mathrm{sign}$-function gives a non-zero contribution to the terms with index $M$, and the latter can be interpreted as the incorporation of an animal from the market into the replacement herd. When $\eta>0$, the factor $(1-\mathrm{sign}(\eta))$ zeros the market contribution, restoring the original system (\ref{eqn-discrete}). One additional technical advantage of this variant of the model is that the size of the replacement herd is now constant, which simplifies working with the system, especially for the purposes of numerical simulation. \section*{Simulation of genotype dynamics}\label{sec-sim} {In order to account for genomic selection models in animal breeding, we} now show some simulation results under the rule of genotype sum and controlled mating. We continue using the $-1, 0, 1$ coding. In each example we begin with a dam of genotype $-1$ (homozygous with the allele to be eliminated); the dam and her resulting progeny are then inseminated by a sequence of sires of known genotypes.
{We choose the time points for the discretization with a constant one-year spacing, that is, we set $\tau_l - \tau_{l-1}=1$ (year), $l=0,1,\ldots,L$.} At the initial time {$\tau_0$} it is also assumed that the distribution of alleles in the replacement herd is the same as in the main one. This leads, as we see later, to a two-year lag in the switching dependence. First, we consider the dynamics for an unconditional switch of the genotype into state $1$ (homozygous with the allele to be introduced). We achieve this goal by a sequence of sires that all have genotype $1$. In Figure~\ref{fig:1ab} we demonstrate two realizations of a single trajectory in this case. It {is} worth mentioning that every plot shows the dependence of the gene index in {a particular, say the $j$-th,} slot in the array of animals in the herd. Thus in Figures~\ref{fig:1ab}a,b only jumps can be definitely interpreted as an animal change, whereas horizontal lines could, in principle, correspond to the replacement at any time step of an animal by another one with the same genotype. We stress that single trajectories (Figure~\ref{fig:1ab}) and bundles of trajectories (Figure~\ref{fig:2}) for the transition of an SNP from one state into another may differ quite markedly from the mean (Figure~\ref{fig:4ab}(a)). \begin{figure}[h] \begin{center} \includegraphics[width=5.5in]{fig1.eps} \end{center} \caption{ {\bf Two randomly chosen genotype trajectories. The dependent variable is the genotype value at a fixed index slot of the array of the main herd; see the text for details. } } \label{fig:1ab} \end{figure} Next, in Figure~\ref{fig:2}, we plot a bundle of 100 trajectories with jittering, so that the probability of each route may be inferred from the plot.
{The density of lines allows visual estimation of the number of animals in the particular states $\{-1,0,1\}$.} \begin{figure}[h] \begin{center} \includegraphics[width=3in]{fig2.eps} \end{center} \caption{ {\bf Bundle of 100 genotype trajectories.} } \label{fig:2} \end{figure} Finally, in Figure~\ref{fig:3}, we illustrate the effect when one of the sires has the genotype $-1$, {again} considering individual trajectories. As one can see from this plot, a single fault (using a sire with $g=-1$) can seriously slow down the transition period. It needs to be pointed out that in Figures~\ref{fig:2}~and~\ref{fig:3} the near vanishing of genotype $-1$ at the end of the simulation period does not mean that it has been eliminated from the whole of the herd. \begin{figure}[h] \begin{center} \includegraphics[width=3in]{fig3.eps} \end{center} \caption{ {\bf Bundle of 50 genotype trajectories, with one bull in the sequence (the third one) having $g=-1$.} } \label{fig:3} \end{figure} {Now we turn to the statistical characteristics of ensembles of simulated herds.} In Figure~\ref{fig:4ab} we show the temporal evolution of the mean (a) and variance (b) for the case of 1000 herds, {each simulation} {starting} from a ``herd'' of 200 homozygous dams in the main herd (all initial genotypes are $-1$) and {then} developing independently. \begin{figure}[h] \begin{center} \includegraphics[width=5.5in]{fig4.eps} \end{center} \caption{ {\bf Mean (left) and variance of the mean (right) of the genotype value for 1000 herds; every main herd size is 200, the replacement herd size is 100 at the initial time, $\lambda_D=0.114, \lambda_d=0.25$.} } \label{fig:4ab} \end{figure} An important feature discovered in this simulation is that the herd size evolves in time toward stabilization at a level which is less than the upper bound chosen in the simulation. The corresponding plot of the mean herd size is shown in Figure~\ref{fig:5}. It looks like some sort of ``partial extinction'' and can be explained as follows.
Any positive fluctuation in the number of females born at a given time step is cut off by application of the upper-bound control policy of assumptions 1 and 2. \begin{figure}[h] \begin{center} \includegraphics[width=3in]{fig5.eps} \end{center} \caption{ {\bf The dependence of the mean main herd size upon time. All parameters are the same as in Figure~\ref{fig:4ab}. } } \label{fig:5} \end{figure} In contrast to this, rare strong negative fluctuations in the number of females born, which produce a deficit of cows in the replacement herd, seriously influence the subsequent dynamics and lead to a period of slow herd size restoration. Averaging over herds {does} not improve the results, as this would include more and more rare but strong fluctuations. As we see from the last plot, the dynamics of {the} replacement herd strongly influence those of the main herd. In accord with this, the presented results on the main herd should be considered as demonstrating only tendencies, because we did not seek to investigate the influence of replacement herd management nor try to optimize it. As we have already mentioned, the at first sight unpleasant ``extinction effect'' can be eliminated by an inflow of animals from the market in the way discussed above. But a much more profitable point of view seems to be the following: ``partial extinction'' is a key ingredient in a faster switch to a given allele. In fact, the Poisson process constant $\lambda_D$ for the main herd is directly linked with the ``partial extinction'' level, as one can see from the plots in Figures~\ref{fig:mean-age}, \ref{fig:mean-freq-age}, \ref{fig:size-age}, \ref{fig:var-age}, \ref{fig:var-freq-age}, where we demonstrate how the rate of the switch of the allele from a given sign (chosen as $-$) to the opposite sign depends on the value of $\lambda_D$.
The values of $\lambda_D$ and the color sequence (red, cyan, green, blue, black) correspond to a probability of 0.8 for a cow to live in the herd up to $4,6,8,10,12$ years, respectively; averaging over 10000 herds was used in these simulations. \begin{figure}[h] \begin{center} \includegraphics[width=4in]{fig6.eps} \end{center} \caption{ {\bf Mean genotype value time evolution for 10000 herds; every main herd size is 400, the replacement herd size is 200 at initial time. The color sequence corresponds to different values of the mean lifetime constant in the main herd; see details in the text.} } \label{fig:mean-age} \end{figure} \begin{figure}[p] \begin{center} \includegraphics[width=4in]{fig7.eps} \end{center} \caption{ {\bf Mean displayed allele frequency time evolution for 10000 herds; every main herd size is 400, the replacement herd size is 200 at initial time. The color sequence corresponds to different values of the mean lifetime constant in the main herd; see details in the text.} } \label{fig:mean-freq-age} \end{figure} \begin{figure}[p] \begin{center} \includegraphics[width=4in]{fig8.eps} \end{center} \caption{ {\bf Mean main herd size time evolution for 10000 herds; every main herd size is 400, the replacement herd size is 200 at initial time. The color sequence corresponds to different values of the mean lifetime constant in the main herd; see details in the text.} } \label{fig:size-age} \end{figure} \begin{figure}[p] \begin{center} \includegraphics[width=4in]{fig9.eps} \end{center} \caption{ {\bf Variance of the mean genotype value evolution for 10000 herds; every main herd size is 400, the replacement herd size is 200 at initial time.
The color sequence corresponds to different values of the mean lifetime constant in the main herd; see details in the text.} } \label{fig:var-age} \end{figure} \begin{figure}[p] \begin{center} \includegraphics[width=4in]{fig10.eps} \end{center} \caption{ {\bf Variance of the displayed allele frequency for 10000 herds; every main herd size is 400, the replacement herd size is 200 at initial time. The color sequence corresponds to different values of the mean lifetime constant in the main herd; see details in the text.} } \label{fig:var-freq-age} \end{figure} One of the interesting features of the model is its sensitivity to a fine detail of the transfer of animals from the replacement herd to the main one. We consider two variants. In the first, we fill the main herd up to its size limit from the replacement herd first, and then make a random choice of the way the remaining animals in the replacement herd leave it. In the second variant, we first make a random leave/stay choice for each animal and then specify the mode of leaving (if successful). It turns out that at intermediate times these two slightly different procedures give observably different behaviour, as demonstrated in Figure~\ref{fig:2models-mean} for the mean values, in Figure~\ref{fig:2models-var} for the variances, and in Figure~\ref{fig:2models-size} for the mean size of the main herd. For the gene index dynamics both curves are very close and the variances differ only slightly, but the mean herd size is strongly influenced by the control scheme. This leads to the conclusion that one must be very careful when formulating any control scheme for such a dynamical system: schemes that seem very close at first glance can produce significantly different results. \begin{figure}[p] \begin{center} \includegraphics[width=4in]{fig11.eps} \end{center} \caption{ {\bf Comparison of the time dependence of the mean main herd gene index for the two models. The thin red line is for model 1, the thick blue line for model 2. All parameters are the same as in Figure~\ref{fig:4ab}.
} } \label{fig:2models-mean} \end{figure} \begin{figure}[p] \begin{center} \includegraphics[width=4in]{fig12.eps} \end{center} \caption{ {\bf Comparison of the time dependence of the mean main herd gene index variance for the two models. The thin red line is for model 1, the thick blue line for model 2. All parameters are the same as in Figure~\ref{fig:4ab}. } } \label{fig:2models-var} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[width=4in]{fig13.eps} \end{center} \caption{ {\bf Comparison of the time dependence of the mean main herd size for the two models. The thin red line is for model 1, the thick blue line for model 2. All parameters are the same as in Figure~\ref{fig:4ab}. } } \label{fig:2models-size} \end{figure} \section*{Discussion} To summarize, we have constructed a system of stochastic differential equations that can model the temporal evolution of biallelic polymorphism in a deme under conditions of controlled mating. The model incorporates peculiarities typical of New Zealand dairy herd management, such as the split into main and replacement herds, a typical lifetime distribution, size control for the main herd, and the rule of inflow from the replacement herd into the main one. Currently the model is implemented in R, but an open source C++ version is under development \cite{Guy}. Simulations have demonstrated that when a maximum herd size is imposed, local fluctuations in the number of newborn animals strongly influence the system dynamics and lead to an observable diminishing of the herd size (partial extinction). To suppress this feature, which is not observed in real farm situations, the model has been further adjusted to allow for an external inflow of animals from the market. Another important conclusion is that the investigation of replacement herd management policy could be of great importance for reaching optimization goals.
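The sensitivity to the replacement-to-main transfer rule noted earlier can be illustrated with a minimal sketch of the two variants. The function names are hypothetical, and the leave probability of 0.5 and the exit routes are placeholders rather than the model's actual parameters:

```python
import random

def variant_fill_first(replacement, capacity, rng):
    # Variant 1: promote animals until the main herd reaches its size
    # limit, then draw a random exit route for every remaining animal.
    rng.shuffle(replacement)
    promoted = replacement[:capacity]
    exits = {a: rng.choice(["sold", "culled"]) for a in replacement[capacity:]}
    return promoted, exits

def variant_choose_first(replacement, capacity, rng):
    # Variant 2: first draw leave/stay for every animal, then promote
    # the stayers up to the size limit and assign exit routes to leavers.
    leavers = {a for a in replacement if rng.random() < 0.5}
    promoted = [a for a in replacement if a not in leavers][:capacity]
    exits = {a: rng.choice(["sold", "culled"]) for a in leavers}
    return promoted, exits
```

Variant 1 always fills the main herd when enough animals are available, whereas variant 2 may leave it underfilled after a run of unlucky draws; this is one way two seemingly close schemes can drift apart.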
\section*{Acknowledgments} Our thanks go to Jack Hooper of Livestock Improvement Corporation for reading drafts of this article and for valuable comments on existing herd management practice in New Zealand. \newpage
\section{Introduction} \label{Introduction} SrRuO$_3$ (SRO) is a ferromagnetic metal oxide with a Curie temperature $T_c$ of 160 K \cite{j1}. It is often used as a gate electrode due to its high conductivity and ease of epitaxial growth \cite{j2}. SRO has attracted considerable attention because of its intriguing electronic behaviors; for example, it loses its itinerant ferromagnetism as the thickness approaches approximately four unit cells \cite{j3,j4,j5}. Such behavior, referred to as the metal-insulator transition (MIT) of SRO, is a stark deviation from the properties of bulk SRO, which is known to be only weakly correlated \cite{j6}. There have been many theoretical efforts to explain the origin of the MIT. \\ \indent In density functional theory (DFT) studies, the on-site Coulomb interaction parameter, $U$, has been used to model the electronic correlations of the $d$-orbitals. However, DFT+$U$ calculations of ultrathin SRO have been inconsistent with experimental results. For instance, SRO remained metallic even for unrealistically large $U$ values \cite{j23} and extreme (one-unit-cell) thicknesses \cite{j24}. Hence, in addition to $U$, extrinsic factors such as surface relaxation, in-plane strain, and disorder have been suggested as possible origins of the MIT. For instance, an effective Coulomb potential of about 2--3 eV combined with strong surface relaxation \cite{j20,j21,j22}, or DFT+$U$ under large tensile strain \cite{j9}, produced the insulating phase. However, experimentally, STO/ultrathin SRO/STO under compressive strain, without surface relaxation or reconstruction, also exhibited insulating behavior \cite{j13}. Hence, additional DFT+$U$ studies are required to provide a clear description of insulating SRO. \\ \indent This theoretical difficulty arises from two main factors.
First, the physics of ultrathin SRO cannot be described precisely by only a few parameters, such as $U$ \cite{j23} and the in-plane strain exerted by STO \cite{j9,j24}, and delicate structural alterations may result in drastic changes in physical properties. For these reasons, Hund's coupling \cite{j33}, dynamical correlation \cite{j23}, and dimensionality reduction \cite{j3,j24}, as seen in the MIT of SrVO$_3$ \cite{j34}, have also been proposed as possible causes of the MIT. Second, the electronic structure is expected to differ layer by layer within ultrathin systems, regardless of surface relaxation \cite{j32}. Owing to this sensitivity, the ultrathin system can exhibit as-yet-unreported electronic or magnetic behaviors (e.g., compare Ref. 8 with Ref. 11). Hence, to fully understand the MIT of SRO, we first need to carefully analyze the electronic structure of each atomic layer of ultrathin SRO.\\ \indent Experimentally, SRO exhibited electronic anomalies that could not be explained in terms of $U$ \cite{j49,j50,j51}. Furthermore, although spectroscopic studies have reported a vestige of the lower Hubbard band \cite{j15,j16} and a hard gap originating from spectral incoherency in thin-film SRO \cite{j10}, some studies have argued that the highly correlated spectra of surface SRO contaminate the signal and thus exaggerate such incoherency \cite{j17,j18}. These results imply that the MIT in SRO cannot be explained merely with Mott-Hubbard physics, which is consistent with what has been predicted in theoretical studies. \\ \indent On the other hand, the physics of ultrathin SRO has depended significantly on the substrate in experiments. For instance, SrTiO$_3$ (STO) substrates exert compressive in-plane strain, making rotations and tilts of the oxygen octahedra of SRO energetically unfavorable.
As a result, the lattice system of ultrathin SRO underwent an orthorhombic-to-tetragonal phase transformation (Figure \ref{structure}(c)) and the magnetization was significantly suppressed \cite{j28,j29,j30}. Furthermore, the tetragonality increased as the thickness of the SRO film decreased \cite{j29}. Hence, we need to scrutinize not only the layer-by-layer dependence of the electronic states but also the effects of the tetragonality and the substrates in ultrathin SRO. \\ \indent In this letter, we report electron energy-loss (EEL) spectroscopy of SRO with three-unit-cell (capped insulator) and 24 nm (metal) thicknesses near the O-$K$ edge ($1s\rightarrow2p$ transition). Interestingly, we find that the central layer of the insulating SRO exhibits features distinct from those of metallic and interfacial SRO. To identify whether these features originate from the in-plane strain and the corresponding strong tetragonality, we perform DFT calculations of the STO/SRO/STO superlattice with highly suppressed tilts and rotations of the oxygen octahedra \cite{j48}, and compare them with SRO bulk under a tetragonal crystal-field. By analyzing the characteristics of the spectra and the computational results, we provide comments on the possible origins of the MIT. \begin{figure}[t]% \begin{center} \includegraphics*[width=0.9\linewidth]{a.png} \end{center} \caption{% (a) High-angle annular dark-field (HAADF) image of capped SrRuO$_3$ (SRO) with three-unit-cell thickness. (b) Schematic diagram of capped SRO. (c) Pseudo-cubic (pc) and tetragonal (t) lattice structure of ultrathin SRO grown on a SrTiO$_3$ (STO) substrate without capping. Purple, yellow, green, and red circles indicate Sr, Ru, Ti, and O atoms, respectively, as indicated in (a). } \label{structure} \end{figure} \section{Experimental} \label{Theoretical} SRO films were grown on a (001) TiO$_2$-terminated STO substrate.
On the substrate, we deposited SRO films using pulsed laser deposition with an oxygen pressure of 0.1 Torr and a laser fluence of 1.5 J/cm$^2$ at 700 \textdegree{}C. The growth rate of the SRO films was approximately 0.013 nm/s. A focused ion beam was used to prepare specimens for transmission electron microscopy (TEM) analysis. A JEOL-ARM200F scanning TEM (STEM) provided high-angle annular dark-field (HAADF) images and EEL spectra near the O-$K$ edge. The scanning rate of the STEM-EELS detector was 0.1 s/pixel. The electrical resistivity of the SRO films was measured using the standard four-probe method.\\ \indent To calculate the density of states (DOS) of the (STO)$_3$/(SRO)$_3$/(STO)$_3$ superlattice (S3) and tetragonally elongated SRO (ST), we adopted computational procedures similar to those used in Ref. 19. The calculations were performed using the plane-wave basis set and the projector-augmented wave method implemented in the Vienna ab initio simulation package \cite{j45}. We used the generalized gradient approximation with the PBEsol functional \cite{j46}. Starting from a ferromagnetic configuration, we chose a weak correlation $U_{\text{eff}}$ = 1 eV, which is suitable for approximating the experimental results \cite{j22,j23}. A plane-wave energy cutoff of 500 eV was used with Monkhorst-Pack $k$-point sampling of 21 $\times$ 21 $\times$ 1 for S3 and 21 $\times$ 21 $\times$ 21 for ST. Convergence of the samplings was checked up to 41 $\times$ 41 $\times$ 1 and 61 $\times$ 61 $\times$ 61, respectively. From the Poisson effect, the pseudo-cubic out-of-plane parameter of SRO on a STO substrate was estimated to be approximately 3.9635 {\AA} \cite{j25,j27,j36,j37,j38}. Hence, for S3 and ST, we fixed the pseudo-cubic in-plane lattice parameter of SRO to 3.905 {\AA}, the lattice parameter of cubic STO, and the out-of-plane parameter to 3.9635 {\AA}.
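For reference, the settings above could be expressed in a VASP input fragment roughly as follows. This is a sketch only, not the actual input files used: the species order, the restriction of $U$ to the Ru $d$-shell, and all untouched defaults are our assumptions.

```
GGA = PS          ! PBEsol exchange-correlation functional
ENCUT = 500       ! plane-wave cutoff (eV)
ISPIN = 2         ! spin-polarized, ferromagnetic starting configuration
LDAU = .TRUE.     ! DFT+U, Dudarev scheme: U_eff = U - J
LDAUTYPE = 2
LDAUL = -1 2 -1   ! apply U only to the Ru d-shell (Sr Ru O order assumed)
LDAUU =  0 1  0   ! U = 1 eV on Ru
LDAUJ =  0 0  0
```

The $k$-point meshes quoted above would go in a separate Monkhorst-Pack KPOINTS file.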
\begin{figure}[t]% \begin{center} \includegraphics*[width=0.7\linewidth]{res.png} \end{center} \caption{% Temperature-dependent resistivity of SrRuO$_3$ (SRO) specimens with thicknesses of six and three unit cells, measured using the standard four-probe method. } \label{res} \end{figure} \section{Results and Discussion} In Figure \ref{structure}(a), SRO with a three-unit-cell thickness is capped by STO. In this environment, large surface relaxation cannot occur. Figure \ref{res} presents the resistivity data of the capped three- and six-unit-cell SRO specimens, which exhibited similar tendencies to those reported previously \cite{j3,j10,j39}. The resistivity of SRO with a six-unit-cell thickness increased with temperature, meaning that it was metallic. In addition, at approximately 160 K, a slope change occurred, which is a typical behavior of ferromagnetic materials near the Curie temperature. On the other hand, at a three-unit-cell thickness, SRO lost both its ferromagnetic and metallic behaviors (SRO MIT). Hence, we reconfirmed that surface reconstruction is not a generic origin of the MIT. \begin{figure}[t]% \begin{center} \includegraphics*[width=0.8\linewidth]{b.png} \end{center} \caption{(a) Electron energy-loss (EEL) spectra of capped SrRuO$_3$ (SRO) with three-unit-cell thickness near O-$K$ edge. (b) EEL spectra of SRO (24 nm)/SrTiO$_3$ (STO) near O-$K$ edge. Distances indicated in the legend correspond to the separation between the SRO/STO interface and the position of the EEL probe as indicated in the inset of (b).} \label{3uc} \end{figure} To scrutinize the electronic structure of SRO under the MIT, we measured EEL spectra of both STO and SRO regions in our three-unit-cell SRO specimen (Figure \ref{3uc}(a)). The central and interfacial layers of the three-unit-cell SRO refer to the specific regions shown in Figure \ref{structure}(b).
The overall shapes of the EEL spectra of STO were in agreement with a previous report \cite{j40}, showing a three-peak feature with peaks located at approximately 529, 534, and 542 eV. Based on a previous study \cite{j41}, the onset peak near 529 eV in the STO region is associated with the $t_{2g}$ states of Ti, and the second peak at 534 eV is related to the $e_g$ states. The EEL spectra of SRO also displayed the three-peak feature. Considering that the $4d$ orbitals of Ru and the $2p$ orbitals of O are highly hybridized \cite{j35}, the first two peaks are designated as $t_{2g}$- and $e_g$-related states. Note that the O-$K$ spectrum of SRO adjacent to STO is very similar to that of STO, in both the metallic and insulating SRO films. More importantly, the O-$K$ edge at the central layer of insulating SRO is clearly distinct from the others: the central layer of metallic SRO, a layer of metallic SRO two unit cells from the interface (the same distance from the interface as the central layer), and insulating SRO near the interface. For instance, the intensity of the $t_{2g}$-related states significantly increases at the central SRO in Figure \ref{3uc}. At the SRO/STO interface, SRO with a thickness of 24 nm (Figure \ref{3uc}(b)) displayed the STO-overlapped three-peak features, analogous to Figure \ref{3uc}(a); however, farther from the interface, intermediate signals emerged between 529 and 534 eV. Furthermore, the peak of the $t_{2g}$-related states became ambiguous in the metallic region, which is distinguishable from the features of the central spectra of the three-unit-cell SRO. These results show that we cannot ignore the influence of the STO substrate near the interfaces, regardless of the thickness of the SRO film, and that the electronic states of the central layer of the insulating SRO cannot be explained merely by interfacial effects. The in-plane strain and strong (tetragonal) crystal-field significantly modify the electronic states of ultrathin SRO \cite{j8,j9,j20}.
The STO substrates suppress the rotations and tilts of the oxygen octahedra of SRO \cite{j28,j29,j30,j48}. The resultant lattice system is tetragonal, as shown in Figure \ref{structure}(c). In this manner, unlike in uncapped SRO thin films, the rotations along the in-plane axes would be suppressed even further in our capped SRO, reinforcing the tetragonal lattice system. Hence, the strain and strong tetragonality can be correlated with the intriguing behaviors of the spectra. To theoretically analyze the spectra and the effects of the strain and tetragonality, we used DFT to obtain the projected density of states of our system, adopting compressively strained SRO structures (S3 and ST) without the rotations and tilts of the oxygen octahedra. Figure \ref{dos}(a) shows the projected densities of the $p$-states of the oxygen atoms in S3; as expected from the EEL spectra, clear differences were observed between the central and interfacial layers. In particular, the central layer had a more pronounced DOS at the Fermi level than the interfacial layers. At approximately 6 eV above the Fermi level, we can see the $e_g$-related DOS. However, none of the layers displayed insulating behavior (Figure \ref{dos}), which is inconsistent with the experimental results. \begin{figure}[t]% \includegraphics*[width=\linewidth]{c.png}% \caption{(a) Projected density of $p$-states of oxygen atoms and (b) projected density of $d$-states of Ru atoms in ST and S3 calculated by density functional theory with $U_{\text{eff}}=1$ eV. The Fermi level is at 0 eV. Positions of the first, second, and third layers are shown in (a). Original densities of states were convoluted by a Gaussian function with a width of 0.1 eV.} \label{dos} \end{figure} In the EEL spectra, the interfacial regions were significantly influenced by STO irrespective of the thickness of SRO (Figure \ref{3uc}(a) and (b)), while the central layer retained its distinct character with respect to both the interfacial and bulk regions.
In other words, the electronic states of the capped SRO are highly overlapped with those of STO, and the distinct character of the central layer is probably due to its relatively weak overlap compared to the interfacial regions. Hence, the effective crystal-field splitting may not be the critical component of the system; rather, the overlap with STO is important. In fact, although the calculation results for S3 were not perfectly consistent with the experimental results, they also showed the potential influence of the hybridization with STO. The electronic states of the central layer of S3 differed from those of ST, i.e., SRO under a high tetragonal crystal-field and without STO substrates (Figure \ref{dos}). This indirectly suggests that the hybridization cannot be overlooked. On the other hand, it is also possible that the STO substrates induce dimensionality reduction (i.e., abrupt truncation of the wave function of SRO). In this case, a van Hove singularity results in distinct electronic states irrespective of the hybridization with STO \cite{j3,j24}. However, we find no signs of low dimensionality in either the experimental or the computational results. For instance, the projected DOS of the $d$-states differed significantly layer by layer (Figure \ref{dos}(b)). Furthermore, as mentioned earlier, the interfacial regions were highly overlapped with STO, meaning that the wave function may not be truncated at the SRO/STO interface. Lastly, although we adopted a lower correlation ($U_{\text{eff}}$) compared to some studies (e.g., Refs. 17, 18, and 21), merely increasing $U_{\text{eff}}$ would not guarantee a successful reproduction of real SRO \cite{j42}. To confirm whether a high correlation results in the insulating state, we produced the DOS of S3 with an unrealistically large correlation (i.e., $U_{\text{eff}}=6$ eV). However, it was neither insulating nor more consistent with the experimental results than S3 with $U_{\text{eff}}=1$ eV (see Supporting Information).
In other words, simply adopting high localization and strong tetragonality would not reproduce the MIT of SRO. Hence, it is possible that dynamical correlation is important in this system \cite{j23}. With such dynamical effects included, the relationship between the hybridization with STO and the formation of the insulating SRO should be checked. \section{Summary and Conclusion} We fabricated a STO/SRO (three-unit-cell)/STO system to identify the electronic characteristics of ultrathin SRO. HAADF-STEM images showed an atomically sharp interface, and EEL spectra revealed that the electronic state of the central SRO differs from that of interfacial and bulk SRO even at a three-unit-cell thickness. In particular, the $t_{2g}$-related states of the central SRO are suspected to represent distinct physics. To theoretically analyze the EEL spectra, we performed DFT calculations. However, even though we strongly constrained the rotational degrees of freedom and artificially maintained the tetragonality of the superlattice, the calculations did not sufficiently reflect the experimental results. Based on our theoretical and experimental results, we expect that consideration of extra degrees of freedom, beyond the effective crystal-field, high localization, and van Hove singularities, may be required to explain the distinct features of the central layer. \newpage \section{Supporting Information} To confirm whether an unrealistically high $U$ and strong tetragonality produce the insulating phase, we calculated the DOS of S3 with $U_{\text{eff}}=6$ eV (Figure \ref{u6}). Although the DOS of the Ru $d$-states near the Fermi level was significantly decreased compared to Figure \ref{dos}(b), our system did not exhibit the insulating state. Furthermore, at the Fermi level, the DOS of the O $p$-states of the central layer was lowered, which is inconsistent with our experimental results (Figure \ref{3uc}(a)).
\begin{figure}[ht]% \includegraphics*[width=\linewidth]{U6.png}% \caption{(a) Projected density of $p$-states of oxygen atoms and (b) projected density of $d$-states of Ru atoms in ST and S3 calculated by density functional theory with $U_{\text{eff}}=6$ eV. The Fermi level is at 0 eV. Positions of the first, second, and third layers are shown in (a). Original densities of states were convoluted by a Gaussian function with a width of 0.1 eV.} \label{u6} \end{figure}
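The Gaussian broadening quoted in the figure captions can be reproduced with a short sketch, assuming the 0.1 eV width is the Gaussian $\sigma$ and a uniform energy grid; the function name is ours, not part of any package:

```python
import numpy as np

def broaden_dos(energies, dos, sigma=0.1):
    """Convolve a raw DOS with a unit-area Gaussian of width sigma (eV).
    The energy grid must be uniformly spaced."""
    de = energies[1] - energies[0]
    half = int(round(5.0 * sigma / de))          # +/- 5 sigma window
    kernel_e = np.arange(-half, half + 1) * de   # symmetric, odd length
    kernel = np.exp(-kernel_e**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()  # normalize so the integrated DOS is conserved
    return np.convolve(dos, kernel, mode="same")
```

Normalizing the kernel keeps the total number of states unchanged, so sharp peaks are lowered and widened while their integrated weights are preserved.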
\section{Introduction}\label{sec1} Gamma-ray bursts (GRBs) are traditionally classified into short GRBs, with a total duration $\lesssim 2$~s, and long GRBs, lasting $\gtrsim 2$~s \citep{1981Ap&SS..80....3M,Dezalay1992,Klebesadel1992,Kouveliotou1993,Tavani1998}. The large majority of long bursts are spatially correlated with bright star-forming regions in their host galaxies \citep{Fruchter2006,Svensson2010}. {For this reason long GRBs have been traditionally associated with the collapse of the core of a single massive star to a black hole (BH) surrounded by a thick massive accretion disk: the \textit{collapsar} \citep{1993ApJ...405..273W,1998ApJ...494L..45P,1999ApJ...524..262M,2004RvMP...76.1143P,2013ApJ...764..179B}. In this traditional picture the GRB dynamics follows the ``fireball'' model, which assumes the existence of a single ultra-relativistic collimated jet \citep[see e.g.][]{1976PhFl...19.1130B,1990ApJ...365L..55S,1993MNRAS.263..861P,1993ApJ...415..181M,1994ApJ...424L.131M}. The structures of long GRBs were described by either internal or external shocks \citep[see][]{1992MNRAS.258P..41R,1994ApJ...430L..93R}. The emission processes were linked to synchrotron and/or inverse-Compton radiation coming from the single ultrarelativistic jetted structure, characterized by Lorentz factors $\Gamma \sim 10^2$--$10^3$.} {Such a \textit{collapsar} model does not address some observational facts: 1) most massive stars are found in binary systems \citep{Smith2014}, 2) most type Ib/c SNe occur in binary systems \citep{Smith2011}, and 3) the SNe associated with long GRBs are indeed of type Ib/c \citep{DellaValle2011}.
These facts motivated us to develop the binary-driven hypernova (BdHN) model.} Recently we have found evidence for multiple components in long GRB emission, indicating the presence of a sequence of astrophysical processes \citep{2012A&A...543A..10I,2012A&A...538A..58P}, which has led us to formulate in precise terms the sequence of events in the Induced Gravitational Collapse (IGC) paradigm \citep{Ruffini2001c,Ruffini2007,Rueda2012,Fryer2014}, making explicit the role of binary systems as progenitors of long GRBs. Within the IGC scenario the long bursts originate in tight binary systems composed of a carbon-oxygen core (CO$_{\rm core}$) undergoing a SN explosion and a companion neutron star (NS) \citep{Becerra,2016ApJ...833..107B,2018arXiv180304356B}. The SN explosion triggers a hypercritical accretion process onto the companion NS: photons are trapped in the infalling material and the gravitational energy gained by accretion is carried away through efficient neutrino emission \citep{Zeldovich1972,RRWilson1973,Fryer2014}. Depending on the CO$_{\rm core}$-NS binary separation/period, two outcomes may occur. For widely separated ($a\gtrsim10^{11}$~cm) CO$_{\rm core}$-NS binaries, the hypercritical accretion rate is $<10^{-2}~M_\odot$~s$^{-1}$ and is insufficient to induce the gravitational collapse of the NS to a BH. Instead, the NS just increases its mass, becoming a massive NS. This process leads to the emission of the so-called X-ray flashes (XRFs), with a typical X-ray emission $\lesssim 10^{52}$~erg. For more tightly bound ($a\lesssim10^{11}$~cm) CO$_{\rm core}$-NS binaries the hypercritical accretion rate of the SN ejecta can be as large as $\gtrsim10^{-2}$--$10^{-1}~M_\odot$~s$^{-1}$, leading the companion NS to collapse to a BH. This process leads to the occurrence of {a BdHN,} which exhibits a more complex structure than XRFs and an emission $\gtrsim 10^{52}$~erg \citep{2016ApJ...832..136R}.
The case for introducing the BdHN model, based on binary progenitors, exhibiting a large number of new physical processes, and admitting a theoretical treatment by detailed equations whose solutions are in agreement with the observations, has been presented in a large number of publications, recently summarized in \citet{2018ApJ...852...53R}. There we performed an extensive analysis of 421 BdHNe, all with measured redshift, observed until the end of 2016 and described in their cosmological rest frame \citep{2016ApJ...833..159P}. The large variety of spectra and light curves has allowed the introduction of seven different GRB subclasses; see e.g. \citet{2016ApJ...832..136R} and \citet{2016arXiv160203545R}. We recall that since 2001 we have fitted the Ultra-relativistic Prompt Emission (UPE) light curves and spectra by solving the equations of the dynamics of the $e^+e^-$-baryon plasma and of {its slowing down due to the interaction} with the circumburst medium \citep[CBM, see e.g.][]{1999A&A...350..334R,2002ApJ...581L..19R,2000A&A...359..855R}. This treatment allows us to evaluate the ultra-relativistic Lorentz gamma factor of the UPE, exhibited in hundreds of short and long GRBs. Some underluminous GRBs may well have a non-ultrarelativistic prompt emission (Rueda et al., in preparation). { Attention was then directed to examining the Flare-Plateau-Afterglow (FPA) phase following the UPE.} { We identified among the BdHNe \textit{\textbf{all}} those with a soft X-ray flare in the $0.3$--$10$~keV rest-frame energy range of the FPA phase. In view of the excellent data and complete light curves, we could identify in them a thermal component (see Fig.~32 and Table~7 in \citet{2018ApJ...852...53R}), essential in measuring the mildly relativistic} expansion velocity of $v = c \beta \sim 0.8 c$; see section 9 in \citet{2018ApJ...852...53R}.
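A back-of-the-envelope consequence of this treatment (a sketch only, not the full fireshell integration): by energy conservation, for baryon loads $B\lesssim10^{-2}$ the pair-baryon plasma attains an asymptotic Lorentz factor of order $1/B$, which can be checked against the values found below for GRB 151027A:

```python
# Values quoted later in this paper for the first UPE spike of
# GRB 151027A; the 1/B rule of thumb is an approximation.
B = 1.92e-3          # baryon load
gamma_fit = 503.0    # fitted initial Lorentz factor
gamma_err = 76.0     # its uncertainty

gamma_est = 1.0 / B  # asymptotic estimate from energy conservation
assert abs(gamma_est - gamma_fit) < gamma_err
```

The estimate ($\approx 521$) lands within one standard deviation of the fitted $\Gamma_0$, as expected for a baryon load in this range.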
{ In addition we then followed, through a hydrodynamical description, the propagation and slowing down inside the SN ejecta of the $e^+e^-$ plasma generated in the BH formation, in order to explain the mildly relativistic nature of the soft X-ray flare expansion velocity; see section 10 in \citet{2018ApJ...852...53R}.} { Obviously these considerations cannot be repeated here.} {We only recall a few points of the conclusions of \citet{2018ApJ...852...53R}, e.g.: a) the data of the soft X-ray flare have determined its mildly relativistic expansion velocity already $\sim 100$~s after the UPE, in contrast to the traditional approach; b) the role of the interaction of the $e^+ e^-$ GRB emission with the SN ejecta in explaining the astrophysical origin of the soft X-ray flare; c) the determination of the density profile of the SN ejecta derived from the simulation of the IGC paradigm.} { In this article we apply our model to study a multiple component in the UPE phase observed in the range of $10$--$1000$ keV, as well as the hard X-ray flares observed in the range of $0.3$--$150$ keV, the extended thermal emission (ETE), and finally the soft X-ray flare observed in the range of $0.3$--$10$ keV, using GRB 151027A as a prototype. The aim is to identify the crucial role of the SN and of its binary NS companion in the BdHN model, to analyze the interaction of the $e^+ e^-$ plasma generating the GRB with the SN ejecta via 3D simulations, and to compare and contrast the observational support of the BdHN model with the other traditional approaches. To assist the reader, we have made a special effort to give references to the current works, to indicate new developments and their observational verifications, and to give references for the technical details in the text.} { In section~\ref{sec:progress_of_bdhn} we outline the new results motivating our paper: 1) three thermal emission processes in GRBs compared and contrasted.
Particularly relevant for our article is the relativistic treatment relating the expansion velocities of the hard X-ray flare, the soft X-ray flare and the ETE to the observed fluxes and temperatures. 2) The 3D simulations of the hypercritical accretion in a BdHN, essential for obtaining the density profiles of the SN ejecta, recently submitted for publication in \citet{2018arXiv180304356B}. 3) The generalization of the space-time representation of the BdHN. These are useful conceptual tools needed to create a viable GRB model. } { In section~\ref{sec2} we refer to GRB 151027A as a prototype example of high-quality data, enabling the detailed time-resolved analysis of the UPE phase, with its thermal component, as well as the first high-quality data for studying the hard X-ray flare and, especially, the clear evolution of the ETE. We perform the time-integrated analysis for the UPE, we further analyze the two ultra-relativistic gamma-ray spikes in the UPE, and we apply the fireshell model to the first spike, identifying the P-GRB, the baryon load $B=(1.92\pm0.35)\times10^{-3}$ and an average CBM density of $(7.46\pm1.2)$~cm$^{-3}$, which are consistent with our numerical simulation presented in section~\ref{sec5}. We determine an initial Lorentz factor of the UPE $\Gamma_0=503\pm76$, confirming the clearly observed ultra-relativistic nature of the UPE. } { In section~\ref{sec3} we perform the time-resolved analysis of the hard X-ray flare and the soft X-ray flare, comparing and contrasting our results with those in the literature by \cite{2017A&A...598A..23N}. The hard X-ray flare is divided into 8 time intervals and we find a highly significant thermal component in all of them (see Fig.~\ref{fig4}). We report the results of our time-resolved spectral analysis in the first five columns of Table~\ref{tab1}.
Using the best-fit model for the non-thermal component in the time interval $95$--$130$~s, we determine a Lorentz factor $\Gamma = 3.28 \pm 0.84$ averaged over the hard X-ray flare duration. The soft X-ray flare is analyzed in $4$ time intervals, whose spectra are best fitted by a single power law. } { In section~\ref{sec4} we turn to the thermal component evolving across the hard X-ray flare by adopting the description in the GRB laboratory frame. Following our recent work \citep{2018ApJ...852...53R}, we determine the expansion velocity, which increases from an initial value of $\approx0.38~c$ up to $0.98~c$ in the late part, see the $\beta$ column of Table~\ref{tab1}. This is the first relativistic treatment of the hard X-ray flare and its associated thermal emission, clearly evidencing the transition from a SN to an HN, first identified in GRB 151027A. We compare and contrast our results with the current ones in the literature. } { In section~\ref{sec5} we proceed to the theoretical explanation of the hard X-ray flare and the soft X-ray flare from the analysis of the $e^+e^-$ plasma propagating {and slowing down} within the SN ejecta. The simulated velocity and radius of the hard X-ray flare and the soft X-ray flare are consistent with the observations. We visualize all these results by direct comparison of the observational data from Swift, INTEGRAL, Fermi and Agile, in addition to the optical observations, with the theoretical understanding of the 3D dynamics of the SN recently jointly performed by our group in collaboration with the Los Alamos National Laboratory \citep{2018arXiv180304356B}. This visualization is particularly helpful in order to appreciate the novel results made possible by the BdHN paradigm, and to visualize phenomena observed today but occurring 10 billion light years away in our past light cone.
The impact of the $e^+e^-$ plasma on the entire SN ejecta gives origin to the thermal emission from the external surface of the SN ejecta; we can therefore conclude that the UPE, the hard X-ray flare and the soft X-ray flare are not a causally connected sequence (see Figs.~\ref{fig:Carlo2}, \ref{fig:model1}, \ref{fig:gamma_ray_flare}, \ref{fig:x_ray_flare} and Tab.~\ref{tab1}): within our model they are the manifestation of the same physical process of the BH formation as seen through different viewing angles, implied by the morphology and by the $\sim 300$~s rotation period of the HN ejecta. } { In section~\ref{sec6} we proceed to the summary, discussion and conclusions: \begin{itemize} \item In the summary we have recalled the derived Lorentz gamma factor and the detailed time-resolved analysis of the light curves and spectra of the UPE, hard X-ray flare, ETE and soft X-ray flare. We mention a double-spike structure in the UPE and in the FPA, which promises to be directly linked to the process of the BH formation. We have equally recalled our relativistic treatment of the ETE, which has allowed us to observe for the first time the transition of a SN into a HN: the main result of this paper. \item In the discussions we have recalled, using specific examples in this article, that our data analysis is performed within a consistent relativistic field-theoretical treatment. In order to be astrophysically significant, it needs the identification of the observed astrophysical components, including the binary nature of the progenitor system and the presence of a SN component, as well as a 3-dimensional simulation of the process of hypercritical accretion in the binary progenitors. We have also recalled the special role of the rotation, by which phenomena traditionally considered different are actually the same phenomenon as seen from different viewing angles.
\item In the conclusions, looking forward, three main implications follow from the BdHN model, which are now open to further scrutiny: 1) only $10$\% of the BdHNe, those whose line of sight lies in the equatorial plane of the progenitor binary system, are actually detectable; in the other $90$\% the UPE is not detectable due to the morphology of the SN ejecta (see Fig.~\ref{fig:cc}), and therefore the \textit{Fermi} and \textit{Swift} instruments are not triggered; 2) the $E_\mathrm{iso}$, traditionally based on a spherically symmetric equivalent emission, has to be replaced by an $E_\mathrm{tot}$ duly taking into account the contributions of the UPE, hard X-ray flare, ETE and soft X-ray flare; 3) when the BdHNe are observed normally to the orbital plane, the GeV emission from the newly formed BH becomes observable, and also this additional energy should be accounted for. \end{itemize} } \begin{table} \centering \begin{tabular}{lc} \hline\hline Extended wording & Acronym \\ \hline Binary-driven hypernova & BdHN \\ Black hole& BH \\ Carbon-oxygen core& CO$_{\rm core}$ \\ Circumburst medium& CBM \\ Extended thermal emission &ETE\\ Flare-Plateau-Afterglow & FPA \\ Gamma-ray burst& GRB \\ Gamma-ray flash& GRF \\ Induced gravitational collapse & IGC \\ Massive neutron star& MNS \\ Neutron star& NS \\ New neutron star& $\nu$NS \\ Ultra-relativistic prompt emission & UPE \\ Proper gamma-ray burst & P-GRB \\ Short gamma-ray burst& S-GRB \\ Short gamma-ray flash& S-GRF \\ Supernova& SN \\ Ultrashort gamma-ray burst & U-GRB \\ White dwarf& WD \\ X-ray flash& XRF \\ \hline \end{tabular} \caption{{Alphabetically ordered list of the acronyms used in this work.}} \label{acronyms} \end{table} { We summarize in Table~\ref{acronyms} the list of acronyms introduced in the present paper.} \section{Recent Progress on BdHNe} \label{sec:progress_of_bdhn} {We address three advances obtained in the last year in the theory of BdHNe: 1) the identification of three different thermal emission processes;
2) the visualization of the IGC paradigm; and 3) an extended space-time diagram of BdHN with viewing angle in the equatorial plane of the binary progenitors.} { One of the first examples of a thermal emission has been identified in the early seconds after the trigger of some long GRBs \citep{Ryde2004,Rydeetal2006,Ryde2009}. This emission has later been identified in the BdHN model with the soft X-ray emission occurring in the photosphere of convective outflows in the hypercritical accretion process of the newly born SN ejecta onto the NS binary companion. Additional examples have been given in BdHNe \citep{Fryer2014} and in XRFs \citep{2016ApJ...833..107B}. These processes are practically Newtonian in character, with expansion velocities of the order of $10^8$--$10^9$~cm~s$^{-1}$ \citep[see, e.g.,][for the case of GRB 090618]{2012A&A...543A..10I}. } { A second thermal emission process has been identified in the acceleration process of GRBs, when the self-accelerating optically thick $e^+e^-$ plasma reaches transparency and a thermal emission with very high Lorentz factor $\Gamma \sim 10^2$--$10^3$ is observed. This has been computed both in the fireball model \citep{1999PhR...314..575P,2002MNRAS.336.1271D,2007ApJ...664L...1P} and in the fireshell model \citep{RSWX2,2000A&A...359..855R}. The difference lies in the description of the equations of motion of the fireball, which are assumed in the literature and instead explicitly evaluated in the fireshell model from the integration of classical and quantum magnetohydrodynamic processes \citep[see also][and references therein]{2007ralc.conf..402R}. The moment of transparency leads to a thermal emission whose relativistic effects have been evaluated, leading to the concept of the equitemporal surface \citep[EQTS;][]{Bianco2005a}. This derivation has been successfully applied also to short GRBs \citep{2017ApJ...844...83A,2016ApJ...831..178R,2015ApJ...808..190R}, and is here applied in section \ref{sec2} to the UPE.
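As an aside, it is instructive to quantify the arrival-time compression associated with such high Lorentz factors; the short sketch below only evaluates the standard on-axis kinematic relation $t_a = t\,(1-\beta)$, and the value $\Gamma=500$ is purely illustrative:

```python
import math

def arrival_time_compression(gamma):
    """For on-axis emission (cos(theta) = 1) the arrival time of photons
    emitted over a laboratory time t is t_a = t * (1 - beta), which for
    large gamma approaches t / (2 * gamma**2)."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return 1.0 - beta  # compression factor t_a / t

# Illustrative value, of the order of the transparency Lorentz factors above:
gamma = 500.0
print(arrival_time_compression(gamma), 1.0 / (2.0 * gamma**2))
```

For $\Gamma\sim 10^2$--$10^3$ the compression factor $1-\beta\approx 1/(2\Gamma^2)$ thus ranges from $\sim 5\times10^{-5}$ down to $\sim 5\times10^{-7}$.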
} { There is finally a third, additional, extended thermal emission (ETE) observed in BdHNe and in the X-ray flares \citep{2018ApJ...852...53R}. This ETE has allowed the determination of the velocity of expansion and of the Lorentz gamma factor of the thermal emitter, based on the variation in time of the observed radius and temperature of the thermal emission (see the equation in Fig.~\ref{fig:funcV}), under the assumption of uncollimated emission and considering only the radiation coming from the line of sight. The left-hand side term is only a function of the velocity $\beta$; the right-hand side term is only a function of the observables; $D_L(z)$ is the luminosity distance for redshift $z$. Therefore, from the observed thermal flux $F_\mathrm{bb,obs}$ and temperature $T_\mathrm{obs}$ at times $t_1$ and $t_2$, we can compute the velocity $\beta$. This highly non-linear equation is not straightforwardly solvable analytically, so in the present paper we solve it numerically after verifying the monotonically increasing behavior of the left-hand side term as a function of $\beta$ (see, e.g., Bianco, Rueda, Ruffini, Wang, in preparation). } \begin{figure*}[!ht] \centering \includegraphics[width=0.95\hsize,clip]{functionV} \caption{Equation to compute the velocity from the thermal component: this equation is summarized from \citet{2018ApJ...852...53R}. The left-hand side term is only a function of the velocity $\beta$; the right-hand side term is only a function of the observables. $D_L(z)$ is the luminosity distance for redshift $z$. From the observed thermal flux $F_\mathrm{bb,obs}$ and temperature $T_\mathrm{obs}$ at arrival times of the detector $t_{a,1}^d$ and $t_{a,2}^d$, the velocity and the corresponding Lorentz factor can be computed. This equation assumes uncollimated emission and considers only the radiation coming from the line of sight.
The computed velocity is instantaneous and there is no reliance on the expansion history.} \label{fig:funcV} \end{figure*} { The second advance has been presented in \citet{2016ApJ...833..107B} and more recently in \citet{2018arXiv180304356B}: the first 3D SPH simulations of the IGC leading to a BdHN are presented there. We simulate the SN explosion of a CO$_{\rm core}$ forming a binary system with a NS companion. We follow the evolution of the SN ejecta, including their morphological structure, subjected to the gravitational field of both the new NS ($\nu$NS), formed at the center of the SN, and the one of the NS companion. We compute the accretion rate of the SN ejecta onto the NS companion as well as onto the $\nu$NS from SN matter fallback. We determine the fate of the binary system for a wide parameter space including different CO$_{\rm core}$ masses, orbital periods ($\sim 300$~s) and SN explosion geometries and energies. We evaluate, for selected NS equations of state, whether the accretion process leads the NS either to the mass-shedding limit, or to the secular axisymmetric instability for gravitational collapse to a BH, or to a more massive, fast-rotating, but stable NS. We also assess whether or not the binary remains gravitationally bound after the SN explosion, hence exploring the space of binary and SN explosion parameters leading to the formation of $\nu$NS-NS or $\nu$NS-BH binaries. The consequences of our results for the modeling of GRBs via the IGC scenario are discussed in \citet{2018arXiv180304356B}. The relevance of these simulations for GRB 151027A, which is the subject of this paper, will be illustrated below, see Fig.~\ref{fig:cc}. } \begin{figure} \centering \includegraphics[width=1\hsize,clip]{plotdef_1} \caption{Three-dimensional, half-hemisphere view of the density distribution of the SN ejecta at the moment of BH formation in a BdHN.
The simulation is performed with an SPH code that follows the SN ejecta expansion under the influence of the gravitational field of both the $\nu$NS formed at the center of the SN and of the NS companion. It includes the effects of the orbital motion and of the changes in the NS gravitational mass by the hypercritical accretion process \citep[see][for additional details]{2016ApJ...833..107B}. The binary parameters of this simulation are: the NS companion has an initial mass of $2.0~M_\odot$; the CO$_{\rm core}$, obtained from a progenitor with ZAMS mass $M_{\rm ZAMS}=30~M_\odot$, leads to a total ejecta mass $7.94~M_\odot$ and to a $1.5~M_\odot$ $\nu$NS; the orbital period is $P\approx 5$~min (binary separation $a\approx 1.5\times 10^{10}$~cm). {Only those sources whose ultra-relativistic emission lies within the allowed cone of $\sim 10^\circ$ with low baryon contamination will trigger the gamma-ray instruments (e.g., Fermi/GBM or Swift/BAT).}} \label{fig:cc} \end{figure} { Finally, we present an update of the BdHN space-time diagram (see Fig.~\ref{fig:Carlo}), which clearly evidences the large number of episodes and physical processes, each with observationally computed time-varying Lorentz $\Gamma$ factors, which require the systematic use of the four different time coordinates already indicated in \citet{Ruffini2001c}. The diagram illustrates departures from the traditional collapsar-fireball description of a GRB. The diagram shows how the UPE, the hard X-ray flare and the soft X-ray flare occur in a sequence only when parametrized in arrival time and are not in fact causally related. } { We recall that within our model the line of sight of the prototypical GRB 151027A lies in the equatorial plane of the progenitor binary system.
The more general case of an arbitrary viewing angle has been explored in \citet{2018arXiv180305476R}, where some specific additional characteristic features, common to the collapsar model, become manifest. } \begin{figure}[ht] \centering \includegraphics[width=\hsize,clip]{SchemaST_Nappo} \caption{Space-time diagram (not to scale) of BdHNe. The CO$_\mathrm{core}$ explodes as a SN at point A and forms a $\nu$NS. The companion NS (bottom right line) accretes the SN ejecta starting from point B, giving rise to the non-relativistic Episode 1 emission (with Lorentz factor $\Gamma\approx 1$). At point C the NS companion collapses into a BH, and an $e^+e^-$ plasma --- the dyadosphere --- is formed \citep{RSWX2}. The following self-acceleration process occurs in a spherically symmetric manner (thick black lines). A large portion of the plasma propagates in the direction of the line of sight, where the environment has been cleaned up by the previous accretion onto the NS companion, finding a baryon load $B \lesssim 10^{-2}$ and leading to the GRB UPE gamma-ray spikes (Episode 2, point D) with $\Gamma \sim 10^2$--$10^3$. The remaining part of the plasma impacts the high-density portion of the SN ejecta (point E), propagates inside the ejecta encountering a baryon load $B \sim 10^{1}$--$10^{2}$, and finally reaches transparency, leading to the hard X-ray flare emission (point F) in gamma rays with an effective Lorentz factor $\Gamma \lesssim 10$ and to the soft X-ray flare emission (point G) with an effective $\Gamma \lesssim 4$, which are then followed by the late afterglow phases (point H).
For simplicity, this diagram is 2D and static and does not attempt to show the 3D rotation of the ejecta.} \label{fig:Carlo} \end{figure} \section{Ultra-relativistic Prompt Emission (UPE)}\label{sec2} \begin{figure}[ht] \centering (a)\includegraphics[width=0.95\hsize,clip]{lightcurve} (b)\includegraphics[width=0.95\hsize,clip]{spike1} (c)\includegraphics[width=0.95\hsize,clip]{spike2} \caption{(a) The \textit{Fermi}-GBM light curve from the NaI-n0 detector ($\approx8$--$800$~keV) of the UPE of GRB 151027A. The dotted horizontal line corresponds to the $\gamma$-ray background. (b) Time-integrated $\nu F_\nu$ spectrum of the first spike. (c) Time-integrated $\nu F_\nu$ spectrum of the second spike.} \label{fig1a} \end{figure} \begin{figure*} \centering (a)\includegraphics[width=0.45\hsize,clip]{PGRB} (b)\includegraphics[width=0.45\hsize,clip]{lc_151027A} (c)\includegraphics[width=0.45\hsize,clip]{spec_151027A} (d)\includegraphics[width=0.45\hsize,clip]{dens_151027A} \caption{Ultra-relativistic prompt emission (UPE): (a) The combined NaI-n0, n3+BGO-b0 $\nu F_\nu$ spectrum of the P-GRB in the time interval $T_0-0.1$--$T_0+0.9$~s. The best-fit model is CPL+BB. (b) The comparison between the background subtracted $10$--$1000$~keV \textit{Fermi}-GBM light curve (green) and the simulation with the fireshell model (red curve) in the time interval $T_0+0.9$--$T_0+9.6$~s. (c) The comparison between the NaI-n0 (purple squares), n3 (blue diamonds) and the BGO-b0 (green circles) $\nu F_\nu$ data in the time interval $T_0+0.9$--$T_0+9.6$~s and the simulated fireshell spectrum (red curve). (d) The radial density of the CBM clouds used for the above UPE light curve and spectrum simulations.} \label{fig0a} \end{figure*} \begin{figure} \centering \includegraphics[width=1.1\hsize,clip]{Fig03} \caption{{Spacetime diagram of the UPE. 
The initial $e^+e^-$ plasma self-accelerates in the small-density cone until it reaches transparency (curved black line), producing the first of the two ultra-relativistic UPE spikes (lower solid red line). The second one is produced by a later emission from the BH formation, with a difference in the observed time of $\sim 17$~s (rest-frame $\sim 9.4$~s) (upper solid red line).}} \label{fig:03} \end{figure} GRB 151027A was detected and located by the \textit{Swift} Burst Alert Telescope (BAT) \citep{2015GCN..18478...1M}. It was also detected by the \textit{Fermi} Gamma-ray Burst Monitor (GBM) \citep{2015GCN..18492...1T}, MAXI \citep{2015GCN.18525....1M} and by \textit{Konus}-Wind \citep{2015GCN..18516...1G}. The \textit{Swift} X-Ray Telescope (XRT) started its observation $87$~s after the burst trigger \citep{2015GCN..18482...1G}. The redshift of the source, measured through the MgII doublet in absorption from the Keck/HIRES spectrum, is $z=0.81$ \citep{2015GCN..18487...1P}. The source was at $10^{\circ}$ from the LAT boresight at the time of the trigger; there are no associated high-energy photons, and an upper limit on the observed photon flux of $9.24 \times 10^{-6}$~photons~cm$^{-2}$~s$^{-1}$ is computed following the standard \textit{Fermi}-LAT likelihood analysis. The BAT light curve shows a complex peaked structure lasting at least $83$~s. XRT began observing the field $48$~s after the BAT trigger. The GBM light curve consists of various pulses with a duration of about $68$~s in the $50$--$300$~keV band. The Konus-Wind light curve consists of various pulses with a total duration of $\sim 66$~s. The MAXI detection is not significant, but the flux is consistent with the interpolation from the \textit{Swift}/XRT light curve. The first $25$~s (rest-frame $14$~s) correspond to the UPE. It encompasses two spikes of duration $\approx8.5$~s and $\approx7.5$~s, respectively, with a separation between the two peaks of $\approx17$~s (see Fig.~\ref{fig1a}~(a)).
The rest-frame $1$--$10^4$~keV isotropic equivalent energies computed from the {time integrated} spectra of these two spikes (see Figs.~\ref{fig1a}~(b) and (c)) are $E_{\rm iso,1}=(7.26\pm0.36)\times10^{51}$~erg and $E_{\rm iso,2}=(4.99\pm0.60)\times10^{51}$~erg, respectively. A similar analysis was performed by \cite{2017A&A...598A..23N}. They describe the two spikes of the UPE by a single light curve with a ``Fast Rise and Exponential Decay'' (FRED) shape. {We analyze} the first spike (see Fig.~\ref{fig0a}) as the traditional UPE of a long GRB within the fireshell model \citep[see, e.g.,][for a review]{RVX}. Thanks to the wide energy range of the \textit{Fermi}-GBM instrument ($8$--$1000$~keV) it has been possible to perform a time-resolved analysis within the UPE phase to search for the typical P-GRB emission at the transparency of the $e^+e^-$--baryon plasma \citep{RSWX2,2000A&A...359..855R,Ruffini2001}. Indeed, we find this thermal spectral feature in the time interval $T_0-0.1$--$T_0+0.9$~s (with respect to the \textit{Fermi}-GBM trigger time $T_0$). The best-fit model of this emission is a composition of a black-body (BB) spectrum and a cut-off power-law model (CPL, see Fig.~\ref{fig0a}(a)). The BB component has an observed temperature $kT=(36.6\pm5.2)$~keV and an energy $E_{\rm BB}=(0.074\pm0.038)\times E_{\rm iso,1}=(5.3\pm2.7)\times10^{50}$~erg. These values are in agreement with an initial $e^+e^-$ plasma of energy $E_{\rm iso,1}$, with a baryon load $B=(1.92\pm0.35)\times10^{-3}$, and a Lorentz factor and a radius at the transparency condition of $\Gamma_0=503\pm76$ and $r_{\rm tr}=(1.92\pm0.17)\times10^{13}$~cm, respectively. {We turn now to the simulation of the remaining part of the first spike of the UPE} (from $T_0+0.9$~s to $T_0+9.6$~s). 
In the fireshell model, this emission occurs after the P-GRB and results from {the slowing down of the accelerated baryons due to their interaction with the CBM} \citep{2002ApJ...581L..19R,Ruffini2006,Patricelli2011}. To simulate the UPE light curve and its corresponding spectrum, we need to derive the number density of the CBM clouds surrounding the burst site. The agreement between the observations and the simulated light curve (see Fig.~\ref{fig0a}(b)) and the corresponding spectrum (see Fig.~\ref{fig0a}(c)) is obtained for an average CBM density of $(7.46\pm1.2)$~cm$^{-3}$ (see Fig.~\ref{fig0a}(d)), consistent with the typical values of long burst host galaxies at radii $\simeq 10^{16}$~cm. By contrast, the second spike of the UPE appears to be featureless. { The general conclusion of the UPE is the following:} { From the morphological 3D simulation, the SN ejecta is distorted by the binary accretion: a cone of very low baryon contamination is formed along the direction from the SN center pointing to the newly born BH, see Fig.~\ref{fig:cc}. A portion of the $e^+e^-$ plasma generated from the BH formation propagates through this cone, engulfs a low baryon load of $B=(1.92\pm0.35)\times10^{-3}$ and reaches a Lorentz gamma factor of $\Gamma_0=503\pm76$. The $e^+e^-$ plasma self-accelerates and expands ultra-relativistically till reaching transparency \citep{1998bhhe.conf..167R,2007PhRvL..99l5003A,2010PhR...487....1R}, when a short-duration ($<1$~s) thermal emission occurs: the P-GRB.
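A quick order-of-magnitude cross-check of these transparency values can be made by hand: in the fireshell treatment, energy conservation during the self-acceleration gives an asymptotic Lorentz factor $\Gamma_{\rm asym}\simeq 1+1/B$ for $B\ll10^{-2}$. The sketch below, which is not a substitute for the full integration cited in the text, uses the fitted baryon load:

```python
def asymptotic_lorentz_factor(baryon_load):
    """Energy conservation for an e+e- plasma of energy E carrying a baryonic
    mass M_B (B = M_B c^2 / E) gives Gamma_asym = 1 + 1/B, i.e. ~1/B for B << 1."""
    return 1.0 + 1.0 / baryon_load

B = 1.92e-3  # fitted baryon load of the first UPE spike
gamma_asym = asymptotic_lorentz_factor(B)
# ~5.2e2, consistent with the fitted Gamma_0 = 503 +/- 76 at transparency
print(gamma_asym)
```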
The associated ultra-relativistic baryons then interact with the circumburst medium (CBM) clouds: the dynamics of the plasma has been integrated through the classical hydrodynamics equations and the annihilation-creation rate equation \citep{2001A&A...368..377B,Bianco2004,Bianco2005a,2005ApJ...633L..13B,2006ApJ...644L.105B}; this enables the simulation of the structure of the spikes in the prompt emission and has been applied to the case of BdHNe \citep[see, e.g.,][]{2002ApJ...581L..19R,2005ApJ...634L..29B,2012A&A...543A..10I,2012A&A...538A..58P,2013A&A...551A.133P,2016ApJ...831..178R}. For the typical baryon load along the cone direction, $10^{-4} \lesssim B \lesssim 10^{-2}$, leading to a Lorentz factor $\Gamma \approx 10^2$--$10^3$, the characteristic prompt emission occurs at a distance of $\approx10^{15}$--$10^{17}$~cm from the BH \citep{2016ApJ...832..136R}.} { \begin{enumerate} \item a double emission is clearly manifested by the presence of the two spikes, separated by a time interval of $17$~s (rest-frame $9$~s). We are currently examining the possibility that this double emission is an imprint of the process of the BH formation.
\item when we take into account the $\sim 300$~s rotation period of the binary, we see that the UPE occurs within a cone of $10^{\circ}$ centered on the BH; \item this conical region is endowed with a very low density, determined by the P-GRB and the inferred CBM density of $(7.46\pm1.2)$~cm$^{-3}$, up to $10^{16}$~cm from the BH along the cone, see Fig.~\ref{fig0a}(d). \end{enumerate} } { This conceptual framework can in principle explain the featureless nature of the second spike, which propagates through the region already swept by the first spike (see Fig.~\ref{fig:03}).} \section{Hard and Soft X-ray Flares}\label{sec3} \subsection{Hard X-ray flare} {We turn now to the hard X-ray flare and the soft X-ray flare.} The hard X-ray flare is observed in the time interval $94$--$180$~s (corresponding to the rest-frame time interval $52$--$99$~s, see Fig.~\ref{fig1b}~(a)). The luminosity light curves in the rest-frame energy bands $10$--$1000$~keV for \textit{Fermi}-GBM (green), $15$--$150$~keV for \textit{Swift}-BAT (red), and $0.3$--$10$~keV for \textit{Swift}-XRT (blue) are displayed. The total isotropic energy of the hard X-ray flare is $E_{\gamma}=(3.28\pm0.13)\times10^{52}$~erg. The overall spectrum is best-fit by a superposition of a power-law (PL) function with an index $-1.69\pm0.01$ and a BB model with a temperature $kT=1.13\pm0.08$~keV (see Fig.~\ref{fig1b}~(b)). \begin{figure}[ht] \centering (a)\includegraphics[width=\hsize,clip]{luminosity151027A} (b)\includegraphics[width=0.88\hsize,clip]{GRF1} \caption{(a) Luminosity light curves in the rest-frame energy bands $10$--$1000$~keV for \textit{Fermi}-GBM (green), $15$--$150$~keV for \textit{Swift}-BAT (red), and $0.3$--$10$~keV for \textit{Swift}-XRT (blue). The red dotted line marks the position of the hard X-ray flare.
(b) Time-integrated $\nu F_\nu$ spectrum of the hard X-ray flare and the PL+BB model (solid red curve) best-fitting the data.} \label{fig1b} \end{figure} We perform a more detailed analysis by dividing the whole hard X-ray flare duration ($94$--$180$~s) into $8$ intervals (indicated with $\Delta t_a^d$ in Tab.~\ref{tab1}). Among these time intervals, the first $6$ have both BAT and XRT data (total energy range $0.3$--$150$~keV), while the last $2$ involve XRT data only (energy range $0.3$--$10$~keV). The XRT data were extremely piled up, and corrections have been performed in a conservative way to ascertain that the BB is not due to pile-up effects \citep{2006A&A...456..917R}. The absorption of the spectrum below $2$~keV has also been duly taken into account. We use here the following spectral energy distributions to fit the data: power-law (PL), CPL, PL$+$BB and CPL$+$BB. An extra BB component is always preferred to the simple PL model and, only in the sixth interval, to the CPL model, whose cutoff energy may be constrained within $90$\% significance. The results of the time-resolved analysis are shown in Fig.~\ref{fig4} and summarized in Tab.~\ref{tab1}. The BB parameters and errors in Tab.~\ref{tab1} correspond, respectively, to the mean values and to the $90$\% probability interval errors with respect to the central values, both obtained from the Markov chain Monte Carlo method applied in \texttt{XSpec} with $10^5$ steps (excluding the first $10^4$). These values are in line with the ones corresponding to the minimum $\chi^2$, and the errors with the intervals obtained from the difference $\Delta\chi^2=2.706$ from the minimum $\chi^2$ value. The only exception is the first time bin, where the $\chi^2_{min}$ value is almost two times lower than the mean value. It is useful to infer the bulk Lorentz factor of the hard X-ray flare emission from the non-thermal component of the spectrum.
Using the \textit{Fermi} data, the best-fit model for this non-thermal component in the time interval $95$--$130$~s is a CPL with a spectral cutoff energy $E_c=926\pm238$~keV. Such a cutoff can be caused by $\gamma\gamma$ absorption, for which the target photon's energy is comparable to $E_c$, i.e., $E_c\gtrsim[\Gamma m_e c^2/(1+z)]^2/E_c$ and, therefore, the Lorentz factor can be deduced as \begin{equation} \Gamma\approx\frac{E_c}{m_e c^2}(1+z)\,, \label{gammamax} \end{equation} where $m_e$ is the electron mass. From the above value of $E_c$, we infer $\Gamma=3.28\pm0.84$, which represents an average over the hard X-ray flare duration. It is in the range of the values inferred from the thermal component (see the $\Gamma$ column of Tab.~\ref{tab1}), coinciding in turn with the numerical simulation of the interaction of the $e^+e^-$ plasma with the SN ejecta described in Sec.~\ref{sec5}. \begin{figure*} \centering \includegraphics[width=\hsize,clip]{nFnu} \caption{Hard X-ray flare: time-resolved $\nu F_\nu$ spectra of the $8$ time intervals in Tab.~\ref{tab1} (from left to right, top row first, then bottom row). XRT data are displayed in green and BAT data in blue; BAT data points with no vertical lines correspond to upper limits. Plots correspond to parameters obtained from the minimum $\chi^2$ fit.} \label{fig4} \end{figure*} \subsection{Soft X-ray flare} The soft X-ray flare, which has been discussed in \citet{2018ApJ...852...53R}, peaks at a rest-frame time $t_p=(184\pm 16)$~s, has a duration $\Delta t=(164\pm30)$~s, a peak luminosity $L_p=(7.1\pm 1.8)\times 10^{48}$~erg/s, and a total energy in the rest-frame $0.3$--$10$~keV energy range $E_X=(4.4 \pm 2.9)\times 10^{51}$~erg. The overall spectrum within its duration $\Delta t$ is best-fit by a PL model with a power-law index of $-2.24\pm0.03$ (see Fig.~\ref{fig1c}).
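The Lorentz factor estimate of Eq.~(\ref{gammamax}) for the hard X-ray flare, given in the previous subsection, is simple enough to be reproduced directly; the following minimal sketch only restates that equation numerically, with the fitted $E_c$ and the measured redshift as inputs:

```python
M_E_C2_KEV = 511.0  # electron rest energy m_e c^2 in keV

def lorentz_factor_from_cutoff(e_cut_kev, z):
    """Eq. (gammamax): if the spectral cutoff is due to gamma-gamma absorption
    on target photons of comparable energy, Gamma ~ E_c (1+z) / (m_e c^2)."""
    return e_cut_kev * (1.0 + z) / M_E_C2_KEV

# Fitted cutoff E_c = 926 +/- 238 keV at z = 0.81 for GRB 151027A
gamma = lorentz_factor_from_cutoff(926.0, 0.81)
print(round(gamma, 2))  # ~3.28, as quoted in the text
```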
\begin{figure}[ht] \centering (a)\includegraphics[width=\hsize,clip]{151027A_LC} (b)\includegraphics[width=0.88\hsize,clip]{XRF1bis} \caption{(a) Rest-frame $0.3$--$10$~keV luminosity light curve of GRB 151027A. The red dotted line marks the position of the soft X-ray flare. (b) Time-integrated $\nu F_\nu$ spectrum of the soft X-ray flare and the PL model (solid red curve) best-fitting the data.} \label{fig1c} \end{figure} We also perform here a time-resolved analysis of the soft X-ray flare. We divide the total interval $\Delta t$ into four sub-intervals, i.e., $235$--$300$~s, $300$--$365$~s, $365$--$435$~s and $435$--$500$~s in the observer frame (see Fig.~\ref{figx}). The best fits for each of these $4$ time intervals are PL models with indices ranging from $-2.3$ to $-2.1$, consistent with the typical values inferred in \citet{2018ApJ...852...53R}. \begin{figure*} \centering \includegraphics[width=1\hsize,clip]{XRFTS5} \caption{Soft X-ray flare: time-resolved BAT (blue) and XRT (green) $\nu F_\nu$ spectra of the soft X-ray flare in the indicated time intervals.} \label{figx} \end{figure*} The complete space-time diagram, showing the UPE, the hard X-ray flare and the soft X-ray flare, is represented in Fig.~\ref{fig:03b}. \begin{figure} \centering \includegraphics[width=1.1\hsize,clip]{Fig04} \caption{{Same as Fig.~\ref{fig:03}, but also showing the position of the plasma shock within the SN ejecta (dashed black lines) for each of the components of the UPE, until breakout. The first spike originates the hard X-ray flare and the second spike originates the soft X-ray flare.
The photon worldlines (solid red lines) of the hard X-ray flare and the soft X-ray flare are observed with a time difference of $\sim 230$~s (rest-frame $\sim 130$~s) due to the differential deceleration of the two UPE components within the SN ejecta.}} \label{fig:03b} \end{figure} \section{Evolution of thermal component around the hard X-ray flare}\label{sec4} \begin{table*} \centering \caption{Hard X-ray flare: parameters of the time-resolved spectral analysis. Columns list, respectively, the time interval of the spectral analysis, the PL or CPL index $\alpha$, the CPL peak energy $E_{\rm p}$ when present, and the BB observed temperature $kT_{\rm obs}$ and normalization $A_{\rm BB}$, fitted in Sec.~\ref{sec3}; the quantity $\phi_0$, the expansion velocity $\beta$, the Lorentz factor $\Gamma$, and the effective thermal emitter radius in the laboratory frame $R$ are inferred in Sec.~\ref{sec4}.} \tiny \begin{tabular}{cccccccccc} \hline\hline $\Delta t_a^d$ & Model & $\alpha$ & $E_{\rm p}$ & $kT_{\rm obs}$ & $A_{\rm BB}$ & $\phi_0$ & $\beta$ & $\Gamma$ & $R$ \\ (s) & & & (keV) & (keV) & (ph~cm$^{-2}$s$^{-1}$) & ($10^{12}$~cm) & & & ($10^{12}$~cm) \\ \hline $94$--$100$ & BB$+$PL & $1.349_{-0.036}^{+0.024}$ && $2.2_{-1.1}^{+1.1}$ & $0.052^{+0.043}_{-0.034}$ & $0.065^{+0.070}_{-0.064}$ & $0.38^{+0.19}_{-0.31}$ & $1.079^{+0.138}_{-0.077}$ & $0.10^{+0.11}_{-0.10}$ \\ $100$--$110$ & BB$+$PL & $1.293_{-0.031}^{+0.029}$ && $2.57^{+0.43}_{-0.50}$ & $0.206^{+0.083}_{-0.084}$ & $0.094^{+0.037}_{-0.041}$ & $0.606^{+0.042}_{-0.049}$ & $1.257^{+0.057}_{-0.053}$ & $0.194^{+0.077}_{-0.086}$ \\ $110$--$120$ & BB$+$PL & $1.392_{-0.033}^{+0.028}$ && $2.17^{+0.22}_{-0.26}$ & $0.62^{+0.14}_{-0.15}$ & $0.229^{+0.053}_{-0.062}$ & $0.852^{+0.035}_{-0.052}$ & $1.91^{+0.26}_{-0.24}$ & $0.80^{+0.21}_{-0.25}$ \\ $120$--$130$ & BB$+$PL & $1.732_{-0.057}^{+0.049}$ && $1.10^{+0.14}_{-0.12}$ & $0.592^{+0.077}_{-0.073}$ & $0.87^{+0.23}_{-0.20}$ & $0.957^{+0.014}_{-0.028}$ & $3.46^{+0.78}_{-0.76}$ &
$5.7^{+1.8}_{-2.3}$ \\ $130$--$140$ & BB$+$PL & $1.82_{-0.14}^{+0.11}$ && $0.617^{+0.046}_{-0.043}$ & $0.247^{+0.037}_{-0.038}$ & $1.79^{+0.30}_{-0.28}$ & $0.983^{+0.0046}_{-0.0079}$ & $5.6^{+1.0}_{-1.0}$ & $19.1^{+4.2}_{-5.6}$ \\ $140$--$150$ & CPL$+$PL & $1.65_{-0.16}^{+0.15}$ & $7.3^{+66.3}_{-4.6}$ & $0.469^{+0.065}_{-0.064}$ & $0.102^{+0.028}_{-0.027}$ & $1.99^{+0.61}_{-0.61}$ & $0.919^{+0.054}_{-0.560}$ & $2.5^{+1.8}_{-1.5}$ & $9.5^{+4.4}_{-9.5}$ \\ $150$--$160$ & BB$+$PL & $2.40_{-0.34}^{+0.45}$ && $0.386^{+0.061}_{-0.061}$ & $0.046^{+0.016}_{-0.015}$ & $1.97^{+0.71}_{-0.70}$ & $0.935^{+0.048}_{-0.934}$ & $2.8^{+2.7}_{-1.8}$ & $10.5^{+5.5}_{-10.5}$ \\ $160$--$180$ & BB$+$PL & $2.15_{-0.34}^{+0.29}$ && $0.193^{+0.032}_{-0.030}$ & $0.020^{+0.011}_{-0.013}$ & $5.2^{+2.3}_{-2.3}$ & $0.953^{+0.042}_{-0.952}$ & $3.3^{+7.0}_{-2.3}$ & $32^{+21}_{-32}$ \\ \hline \end{tabular} \label{tab1} \end{table*} {Following Fig.~\ref{fig:funcV} it is possible to infer the expansion velocity $\beta$ (i.e., the velocity in units of the velocity of light $c$). We assume that the black body emitter has spherical symmetry and expands with a constant Lorentz gamma factor. Therefore, the expansion velocity $\beta$ is also constant during the emission. The relations between the comoving time $t_{com}$, the laboratory time $t$, the arrival time $t_a$, and the arrival time $t_a^d$ at the detector \citep[see][]{2001A&A...368..377B,Ruffini2001L107,2002ApJ...581L..19R,Bianco2005a} in this case become: \begin{align} t_a^d & = t_a(1+z) = t (1-\beta\cos\vartheta)(1+z)\nonumber \\ & = \Gamma t_{com} (1-\beta\cos\vartheta)(1+z)\ . 
\label{times} \end{align} We can infer an effective radius $R$ of the black body emitter from: 1) the observed black body temperature $T_\mathrm{obs}$, which comes from the spectral fit of the data; 2) the observed bolometric black body flux $F_\mathrm{bb,obs}$, computed from $T_\mathrm{obs}$ and the normalization of the black body spectral fit; and 3) the cosmological redshift $z$ of the source \citep[see also][]{2012A&A...543A..10I}. We recall that $F_\mathrm{bb,obs}$ by definition is given by: \begin{equation} F_\mathrm{bb,obs} = \frac{L}{4\pi D_L(z)^2}\ , \label{fbbobs0} \end{equation} where $D_L(z)$ is the luminosity distance of the source, which in turn is a function of the cosmological redshift $z$, and $L$ is the source bolometric luminosity (i.e., the total emitted energy per unit time). $L$ is Lorentz invariant, so we can compute it in the co-moving frame of the emitter using the usual black body expression: \begin{equation} L=4\pi {R_\mathrm{com}}^2 \sigma {T_\mathrm{com}}^4\ , \label{lum} \end{equation} where $R_\mathrm{com}$ and $T_\mathrm{com}$ are the comoving radius and the comoving temperature of the emitter, respectively, and $\sigma$ is the Stefan-Boltzmann constant. We recall that $T_\mathrm{com}$ is constant over the entire shell due to our assumption of spherical symmetry. From Eq.~(\ref{fbbobs0}) and Eq.~(\ref{lum}) we then have: \begin{equation} F_\mathrm{bb,obs}= \frac{{R_\mathrm{com}}^2 \sigma {T_\mathrm{com}}^4}{D_L(z)^2}\ . \label{fbbobs1} \end{equation} } {We now need the relation between $T_\mathrm{com}$ and the observed black body temperature $T_\mathrm{obs}$. 
Considering both the cosmological redshift and the Doppler effect due to the velocity of the emitting surface, we have: \begin{align} T_\mathrm{obs} (T_\mathrm{com},z,\Gamma,\cos\vartheta) &= \frac{T_\mathrm{com}}{\left(1+z\right)\Gamma\left(1-\beta\cos\vartheta\right)}\nonumber\\ &= \frac{T_\mathrm{com}\mathcal{D}(\cos\vartheta)}{1+z}\, , \label{tdef} \end{align} where we have defined the Doppler factor $\mathcal{D}(\cos\vartheta)$ as: \begin{equation} \mathcal{D}(\cos\vartheta)\equiv\frac{1}{\Gamma\left(1-\beta\cos\vartheta\right)}\, . \label{defd} \end{equation} Eq.~(\ref{tdef}) gives us the observed black body temperature of the radiation coming from different points of the emitter surface, corresponding to different values of $\cos\vartheta$. However, since the emitter is at a cosmological distance, we are not able to resolve spatially the source with our detectors. Therefore, the temperature that we actually observe corresponds to an average of Eq.~(\ref{tdef}) computed over the emitter surface: \begin{align} \nonumber T_\mathrm{obs}(T_\mathrm{com},z,\Gamma)=&\,\displaystyle\frac{1}{1+z}\frac{\int^{1}_{\beta}{\mathcal{D}(\cos\vartheta)T_\mathrm{com}\cos\vartheta d\cos\vartheta}}{\int^{1}_{\beta}{\cos\vartheta d\cos\vartheta}} \\[6pt] \nonumber=&\, \displaystyle\frac{2}{1+z}\frac{\beta\left(\beta-1\right)+\ln\left(1+\beta\right)}{\Gamma\beta^2\left(1-\beta^2\right)}T_\mathrm{com}\\[6pt] \label{tobsbb}=&\,\Theta(\beta)\frac{\Gamma}{1+z}T_\mathrm{com} \end{align} where we defined \begin{equation} \Theta(\beta) \equiv 2\, \frac{\beta\left(\beta-1\right)+\ln\left(1+\beta\right)}{\beta^2}\, , \label{ThetaDef} \end{equation} we have used the fact that due to relativistic beaming, we observe only a portion of the surface of the emitter defined by: \begin{equation} \beta \leq \cos\vartheta \leq 1\, , \label{visible} \end{equation} and we used the definition of $\Gamma$ given above. 
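As a numerical sanity check (our own sketch, not part of the original analysis), the closed form $\Theta(\beta)\Gamma$ appearing in Eq.~(\ref{tobsbb}) can be compared against a direct midpoint quadrature of the beaming-weighted average over the visible cap $\beta\le\cos\vartheta\le 1$; the function names below are ours:

```python
import math

def theta(beta):
    """Closed form of Eq. (ThetaDef): 2 [beta(beta - 1) + ln(1 + beta)] / beta^2."""
    return 2.0 * (beta * (beta - 1.0) + math.log1p(beta)) / beta**2

def avg_temp_factor(beta, n=100_000):
    """Midpoint quadrature of <D(cos v) cos v> / <cos v> over beta <= cos v <= 1,
    i.e. the factor multiplying T_com / (1 + z) in Eq. (tobsbb)."""
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    h = (1.0 - beta) / n
    num = den = 0.0
    for k in range(n):
        x = beta + (k + 0.5) * h               # x = cos(vartheta)
        num += x / (gamma * (1.0 - beta * x))  # Doppler factor D(x) times x
        den += x
    return num / den

# The quadrature must reproduce Theta(beta) * Gamma for any 0 < beta < 1:
for b in (0.3, 0.6, 0.9):
    gamma = 1.0 / math.sqrt(1.0 - b * b)
    assert abs(avg_temp_factor(b) / (theta(b) * gamma) - 1.0) < 1e-6
```

For $\beta=0.9$, for example, both expressions give a temperature-boost factor of about $3.1$.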
Therefore, inverting Eq.~(\ref{tobsbb}), the comoving black body temperature $T_\mathrm{com}$ can be computed from the observed black body temperature $T_\mathrm{obs}$, the source cosmological redshift $z$ and the emitter Lorentz gamma factor in the following way: \begin{equation} T_\mathrm{com} (T_\mathrm{obs},z,\Gamma) = \frac{1+z}{\Theta(\beta)\Gamma}T_\mathrm{obs}\, . \label{tcomdef} \end{equation} } {We can now insert Eq.~(\ref{tcomdef}) into Eq.~(\ref{fbbobs1}) to obtain: \begin{equation} F_\mathrm{bb,obs} = \frac{{R_\mathrm{com}}^2}{D_L(z)^2} \sigma T_\mathrm{com}^4 = \frac{{R_\mathrm{com}}^2}{D_L(z)^2} \sigma \left[\frac{1+z}{\Theta(\beta)\Gamma}T_\mathrm{obs}\right]^4\, . \label{fbbobs2} \end{equation} Since the radius $R$ of the emitter in the laboratory frame is related to $R_\mathrm{com}$ by: \begin{equation} R_\mathrm{com} = \Gamma R\, , \label{rcomdef} \end{equation} we can insert Eq.~(\ref{rcomdef}) into Eq.~(\ref{fbbobs2}) and obtain: \begin{equation} \label{fbbobs} F_\mathrm{bb,obs}=\frac{\left(1+z\right)^4}{\Gamma^2}\left(\frac{R}{D_L(z)}\right)^2\sigma \left[\frac{T_\mathrm{obs}}{\Theta(\beta)}\right]^4\ . \end{equation} Solving Eq.~(\ref{fbbobs}) for $R$ we finally obtain the thermal emitter effective radius in the laboratory frame: \begin{equation} \label{raggiorel} R=\Theta(\beta)^2\Gamma\frac{D_L(z)}{(1+z)^2}\sqrt{\frac{F_\mathrm{bb,obs}}{\sigma T_\mathrm{obs}^4}}=\Theta(\beta)^2\Gamma \phi_0\ , \end{equation} where we have defined $\phi_0$: \begin{equation} \phi_0 \equiv \frac{D_L(z)}{(1+z)^2}\sqrt{\frac{F_\mathrm{bb,obs}}{\sigma T_\mathrm{obs}^4}}\, . \label{rclass} \end{equation} The evolutions of the rest-frame temperature and $\phi_0$ are shown in Fig.~\ref{fig2}. In astronomy the quantity $\phi_0$ is usually identified with the radius of the emitter. 
However, in relativistic astrophysics this identity cannot be straightforwardly applied, because the estimate of the effective emitter radius $R$ in Eq.~(\ref{raggiorel}) crucially depends on the knowledge of its expansion velocity $\beta$ (and, correspondingly, of $\Gamma$). } { It must be noted that Eq.~(\ref{raggiorel}) above gives the correct value of $R$ for all values of $0 \leq \beta \leq 1$ by taking all the relativistic transformations properly into account. In the non-relativistic limit ($\beta \rightarrow 0$) we have respectively: \begin{align} &\Theta\xrightarrow[\beta\rightarrow 0]{} 1\, , &\Theta^2\xrightarrow[\beta\rightarrow 0]{} 1\, , \\ &T_\mathrm{com}\xrightarrow[\beta\rightarrow 0]{}T_\mathrm{obs}(1+z)\, , &R\xrightarrow[\beta\rightarrow 0]{}\phi_0\, , \end{align} as expected. Analogously, in the ultrarelativistic limit ($\beta \rightarrow 1$) we have: \begin{align} &\Theta\xrightarrow[\beta\rightarrow 1]{} 1.39\, , &\Theta^2\xrightarrow[\beta\rightarrow 1]{} 1.92\, , \\ &T_\mathrm{com}\xrightarrow[\beta\rightarrow 1]{}\frac{0.72}{\Gamma}T_\mathrm{obs}(1+z)\, , &R\xrightarrow[\beta\rightarrow 1]{}1.92\Gamma\phi_0\, . \end{align} It must also be noted that the numerical coefficient in Eq.~(\ref{raggiorel}) is computed as a function of $\beta$ using Eq.~(\ref{ThetaDef}) above, and it is different from the constant values proposed by \citet{2007ApJ...664L...1P} and by \citet{2013MNRAS.432.3237G}.
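These limits, together with the map from $\phi_0$ to $R$ in Eqs.~(\ref{raggiorel})--(\ref{rclass}), are easy to verify numerically. The sketch below is ours; the flux, temperature and luminosity-distance inputs are illustrative placeholders, not the fitted values of Tab.~\ref{tab1}.

```python
import math

SIGMA_SB = 5.6704e-5  # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]

def theta(beta):
    """Eq. (ThetaDef): Theta(beta) = 2 [beta(beta - 1) + ln(1 + beta)] / beta^2."""
    return 2.0 * (beta * (beta - 1.0) + math.log1p(beta)) / beta**2

def phi0(F_bb_obs, T_obs_K, z, D_L_cm):
    """Eq. (rclass): the 'classical' radius estimate, no relativistic corrections."""
    return D_L_cm / (1.0 + z)**2 * math.sqrt(F_bb_obs / (SIGMA_SB * T_obs_K**4))

def effective_radius(F_bb_obs, T_obs_K, z, D_L_cm, beta):
    """Eq. (raggiorel): laboratory-frame radius R = Theta(beta)^2 * Gamma * phi_0."""
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return theta(beta)**2 * gamma * phi0(F_bb_obs, T_obs_K, z, D_L_cm)

# The two limits quoted in the text:
assert abs(theta(1e-7) - 1.0) < 1e-6                         # beta -> 0: Theta -> 1
assert abs(theta(1.0 - 1e-12) - 2.0 * math.log(2.0)) < 1e-9  # beta -> 1: Theta -> 2 ln 2 ~ 1.39
assert abs(theta(1.0 - 1e-12)**2 - 1.92) < 0.01              # Theta^2 -> 1.92

# Illustrative inputs: an arbitrary BB flux, kT_obs = 1 keV (T = kT / k_B),
# z = 0.81 and an ASSUMED luminosity distance of 1.6e28 cm.
F, T, z, D_L = 1.0e-8, 1.0 / 8.617e-8, 0.81, 1.6e28
# Non-relativistic limit: R reduces to phi_0.
assert abs(effective_radius(F, T, z, D_L, 1e-6) / phi0(F, T, z, D_L) - 1.0) < 1e-5
```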
} An estimate of the expansion velocity $\beta$ can be deduced from the ratio between the variation of the emitter effective radius $\Delta R$ and the emission duration in the laboratory frame $\Delta t$, i.e., \begin{equation} \beta=\frac{\Delta R}{c \Delta t}=\Theta(\beta)^2\Gamma(1-\beta\cos\vartheta)(1+z)\frac{\Delta \phi_0}{c \Delta t_a^d}\, , \label{beta1} \end{equation} where we have used Eq.~(\ref{raggiorel}), the relation between $\Delta t$ and $\Delta t_a^d$ given in Eq.~(\ref{times}) and the definition of $\Gamma$ given above; $\vartheta$ is the displacement angle from the line of sight of the photon emission point considered on the surface. In the following we consider only the case $\cos\vartheta=1$. In this case, using Eq.~(\ref{ThetaDef}), Eq.~(\ref{beta1}) assumes the form presented in Fig.~\ref{fig:funcV}. It allows us to estimate the expansion velocity $\beta$ of the emitter using only the observed black body flux, temperature, photon arrival time and cosmological redshift, assuming uncollimated emission and considering only the radiation coming from the line of sight. We can explain the observed black body emission in GRB 151027A without introducing the ``reborn fireball'' scenario \citep[see][]{2007MNRAS.382L..72G,2017A&A...598A..23N}. \begin{figure}[ht] \centering \includegraphics[width=\hsize,clip]{KT_rad} \caption{The cosmological rest-frame evolution of $kT$ (upper panel) and $\phi_0$ (bottom panel) of the thermal emitter in the hard X-ray flare of GRB 151027A.
The $\phi_0$ interpolation (red line) is obtained by using two smoothly joined PL segments.} \label{fig2} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\hsize,clip]{KT_rad_rel} \caption{The evolution in the laboratory frame of $\beta$, $\Gamma$ and $R$ of the thermal emitter from the time intervals in Tab.~\ref{tab1}.} \label{fig3} \end{figure} To infer $\beta$, we fit the evolution of $\phi_0$ (see Fig.~\ref{fig2} and Tab.~\ref{tab1}) by using two smoothly joined PL segments. This allows us to estimate the ratio $\Delta \phi_0/(c \Delta t_a^d)$ in Eq.~(\ref{beta1}) and, therefore, the values of $\beta$ and $\Gamma$, assuming that they are constant in each time interval (see Fig.~\ref{fig3}, upper and middle panels). Consequently, we can estimate the evolution of the radius $R$ of the emitter in the laboratory frame by taking into account the relativistic transformations described in Eqs.~(\ref{times}), (\ref{raggiorel}) and (\ref{rclass}); see the lower panel of Fig.~\ref{fig3}. The results are also summarized in Tab.~\ref{tab1}. \section{On the nature of the hard X-ray flare and the soft X-ray flare}\label{sec5} \begin{figure*} \centering (a)\hfill$t=0$~s\hfill\hfill(b)\hfill$t=56.7$~s\hfill\hfill(c)\hfill$t=236.8$~s\hfill\null\\ \includegraphics[width=\hsize,clip]{SchemaST_2_Nappo} \caption{Three snapshots of the density distribution of the SN ejecta in the equatorial plane of the progenitor binary system. The time $t=0$ indicates the instant when the NS companion reaches, by accretion, the critical mass and leads to the formation of a BH (black dot). As evidenced in panel (a), the location of the BH formation is widely separated from the position of the SN explosion; it actually lies in the white conical region of Fig.~\ref{fig:cc}.
The binary parameters of this simulation are: the NS companion has an initial mass of $2.0~M_\odot$; the CO$_{\rm core}$, obtained from a progenitor with ZAMS mass $M_{\rm ZAMS}=30~M_\odot$, leads to a total ejecta mass of $7.94~M_\odot$ and to a $1.5~M_\odot$ $\nu$NS (white dot); the orbital period is $P\approx 5$~min, i.e., a binary separation $a\approx 1.5\times 10^{10}$~cm.} \label{fig:Carlo2} \end{figure*} Following the procedure described in Section~10 of \citet{2018ApJ...852...53R}, we interpret the thermal emission observed in the hard X-ray flare as the observational feature arising from the early interaction between the expanding SN ejecta and the $e^+e^-$ plasma. In order to test the consistency of this model with the data, we have performed a series of numerical simulations, whose details we summarize as follows. a) Our treatment of the problem is based on an implementation of the one-dimensional relativistic hydrodynamical (RHD) module included in the PLUTO code\footnote{http://plutocode.ph.unito.it/} \citep{PLUTO}. In the spherically symmetric case considered, only the radial coordinate is used, and consequently the code integrates a system of partial differential equations in only two coordinates: the radius and the time. This permits the study of the evolution of the plasma along one selected radial direction at a time.
The aforementioned equations are those of an ideal relativistic fluid, which can be written as follows: \begin{align} &\frac{\partial(\rho \Gamma)}{\partial t} +\nabla\cdot\left(\rho\Gamma \mathbf{v}\right)=0, \label{consmass}\\ &\frac{\partial m_r}{\partial t} +\nabla\cdot\left(m_r \mathbf{v}\right)+\frac{\partial p}{\partial r}=0,\label{consmomentum}\\ &\frac{\partial \mathcal{E}}{\partial t}+ \nabla\cdot\left(\mathbf{m}-\rho\Gamma\mathbf{v}\right)=0,\label{consenergy} \end{align} where $\rho$ and $p$ are the comoving fluid density and pressure, $\mathbf{v}$ is the coordinate velocity in natural units ($c=1$), $\Gamma=(1-\mathbf{v}^2)^{-\frac{1}{2}}$ is the Lorentz gamma factor, $\mathbf{m}=h\Gamma^2\mathbf{v}$ is the fluid momentum, $m_r$ its radial component, $\mathcal{E}$ is the internal energy density measured in the comoving frame, and $h$ is the comoving enthalpy density, defined by $h=\rho+\epsilon+p$, with $\epsilon$ the comoving internal energy density. We define $\mathcal{E}$ as follows: \begin{equation} \mathcal{E}=h\Gamma^2-p-\rho\Gamma. \end{equation} The first two terms on the right-hand side of this equation coincide with the $T^{00}$ component of the fluid energy-momentum tensor, and the last one is the mass density in the laboratory frame. Under the conditions discussed in \citet{2018ApJ...852...53R}, the plasma satisfies the equation of state of an ideal relativistic gas, which can be expressed in terms of its enthalpy as: \begin{equation} h=\rho+\frac{\gamma p}{\gamma-1}, \label{eq:eos} \end{equation} with $\gamma=4/3$. Imposing this equation of state completely closes the system of equations, leaving as the only remaining freedom the choice of the matter density profile and the boundary conditions. To compute the evolution of these quantities in the chosen setup, the code uses the HLLC Riemann solver for relativistic fluids \citep[see][]{PLUTO}.
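To make the closure concrete, the following sketch (our own single-cell illustration using a simple bisection on the pressure, not the actual HLLC/PLUTO machinery) recovers the primitive variables $(\rho, p, v)$ from the conserved ones $(D, m_r, \mathcal{E})$ for the $\gamma=4/3$ equation of state of Eq.~(\ref{eq:eos}):

```python
import math

GAMMA_EOS = 4.0 / 3.0  # adiabatic index of the relativistic plasma, Eq. (eq:eos)

def to_conserved(rho, p, v):
    """Map primitives (rho, p, v) to (D, m_r, E) with h = rho + gamma p/(gamma - 1),
    m = h W^2 v and E = h W^2 - p - rho W, as in Eqs. (consmass)-(consenergy)."""
    W = 1.0 / math.sqrt(1.0 - v * v)
    h = rho + GAMMA_EOS * p / (GAMMA_EOS - 1.0)
    return rho * W, h * W * W * v, h * W * W - p - rho * W

def to_primitive(D, m, E, tol=1e-12):
    """Recover (rho, p, v) by bisecting on the pressure: for a trial p,
    v = m/(E + D + p), W is the Lorentz factor, rho = D/W, and the residual
    between the EoS enthalpy and (E + D + p)/W^2 must vanish.
    The bracket is assumed to contain the root (hot plasma, p > 0)."""
    def residual(p):
        v = m / (E + D + p)
        W = 1.0 / math.sqrt(1.0 - v * v)
        return D / W + GAMMA_EOS * p / (GAMMA_EOS - 1.0) - (E + D + p) / (W * W)
    lo, hi = 1e-16, 10.0 * (E + D)
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        (lo, hi) = (lo, mid) if residual(mid) > 0.0 else (mid, hi)
    p = 0.5 * (lo + hi)
    v = m / (E + D + p)
    return D * math.sqrt(1.0 - v * v), p, v

# Round trip: build conserved variables from known primitives and recover them.
D, m, E = to_conserved(1.0, 1.0, 0.5)
rho, p, v = to_primitive(D, m, E)
assert abs(rho - 1.0) < 1e-8 and abs(p - 1.0) < 1e-8 and abs(v - 0.5) < 1e-8
```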
The time evolution is performed by means of a second-order Runge-Kutta integration, and a second-order total variation diminishing scheme is used for the spatial interpolation. An adaptive mesh refinement algorithm is implemented as well, provided by the CHOMBO library \citep{CHOMBO}. We turn now to the determination of the SN ejecta. b) {The initially ultrarelativistic $e^+e^-$ plasma expands through the SN ejecta matter, slowing down to mildly relativistic velocities}. The SN density and velocity profiles are taken from the 3D SPH simulation of the SN ejecta expansion under the influence of the $\nu$NS and the NS companion gravitational field. In our simulations we include the NS orbital motion and the NS gravitational-mass changes due to the accretion process, modeled with the Bondi-Hoyle formalism \citep[see][for more details]{2016ApJ...833..107B}. We set the SN ejecta initial conditions adopting a homologous velocity distribution in free expansion; the SN matter is modeled with 16 million point-like particles. Each SN layer is initially populated following a power-law density profile of the CO$_{\rm core}$ as obtained from low-metallicity progenitors evolved with the Kepler stellar evolution code \citep{2002RvMP...74.1015W}. We take here as reference model the simulation of an initial binary system formed by a $2.0\,M_\odot$ NS and a CO$_{\rm core}$ produced by a $M_{\rm ZAMS} = 30\, M_\odot$ progenitor. This leads to a total ejecta mass of $7.94\, M_\odot$ and a $\nu$NS of $1.5\,M_\odot$. The orbital period of the binary is $P\approx 5$~min, i.e., a binary separation $a \approx 1.5 \times 10^{10}$~cm. The density profile exhibiting the evolution of the SN ejecta and the companion star is shown in Fig.~\ref{fig:Carlo2}. Fig.~\ref{fig:model1} shows the SN ejecta mass enclosed within a cone of 5 degrees of semi-aperture angle, and with vertex at the position of the BH at the moment of its formation.
The cone axis lies along the direction $\theta$, measured counterclockwise with respect to the line of sight. We simulate the interaction of the $e^+e^-$ plasma with such ejecta from a radius $\approx10^{10}$~cm all the way to $\approx10^{12}$~cm, where transparency is reached. We have recently run new 3D SPH simulations of this process in \citet{2018arXiv180304356B} using the SNSPH code \citep{2006ApJ...643..292F}. These new simulations have allowed a wide exploration of the binary parameter space and have confirmed the results and the physical picture presented in \citet{2016ApJ...833..107B}. On the basis of these new simulations we have determined the values of the baryon load both for the hard X-ray flares and the soft X-ray flares. \begin{figure}[ht] \centering \includegraphics[width=1\hsize,clip]{massB_5deg_range} \caption{The SN ejecta mass enclosed within a cone of 5 degrees of semi-aperture angle and vertex centered on the SN, positioned at an angle $\theta$, measured counterclockwise, with respect to the line of sight (which passes through the $\nu$NS and the BH at the moment of its formation; see Conclusions). The binary parameters of this simulation are: the NS has an initial mass of $2.0~M_\odot$; the CO$_{\rm core}$, obtained from a progenitor with ZAMS mass $M_{\rm ZAMS}=30~M_\odot$, leads to a total ejecta mass of $7.94~M_\odot$; the orbital period is $P\approx 5$~min, i.e., a binary separation $a\approx 1.5\times 10^{10}$~cm. The right-side vertical axis gives, as an example, the corresponding value of the baryon load $B$ assuming a plasma energy of $E_{e^+e^-}=1\times10^{53}$~erg.
It is appropriate to mention that the above values of the baryon load are computed using an averaging procedure centered on the SN explosion, which produces larger values than the one centered on the BH, whose specific value of the baryon load is $B \sim 1.9 \times 10^{-3}$; see Fig.~\ref{fig:Carlo2}a.} \label{fig:model1} \end{figure} c) For the simulation of the hard X-ray flare we set a total energy of the plasma equal to that of the hard X-ray flare, i.e., $E_{\gamma}=3.28\times10^{52}$~erg, and a baryon load $B=79$, corresponding to a baryonic mass of $M_B=1.45~M_\odot$. We obtain a transparency radius $R_{ph}=4.26\times10^{11}$~cm, a Lorentz factor at transparency $\Gamma=2.86$ and an arrival time of the corresponding radiation in the cosmological rest frame $t_a=56.7$~s (see Fig.~\ref{fig:gamma_ray_flare}). This time is in agreement with the starting time of the hard X-ray flare in the source rest frame (see Sec.~\ref{sec2}). For the simulation of the soft X-ray flare we set the energy $E_X=4.39\times 10^{51}$~erg as the total energy of the plasma and a baryon load $B=207$, which corresponds to a baryonic mass of $M_B=0.51$~M$_\odot$. We obtain a transparency radius $R_{ph}=1.01\times10^{12}$~cm, a Lorentz gamma factor at transparency $\Gamma=1.15$ and an arrival time of the corresponding radiation in the cosmological rest frame $t_a=236.8$~s (see Fig.~\ref{fig:x_ray_flare}). This time is in agreement with the above time $t_p$ at which the soft X-ray flare peaks in the rest frame. \begin{figure}[ht] \centering \includegraphics[width=\hsize,clip]{gamma_ray_flare} \caption{Numerical simulation of the hard X-ray flare. We set a total energy of the plasma $E_{\gamma}=3.28\times10^{52}$~erg and a baryon load $B=79$, corresponding to a baryonic mass of $M_B=1.45$~M$_\odot$.
\textbf{Above:} Distribution of the velocity inside the SN ejecta at the two fixed values of the laboratory time $t_1$ (before the plasma reaches the external surface of the ejecta) and $t_2$ (the moment at which the plasma, after having crossed the entire SN ejecta, reaches the external surface). We plot the quantity $\Gamma\beta$, recalling that $\Gamma\beta \sim \beta$ when $\beta \ll 1$ and $\Gamma\beta \sim \Gamma$ when $\beta \sim 1$. \textbf{Below:} Corresponding distribution of the mass density of the SN ejecta in the laboratory frame $\rho_{lab}$. We obtain a transparency radius $R_{ph}=4.26\times10^{11}$~cm, a Lorentz factor at transparency $\Gamma=2.86$ and an arrival time of the corresponding radiation in the cosmological rest frame $t_a=56.7$~s.} \label{fig:gamma_ray_flare} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\hsize,clip]{x_ray_flare_2} \caption{Numerical simulation of the soft X-ray flare. We set a total energy of the plasma $E_X=4.39\times 10^{51}$~erg and a baryon load $B=207$, corresponding to a baryonic mass of $M_B=0.51~M_\odot$. The plotted quantities are the same as in Fig.~\ref{fig:gamma_ray_flare}.
We obtain a transparency radius $R_{ph}=1.01\times10^{12}$~cm, a Lorentz factor at transparency $\Gamma=1.15$ and an arrival time of the corresponding radiation in the cosmological rest frame $t_a=236.8$~s.} \label{fig:x_ray_flare} \end{figure} \begin{table*} \centering \small\addtolength{\tabcolsep}{-5pt} \begin{tabular}{lccccccc} \hline\hline &Name& Radius (cm) & $\Gamma$ & Baryon load &$t_{\rm start}$ (s)&Duration (s)& Spectrum\\ \hline & First spike (P-GRB) & $\sim10^{13}$ & $\sim 10^{2}-10^{3}$& $\sim 10^{-4}-10^{-2}$ & $\sim T_0$& $ \sim 1$& CPL+BB\\ UPE \Bigg\{ & First spike (Rest) & $\sim 10^{15}-10^{17}$ & $\sim 10^{2}-10^{3}$& $ \sim10^{-4}-10^{-2}$& $\sim T_0 +1$& $\sim 5$ & Band\\ &Second spike & $\sim 10^{15}-10^{17}$ & $\gtrsim 10^{3}$& $\lesssim 10^{-4}$&$\sim T_0 +15$&$\sim 5$ &Band\\ \hline &Hard X-ray flare& $\sim 10^{11}-10^{12}$ & $\lesssim 10$ & $\sim 10^{2}$ &$\sim T_0 +50$&$\sim 10^2$&PL+BB\\ &Soft X-ray flare& $\sim 10^{12}-10^{13}$ & $\lesssim 4$ & $\sim 10^{3}$&$\sim T_0 +10^2$&$\sim 150$&PL(+BB) \\ \hline & Late afterglow& $\gtrsim 10^{13}$ & $\lesssim 2$ & $-$&$\gtrsim T_0 +10^2$&$\gtrsim 10^6 $&PL \\ \hline & SN optical emission& $\sim 10^{15}$ & $\sim 1$ & $-$&$\sim T_0 +10^6$&$\gtrsim 10^6 $&PL \\ \hline &GeV emission & $-$ & $-$ & $-$&$\sim T_0 +1$&$\sim 10^4 $&PL \\ \hline \hline \end{tabular} \caption{{\textit{\textbf{Parameters of the sequence of astrophysical processes characterizing the BdHNe}}: The columns list, respectively, the name of each process, the radius at transparency, the Lorentz factor $\Gamma$, the baryon load, the starting time of the process, the duration and the best-fit spectral model. $T_0$ is the \textit{Fermi}-GBM trigger time.}} \label{Tab1} \end{table*} \section{Summary, Discussion and Conclusions}\label{sec6} \subsection{Summary} { It is by now clear that seven different subclasses of GRBs with different progenitors exist \citep{2016ApJ...832..136R}.
Each GRB subclass is itself composed of different episodes, each one characterized by specific observational data which make their firm identification possible \citep[see e.g.][and references therein]{2018ApJ...852...53R}. We evidence here how, within the BdHN subclass, a further differentiation follows from selecting special viewing angles. We have applied our recent treatment \citep{2018ApJ...852...53R} to the UPE phase and the hard X-ray flare, using as a prototype the specific case of GRB 151027A in view of the excellent available data.} { We recall three results: \begin{enumerate} \item We have confirmed the ultrarelativistic nature of the UPE, which appears to be composed of a double spike; see Figs.~\ref{fig1a}(a) and \ref{fig0a}(b). This double spike structure appears to be present also in other systems, such as GRB 140206A and GRB 160509A (Ruffini, et al., in preparation). From the analysis of the P-GRB of the first spike we have derived an ultrarelativistic Lorentz factor $\Gamma_0 = 503\pm 76$, a baryon load $B=(1.92\pm0.35)\times10^{-3}$, and a structure in the CBM with density $(7.46\pm1.2)$~cm$^{-3}$ extending to dimensions of $10^{16}$~cm; see Fig.~\ref{fig0a}(d). The second spike, of energy $E_{\rm iso,2} = (4.99\pm 0.60)\times 10^{51}$~erg, which follows by $9$~s in the cosmological rest frame the first spike, of energy $E_{\rm iso,1} = (7.26\pm 0.36)\times 10^{51}$~erg (see Fig.~\ref{fig1a}(b) and (c)), appears to be featureless. We are currently examining the possibility that the nature of these two spikes and their morphology be directly connected to the formation process of the BH. \item A double spike appears to occur also in the FPA phase (see Fig.~\ref{fig1b}(a)): the first component is the hard X-ray flare and the second is the soft X-ray flare. The energy of the hard X-ray flare is $E_{\gamma}=(3.28\pm0.13)\times10^{52}$~erg (Fig.~\ref{fig1b}) and the one of the soft X-ray flare is $E_X=(4.4 \pm 2.9)\times 10^{51}$~erg (Fig.~\ref{fig1c}).
We have analyzed both flares by our usual approach based on the hydrodynamical equations describing the interaction of the $e^+e^-$ plasma with the SN ejecta: see Fig.~\ref{fig:gamma_ray_flare} for the hard X-ray flare and Fig.~\ref{fig:x_ray_flare} for the soft X-ray flare. The baryon loads of the two flares are different: $B=79$ for the hard X-ray flare and $B=207$ for the soft X-ray flare. This is visualized in Fig.~\ref{fig:03b} as well as in our three-dimensional simulations; see the three snapshots shown in Fig.~\ref{fig:Carlo2}. Both the hard X-ray flare and the soft X-ray flare occur in the mildly relativistic regime already observed in \citet{2018ApJ...852...53R}, namely with a Lorentz factor at transparency of $\Gamma\sim 5$ for the hard X-ray flare and of $\Gamma\sim 2$ for the soft X-ray flare. \item We have studied the ETE associated with the hard X-ray flare: we have measured its expansion velocity using the relativistic treatment described in Sec.~\ref{sec4}, following the formula in Fig.~\ref{fig:funcV} \citep[see also][]{2018ApJ...852...53R}. We have identified the transition from a SN, with an initial computed velocity of $0.38~c$, to an HN, with a computed velocity of $0.98~c$; see Fig.~\ref{fig3} and Tab.~\ref{tab1}. These results are in good agreement with observations of both SNe and HNe \citep[see e.g.~Table 3 and Fig.~20 in][]{2015MNRAS.452.3869N}. \end{enumerate} } The above observational analysis, as already presented in \citet{Pisani2013,2016ApJ...833..159P}, sets the ensemble of data to which any viable model of GRBs has to conform. In the last thirty years the enormous number of high-quality data obtained, e.g., by Beppo-SAX, Swift, Agile and Fermi, further extended by specific optical, radio and ultrahigh-energy data, has offered the possibility of testing which models conform to these data. We have shown that the BdHN model can explain the above observational features.
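As a consistency check on the numbers recalled above, the baryonic masses of Sec.~\ref{sec5} follow from the quoted baryon loads via the definition $B = M_B c^2/E_{e^+e^-}$ used throughout this framework; a short numerical sketch (ours):

```python
C_CM_S = 2.99792458e10   # speed of light [cm/s]
M_SUN_G = 1.989e33       # solar mass [g]

def baryon_mass_msun(B, E_plasma_erg):
    """Baryonic mass implied by a baryon load B, using B = M_B c^2 / E_plasma."""
    return B * E_plasma_erg / C_CM_S**2 / M_SUN_G

# Hard X-ray flare: B = 79,  E = 3.28e52 erg  ->  M_B ~ 1.45 M_sun
# Soft X-ray flare: B = 207, E = 4.39e51 erg  ->  M_B ~ 0.51 M_sun
assert abs(baryon_mass_msun(79.0, 3.28e52) - 1.45) < 0.01
assert abs(baryon_mass_msun(207.0, 4.39e51) - 0.51) < 0.01
```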
\subsection{Discussion} { \begin{enumerate} \item Thanks to adopting the BdHN approach we have discovered the existence of four different processes: a double feature in the UPE phase, the hard X-ray flare, the soft X-ray flare and the ETE phase. Each one of these processes is generated by a different $e^+ e^-$ injection occurring in media with different baryon loads. By using the binary nature of the progenitor system in BdHNe, especially the presence of an incipient SN and a companion NS, together with an appropriate theoretical treatment and an ample program of numerical simulations \citep{2018arXiv180304356B}, we have been able to determine the nature of these processes. Clear observational predictions have followed, the major one being the coincidence of the numerical value of the velocity of expansion at the end of the ETE phase with the observed expansion velocity of the HN, confirmed in additional BdHNe and currently being addressed observationally in further cases. A clear temporal sequence in the occurrence of these processes, as well as a specific sequence in the values of the Lorentz gamma factors, has been established. \item For the first time, the rotation period of the binary system, of the order of 400 seconds, has been essential in order to untangle the sequence of events discovered and explained in this article, recognizing their acausal nature and their modulation by the rotation of the progenitor binary system. \item The above different processes, including the double spiky structure of the UPE phase, the hard and soft X-ray flares, and the ETE phase, are actually different appearances of the same physical process, the BH formation, as seen from different viewing angles due to the rotation of the SN ejecta in the binary system (see Fig.~\ref{fig:Carlo2}) and the consequent angular dependence of the baryon load (see Fig.~\ref{fig:model1}).
\end{enumerate} } \subsection{Conclusions} { \begin{enumerate} \item A clear prediction which will soon be submitted to scrutiny, following from our paper, is that, of all the BdHNe occurring with the line of sight in the orbital plane of the binary, only a fraction of approximately $10\%$ is actually detectable. They correspond to the sources whose ultrarelativistic emission lies within the allowed cone of $\sim 10^\circ$ of low baryon contamination (see Fig.~\ref{fig:cc} and Fig.~\ref{fig:model1}). They are the only ones able to trigger the gamma-ray instruments (e.g.~the Fermi/GBM or Swift/BAT detectors). The remaining $90$\% will not be detectable by the current satellites and will possibly need a new mission operating in the soft X-rays \citep[like, e.g., THESEUS, see][]{2017arXiv171004638A}. \item The $E_{\rm iso}$, traditionally defined under an underlying assumption of isotropy of the BH emission, has to be modified by considering an anisotropic emission process. A total energy $E_\mathrm{tot}$, summing the energies of the UPE, of the hard X-ray flare, of the ETE, and of the soft X-ray flare, has to be considered for sources seen in the equatorial plane. It is not surprising that the energy of the hard X-ray flare in GRB 151027A is larger than the one of the UPE, pointing to an anisotropic emission from the BH. \item When the inclination of the viewing angle is less than $60^\circ$ from the normal to the plane of the binary system, the GeV radiation becomes detectable and its energy, which has been related to the BH rotational energy, will need to be taken into account \citep{2018arXiv180305476R}. \end{enumerate} } \acknowledgments We acknowledge the referee comments which have significantly helped us in formulating a clearer, logically motivated and well balanced presentation of our results. \software{PLUTO \citep{PLUTO}, CHOMBO \citep{CHOMBO}, SNSPH \citep{2006ApJ...643..292F}}
\section{Introduction} Autonomous robot navigation has a broad spectrum of applications, ranging from search and rescue to transportation. Typical approaches to introduce autonomy rely on heuristics or require a predefined set of manually specified rules. Alternative solutions based on machine learning generally require a training set which remains fixed after the learning phase. In this paper, we explore alternatives where learning takes place in an online fashion, allowing for adaptability to new circumstances. In this context, self-supervised navigation can be highly advantageous, as it allows a robot to train a classifier in real time, during navigation, without relying on a human driver's knowledge about the environment. This, in turn, allows the robot to explore the environment more effectively without favoring a potential supervisor's bias. Therefore, we devise a self-supervised technique capable of navigating in an unknown environment through self-exploration. Vision-based autonomous robot navigation is prevalent in many domains such as environmental monitoring~\cite{lee2012vision}, search and rescue~\cite{giustimachine}, off-road driving~\cite{muller2005off}, unmanned aerial vehicle (UAV) maneuvering~\cite{ross2013learning}, and reconnaissance. Such a proliferation of vision-based applications comes as no surprise given the development of fast and accurate sensors. However, processing sensory data remains challenging, generally requiring feature engineering or large sets of manually labeled data. \begin{figure*}[t] \hspace{-0.5cm} \begin{subfigure}[t]{0.55\textwidth} \includegraphics[width=0.8\textwidth]{images/auto_nav_system_small.png} \vspace{-0.4cm} \centering \caption{An illustration of the navigation framework comprising four major modules: sensing, perception and control, data acquisition, and learning. The laser range finder is solely used to simulate collisions in order to avoid damage. Initially the robot executes a forward move to collect initial data.
This triggers the movement detection module, which subsequently triggers both the image capture and collision detection modules. This produces $\mathcal{D}^i$, where $\mathbf{x}^i$ represents an image, and $\mathbf{y}^i$ the result of an action, collided or not. Finally, $\mathcal{D}^i$ is fed to RA-DAE, which outputs $a^{i+1}$, which is sent to the action executor. The execution of the action triggers the movement module, forming a cycle.} \label{fig:auto-nav-system} \end{subfigure}% \hspace{10cm} \begin{subfigure}[t]{.42\textwidth} \vspace{-8.5cm} \centering \includegraphics[width=0.8\textwidth,angle=270,trim={3cm 3.5cm 7.9cm 14cm},clip]{images/RA-DAE_small.pdf} \vspace{-1.8cm} \caption{The structure of a two-layer Reinforced Adaptive Denoising Autoencoder (RA-DAE) before (left) and after adaptation (right). The model uses multiple single-node softmax layers (top layer), one for each action. The blue lines denote already existing connections and the red lines denote newly added connections. New connections are introduced while preserving the fully connected nature. Note that the addition or removal of nodes occurs in a layer-wise fashion (i.e. not simultaneously).} \label{fig:radae} \vspace{0.5cm} \end{subfigure} \vspace{-0.2cm} \caption{Block diagram and network structure of RA-DAE.} \vspace{-0.5cm} \label{fig:auto_nav_and_radae} \end{figure*} We identify several drawbacks of existing approaches that motivate our method. For example, many of them rely on stereo cameras or laser range finders, which can be expensive and computationally prohibitive, e.g. the computational complexity of stereo vision systems~\cite{guzel2011vision}. Recent end-to-end learning approaches are based on deep Convolutional Neural Networks (CNNs)~\cite{muller2005off,scoffier2010fully} with a \emph{fixed} structure, leaving many parameters (e.g. the number of neurons and layers) to be hand-tuned in order to achieve optimal performance.
Furthermore, human supervision is required to provide large training sets, while training is performed offline. This has several disadvantages, such as adding significant human bias to a specified task and sensitivity to changes in the environment, e.g. lighting and weather conditions. In this paper, we propose a novel approach for a robot to navigate in an unknown environment. Our approach has several appealing characteristics, as it (i) relies on minimal sensory data; a single monocular wide-angle camera, (ii) has an end-to-end, real-time and self-supervised learning process allowing the robot to train and take decisions online (by mapping camera images to actions) without the need for pre-training or pre-labeled data, and (iii) provides a learning algorithm that adapts the structure of a deep network (i.e. the number of neurons) on demand, thus increasing the complexity of the model only as required. For this we devise a Reinforced Adaptive Denoising Autoencoder (RA-DAE) that automatically learns the number of neurons required in the network based on current performance in an online fashion. The interaction of the robot with the environment, i.e. whether or not it has collided with obstacles, is used to train the system in a self-supervised manner. The network is initialized with a small number of neurons, growing progressively as more data becomes available and the complexity of the navigation task increases. \section{Related Work} A plethora of approaches have been proposed for vision-based navigation over the last decades. Much of the previous work heavily relied on feature engineering. For example, scale-invariant feature transform (SIFT)~\cite{lee2012vision,farag2004detection}, optical flow~\cite{lookingbill2007reverse,hyslop2010autonomous} and voxel-based~\cite{bagnelllearning,wellington2004online} autonomous navigation techniques can be found in the literature.
However, the features learnt by deep learning algorithms have been shown to perform better than hand-crafted features. Recently, deep learning techniques have been adopted in a multitude of robotics applications. Deep learning techniques~\cite{lecun2015deep} are renowned for their ability to jointly perform feature extraction and classification using raw data. Furthermore, deep networks have demonstrated unprecedented performance in certain cognitive tasks such as traffic sign recognition~\cite{cirecsan2012multi} and pedestrian detection~\cite{sermanet2013pedestrian}. Inspired by the state-of-the-art performance of deep neural networks, CNNs in particular have been leveraged successfully for visual navigation of autonomous robots~\cite{muller2005off,giustimachine}. However, these approaches still mainly rely on human supervision or high-quality sensory information produced by equipment such as stereo cameras or Light Detection and Ranging (LIDAR) sensors, posing limitations on adapting such methods for real-time navigation. Furthermore, the idea of adapting the structure of neural networks has been around for a long time. One popular approach was to use a genetic algorithm to evolve the structure of the network, guided by a fitness function~\cite{stanley2002evolving}. This method has been successfully used in various robotics applications~\cite{de2009method,stanley2004competitive}. However, for high-dimensional raw sensory inputs (i.e. images) and complex multi-layer networks, it becomes computationally infeasible due to the large number of possible combinations. Finally, \cite{courbon2009autonomous} proposes navigation techniques that rely only on cheap and low-power devices such as a single monocular camera. Despite their performance, these techniques require a human to guide the robot through the environment during training. This could be costly for unknown or human-inaccessible terrains.
Moreover, \cite{courbon2009autonomous} relies on persisted visual memory (images) for navigation, which does not scale well for navigating large spaces. \section{Overview} Figure~\ref{fig:auto-nav-system} depicts the high-level architecture of our framework. Our method is an end-to-end learning process which converts images captured by a monocular camera into navigation commands for the robot. The approach comprises several vital components that together learn in real time. Our approach consists of the following steps. During the execution of each action, tuples of images of what the robot perceives and labels are collected. Each label is an integer indicating whether or not the robot has collided during the execution of an action, i.e. 0 or 1. We define the actions of the robot as discrete movements. The robot can turn left ($L$), go straight ($S$) or turn right ($R$). Each of these movements moves the robot by a fixed $\delta$ (step-size) distance in the corresponding direction. Then several pre-processing operations, such as normalization, are performed on the collected images. Next the accumulated collection of tuples of images and labels is fed to the learning algorithm. The learning model, trained on the received data, converts the images into actions (i.e. movements). This procedure is repeated for each action and its associated tuples of images and labels. As the data being collected grows, we need an online mechanism to quickly adapt to new information. Reinforced Adaptive Denoising Autoencoder (RA-DAE)~\cite{ganegedara2016online} is a deep learning technique that uses reinforcement learning to dynamically adapt the structure of a deep network as the data distribution changes. Such adaptations include adding neurons, merging neurons, and fine-tuning. Figure~\ref{fig:radae} illustrates the resulting adapted network after adding neurons.
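To make the data-acquisition step concrete, the following is a minimal sketch of the self-supervised labelling of the images collected during one action; the names (\texttt{ACTIONS}, \texttt{label\_episode}) are illustrative and not taken from the actual implementation.

```python
# Hypothetical sketch (not the paper's code): discrete actions and
# self-supervised labelling of the images collected during one action.
ACTIONS = ("L", "S", "R")   # turn left, go straight, turn right
STEP_SIZE = 1.0             # delta: fixed distance moved per action

def label_episode(images, collided):
    """Pair every captured image with a binary label:
    0 if the executed action ended in a collision, 1 otherwise."""
    y = 0 if collided else 1
    return [(x, y) for x in images]

# A collision-free episode: every image receives the label 1.
dataset = label_episode(["img_0", "img_1", "img_2"], collided=False)
```

Note that every image of an episode shares the same label, since the label reflects only the outcome of the executed action.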
RA-DAE leverages Q-Learning~\cite{sutton1998reinforcement}, a reinforcement learning (RL) technique, to find the best adaptation settings based on the errors made during the training phase. The main motivation for our approach is that vanilla deep network techniques do not possess the ability to adapt their structure to compensate for changes in data distribution. Such changes in data distribution, known as \emph{covariate shift}, can cause \emph{catastrophic forgetting} in the networks. RA-DAE not only has the ability to adapt the structure, but also strives towards finding the best adaptation strategy (i.e. add neurons, remove neurons or no change) for the perceived changes in the data distribution. With such capabilities, RA-DAE creates an opportunity for deep networks to be used in robotics applications by cutting down both training and prediction time. This is enabled by RA-DAE's ability to start with a small neural network and grow the network in small incremental steps as needed. \section{Background} This section provides a brief description of stacked denoising autoencoders as the basic model used by RA-DAE. We begin by defining notation. \noindent \textbf{Notation:} Let us assume we have a data stream $\mathcal{D}=\{(\mathbf{x}^1,\mathbf{y}^1),(\mathbf{x}^2,\mathbf{y}^2),(\mathbf{x}^3,\mathbf{y}^3),\ldots\}$ where $\mathbf{x}^i=\{x^{i,1},x^{i,2},\ldots,x^{i,d}\}$, $d$ is the dimensionality of a single input and $\mathbf{y}^i \in \{0,1\}^K$ such that if $y^{i,j}$ are the elements of $\mathbf{y}^i$ then $\sum_j{y^{i,j}}=1$.
The $n^{th}$ batch of data in $\mathcal{D}$ is written as $\mathcal{D}^n=\{\{\mathbf{x}^{n-p},\mathbf{y}^{n-p}\},\ldots,\{\mathbf{x}^n,\mathbf{y}^n\}\}$ where $p$ is the batch size.\\ \vspace{-0.2cm} \subsection{Autoencoder} The autoencoder aims to map input data (dimensionality $d$) into a latent feature space (dimensionality $H$) with a series of nonlinear transformations $h_{W,b}(\mathbf{x})=sig(W\mathbf{x}+b)$, and reconstruct the original input with $\mathbf{\hat{x}}=sig(W^T h_{W,b}(\mathbf{x})+b')$ from the latent feature space, where $sig(s) = \frac{1}{1+\exp(-s)}$ and $W \in {\rm I\!R}^{H\times d}$, $b\in{\rm I\!R}^{H\times 1}$ and $b'\in{\rm I\!R}^{d\times 1}$ are the parameters of the autoencoder. This is achieved by optimizing the parameters of the network with respect to the generative (i.e. reconstruction) error $L_{gen}(\mathbf{x}^i,\mathbf{\hat{x}}^i)=-\sum_{j=1}^{d}\bigl(x^{i,j}\log(\hat{x}^{i,j}) + (1-x^{i,j})\log(1-\hat{x}^{i,j})\bigr)$ $\forall \mathbf{x}^i$ where $\mathbf{x}^i$ is the input and $\mathbf{\hat{x}}^i$ is the reconstructed input. Notice that the learning in an autoencoder is unsupervised.\\ \vspace{-0.2cm} \subsection{Stacked Autoencoders} By stacking $J(>1)$ autoencoders vertically and topping them with a classification layer, e.g. softmax, the construction can be leveraged to solve a supervised classification task. Such networks are called stacked autoencoders (SAE)~\cite{vincent2010stacked}. In the training process of an SAE, the predicted label $\mathbf{\hat{y}} = \text{softmax}(W^{out}h_{W,b}^J(\mathbf{x})+b^{out})$ is calculated for input $\mathbf{x}$, where $h_{W,b}^J(\mathbf{x})$ is the output of the $J^{th}$ autoencoder and softmax($a_k$) = $\frac{\exp(a_k)}{\sum_{k'}\exp(a_{k'})}$, so that $\mathbf{\hat{y}}\in[0,1]^K$.
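As an illustration of the autoencoder mappings above, the following NumPy sketch computes the encoding, the reconstruction and the cross-entropy reconstruction error for one input; the weights are untrained random placeholders, not parameters from the paper.

```python
import numpy as np

# Minimal sketch of the single-autoencoder mappings defined above;
# W, b, b' are random placeholders standing in for trained parameters.
rng = np.random.default_rng(0)
d, H = 8, 4                               # input and latent dimensionality
W = rng.standard_normal((H, d)) * 0.1
b = np.zeros((H, 1))
b_prime = np.zeros((d, 1))

def sig(s):
    return 1.0 / (1.0 + np.exp(-s))

def encode(x):               # h_{W,b}(x) = sig(Wx + b)
    return sig(W @ x + b)

def decode(h):               # x_hat = sig(W^T h + b')
    return sig(W.T @ h + b_prime)

x = rng.uniform(0.0, 1.0, size=(d, 1))
x_hat = decode(encode(x))

# Cross-entropy reconstruction error L_gen (to be minimized).
L_gen = -np.sum(x * np.log(x_hat) + (1 - x) * np.log(1 - x_hat))
```

In training, gradients of `L_gen` with respect to `W`, `b` and `b_prime` would drive the updates; that step is omitted here.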
Then all the parameters ($W^1,\ldots,W^J$,$b^1,\ldots,b^J$,$b'^1,\ldots,b'^J$,$W^{out}$ and $b^{out}$) are optimized with respect to two error measures; the generative error $L_{gen}(\mathbf{x},\mathbf{\hat{x}})$ and the discriminative (i.e. classification) error $L_{disc}(\mathbf{y},\mathbf{\hat{y}})$ $\forall \{\mathbf{x},\mathbf{y}\} \in \mathcal{D}$, where $L_{disc}(\mathbf{y},\mathbf{\hat{y}}) = -\sum_{j=1}^K\bigl(y^j\log\hat{y}^j + (1-y^j)\log(1-\hat{y}^j)\bigr)$ and $\{W^i,b^i,b'^i\}$ are the parameters of the $i^{th}$ autoencoder. \newline \vspace{-0.5cm} \subsection{Stacked Denoising Autoencoders} Stacked denoising autoencoders~\cite{vincent2010stacked} are an improvement over SAEs that attempt to reconstruct inputs from corrupted versions of the inputs, leading to more robust features. A common way of achieving this is to mask the input with a binomial distribution with probability $p$, where $1-p$ is the corruption level. This procedure improves the generalization properties of stacked autoencoders by acting as a form of regularization. \section{RA-DAE} RA-DAE employs a similar approach to the SDAE to learn the network from training data. However, RA-DAE adopts a novel approach as it leverages reinforcement learning to make dynamic adaptations to the structure of the network as the observed data distribution changes. The problem of adapting the network over time is formulated as a Markov Decision Process (MDP) with the state space ($S$), action space ($A$) and reward function ($r^n$) defined as follows. \subsubsection{State Space} The state space is defined as \begin{align} \label{eq:cont-states} S=\{\mathcal{\tilde{L}}_{g}^{n}(m), \mathcal{\tilde{L}}_{c}^n(m), \nu_1^n \} \in {\rm I\!R}^3 \end{align} \noindent where the moving exponential average ($\mathcal{\tilde{L}}$) is defined as $\mathcal{\tilde{L}}^n(m) = \alpha \text{L}^n + (1-\alpha) \mathcal{\tilde{L}}^{n-1}(m-1)$, $n \geq m$, and $m$ is a predefined constant.
$\mathcal{\tilde{L}}_{g}^n$ and $\mathcal{\tilde{L}}_{c}^n$ denote $\mathcal{\tilde{L}}^n$ w.r.t. L$_{g}^n$ and L$_{c}^n$, where L$_g^n$ and L$_c^n$ are the average generative and discriminative errors for the $n^{th}$ batch of data, respectively, and $\nu_l^n = \frac{\text{Node Count}_{current}}{\text{Node Count}_{initial}}$ for the $l^{th}$ hidden layer. $\mathcal{\tilde{L}}$ is defined in terms of recursive decay to respond rapidly to immediate changes. \subsubsection{Action Space} The action space is defined as \begin{equation} \label{eq:actions} A = \{Pool, Increment(\Delta), Merge(\Delta)\}, \end{equation} where $\Delta$ is a predefined constant representing a number of nodes. We define two pools of data, B$_{ft}$ and B$_r$, to be utilized by the actions in Equation~\ref{eq:actions}. B$_{ft}$ is composed of the $\tau$ most recent incorrectly classified batches of data, as detailed in Equation~\ref{eq:b_ft}. B$_r$ contains the $\tau$ most recently observed batches, where $\tau$ is a predefined constant. Increment($\Delta$) adds $\Delta$ new nodes and greedily initializes them using pool B$_r$. The Merge($\Delta$) operation merges the $\Delta$ closest pairs of nodes (2$\Delta$ nodes) into $\Delta$ nodes. The Pool operation trains the network with B$_{ft}$ given its previous parametrization. \subsubsection{Reward Function} The reward function is defined as follows, \begin{equation} \label{eq:rn_pean} r^n= \begin{cases} g^n - |U - \nu_1^n| & \text{if } \nu_1^n < V_1 \text{ or } \nu_1^n > V_2 \\ g^n & \text{otherwise} \end{cases}, \end{equation} \noindent where $g^n =(1-(\text{L}_{c}^n-\text{L}_{c}^{n-1}))\times(1-\text{L}_{c}^n)$ and $U, V_1$ and $V_2$ are predefined thresholds penalizing the network if it grows too large or too small. With the definitions of $S$, $A$ and $r^n$, Q-Learning~\cite{sutton1998reinforcement} is employed to learn a desirable policy, i.e. a function that defines which action to take in a given state, to control the structural changes.
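A minimal tabular sketch of the reward above and of a standard Q-Learning update for scoring the structural actions follows; the threshold values, learning rate and discount rate are illustrative placeholders, and states are assumed to be discretized (hashable).

```python
# Illustrative sketch: the reward r^n and a tabular Q-value update.
# U, V1, V2, alpha and gamma are predefined constants in the paper;
# the values below are placeholders.
U, V1, V2 = 1.0, 0.5, 2.0
alpha, gamma = 0.5, 0.9
Q = {}   # maps (state, action) -> estimated utility

def reward(L_c_n, L_c_prev, nu_1):
    g = (1.0 - (L_c_n - L_c_prev)) * (1.0 - L_c_n)
    if nu_1 < V1 or nu_1 > V2:       # penalize too-small / too-large networks
        return g - abs(U - nu_1)
    return g

def q_update(s_prev, a_prev, r, s_now, actions):
    best_next = max(Q.get((s_now, a), 0.0) for a in actions)
    q = r + gamma * best_next
    old = Q.get((s_prev, a_prev), 0.0)
    Q[(s_prev, a_prev)] = (1.0 - alpha) * old + alpha * q

structural_actions = ("Pool", "Increment", "Merge")
r = reward(L_c_n=0.2, L_c_prev=0.3, nu_1=1.0)
q_update("s0", "Increment", r, "s1", structural_actions)
```

A decreasing discriminative error yields a positive reward, which raises the Q-value of the structural action that was just taken.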
Q-Learning is a reinforcement learning technique that learns policies without relying on a deterministic model of the environment. This is a desirable property, as the environment of our MDP is complex and only partially observable. The desired policy is learned by updating a utility function $Q(s,a)$ which quantifies the reward for executing action $a$ in state $s$. In RA-DAE, Q-Learning is used in the following way. For the $n^{th}$ iteration, with data batch $\mathcal{D}^n$, \begin{enumerate} \item Until adequate samples are collected, i.e. $n \leq \eta_1$, train the network with B$_r$. \item With adequate samples collected, i.e. $n>\eta_1$, start calculating Q-values for each state-action pair observed $\{s^n,a^n\}$, where $s^n \in S$ and $a^n \in A$. \item When $\eta_1 < n \leq \eta_2$, uniformly perform actions from $A=\{$\emph{Increment}, \emph{Merge}, \emph{Pool}$\}$ to develop a decent value estimate for all actions in $A$. \item With an accurate estimation of $Q$, i.e. $n>\eta_2$, the action $a'$ is selected by $a'=\argmax _{a'}(Q(s^n,a'))$ with a controlled amount of exploration ($\epsilon$-greedy). \item Execute action $a'\in A$, train the network with $\mathcal{D}^n$ and finally calculate the new state, $s^{n+1}$, and the reward $r^{n}$. \item Update the value (utility) $Q(s,a)$ as $Q^{(t+1)}(s^{n-1},a^{n-1}) = (1-\alpha) Q^{t}(s^{n-1},a^{n-1}) + \alpha \times q$, where $q=r^n + \gamma \times \max _{a'}(Q^{t}(s^n,a'))$, and $\eta_1$, $\eta_2$, the learning rate $\alpha$, and the discount rate $\gamma$ are predefined constants. \end{enumerate} \section{Self-Supervised Navigation} The objective of this paper is to introduce a real-time self-supervised navigation mechanism that relies only on vision. We use SDAEs to learn the optimal navigation action given the current perception of the robot. Since the learning is performed in real time, it is desirable to explore model-complexity/performance trade-offs to make the learning efficient.
This is achieved by using an adaptive variant of SDAEs known as RA-DAEs. Ideally, RA-DAE should increase the complexity of the model as new parts of an environment are being explored, and either reduce it or keep it constant when previously seen parts of the environment are encountered. In the following we describe the components of our method for self-supervised navigation using RA-DAE and how they work together in more detail. As the method uses a reinforcement learning framework, we consider each movement of the robot to be an episode denoted by $E^i$ with $i = 0, \dots, N$, where $N$ is the number of episodes in a single experiment. The action taken in episode $i$ is denoted by $a^i \in A$, where $A=\{1,\ldots,K\}$ denotes the set of $K$ discrete actions available, each of which is linked to its own softmax layer. During the execution of action $a^{i+1}$ in episode $E^i$ the robot collects a set of images and self-supervised labels. This forms the data set $\mathcal{D}^i=\{(\mathbf{x}^i_1,y^i_1),(\mathbf{x}^i_2,y^i_2),(\mathbf{x}^i_3,y^i_3),\ldots\}$ where $\mathbf{x}^i_j$ represents the pixels in a single image with the associated label $y^i_j \in \{0, 1\}$ denoting if a collision occurred during the execution of action $a^{i+1}$. Using this data we train RA-DAE$_{a^{i+1}}$ by combining the softmax layer for action $a^{i+1}$ and the shared hidden layers, as shown in Figure~\ref{fig:radae}. When deciding which action $a^\prime$ to execute next, we query the RA-DAE$_{a^{i+1}}$ model to obtain the probability of executing action $a^\prime$, i.e. $P_{a^{i+1}}(a^\prime) = P(a^{i+1}=a^\prime \mid \mathcal D^i, \theta_{a^{i+1}})$, where $\theta_{a^{i+1}}$ are the latent variables, i.e. weights, of RA-DAE$_{a^{i+1}}$. With this we can obtain the probability of choosing each action as $\mathbf b^{i+1} = \{P_{a^{i+1}}(a^\prime)\}\ \forall a^\prime \in A$.
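A sketch of querying the per-action models and applying the thresholded selection rule of the framework (Equation~\ref{eq:action_selection}) follows; the raw scores, the sigmoid stand-in for a single-node output layer, and the threshold values are illustrative assumptions rather than the actual implementation.

```python
import math
import random

# Hypothetical sketch: per-action safety probabilities b^{i+1} and the
# thresholded action selection (MU1, MU2 stand for the predefined
# constants mu_1, mu_2).
MU1, MU2 = 0.45, 0.95
ACTIONS = ("L", "S", "R")

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def action_probabilities(scores):
    """scores: raw per-action model outputs standing in for RA-DAE_a."""
    return {a: sigmoid(scores[a]) for a in ACTIONS}

def select_action(probs):
    """Pick among actions whose probability lies in [MU1, MU2];
    fall back to a random action when none qualifies."""
    admissible = {a: p for a, p in probs.items() if MU1 <= p <= MU2}
    if not admissible:
        return random.choice(ACTIONS)
    return min(admissible, key=admissible.get)

b = action_probabilities({"L": -1.0, "S": 2.0, "R": 0.5})
a_next = select_action(b)
```

Here the argmin over admissible actions encourages exploration of less-confident directions, while the random fallback handles the over- and under-confident cases.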
The training and querying parts are put together into an end-to-end process, as illustrated in Figure~\ref{fig:auto-nav-system}, which shows the different components of our framework. Initially the system executes action $a^0 = S$, i.e. go straight for $\delta$ meters. Then the next action is selected by computing the probability of each action $a^\prime$ as $\{P_{a^{i+1}}(a^\prime)\}\ \forall a^\prime \in A$ and evaluating the following action selection function: \vspace{-.1cm} \begin{equation} \label{eq:action_selection} a^{i+1}=\begin{cases} \text{random } & \text{if } P_{a^{i+1}}(\hat{a})<\mu_1 \text{ or }\\ & P_{a^{i+1}}(\hat{a})>\mu_2\ \forall \hat{a} \in A\\ \argmin_{\hat{a}}(A^\prime) & \text{otherwise} \end{cases}, \vspace{-.1cm} \end{equation} \noindent where $A^\prime = \{a^\prime \in A : \mu_1 \leq P_{a^{i+1}}(a^\prime) \leq \mu_2\}$, with $\mu_1$ and $\mu_2$ as predefined constants. The action $a^{i+1}$ selected in this manner is then executed in episode $E^{i+1}$, which yields $\mathcal D^i$. RA-DAE$_{a^{i+1}}$ is then trained on $\mathcal{D}^{i}$, which contains the observations the robot made while executing $a^{i+1}$ and the labels $\mathbf y^i$ attached to those, i.e. $y^i_j = 0 \quad \forall y^{i}_j \in \mathbf y^{i}$ in case of a collision. As the labels are obtained in a self-supervised manner by the robot, this procedure allows it to improve the models in a self-supervised way. This process of picking the next action and improving the model based on the collected observations for that action is repeated until termination; an overview of the algorithm is given in Algorithm~\ref{algo:navigation}. The algorithm starts by executing its initial action, i.e. move forward. While the robot moves, images are stored. Once the motion terminates, the algorithm checks whether or not a collision occurred.
If the robot collided, it reverses back to the last safe position and trains the RA-DAE model of the executed action with the stored data. The same happens when no collision was detected, with the difference that the robot stays in its current position and saves it as the last known safe position. Once this is done, the next action to execute is selected by evaluating the RA-DAE models for each action. The selected action is then executed. \begin{algorithm}[bt] \caption{Navigation algorithm} \label{algo:navigation} \begin{algorithmic} \Procedure{Navigate()}{} \State \textbf{define} : lastSafePos - Last non-collided position \State $i=0$ \State lastSafePos = current robot position \State Execute action $a^i = S$ \While {notTerminated} \While {moving} \State Accumulate $\mathbf{x^i_j} \hspace{0.1cm}\text{where}\hspace{0.1cm} j=1,2,\ldots$ \State Check collision \EndWhile \If {$i>0$ and collision} \State Reverse to lastSafePos \State $\mathcal{D}^{i}=\{(\mathbf{x}^{i}_j,y^{i}_j)\} \hspace{0.1cm}\text{where}\hspace{0.1cm} y^{i}_j=0 \hspace{0.1cm} \forall j$ \State Train RA-DAE$_{a^{i+1}}$ with $\mathcal{D}^{i}$ \EndIf \If {$i>0$ and not collision} \State $\mathcal{D}^{i}=\{(\mathbf{x}^{i}_j,y^{i}_j)\} \hspace{0.1cm}\text{where}\hspace{0.1cm} y^{i}_j=1 \hspace{0.1cm} \forall j$ \State Train RA-DAE$_{a^{i+1}}$ with $\mathcal{D}^{i}$ \State lastSafePos = current robot position \EndIf \For {$\forall a \in A$} \State Calculate $P_{a^{i+1}}(a)$ with RA-DAE$_a$ \EndFor \State $i = i+1$ \State Calculate and Execute $a^{i+1}$ (Equation~\ref{eq:action_selection}) \EndWhile \EndProcedure \end{algorithmic} \end{algorithm} Finally, we discuss the two key modifications introduced to RA-DAE to make the algorithm more applicable to navigation tasks. First, we use $K$ single-node softmax layers for classification, i.e. one layer for each action (Figure~\ref{fig:radae}).
In contrast to the alternative of a single softmax layer with $K$ nodes, our approach allows multiple actions to be valid for the same data by imposing more independence between the actions. Second, RA-DAE uses two pools of data: $B_{r}$ to train the newly added neurons and $B_{ft}$ to fine-tune the whole network; the latter was modified as follows: \begin{equation} \label{eq:b_ft} B_{ft} = \begin{cases} \mathcal D^i \cup B_{ft} & \text{if } y^{i}_j=1\ \forall y^{i}_j \in \mathcal D^i \land y^{i-1}_j = 0\ \forall y^{i-1}_j \in \mathcal D^{i-1} \\ \mathcal D^i \cup B_{ft} & \text{if } y^{i}_j=0\ \forall y^{i}_j \in \mathcal{D}^{i} \\ B_{ft} \setminus \mathcal D^{i^\prime},\ i^\prime = \argmin_{i^\prime}\{i^\prime : \mathcal D^{i^\prime} \in B_{ft}\} & \text{if } |B_{ft}| > \tau \\ B_{ft} & \text{otherwise} \end{cases} \end{equation} The argument behind the modifications is as follows. As $B_{ft}$ is employed to fine-tune the whole network, we fill $B_{ft}$ with the instances our algorithm misclassified; that is, $B_{ft}$ collects batches depicting either a wrongly executed action or the corresponding correct one. \section{Experimental Results} \subsection{Overview and Setup} \label{sec_overview} Several experiments were conducted to assess the performance of our approach. The experiments were done both in simulation and using a real robot. We used Morse\footnote{https://www.openrobots.org/wiki/morse/} as the simulation framework and an indoor environment already available in the framework. An office and an outdoor area were used as the real-world environments. Our robot (Figure~\ref{fig:env}a) is equipped with a Firefly MV camera producing 640x480 RGB images at 30Hz, a 40Hz Hokuyo laser mounted at the front of the robot and an onboard Intel i7-4500U 1.80GHz. The laser scanner is used to detect imminent collisions and avoid damaging the environment, essentially acting as a bump sensor and not providing any range information.
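Looking back at the pool update in Equation~\ref{eq:b_ft}, its logic can be sketched as follows; batch bookkeeping is simplified to (index, labels) pairs and the capacity value is a placeholder, so this is an illustrative reading of the equation rather than the actual implementation.

```python
# Illustrative sketch of the fine-tuning pool update (Equation b_ft):
# a batch joins B_ft when it is entirely collision-free right after an
# all-collision batch, or entirely collisions; the oldest batch is
# evicted once the pool exceeds tau. TAU here is a placeholder value.
TAU = 3

def update_pool(pool, i, labels, prev_labels):
    safe_after_collision = (all(y == 1 for y in labels)
                            and prev_labels is not None
                            and all(y == 0 for y in prev_labels))
    all_collided = all(y == 0 for y in labels)
    if safe_after_collision or all_collided:
        pool.append((i, labels))
    if len(pool) > TAU:
        pool.pop(0)          # drop the batch with the smallest index
    return pool

pool = update_pool([], 0, [0, 0, 0], None)          # collided batch joins
pool = update_pool(pool, 1, [1, 1, 1], [0, 0, 0])   # safe-after-collision joins
```

Batches that were classified correctly and uneventfully leave the pool unchanged, so fine-tuning concentrates on the informative transitions.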
The approach (RA-DAE) was tested against a standard Stacked Denoising Autoencoder (SDAE) and a Logistic Regression classifier (LR). For fairness, we introduced multiple single-node softmax layers (i.e. one layer per action) for both SDAE and LR, and a pooling step for SDAE similar to RA-DAE~\cite{ganegedara2016online}, which trains the model on previously seen data. \begin{figure}[t] \hspace{.5cm} \begin{subfigure}{.22\textwidth} \includegraphics[width=0.8\linewidth]{images/wombot_resized.png} \centering \caption{The robot} \label{fig:wombot} \end{subfigure} \hspace{-.5cm} \begin{subfigure}{.22\textwidth} \centering \includegraphics[width=.8\linewidth]{images/sim_env_resized.png} \caption{Simulation} \label{fig:sim1} \end{subfigure}% \hspace{.5cm} \begin{subfigure}{.22\textwidth} \centering \includegraphics[width=.8\linewidth]{images/loc_office_resized.jpg} \caption{Office} \label{fig:loc_office} \end{subfigure} \hspace{-.5cm} \begin{subfigure}{.22\textwidth} \includegraphics[width=.8\linewidth]{images/loc_outdoor_resized.jpg} \centering \caption{Outdoor} \label{fig:loc_outdoor} \end{subfigure} \vspace{-0.3cm} \caption{Environments and the robot.} \vspace{-0.5cm} \label{fig:env} \end{figure} The following settings were used for all the experiments. The total number of episodes was $N=500$ for the simulation and $N=400$ for the real-world experiments. Simulation experiments were run on an NVIDIA Tesla K40c, while the real-world experiments used only the onboard computer of the robot. Theano~\cite{bastien2012theano} was used for the implementations. The parameter $\delta$ (distance traveled before taking an action) was set to $1m$. $\mu_1$ and $\mu_2$ for the action selection algorithm were set to 0.45 and 0.95, respectively, for all algorithms. For all experiments we used a batch size of 5. A small batch size was important as the data was collected in real time.
The corruption level (0.15), activation function ({\em sigmoid}) and the learning rates for RA-DAE (0.01), SDAE (0.05) and LR (0.001) were chosen with a coarse grid search. Different learning rates were required as the structural complexity differed between algorithms. For example, SDAE failed to perform with low learning rates due to the complexity of the network (i.e. the large number of weights). RA-DAE and SDAE were initialized with three layers having 64, 48 and 32 neurons, and 256, 196 and 128 neurons, respectively. For RA-DAE, $m$ and $\tau$ were set to 15 and 10000, respectively, as a compromise between memory requirements and performance. $\eta_1$ and $\eta_2$ were set to 5 and 30 in order to provide adequate time for the Q-Learning algorithm to explore the action space before predicting actions based on the value function. $\Delta$ was set to 5 to achieve a consistent and small growth rate of the network over time, given the limited amount of data available. Finally, no regularization was employed except for denoising. We tested \emph{dropout}~\cite{srivastava2014dropout}; however, dropout failed in all the experiments, as such stochasticity disrupts the incremental nature of RA-DAE. \begin{table}[t] \caption{Percentage of collisions and time consumption w.r.t. the number of hidden layers of RA-DAE. The two tables denote the results obtained for two distinct starting locations in the simulated map, Fig~\ref{fig:env}b. $L_{NW}$ and $L_{W}$ are the average percentage of collisions and the average of the false-positive probabilities of the collisions in the last 250 episodes. The time consumption denotes the average training and prediction time per episode.
It can be seen that there is a clear advantage in increasing the number of hidden layers.} \label{tbl:hidden_layers_1} \centering \begin{tabular}{|c|c|c|c|} \hline Hidden & \multicolumn{2}{|c|}{Average collision percentage} & Training\\\cline{2-3} Layers & $L_{NW}$ & $L_{W}$ & Time (s)\\ \hline 1 & 27.6$\pm$6.38\%& 16.78$\pm$4.44\% & 0.307\\ 3 & \textbf{21.6$\pm$5.64\%} & \textbf{15.09$\pm$3.44\%} & 0.497\\ \hline \end{tabular} \vspace{.1cm} \begin{tabular}{|c|c|c|c|} \hline Hidden & \multicolumn{2}{|c|}{Average collision percentage} & Training\\\cline{2-3} Layers & $L_{NW}$ & $L_{W}$ & Time (s)\\ \hline 1 & 31$\pm$11.40\%& 17.11$\pm$4.91\% & 0.326\\ 3 & \textbf{26.6$\pm$10.54\%} & \textbf{15.64$\pm$4.61\%} & 0.441\\ \hline \end{tabular} \vspace{-0.5cm} \end{table} \subsection{Preprocessing} Following the data acquisition, several low-cost pre-processing steps are performed on the captured images to make the learning more effective. Since the frame rate of the camera (30Hz) and the frequency of the laser (40Hz) are too high and mismatched, both are downsampled to 10Hz. This enables us to have a label corresponding to each captured image. The images are then preprocessed as follows. First the images are resized to 128x96 and cropped vertically (19 pixels from each side) to produce 128x58 images. Next the images are converted from RGB to \emph{grayscale} and normalized to the range $[0,1]$. Downsampling the images is important to make the computations feasible in real time. Finally, the mean is subtracted from the images to produce zero-mean inputs. \subsection{Performance Metric} We define the performance metrics as functions of the number of collisions that occurred. A suitable definition of accuracy can be difficult to discern for navigation tasks.
For example, a standard error measure such as squared loss does not suit our approach, as the labels merely reflect the correctness of the action taken and carry no information about the validity of other actions at a given time. Therefore, we calculate the non-weighted ($L_{NW}$) and weighted ($L_W$) counts of collisions for a given time window to measure performance. For the time frame $E^{i-M:i}$, where $E^{i-M:i}$ is composed of episodes $\{E^{i-M},\ldots,E^{i-1}\}$, we define $L_{NW}^{i-M:i}$ as the number of collisions that occurred during $E^{i-M:i}$, and $L_{W}^{i-M:i}$ as the weighted number of collisions, with weights equal to the probability of executing the action leading to the collision. We used $M = 25$ and $i = \{0, 25, 50, \dots, N\}$. \subsection{Evaluating the Effect of the Number of Layers} In order to assess the effect of the number of hidden layers on performance, we tested a one-layer and a three-layer RA-DAE in the simulated environment. Table~\ref{tbl:hidden_layers_1} shows the average percentage of collisions per 25 episodes in the last 250 episodes ($L_{NW}$ and $L_{W}$) of a total of 500 episodes, as well as the training and prediction time per episode. $L_{NW}$ and $L_{W}$ are calculated by taking the average of $L_{NW}^{i-25:i}$ and $L_{W}^{i-25:i}$, where $i=\{275,\ldots,500\}$, and converting them to percentages. It can be seen that deeper models deliver better performance. To understand the feature representation capabilities of distinct layers, we visualize features learned by the models using the activation maximization procedure~\cite{erhan2009visualizing}. Figure~\ref{fig:filters} visualizes the hidden layers of the three-layer RA-DAE. It can be observed that the deeper the layer, the more detailed the representation. The first layer of the network focuses on various shadows and edges, whereas the third layer represents more defined structures.
\begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{images/filters_v2.pdf} \vspace*{-0.6cm} \caption{Visualization of 18 filters learned by the first to third layers of a three-layer RA-DAE (top row to bottom row). The representation of structures is more visible and clear in the higher layers.} \label{fig:filters} \vspace{-0.3cm} \end{figure} \begin{figure*} \centering \hspace*{0.1cm} \includegraphics[width=0.9\textwidth,trim={2cm 0cm 0cm 0cm},clip]{images/collision_plots.pdf} \vspace*{-0.5cm} \caption{Plot of the percentage of collisions ($L_{NW}$) and the growth of the network over time in simulation, the office and outdoors, respectively. The left and right axes denote the percentage of collisions and the number of neurons in each layer of the RA-DAE, respectively. The results indicate that, in general, RA-DAE performs better than SDAE and LR across the environments. Finally, the behavior of the size of the network suggests that RA-DAE increases its complexity as the complexity of the environment increases.} \vspace{-0.5cm} \label{fig:bumps} \end{figure*} \vspace{-0.1cm} \subsection{Comparisons} \subsubsection{Overview} Several experiments were performed comparing the performance of RA-DAE, SDAE and LR. For each algorithm and environment, the results of two experiments were averaged. A limit of two experiments per algorithm-environment combination was set, as the algorithms displayed similar patterns in terms of the number of collisions over time. The first 100 episodes were disregarded to allow the algorithms to learn useful parameters before being compared. The number of collisions $L_{NW}$ was calculated for consecutive batches of 25 episodes, $L_{NW}=\{L_{NW}^{0:25},\ldots,L_{NW}^{N-25:N}\}$. To facilitate the interpretation of the results, we converted the $L_{NW}$ values to percentages (i.e. $L_{NW}^{i-25:i}\times\frac{100}{25}\% \hspace{0.1cm} \forall i$).
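The windowed metric and its percentage conversion described above can be sketched as follows; the per-episode collision flags are synthetic illustrative data.

```python
# Illustrative computation of the windowed collision count L_NW and its
# conversion to a percentage; the collision flags below are synthetic.
M = 25

def l_nw(collisions, i):
    """collisions: per-episode 0/1 flags (1 = collided); window E^{i-M:i}."""
    return sum(collisions[i - M:i])

def to_percentage(count):
    return count * (100.0 / M)

collisions = [1, 0, 1] + [0] * 22     # 2 collisions in a 25-episode window
pct = to_percentage(l_nw(collisions, 25))
```

The weighted variant $L_W$ would replace the 0/1 flags with the probability of the action that caused each collision.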
\subsubsection{Simulation Results} The simulation results in Fig~\ref{fig:bumps}a indicate a clear reduction of $L_{NW}$ for RA-DAE, SDAE and LR over time. However, early in the learning process RA-DAE shows the steepest reduction in the number of collisions. This can be attributed to RA-DAE's ability to incrementally learn features on demand, as opposed to trying to learn with a fixed number of neurons. SDAE achieves the lowest percentage of collisions, but only very late in the process. LR shows the worst performance, as it is unable to deal with the complexity of the environment. \subsubsection{Real-World Experimental Results} Fig~\ref{fig:bumps}b and \ref{fig:bumps}c show the results obtained in real-world environments: an office environment (Fig~\ref{fig:env}c) and an outdoor environment (Fig~\ref{fig:env}d). In the office environment, it can be observed that RA-DAE and LR demonstrate better performance than SDAE. LR's slightly better performance can be related to the nature of the environment: the office environment was comparatively easy to navigate, as the area was small and had consistent lighting throughout the experiment, enabling LR to perform well. SDAE's slightly poorer performance at the end of the experiment can be ascribed to slight over-fitting caused by the combination of the consistency of obstacles and the complex neural structure. The outdoor environment provided more challenging and dynamic environmental conditions, such as lighting changes, making learning more difficult. Figure~\ref{fig:classifications} shows the variability of the lighting conditions in the outdoor environment. The results from the outdoor environment suggest that LR performs the worst, while SDAE shows a slight reduction in the number of collisions. RA-DAE shows the largest reduction in the number of collisions.
Moreover, the robot's ability to learn actions from images can be related to the improvement in the quality of the trajectory the robot follows (Fig~\ref{fig:trajectories}). It can be noted that trajectories from episodes 350-400 are less erratic, resulting in a lower number of collisions, compared to episodes 50-100. Figure~\ref{fig:classifications} shows actions selected for several images sampled from the real-world environments. In the office environment ($1^{st}, 2^{nd}$ and $3^{rd}$ rows), where most obstacles are white, it can be seen that the robot goes straight if there are no white objects immediately in front of it ($2^{nd}$ row). However, when an immediate obstacle is present, the robot tends to turn right or left depending on the positioning of the obstacle. The same observation can be made for the outdoor environment ($4^{th}, 5^{th}$ and $6^{th}$ rows), where the robot prefers going straight if the image contains more light areas and is not significantly occluded by a dark blob (obstacle). Also, it can be noted that it prefers to turn towards light areas when an obstacle is present. By analyzing the images classified as straight ($2^{nd}$ and $5^{th}$ rows), it can be seen that the algorithm has learned to prefer dark areas in front of it in the office environment, while in the outdoor environment the robot prefers lighter areas. Finally, in simulation, the results demonstrate that with adequate training time and enough data, SDAE can outperform RA-DAE. However, its results at the beginning of the learning process are significantly worse. \begin{figure} \centering \hspace*{-0.3cm} \includegraphics[width=0.48\textwidth]{images/fig_training_time_v2.png} \vspace*{-0.5cm} \caption{Average training and prediction time for RA-DAE and SDAE. The solid bar and the error bar depict the average and standard deviation of the training and prediction time for a single episode.
RA-DAE reduces the time per episode almost by half compared to SDAE.} \label{fig:training_time} \vspace{-0.4cm} \end{figure} \begin{figure*}[h] \begin{subfigure}{.3\textwidth} \centering \includegraphics[height=3cm]{images/loc_office_resized.jpg} \caption{Office} \label{fig:office_image} \end{subfigure} \begin{subfigure}{.30\textwidth} \centering \includegraphics[height=3cm]{images/office_50_100.jpg} \caption{Traj. (Office, 50-100 Episodes)} \label{fig:office_traj_50_100} \end{subfigure} \begin{subfigure}{.30\textwidth} \centering \includegraphics[height=3cm]{images/office_350_400.jpg} \caption{Traj. (Office, 350-400 Episodes)} \label{fig:office_traj_350_400} \end{subfigure} \vspace{-0.2cm} \caption{Trajectories sampled from different time-frames in the office environment for RA-DAE. The algorithm reduces the number of collisions over time, leading to smoother trajectories.} \label{fig:trajectories} \end{figure*} \begin{figure*}[h] \vspace{-0.2cm} \hspace{.1cm} \includegraphics[width=.92\textwidth]{images/classified_instances_compressed.pdf} \vspace{-0.2cm} \caption{The first three rows denote correctly classified instances, while the last row (fourth row) illustrates misclassified instances. The first three columns illustrate samples from the office environment and the rest are from the outdoor environment. It is clear that the robot learns to turn depending on the obstacle position. Finally, the challenges of navigating the outdoor environment can be understood by observing the overexposure, variable lighting, etc.} \vspace{-0.6cm} \label{fig:classifications} \end{figure*} \subsection{Evaluation of the Network Growth of RA-DAE} The dashed lines in Figure \ref{fig:bumps} show the growth of the number of neurons in each layer of RA-DAE over time. In all experiments, RA-DAE begins with a small network and continues to grow it throughout the experiment.
In the simulated environment, the network growth is aggressive compared to that in the real-world experiments, which can be explained by the obstacles in the simulated environment varying significantly in both color and shape, as shown in Figure \ref{fig:env}(b). RA-DAE displays a similar pattern of growth for both layers in the real-world environments, but less steep than in the simulated environment. Sudden drops can be noticed in the growth of the network in the office environment, indicating RA-DAE's attempt to learn with fewer neurons, as the environment is small. These drops are followed by increments, as the network needs more neurons to compensate for the features overridden due to continuous learning. Using Q-learning to adapt the model structure is limited, as it is effective only for small, discrete action spaces. For deeper networks, more powerful RL techniques such as DDQN~\cite{van2016deep} should be used; this will be investigated in the future. \subsection{Evaluation of the Training Time} Figure~\ref{fig:training_time} illustrates the training time per episode for RA-DAE and the non-adaptive equivalent SDAE. The time taken to process the data from one episode by RA-DAE is substantially lower than for SDAE. The faster training speed of RA-DAE results from its ability to begin with a small neural network, i.e., fewer parameters, and to incrementally add parameters and neurons as needed. SDAE is forced to maintain the high complexity of the network throughout the experiment, resulting in longer computation time. \section{CONCLUSIONS AND FUTURE WORK} \vspace{-0.1cm} Self-supervised learning remains a critical problem for autonomous navigation and long-term adaptability of robotic systems. Existing techniques rely on more expensive sensors or require extensive training sets with labels provided by humans.
In this paper we developed an online self-supervised procedure based on deep neural networks that incrementally learns a predictive model, allowing a robot to navigate using a single camera. Our approach uses reinforcement learning to progressively add complexity to the network. We compare our technique (RA-DAE) to non-adaptive counterparts: Stacked Denoising Autoencoders and a Logistic Regression classifier. The experiments were conducted both in simulation and in real-world environments (indoors and outdoors). The results indicate that our algorithm learns to avoid collisions comparatively better than the benchmarks, while consuming less time. We explored online structure learning using models based on Stacked Autoencoders. An alternative model would be CNNs, or Convolutional Autoencoders. In CNNs, the filters are shared among different nodes, which makes addition or merging operations more difficult. The inclusion of convolutional nets into our framework remains a topic for future work. Another avenue is the use of pre-trained models, where the robot would start exploring using a well-developed network trained on another environment. The goal of RA-DAE would then be to adapt the network to a new environment. \vspace{-0.3cm} \bibliographystyle{named}
\section{Introduction} Association rule mining (ARM) has emerged as a powerful and specialized tool to identify patterns in large datasets. It can be used in applications or business operations where instances of some spatio-temporal occurrence are represented in tabular format across a set of common attributes. An ARM study typically results in rules of the form A$\rightarrow$B, which would mean that, based on evidence from the data, the presence of attribute A is likely to indicate the presence of attribute B. There are two major challenges to an ARM implementation. The first is (i) candidate generation: the process of filtering all the possible combinations of items that satisfy a given condition for selection. Given the exponentially large number of possible rules, this condition focuses on the use of frequency-based thresholds to remove potentially uninteresting rules \cite{AG93}. The second is (ii) candidate evaluation: the use of an appropriate metric (\textit{interestingness measure}) to evaluate all the different rules that can be defined from the selected item sets \cite{TK}. This research concerns itself with the latter challenge. Candidate evaluation can be challenging because there are different ways of describing the interestingness of rules. A recent study \cite{all} showed that, even among \textit{objective measures}, more than 61 are defined in the literature. Also, the information derived from these different interestingness measures (IM) may not always be consistent \cite{TK}. The properties are typically defined using a contingency table (see Table \ref{tbl1}), a simplified adaptation from \cite{TK}. Here, two states, present and absent, are defined for two variables, A (rows) and B (columns). The frequency counts $f_{11}$ and $f_{00}$ capture the co-presence and co-absence of A and B, respectively, while $f_{10}$ represents the presence of A and absence of B, and $f_{01}$ the opposite.
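As a minimal illustration of how the counts in Table \ref{tbl1} arise from data (the transactions below are made up for illustration):

```python
# Tallying the 2x2 contingency counts for a rule A -> B
# from a handful of made-up transactions.
transactions = [{"A", "B"}, {"A", "B"}, {"A", "B"}, {"A"}, {"B"}, set()]

f11 = sum(1 for t in transactions if "A" in t and "B" in t)          # co-presence
f10 = sum(1 for t in transactions if "A" in t and "B" not in t)      # A without B
f01 = sum(1 for t in transactions if "B" in t and "A" not in t)      # B without A
f00 = sum(1 for t in transactions if "A" not in t and "B" not in t)  # co-absence

print(f11, f10, f01, f00)  # 3 1 1 1
```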
\begin{table}[h] \caption{Standard 2x2 contingency table representing the frequency counts of A and B}\label{tbl1} \centering \begin{tabular}{|c|c|c|} \hline \rule[-2ex]{0pt}{5.5ex} & $B$ & $B^{c}$ \\ \hline \rule[-2ex]{0pt}{5.5ex} $A$ & $f_{11}$ & $f_{10}$\\ \hline \rule[-2ex]{0pt}{5.5ex} $A^{c}$ & $f_{01}$ & $f_{00}$\\ \hline \end{tabular} \end{table} In this research, we posit that the popularly used set of 8 properties covered in \cite{TK2} does not fully capture some important aspects of interestingness measures, and this motivates us to define a new, more relevant property-based analysis of IMs. Specifically, our motivation is built on the observations of \cite{all}, who state that the empirical classification of measures based on how they rank rules has little to do with the property-based classification. A deeper study of this mismatch leads us to believe that pre-existing mathematical properties are only useful in specific environmental contexts. These observations lead us to devise simpler, more generic property definitions which can be applied to different environmental contexts and bear a stronger affiliation to the rule ranking patterns exhibited by the measures on empirical datasets. To this end, we create a property definition framework that defines properties based on the change in the IM per unit change in a frequency count $(f_{11},f_{10},f_{01},f_{00})$. We broadly refer to this as \textit{Rate of Change Analysis} (RCA)\footnote[1]{While this term is used in stock market analysis, our use of this term in the data mining context is novel.}. Specifically, we define two properties which look at the partial derivative of the measure at two different pre-existing states of the frequency count. The first studies the rate of change behavior of the IM when the frequency count is very large (asymptotic effect as the frequency count tends to $+\infty$). We refer to this as \textit{Unit-Null Asymptotic Invariance} (UNAI).
The second property is defined at the point where the frequency count is currently $0$ or tending to $0$. We refer to this as \textit{Unit-Null Zero Rate} (UNZR). It looks at the effect of increasing a frequency count on the IM when that count is currently non-existent in the data set. By defining properties based on how measures actually change at different contingency table configurations, we explicitly link the rule ranking behavior with the mathematical property. \subsection{Intuition for the properties UNAI and UNZR} When UNAI is satisfied, we can say that the measure will not keep increasing or decreasing with the addition of one of the $f_{ij}$s while the others are kept constant; that is, the metric will asymptotically converge to a fixed value. A metric that fails this property will not converge to a constant value with continued addition of $f_{ij}$s. An example is Lift, which keeps increasing with the addition of $f_{00}$s and does not converge to a value. UNZR is satisfied when we can say that the measure will increase when shown evidence of co-presence or co-absence, if such evidence did not previously exist. Also, it should decrease when shown evidence that one item occurs when the other does not (the case of counterexamples).
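The Lift example above can be checked numerically. A small sketch (with arbitrary counts; Lift is computed from the standard contingency-table definition given later in Equation \ref{eqn:6}):

```python
# Lift computed from the 2x2 contingency counts; as f00 grows (other
# counts fixed at illustrative values), Lift increases without bound.
def lift(f11, f10, f01, f00):
    n = f11 + f10 + f01 + f00
    return f11 * n / ((f11 + f10) * (f11 + f01))

values = [lift(10, 5, 5, f00) for f00 in (10, 100, 1000, 10000)]
print([round(v, 2) for v in values])  # strictly increasing, no asymptote
```

This divergent behavior in $f_{00}$ is exactly what the UNAI property rules out.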
Such a relationship could be weak, but at the very least, such metrics will not behave counter to expectation (like decreasing when shown evidence of co-presence or co-absence) and will not stay completely invariant.\\ The major contributions of this research are listed as follows: \begin{itemize} \item Introduction of a novel approach to classify interestingness measures and the development of two specific properties, namely UNAI and UNZR, using this approach \item An analysis of the performance of these properties through the classification of various interestingness measures, as well as a comparison with other properties presented in \cite{TK} \item Presentation of empirical case studies that provide validation for the findings and also demonstrate the usefulness of the properties using real-world and synthetic data sets. \end{itemize} \section{Related Work} \label{sec:prevwork} A large number of objective IMs have emerged as a result of the application of ARM across different domains. It is also documented that not all measures are capable of capturing the strength of associations and that in some cases they provide conflicting information about the strength of patterns \cite{TK}. Given the abundance of measures and the difficulty of choosing the appropriate IM, researchers have suggested various classification schemes (of the IMs) to help identify the appropriate measure for a given application \cite{PS}, \cite{TK}, \cite{TK2}, \cite{GGDN}, \cite{all}, \cite{TGTB}. There are two different types of classification in the literature: classification based on the properties of IMs (e.g. \cite{PS}, \cite{TK}, \cite{TK2}, \cite{GGDN}) and classification based on empirical results of IMs on different datasets (e.g. \cite{all}).
Research conducted by \cite{PS} formalized a framework consisting of three properties that an IM should satisfy, namely: the measure should take value 0 if the occurrences of the itemsets are independent (P1); the measure should be monotonically increasing with the co-presence of the itemsets (P2); and the measure should be monotonically decreasing with the occurrence of either itemset (P3). \cite{TK} proposed the following 5 properties in addition to the 3 proposed by \cite{PS}: symmetry under variable permutation (O1), row/column scaling invariance (O2), anti-symmetry under row/column permutation (O3), inversion invariance (O4) and null invariance (O5). They conducted a comparative study, testing 21 different IMs against the resulting 8 properties. The authors further proposed that the optimal way of finding a suitable IM would be to let the user define a property vector indicating the properties that would ideally be required for the given application. This property vector would then be compared to the property vectors of the different objective measures to pick out the ideal interestingness measure for that particular case. For instance, the null-invariance property is considered to be important for interestingness measures used in the context of small-probability events in a large dataset \cite{Wu}. While there has been further work introducing new properties (e.g., \cite{HH}, \cite{FR}, \cite{GH}, \cite{GGDN}, \cite{Hebert2007}), these have not been as commonly used or cited as the work of \cite{PS} and \cite{TK2}. There has been limited work on the classification of IMs based on empirical results on different datasets. Research by \cite{Huyn1} proposed the classification of 35 different interestingness measures based on their empirical performance on 2 different datasets, by studying the correlations among the interestingness measures. These measures were classified using a graph-based clustering approach to create high-correlation and low-correlation graphs.
The work of \cite{all} performed a comprehensive classification of 61 different objective IMs based on empirical results on 110 different datasets. It suggested that there exist 21 distinct clusters of measures, and each of these clusters was studied in detail. \section{Mathematical definitions for properties UNAI and UNZR} \label{sec:mathreq} An interestingness measure (IM) can be represented as a function of the frequency counts (see Equation \ref{eqn:1}). RCA analysis seeks to assess the relative change in the interestingness measure per unit change of the frequency counts. This is essentially the first partial derivative of the interestingness measure with respect to the variables representing the counts, as shown in Equation \ref{eqn:2}. The set of formulas representing the first partial derivative of the interestingness measure with respect to each of the four state variables $f_{11}$, $f_{00}$, $f_{10}$ and $f_{01}$ constitutes the RCA analysis, as shown in Equation \ref{eqn:3}. \begin{equation} \label{eqn:1} IM = \phi(f_{11},f_{10},f_{01},f_{00}) \end{equation} \begin{equation} \label{eqn:2} \phi^{'}_{f_{ij}} =\frac{\partial(IM)}{\partial f_{ij}} \end{equation} \begin{equation} \label{eqn:3} RCA (IM) = \{\phi^{'}_{f_{11}},\phi^{'}_{f_{10}},\phi^{'}_{f_{01}},\phi^{'}_{f_{00}}\} \end{equation} \begin{equation} \label{eqn:4} UNAI_{ij} = \lim_{f_{ij}\longrightarrow +\infty} (\phi^{'}_{f_{ij}}) \end{equation} \begin{equation} \label{eqn:5} UNZR_{ij} = \lim_{f_{ij}\longrightarrow 0} (\phi^{'}_{f_{ij}}) \end{equation} We use the RCA analysis to define two novel properties: the \textit{Unit-Null Asymptotic Invariance} (UNAI) and the \textit{Unit-Null Zero Rate} (UNZR). Mathematically, both properties are the \textit{derivative at a point}, or the \textit{instantaneous rate of change}, at two specific points.
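For measures where the symbolic derivative of Equation \ref{eqn:2} is tedious, it can be approximated by a finite difference; a minimal sketch (function names and example counts are our own, not from the paper's tooling):

```python
# Finite-difference version of Eq. (2): approximate d(IM)/d f_ij for any
# IM given as a function of the four frequency counts.
def rca_partial(im, counts, key, h=1e-6):
    bumped = dict(counts)
    bumped[key] += h
    return (im(**bumped) - im(**counts)) / h

def support(f11, f10, f01, f00):
    """Example IM: support = f11 / N."""
    return f11 / (f11 + f10 + f01 + f00)

counts = {"f11": 10.0, "f10": 5.0, "f01": 5.0, "f00": 80.0}
print(round(rca_partial(support, counts, "f11"), 4))  # 0.009, i.e. (N - f11)/N^2
print(round(rca_partial(support, counts, "f00"), 4))  # -0.001, i.e. -f11/N^2
```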
We can define the property Unit-Null Asymptotic Invariance (UNAI) as the derivative of the interestingness measure (IM) with respect to $f_{ij}$ as $f_{ij} \to \infty$; this instantaneous rate of change is shown in Equation \ref{eqn:4}. UNAI can be defined for each of the four frequency count variables by substituting $ij$ with the count of interest. Similar to UNAI, UNZR can be captured by looking at the instantaneous rate of change at $0$. Formally, this is the derivative of the interestingness measure (IM) with respect to $f_{ij}$ as $f_{ij} \to 0$, as shown in Equation \ref{eqn:5}. To compute UNAIs and UNZRs, in some cases we can simply take the first partial derivative and directly substitute the point of interest; in other scenarios we use the limit notation for the derivative at a point (also shown in Equations \ref{eqn:4} and \ref{eqn:5}). Having defined the framework for computing the satisfaction of UNAIs and UNZRs, in the subsequent sections we define the conditions under which an interestingness measure can be said to satisfy these properties. These sections present a classification scheme for the properties UNAI and UNZR, at both the individual $f_{ij}$ level and the level of the metric as a whole. \subsection{UNAI property definition} \label{sub:UNAI} We create a two-pronged classification scheme for UNAI. We define $UNAI_{f_{ij}}$, which is $UNAI$ defined for each frequency count $(f_{11},f_{10},f_{01},f_{00})$. We do this explicitly for $f_{11}$, which can then be extended to the other frequency counts. We also consolidate the results across all $f_{ij}$s to present the property $UNAI$ for the metric as a whole: \begin{enumerate} \item $UNAI_{f_{11}}$ is satisfied when $\lim_{f_{11}\to +\infty} (\phi^{'}_{f_{11}})=0$ for all feasible combinations of values of $f_{00}$, $f_{10}$ and $f_{01}$.
We define a \textit{feasible combination} of values as one which enables the calculation of the metric in deterministic form for a database with non-zero rows. \\ By extension, we can say that the $UNAI_{f_{11}}$ condition is not met when $\lim_{f_{11}\to +\infty} (\phi^{'}_{f_{11}})\neq0$ for any feasible combination of values of $f_{00}$, $f_{10}$ and $f_{01}$. \\ Similarly, we can define $UNAI_{f_{ij}}$ for the other three frequency counts by swapping the variables accordingly. \item $UNAI$ is satisfied when $UNAI_{f_{ij}}$ is satisfied $\forall (ij)$. This is essentially an extension of the classification from $UNAI_{f_{ij}}$ to a general property for the metric as a whole. \end{enumerate} \subsection{UNZR property definition} \label{sub:UNZR} The classification scheme we adopt for UNZR is more complex than that for $UNAI$. As with UNAI, we adopt a two-pronged approach, defining $UNZR$ at the $f_{ij}$ level as well as for the metric as a whole. However, we differ from $UNAI$ in that $UNZR$ states are not binary but have three states, corresponding to the property being satisfied, partially satisfied, and not satisfied. Another difference is that the definitions at the $f_{ij}$ level differ between \{$f_{11}$, $f_{00}$\} and \{$f_{10}$, $f_{01}$\}: they are identically opposite in terms of the inequality conditions that need to be met, as shown below. We formally define the property for $f_{11}$ and $f_{10}$ below and extend it to the other frequency counts $f_{00}$ and $f_{01}$, respectively: \begin{enumerate} \item $UNZR_{f_{11}}$ is satisfied when $\lim_{f_{11}\to 0} (\phi^{'}_{f_{11}})>0$ for all feasible combinations of $f_{00}$, $f_{10}$ and $f_{01}$. Again, a \textit{feasible combination} is one that enables the computation of the metric in deterministic form.
This formulation can be extended to $UNZR_{f_{00}}$ by swapping the variables accordingly.\\ $UNZR_{f_{10}}$ is satisfied when $\lim_{f_{10}\to 0} (\phi^{'}_{f_{10}})<0$ for all feasible combinations of $f_{11}$, $f_{00}$ and $f_{01}$. This formulation can be extended to $UNZR_{f_{01}}$ by swapping the variables accordingly. \item $UNZR_{f_{11}}$ is partially satisfied when two conditions are met: (i) $\lim_{f_{11}\to 0} (\phi^{'}_{f_{11}})\geq 0$ for all feasible combinations of $f_{00}$, $f_{10}$ and $f_{01}$, and (ii) $\lim_{f_{11}\to 0} (\phi^{'}_{f_{11}})>0$ for at least one feasible combination of $f_{00}$, $f_{10}$ and $f_{01}$. This formulation can be extended to $UNZR_{f_{00}}$ by swapping the variables accordingly.\\ Similarly, $UNZR_{f_{10}}$ is partially satisfied when two conditions are met: (i) $\lim_{f_{10}\to 0} (\phi^{'}_{f_{10}})\leq 0$ for all feasible combinations of $f_{11}$, $f_{00}$ and $f_{01}$, and (ii) $\lim_{f_{10}\to 0} (\phi^{'}_{f_{10}})<0$ for at least one feasible combination of $f_{11}$, $f_{00}$ and $f_{01}$. This formulation can be extended to $UNZR_{f_{01}}$ by swapping the variables accordingly. \item Finally, by extension, we can say that $UNZR_{f_{11}}$ is not satisfied when either of these two conditions is met: (i) $\lim_{f_{11}\to 0} (\phi^{'}_{f_{11}})<0$ for any feasible combination of $f_{00}$, $f_{10}$ and $f_{01}$, or (ii) $\lim_{f_{11}\to 0} (\phi^{'}_{f_{11}})=0$ for all feasible combinations of $f_{00}$, $f_{10}$ and $f_{01}$. This formulation can be extended to $UNZR_{f_{00}}$ by swapping the variables accordingly.\\ Similarly, we can say that $UNZR_{f_{10}}$ is not satisfied when either of these two conditions is met: (i) $\lim_{f_{10}\to 0} (\phi^{'}_{f_{10}})>0$ for any feasible combination of $f_{11}$, $f_{00}$ and $f_{01}$, or (ii) $\lim_{f_{10}\to 0} (\phi^{'}_{f_{10}})=0$ for all feasible combinations of $f_{11}$, $f_{00}$ and $f_{01}$.
This formulation can be extended to $UNZR_{f_{01}}$ by swapping the variables accordingly. \item At the overall metric level, we say that the $UNZR$ property is satisfied for a metric if $UNZR_{f_{ij}}$ is satisfied $\forall (ij)$. We say that the UNZR property is partially satisfied for a metric if $UNZR_{f_{ij}}$ is at least partially satisfied for all $f_{ij}$s. Finally, a metric fails to satisfy the UNZR property if one or more $UNZR_{f_{ij}}$s do not satisfy the property. \end{enumerate} \section{Illustrative example of the UNAI and UNZR framework using Lift} In this section, we consider the behavior of the popular interestingness measure Lift under the UNAI and UNZR properties defined in the previous section. Lift is defined as follows: \begin{equation} \label{eqn:6} Lift(L) = \frac{P(A,B)}{P(A)P(B)} = \frac{f_{11}(f_{11} + f_{01} + f_{10} + f_{00})}{(f_{10} + f_{11})(f_{01}+f_{11})} \end{equation} Differentiating with respect to $f_{11}$ and simplifying, we get \begin{equation} \label{eqn:7} \frac{\partial(L)}{\partial f_{11}} = \frac{2f_{10}f_{11}f_{01} + f_{10}f_{01}(f_{10} + f_{00} + f_{01}) - f^2_{11}f_{00}}{(f_{10} +f_{11})^2(f_{01} + f_{11})^2} \end{equation} We check the UNAI property for Lift by considering the derivative as $f_{11} \rightarrow \infty$: \begin{equation} \label{eqn:8} L_{f_{11}}(\infty) = \lim_{f_{11} \longrightarrow \infty} \frac{\partial L}{\partial f_{11}} = \lim_{f_{11} \longrightarrow \infty} \frac{2f_{10}f_{11}f_{01} + f_{10}f_{01}(f_{10} + f_{00} + f_{01}) - f^2_{11}f_{00}}{(f_{10} + f_{11})^2(f_{01} +f_{11})^2} \end{equation} After algebraic simplification, we can say that the above limit is equal to zero for all feasible combinations of $f_{00}$, $f_{10}$ and $f_{01}$. Hence, we can say that Lift satisfies UNAI with respect to $f_{11}$.
Similarly, we check the UNAI property with respect to $f_{00}$, $f_{10}$ and $f_{01}$: \begin{equation} \label{eqn:9} L_{f_{00}}(\infty) = \lim_{f_{00} \longrightarrow \infty} \frac{\partial L}{\partial f_{00}} = \frac{f_{11}}{(f_{01} +f_{11})(f_{10}+f_{11})} \end{equation} \begin{equation} \label{eqn:10} L_{f_{10}}(\infty) = \lim_{f_{10} \longrightarrow \infty} \frac{\partial L}{\partial f_{10}} = 0 \end{equation} \begin{equation} \label{eqn:11} L_{f_{01}}(\infty) = \lim_{f_{01} \longrightarrow \infty} \frac{\partial L}{\partial f_{01}} = 0 \end{equation} Here it is evident that the limit in Equation \ref{eqn:9} is not equal to 0 for all feasible values of $f_{11}, f_{10}, f_{01}$. Hence, we say that $UNAI_{f_{00}}$ is not satisfied, while $UNAI_{f_{11}}$, $UNAI_{f_{01}}$ and $UNAI_{f_{10}}$ are satisfied. We check the UNZR property for $f_{11}$ by taking the partial derivative at $f_{11}=0$: \begin{equation} \label{eqn:12} L_{f_{11}}(0) = \frac{\partial L}{\partial f_{11}}|_{f_{11}=0} = \frac{f_{10}+f_{00}+f_{01}}{f_{10}f_{01}} \end{equation} Similarly, taking the derivatives with respect to $f_{00}$, $f_{10}$ and $f_{01}$ at $0$, we get \begin{equation} \label{eqn:13} L_{f_{00}}(0) = \frac{\partial L}{\partial f_{00}}|_{f_{00}=0} = \frac{f_{11}}{(f_{11}+f_{10})(f_{11}+f_{01})} \end{equation} \begin{equation} \label{eqn:14} L_{f_{10}}(0) = \frac{\partial L}{\partial f_{10}}|_{f_{10}=0} = - \frac{(f_{01} + f_{00})}{(f_{11}+f_{01})f_{11}} \end{equation} \begin{equation} \label{eqn:15} L_{f_{01}}(0) = \frac{\partial L}{\partial f_{01}}|_{f_{01}=0} = - \frac{(f_{10} + f_{00})}{(f_{11}+f_{10})f_{11}} \end{equation} We see that for all feasible combinations, $UNZR_{f_{11}}$, $UNZR_{f_{10}}$ and $UNZR_{f_{01}}$ are satisfied. However, $UNZR_{f_{00}}$ is only partially satisfied. From Equation \ref{eqn:13} we can see that the following conditions are met: (i) for all feasible combinations of $f_{11}, f_{10}, f_{01}$, $L_{f_{00}}(0) \geq 0$, and (ii) $L_{f_{00}}(0) > 0$ for at least one feasible combination.
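The limits above can be sanity-checked numerically; a sketch using a finite-difference approximation of $\partial L/\partial f_{11}$ (the helper name and the counts $f_{10}=f_{01}=f_{00}=10$ are arbitrary choices for illustration):

```python
# Finite-difference check of the UNAI / UNZR limits for Lift at f11.
def lift(f11, f10, f01, f00):
    n = f11 + f10 + f01 + f00
    return f11 * n / ((f11 + f10) * (f11 + f01))

def d11(f11, f10, f01, f00, h=1e-4):
    """Approximate dL/df11 at the given counts."""
    return (lift(f11 + h, f10, f01, f00) - lift(f11, f10, f01, f00)) / h

# UNAI_{f11}: the derivative vanishes as f11 grows large (Eq. 8)
print(abs(d11(1e6, 10, 10, 10)) < 1e-6)   # True
# UNZR_{f11}: near f11 = 0 it approaches (f10 + f00 + f01)/(f10*f01) = 0.3 (Eq. 12)
print(round(d11(1e-6, 10, 10, 10), 3))    # 0.3
```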
This passes the definition of partial satisfaction for UNZR as defined in this paper. At the same time, it does not fully satisfy the $UNZR_{f_{00}}$ property, since there are values where it can be 0\footnote{Substitute $f_{11} = 0$ while giving the others positive values.}. Figure \ref{fig:lift} illustrates how the value of Lift changes as each frequency count is varied. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{fig_lift} \caption{Change in value of Lift on varying the frequency counts} \label{fig:lift} \end{figure} \section{Mapping UNAI and UNZR to commonly used measures and other properties} \label{sec:propertyofmetrics} This section is divided into two parts. The first part performs a detailed analysis that uses the proposed properties to classify commonly used measures. The second part then compares these classifications to the classification done by other popular properties in the literature \cite{TK2}. This two-fold approach is used because it is important to show that a property can actually differentiate between measures (Subsection \ref{subsec4.1}) and that it classifies measures in a way that is different from other properties (Subsection \ref{subsec4.2}). \subsection{Classification of existing measures using UNAI and UNZR} \label{subsec4.1} In this section we classify 50 common measures according to the two properties $UNAI$ and $UNZR$, at both the $f_{ij}$ level and the metric level. We use all 21 metrics from \cite{TK2} and also borrow popular metrics from \cite{all}. We consciously avoid metrics which are mathematically identical, as suggested by \cite{all}, but choose to include metrics which could still be rank-wise indistinguishable. We do this because practitioners might make sense of an absolute score and the rate at which it increases or decreases. We also avoid metrics which require us to make \textit{a priori} assumptions on probability distributions or which cannot be abstracted as a function of the $f_{ij}$s.
The analysis is carried out in accordance to the definitions in Section \ref{sec:mathreq} and findings are summarized in Table \ref{tbl:maintable}. \begin{table*}[] \centering \caption{The $UNAI$ and $UNZR$ properties exhibited by 50 interestingness measures}\label{tbl:maintable} \resizebox{\textwidth}{!} {\begin{tabular}{l|P{1cm}|P{1cm}|P{1cm}|P{1cm}|P{.7cm}|P{1cm}|P{1cm}|P{1cm}|P{1cm}|P{.7cm}|} \textbf{Measure} & \scalebox{0.75}{$UNAI_{f_{11}}$} & \scalebox{0.75}{$UNAI_{f_{00}}$} & \scalebox{0.75}{$UNAI_{f_{10}}$} & \scalebox{0.75}{$UNAI_{f_{01}}$} & \scalebox{0.75}{$UNAI$} & \scalebox{0.75}{$UNZR_{f_{11}}$} & \scalebox{0.75}{$UNZR_{f_{00}}$} & \scalebox{0.75}{$UNZR_{f_{10}}$} & \scalebox{0.75}{$UNZR_{f_{01}}$} & \scalebox{0.75}{$UNZR$} \\ \hline Lift & Y & N & Y & Y & N & Y & P & P & P & P \\ Jaccard & Y & Y & Y & Y & Y & Y & N & P & P & N \\ Confidence & Y & Y & Y & Y & Y & Y & N & Y & N & N \\ Recall & Y & Y & Y & Y & Y & Y & N & N & Y & N \\ Specificity & Y & Y & Y & Y & Y & N & Y & N & Y & N \\ Precision & Y & Y & Y & Y & Y & Y & N & Y & N & N \\ Ganascia & Y & Y & Y & Y & Y & Y & N & Y & N & N \\ Kulczynski-1 & N & Y & Y & Y & N & Y & N & P & P & N \\ F-Measure & Y & Y & Y & Y & Y & Y & N & P & P & N \\ Causal Confidence & Y & Y & Y & Y & Y & Y & Y & Y & N & N \\ Odd's Ratio & N & N & Y & Y & N & P & P & P & P & P \\ Negative Reliability & Y & Y & Y & Y & Y & N & Y & N & Y & N \\ Sebag - Schoenauer & N & Y & Y & Y & N & Y & N & P & N & N \\ Accuracy & Y & Y & Y & Y & Y & P & P & P & P & P \\ Support & Y & Y & Y & Y & Y & Y & N & P & P & N \\ Coverage & Y & Y & Y & Y & Y & P & N & N & P & N \\ Prevalence & Y & Y & Y & Y & Y & P & N & P & N & N \\ Relative Risk & Y & N & Y & Y & N & Y & P & Y & P & P \\ Novelty & Y & Y & Y & Y & Y & Y & Y & Y & Y & Y \\ Yule's Q & Y & Y & Y & Y & Y & P & P & P & P & P \\ Yule's Y & Y & Y & Y & Y & Y & P & P & P & P & P \\ Cosine & Y & Y & Y & Y & Y & Y & N & Y & Y & N \\ Least Contradiction & Y & Y & N & Y & N & Y & N & Y & N 
& N \\ Odd Multiplier & Y & N & Y & Y & N & Y & P & P & Y & P \\ Descriptive Confirm & Y & Y & Y & Y & Y & Y & N & Y & N & N \\ Causal Confirm & Y & Y & Y & Y & Y & Y & Y & Y & N & N \\ Certainty Factor & Y & Y & Y & N & N & P & P & N & Y & N \\ Conviction & Y & Y & Y & Y & Y & P & P & P & Y & P \\ Informational Gain & Y & Y & Y & Y & Y & Y & Y & P & P & P \\ Laplace & Y & Y & Y & Y & Y & Y & N & Y & N & N \\ Klosgen & Y & Y & Y & Y & Y & P & N & N & N & N \\ Piatetsky - Shapiro & Y & Y & Y & Y & Y & Y & Y & Y & Y & Y \\ Zhang & Y & N & Y & N & N & Y & P & Y & P & P \\ Y and L's 1-way support* & Y & Y & Y & Y & Y & N & P & N & P & N \\ Y and L's 2-way support* & Y & Y & Y & Y & Y & N & P & Y & Y & N \\ Implication Index & Y & Y & Y & Y & Y & N & N & N & N & N \\ Leverage & Y & Y & Y & Y & Y & Y & P & Y & N & N \\ Kappa & Y & Y & Y & Y & Y & P & P & Y & Y & P \\ Causal Confirm Confidence & Y & Y & Y & Y & Y & Y & Y & Y & N & N \\ Examples and Counter Examples & Y & Y & N & Y & N & P & N & Y & N & N \\ Putative Casual Dependency & Y & Y & Y & Y & Y & P & P & Y & Y & P \\ Dependency & Y & Y & Y & Y & Y & P & P & P & P & P \\ J-measure & Y & Y & Y & Y & Y & N & N & Y & N & N \\ Collective Strength & Y & Y & Y & Y & Y & Y & Y & Y & Y & Y \\ Gini Index & Y & Y & Y & Y & Y & N & N & P & P & N \\ Goodman-Kruskal & N & N & N & N & N & N & N & N & N & N \\ Mutual Information & Y & Y & Y & Y & Y & N & N & Y & Y & N \\ Normalized Mutual Information & Y & Y & Y & Y & Y & N & N & N & N & N \\ Loevinger & Y & Y & Y & N & N & P & P & N & Y & N \\ Added value & N & Y & N & N & N & P & P & P & P & P \end{tabular} } \par \begin{tablenotes} \small \item \item Where, Y: Indicates that the Property is Satisfied, P: Indicates that the property is partially satisfied, and N: Indicates that the property is not satisfied\\ * These metric names are shortened to fit into the table: Y and L's stand for Yao and Liu's for both the shortened names\ \end{tablenotes} \end{table*} The results on the 
classification of these measures provide two important insights. First, the $UNAI$ property for the metrics as a whole is satisfied by a majority of the measures (37 of the 50). These numbers are even higher for the individual $UNAI_{f_{ij}}$ (ranging from 45 for $f_{11}$, 44 for $f_{00}$, 46 for $f_{10}$ and 45 for $f_{01}$ out of the 50 measures). This suggests that $UNAI$ would be less useful as a tool to eliminate measures that nullify the unstable effect of one frequency count being particularly large. Instead, this property can be useful when due importance needs to be given to a frequency count that is expected to be high and to continue growing. A classic example is Lift: in certain contexts, an increase in co-absence in a sparse database should continue to increase the metric value, since it makes co-presence even less likely to arise through random chance.\\ The second insight, from the case of $UNZR$, is of a different nature. At the overall metric level, only three measures fully satisfy the $UNZR$ property: \textit{Novelty}, \textit{Piatetsky-Shapiro} and \textit{Collective Strength}. Of the remaining, 14 measures partially satisfy the property and 33 fail to satisfy it. At the level of the individual $f_{ij}$, $UNZR$ is more discerning: in the case of $f_{11}$, 25 measures satisfy the property, 9 for $f_{00}$, 22 for $f_{10}$ and 15 for $f_{01}$. This suggests that $UNZR$ at the $f_{ij}$ level could be used more meaningfully to pick metrics, especially in the case of $f_{00}$, which is satisfied by only nine measures. A particular case could be when the practitioner expects an $f_{ij}$ to be low or close to zero and would like to see the metric impacted when presented with evidence of it. The use of $UNZR$ at the overall metric level could also be useful if the practitioner suspects that any of the frequency values can be close to zero but would like its presence or absence to have a meaningful impact on the metric.
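The rate-of-change reading of this Lift example can be made concrete with a few lines of arithmetic: writing Lift directly over the frequency counts, its finite difference in $f_{00}$ never decays, which is consistent with the $UNAI_{f_{00}}$ entry for Lift in Table \ref{tbl:maintable}. The sketch below is a minimal illustration; the function names and sample counts are ours.

```python
def lift(f11, f00, f10, f01):
    # Lift written directly in terms of the contingency-table counts.
    n = f11 + f00 + f10 + f01
    return f11 * n / ((f11 + f10) * (f11 + f01))

def delta_f00(f00, f11=10.0, f10=100.0, f01=100.0):
    # Unit finite difference of Lift in the co-absence count f00.
    return lift(f11, f00 + 1.0, f10, f01) - lift(f11, f00, f10, f01)

# The difference stays pinned at f11/((f11+f10)(f11+f01)) however large
# f00 grows, so Lift keeps increasing with co-absence and fails UNAI_{f00}.
for f00 in (1e3, 1e6, 1e9):
    print(delta_f00(f00))  # ~8.26e-4 at every scale of f00
```

Since Lift is linear in $f_{00}$, the unit finite difference here coincides exactly with the partial derivative.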
\subsection{Comparing the UNAI and UNZR mapping with other properties} \label{subsec4.2} In this section we compare the classification of measures done through $UNZR$ and $UNAI$ with the classification done through other properties in the literature \cite{TK2}. This is important because, in addition to fulfilling other criteria, a property must classify measures differently from pre-existing properties; otherwise there is redundancy and one could question the need for the new property. We conduct our comparison on the properties proposed by \cite{TK2}. This includes five new properties proposed in that study, as well as three earlier properties from \cite{PS}. To perform the analysis, we take all 50 measures analyzed in Table \ref{tbl:maintable}, which include the 21 measures analyzed by \cite{TK2}. We then compare the classification of these measures across the two states of $UNAI$ and the three states of $UNZR$ with the two states (satisfied or not satisfied) of the 8 properties presented in \cite{TK2}. This leads us to the contingency Table \ref{tbl:table3}.
\begin{table*}[h] \centering \caption{Contingency Table: Relationship between classification of UNAI and UNZR and the classification of other prominent properties} \label{tbl:table3} \resizebox{\textwidth}{!} { \begin{tabular}{@{}llcc|ccc@{}} \midrule & & \multicolumn{2}{c|}{UNAI} & \multicolumn{3}{c}{UNZR} \\ \midrule & & Satisfied & Not Satisfied & Satisfied & Partially Satisfied & Not Satisfied \\ \multirow{2}{*}{P1: Statistical independence} & Satisfied & 15 & 4 & 2 & 8 & 9 \\ & Not Satisfied & 22 & 9 & 1 & 6 & 24 \\ \hline \multirow{2}{*}{P2:(Refer \cite{PS})} & Satisfied & 34 & 13 & 3 & 14 & 30 \\ & Not Satisfied & 3 & 0 & 0 & 0 & 3 \\ \hline \multirow{2}{*}{P3:(Refer \cite{PS})} & Satisfied & 27 & 11 & 3 & 14 & 21 \\ & Not Satisfied & 10 & 2 & 0 & 0 & 12 \\ \hline \multirow{2}{*}{O1: Symmetry under variable permutation} & Satisfied & 13 & 4 & 3 & 7 & 7 \\ & Not Satisfied & 24 & 9 & 0 & 7 & 26 \\ \hline \multirow{2}{*}{O2: Row and Column Scaling Invariance} & Satisfied & 2 & 1 & 0 & 3 & 0 \\ & Not Satisfied & 35 & 12 & 3 & 11 & 33 \\ \hline \multirow{2}{*}{O3: Antisymmetry row or column permutation} & Satisfied & 4 & 0 & 2 & 2 & 0 \\ & Not Satisfied & 33 & 13 & 1 & 12 & 33 \\ \hline \multirow{2}{*}{O3': Inversion Invariance} & Satisfied & 10 & 1 & 3 & 5 & 3 \\ & Not Satisfied & 27 & 12 & 0 & 9 & 30 \\ \hline \multirow{2}{*}{O4: Null Invariance} & Satisfied & 8 & 4 & 0 & 0 & 12 \\ & Not Satisfied & 29 & 9 & 3 & 14 & 21 \\ \midrule \end{tabular} } \end{table*} The findings from Table \ref{tbl:table3} suggest that the classification of measures through $UNAI$ and $UNZR$ is more or less independent of the classification done through all eight pre-existing properties. The few cases where we see low overlaps are also easily explained by low membership in a certain class rather than by a relationship between properties (for instance, observe that only 3 of the 50 measures satisfy 'Row and Column Scaling Invariance' or fully satisfy UNZR).
We do not, however, carry out a Chi-Square test to establish independence, because some of the properties are explicitly related: for instance, any measure satisfying Null Invariance has to fail $UNZR$ by definition. It is therefore not entirely meaningful to perform such an analysis of statistical independence. The overarching conclusion from Table \ref{tbl:table3} is that, while some of these properties could be weakly related to each other, there is sufficient independence from pre-existing properties to justify $UNAI$ and $UNZR$ as two new properties in terms of the classification of measures. \section{Empirical Studies} \label{sec:Casestudy} The work of \cite{all} has established that empirical clustering of measures bears no meaningful relationship to the properties presented in \cite{TK2} (which also cover three properties originally presented in \cite{PS}). While the properties UNAI and UNZR have been constructed to intuitively convey a certain mathematical aspect of a measure, an important motivation, and therefore requirement in their design, was that they map meaningfully to the actual empirical behavior of measures. Our studies across a wide range of datasets, both synthetic and real, suggest that these two properties bear strong relationships with the empirical clusters. More interestingly, we find that the results are substantially more pronounced in certain environmental conditions. Specifically, we find that $UNZR_{f_{11}}$ and $UNAI_{f_{00}}$ are valuable in sparse datasets, and correspondingly $UNZR_{f_{00}}$ and $UNAI_{f_{11}}$ are better properties to consider in dense data. In the following sections, we present a detailed and illustrative analysis showing how the $UNZR_{f_{11}}$ classification of measures is useful in sparse datasets and $UNZR_{f_{00}}$ is useful in dense datasets. The motivation for choosing the $UNZR$ properties over $UNAI$ is that $UNZR$ creates groups of roughly equal sizes.
For instance, $UNZR_{f_{11}}$ splits the measures with 25 of them satisfying the property, 15 partially satisfying it, and 10 failing to satisfy it, whereas for $UNAI_{f_{00}}$ we see that 44 of the 50 measures satisfy the property. A similar comparison exists between $UNZR_{f_{00}}$ and $UNAI_{f_{11}}$. We conduct our empirical studies by first considering synthetic contingency tables that mimic sparse and dense datasets, and in each case we explore further by choosing a real-world dataset that is sparse and dense, respectively. Based on the rule ranking of the measures in the two environmental conditions, we then cluster the measures into sets and see how they correlate with the property of interest. \subsection{Sparse datasets} \label{subsec5.1} Sparse datasets are characterized by a relatively high $f_{00}$ count with respect to, primarily, $f_{11}$ and, to a lesser extent, $f_{10}$ and $f_{01}$. As discussed in the previous section, we choose to analyze the effect of the $UNZR_{f_{11}}$ property in this setting. We mimic the rules from a synthetic dataset using artificially created sets of rules in the form of contingency tables, constructed specifically for the sparse setting. We achieve this environment by assigning low values to ${f_{11}}$ and high values to ${f_{00}}$, while ${f_{10}}$ and ${f_{01}}$ fall between the two extremes. The ${f_{11}}$, ${f_{00}}$, ${f_{10}}$ and ${f_{01}}$ cells of the tables took the values \{0, 1, 10, 11\}, \{1000, 5000, 10000, 25000, 50000, 75000, 100000\}, \{10, 100, 250, 500, 600, 800, 1000\} and \{10, 100, 250, 500, 600, 800, 1000\}, respectively. This resulted in $1372$ unique contingency tables, each representing a rule in a sparse dataset. For the real-world dataset, we chose the fairly popular 'Adult' data set from the UCI Machine Learning archive \cite{UCI1}. This is essentially an extraction from a census database which has demographic and financial information of individuals.
This includes features like age, employment, gender, native country, etc. In its native format there are a total of 14 features and more than 48,000 records. A detailed discretization and binarization of the variables was carried out in conformance with the best practices suggested in \cite{tankumarbook}. This helps us create the transactional table, which has a total of 115 features. We confine the analysis to one-to-one rules. We use basic support-based pruning with a threshold close to 0, in order to get a full enumeration of all one-to-one rules while avoiding a variable mapping to itself. This results in a total of $13000$ rules. Similar to \cite{all}, we choose a subset of the rules to compare. However, given the unique nature of our problem, unlike \cite{all} we do not randomly select the rules. Instead, we choose a subset of rules that are typically encountered in sparse data sets, by selecting cases where $f_{11}$ is lower than $f_{00}$. This results in $764$ rules. In the next steps, we follow the same procedure as \cite{all}. Each rule is evaluated using each measure, and a rank ordering of rules is produced for each measure. Using Spearman's rank correlation, we create a matrix of pairwise distances between measures, which acts as the adjacency matrix of a complete graph. We create clusters by applying a threshold value of 0.8 to the correlation coefficient. This process naturally creates groups of measures depending on the threshold used. While various other graph clustering algorithms could be implemented, the simplicity of this approach is appealing.
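The full pipeline of this section can be sketched end to end: enumerate the synthetic contingency tables, score each rule with every measure, correlate the rank orderings pairwise with Spearman's coefficient, threshold at 0.8, and take connected components of the resulting graph as clusters. The sketch below is a minimal illustration using only three of our own toy measure implementations, not the 50 measures of Table \ref{tbl:maintable}; all function and variable names are ours.

```python
import numpy as np
from itertools import product

# Synthetic sparse rules: every combination of the cell values from the
# text yields one contingency table, i.e. 4 * 7 * 7 * 7 = 1372 rules.
F11 = [0, 1, 10, 11]
F00 = [1000, 5000, 10000, 25000, 50000, 75000, 100000]
F10 = [10, 100, 250, 500, 600, 800, 1000]
F01 = [10, 100, 250, 500, 600, 800, 1000]
tables = np.array(list(product(F11, F00, F10, F01)), dtype=float)
assert tables.shape == (1372, 4)

# Three illustrative measures written over the contingency counts.
def support(t):
    f11, f00, f10, f01 = t.T
    return f11 / (f11 + f00 + f10 + f01)

def confidence(t):
    f11, f00, f10, f01 = t.T
    return f11 / (f11 + f10)

def coverage(t):
    f11, f00, f10, f01 = t.T
    return (f11 + f10) / (f11 + f00 + f10 + f01)

scores = {name: fn(tables)
          for name, fn in [('support', support), ('confidence', confidence),
                           ('coverage', coverage)]}

def spearman(x, y):
    # Spearman rank correlation as Pearson correlation of the ranks
    # (ties broken arbitrarily; adequate for this sketch).
    rx, ry = np.argsort(np.argsort(x)), np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

# Thresholded correlation graph, then connected components = clusters.
names = list(scores)
adj = {m: {o for o in names
           if o != m and spearman(scores[m], scores[o]) >= 0.8}
       for m in names}
clusters, seen = [], set()
for m in names:
    if m not in seen:
        comp, queue = set(), [m]
        while queue:
            cur = queue.pop()
            comp.add(cur)
            queue.extend(adj[cur] - comp)
        seen |= comp
        clusters.append(sorted(comp))
print(clusters)
```

With all 50 measures one would simply extend the `scores` dictionary; the thresholding and component extraction are unchanged.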
\begin{table}[t] \centering \caption{Empirical analysis - Sparse dataset} \label{tbl:table4} { \begin{tabular}{lcc|ccc} \midrule Dataset & Cluster & Measures & N & P & Y \\ \hline & & 50 & 10 & 15 & 25\\ \hline \multirow{3}{*}{Synthetic} & A & 21 & 0 & 4 & 17 \\ & B & 20 & 4 & 9 & 7 \\ & C & 9 & 6 & 2 & 1 \\ \hline \multirow{2}{*}{Adult} & A & 36 & 2 & 12 & 22 \\ & B & 14 & 8 & 3 & 3 \\ \end{tabular} } \end{table} Our study finds that there is a significant match between the three property states and the clusters that are formed for both the synthetic and real data sets. However, this is not a perfect overlap. We split the measures into three clusters in the synthetic setting and into two clusters in the 'Adult' dataset's rules. The cluster memberships are shown below: \textbf{Synthetic dataset}: \textbf{Cluster A}: \{ Recall, Precision, Confidence, Jaccard, F-Measure, Odd's Ratio, Sebag Schoenauer, Support, Lift, Ganascia, Kulczynski-1, Relative Risk, Yule's Q, Yule's Y, Cosine, Odd Multiplier, Information Gain, Laplace, Zhang, Leverage, Examples and Counter Examples \}, \textbf{Cluster B}: \{ Specificity, Negative Reliability, Accuracy, Descriptive Confirm, Causal Confirm, Piatetsky-Shapiro, Novelty, Causal Confidence, Certainty Factor, Loevinger, Conviction, Klosgen, 1-Way Support, 2-Way Support, Kappa, Putative Causal Dependency, Causal Confirm Confidence, Added Value, Collective Strength, Dependency \}, \textbf{Cluster C}: \{ Mutual Information, Coverage, Prevalence, Least Contradiction, Normalized Mutual Information, Implication Index, Gini Index, Goodman Kruskal, J-Measure \} \textbf{'Adult' dataset}: \textbf{Cluster A}: \{ Recall, Precision, Confidence, Jaccard, F-Measure, Odd's Ratio, Sebag Schoenauer, Support, Causal Confidence, Lift, Ganascia, Kulczynski-1, Relative Risk, Piatetsky-Shapiro, Novelty, Yule's Q, Yule's Y, Cosine, Odd Multiplier, Certainty Factor, Loevinger, Conviction, Information Gain, Laplace, Klosgen, Zhang, 1-Way Support, 2-Way Support, 
Leverage, Kappa, Putative Causal Dependency, Examples and Counter Examples, Causal Confirm Confidence, Added Value, Collective Strength, Dependency \}, \textbf{Cluster B}: \{ Mutual Information, Specificity, Negative Reliability, Accuracy, Coverage, Prevalence, Least Contradiction, Descriptive Confirm, Causal Confirm, Normalized Mutual Information, Implication Index, Gini Index, Goodman Kruskal, J-Measure \} The relationship between empirical cluster memberships and property affiliations is summarized in Table \ref{tbl:table4}. In the synthetic dataset, all 21 measures of cluster A satisfy $UNZR_{f_{11}}$, either completely or partially. The split is rather more even in cluster B, but cluster C is dominated by measures which do not satisfy $UNZR_{f_{11}}$. In the 'Adult' dataset, cluster A again overwhelmingly consists of measures which satisfy $UNZR_{f_{11}}$, either partially or completely (34 out of 36), whereas the measures that do not satisfy $UNZR_{f_{11}}$ tend to concentrate in cluster B. \subsection{Dense datasets} \label{subsec5.2} We characterize a dense dataset as one with a relatively high ${f_{11}}$ count compared to, primarily, the ${f_{00}}$ count and, to a lesser extent, $f_{10}$ and $f_{01}$. As discussed earlier, we choose to study the effect of the $UNZR_{f_{00}}$ property in this environment. The motivation for using synthetic tables is the same as in the sparse case. The values chosen for the ${f_{11}}$, ${f_{00}}$, ${f_{10}}$ and ${f_{01}}$ cells are \{1000, 5000, 10000, 25000, 50000, 75000, 100000\}, \{0, 1, 10, 11\}, \{10, 100, 250, 500, 600, 800, 1000\} and \{10, 100, 250, 500, 600, 800, 1000\}, respectively. This resulted in $1372$ unique contingency tables. For the real-world dataset, we chose the 'Mushroom' data set from the UCI Machine Learning archive \cite{UCI1}. This data set includes descriptions of hypothetical samples corresponding to 23 species of gilled mushrooms in the Agaricus and Lepiota families.
The methodology of rule generation was identical to that of the 'Adult' dataset, with the focus on creating rules from a dense environment (as opposed to the sparse environment in the 'Adult' dataset). This process results in $739$ rules being used for the purpose of rule ranking. \begin{table}[t] \centering \caption{Empirical analysis - Dense dataset} \label{tbl:table5} { \begin{tabular}{lcc|ccc} \midrule Dataset & Cluster & Measures & N & P & Y \\ \hline & & 50 & 23 & 18 & 9 \\ \hline \multirow{3}{*}{Synthetic} & A & 24 & 3 & 15 & 6 \\ & B & 19 & 14 & 2 & 3 \\ & C & 7 & 6 & 1 & 0 \\ \hline \multirow{4}{*}{Mushroom} & A & 23 & 2 & 15 & 6 \\ & B & 12 & 7 & 3 & 2 \\ & C & 12 & 11 & 0 & 1 \\ & D & 3 & 3 & 0 & 0 \\ \end{tabular} } \end{table} The synthetic dataset was split into 3 clusters while the 'Mushroom' dataset was split into 4 clusters. The cluster memberships are shown below: \textbf{Synthetic dataset:} \textbf{Cluster A:} \{ Recall, Odd's Ratio, Specificity, Negative Reliability, Lift, Coverage, Piatetsky-Shapiro, Novelty, Yule's Q, Yule's Y, Odd Multiplier, Certainty Factor, Loevinger, Conviction, Information Gain, Klosgen, Zhang, 1-Way Support, 2-Way Support, Kappa, Putative Causal Dependency, Added Value, Collective Strength, Dependency \} \textbf{Cluster B:} \{ Precision, Confidence, Jaccard, F-Measure, Sebag Schoenauer, Support, Accuracy, Causal Confidence, Ganascia, Kulczynski-1, Prevalence, Relative Risk, Cosine, Least Contradiction, Descriptive Confirm, Causal Confirm, Laplace, Examples and Counter Examples, Causal Confirm Confidence \} \textbf{Cluster C:} \{ Mutual Information, Normalized Mutual Information, Implication Index, Gini Index, Goodman Kruskal, Leverage, J-Measure \} \textbf{'Mushroom' dataset:} \textbf{Cluster A}: \{ Recall, Specificity, Negative Reliability, Lift, Piatetsky-Shapiro, Novelty, Yule's Q, Yule's Y, Odd Multiplier, Certainty Factor, Loevinger, Conviction, Information Gain, Klosgen, Zhang, 1-Way Support, 2-Way Support, Leverage,
Kappa, Putative Causal Dependency, Added Value, Collective Strength, Dependency \} \textbf{Cluster B:} \{ Mutual Information, Odd's Ratio, Accuracy, Causal Confidence, Prevalence, Relative Risk, Least Contradiction, Descriptive Confirm, Causal Confirm, Normalized Mutual Information, Gini Index, J-Measure \} \textbf{Cluster C:} \{ Precision, Confidence, Jaccard, F-Measure, Sebag Schoenauer, Support, Ganascia, Kulczynski-1, Cosine, Laplace, Examples and Counter Examples, Causal Confirm Confidence \} \textbf{Cluster D:} \{ Coverage, Implication Index, Goodman Kruskal \} The results from this analysis are summarized in Table \ref{tbl:table5}. In the synthetic dataset, cluster A is populated by measures which satisfy $UNZR_{f_{00}}$ (21 out of 24), either partially or completely. Clusters B (14 out of 19) and C (6 out of 7) are dominated by measures that do not satisfy $UNZR_{f_{00}}$. In the 'Mushroom' dataset, cluster A again consists of measures which satisfy $UNZR_{f_{00}}$, either partially or completely (21 out of 23). Cluster B is split between the measures that satisfy $UNZR_{f_{00}}$ and those that do not (7 N's vs 3 P's and 2 Y's). Clusters C and D overwhelmingly consist of measures which do not satisfy $UNZR_{f_{00}}$, with only 1 measure satisfying the property among the 15 in both clusters combined. In general, it is evident that the clustering holds a clear mapping to the $UNZR_{f_{00}}$ property for the selected rules in a dense setting. \section{Conclusions and Future work} \label{sec:Conclusion} This study presents a new property-based framework (RCA) for analyzing interestingness measures. The framework uses the partial derivative of an IM with respect to a frequency count, which provides insight into how the IM will change when that frequency count is increased or decreased.
This approach is then used to create two specific properties, $UNAI$ and $UNZR$, which correspond to evaluating the partial derivative at two points, infinity and zero. The study then showcases the classification of a broad set of measures in accordance with these properties and also compares it to the classification done by other properties in the literature. The properties proposed in this study classify the measures by assigning memberships across all property states, suggesting that they might be discerning meaningful differences between the measures. The classifications through these properties are also fairly independent of those done by pre-existing properties, suggesting that something new is being captured. Finally, the study showcases the utility of classification through the new properties by conducting empirical analyses on both synthetic and real-world data sets, which relate the rule ranking behavior of the measures to two of the proposed properties. The findings suggest that the rule ranking behavior holds a clear relationship to the classification done by each property. One of the major contributions of this research is the new framework (RCA) for analyzing measures using the rate-of-change idea through partial differentiation. This is markedly different from the property-based classification schemes that currently exist in the literature. Given this, we feel that there could be more extensions in the development of properties that build on this idea, beyond the two proposed in this study. Moreover, the idea of using differentiation as a tool for defining properties opens up a plethora of characteristics that can be analyzed. One possible extension is to study the shape of the partial derivative curve (linear, polynomial, etc.).
Finally, the authors agree with the view put forth in \cite{all} that meaningful classification of measures also needs to be driven by the similarity (or dissimilarity) in rule ranking that can be seen on empirical data sets. We would like to extend this argument by stating that the value of mathematical properties, derived from principled arguments, can be benchmarked across the board in this fashion (such an analysis is performed here exclusively for the two proposed properties). This can also be extended beyond interestingness measures in ARM: classification metrics (some of which are included in this analysis, such as accuracy, recall and specificity) are defined over the same contingency table (for two-class classification problems) and could therefore lend themselves to a representation and segmentation using a rate of change analysis. \section*{Acknowledgments}\label{sec:Acknowledgments} This work was supported by funding from IIT Madras (CSE/14-15/831/RFTP/BRAV).
\section{Experimental Setup} Our qubit is encoded in the $^2\mathrm{S}_{1/2}$ hyperfine ground states of a single laser-cooled ${}^{171}{\textrm{Yb}}^{+}$ ion confined in a linear Paul trap, with the computational basis states defined as $\ket{0} \equiv \ket{\textrm{F}=0, \textrm{m}_\textrm{F} = 0}$ and $\ket{1} \equiv \ket{\textrm{F}=1, \textrm{m}_\textrm{F} = 0}$. Laser cooling, state initialization to $\ket{0}$ and detection are performed using a laser at 369.4\,nm, which couples the $\ket{{}^{2}{\textrm{S}}_{1/2}, \textrm{F}=1}$ ground state to the first excited state $\ket{^2\mathrm{P}_{1/2}, \textrm{F}=0}$. As the ion selectively fluoresces when it is projected onto the upper, bright qubit state $\ket{1}$, we are able to distinguish between the two basis states by counting the number of emitted photons during the detection period. Further details about the state detection protocol, including a Bayesian inference procedure used to determine the state from both the number of counted photons and their arrival times, can be found in the \emph{Supplementary Materials} of reference \cite{Mavadia:2017}. Single-qubit rotations are driven via a microwave field near 12.6~GHz generated by a Keysight E8267D Vector Signal Generator (VSG). Using the built-in $IQ$ modulation, we can freely adjust the phase of the microwave signal to implement rotations around arbitrary equatorial axes of the Bloch sphere. Rotations about the $z$-axis are implemented as instantaneous, pre-calculated $IQ$ frame shifts and consequently are not susceptible to engineered detuning or amplitude errors. Quantum circuits of multiple Clifford operations are preloaded into the VSG and selectively compiled prior to the recording of each data set. The corresponding microwave pulses are switched using an inbuilt VSG protocol, RF blanking, which serves to minimize microwave leakage between operations and at the end of a circuit.
In addition, the technique suppresses ringing in the pulse amplitude of the microwaves at the beginning and conclusion of an operation, which is caused by updates of the $IQ$ values defining both phase and amplitude. \section{Measurement Procedure and Engineered Noise Correlations} The experiments in this manuscript are performed using $k=50$ circuits each comprising $J=100$ operations. Here, the first $J-1=99$ are randomly composed Clifford operations $\mathcal{\hat{C}}_j$ and the final operation $\mathcal{\hat{C}}_J = (\prod_{j=1}^{J-1}\mathcal{\hat{C}}_j)^{\dagger}$ is selected such that the circuit implements the identity $\mathbb{I}$ in the absence of error. A full list of the Clifford operations and their physical implementation can be found in the \emph{Supplementary Materials} of reference \cite{Ball:2016}. Each circuit is executed in the presence of engineered detuning noise characterized by a temporal correlation length $\mathcal{M}_\mathrm{n}$. In particular, three cases are implemented: (1) fully correlated across the circuit ($\mathcal{M}_\mathrm{n} = J$), (2) fully uncorrelated between sequential gates, with noise values stochastically varying in blocks commensurate with the length of a physical $\pi/2$ rotation ($\mathcal{M}_\mathrm{n}\leq1$), and (3) a combination of both correlated and uncorrelated noise components ($\mathcal{M}_\mathrm{n}^\mathrm{cor} \geq \mathcal{M}_\mathrm{n}^\mathrm{uncorr}\leq1$). This type of noise process is informed by the realistic situation of a time varying magnetic field, which changes the qubit energy splitting, creating a detuning from the driving field. In particular, fluctuations at 50~Hz (resp. 60~Hz) can be commonly observed due to the presence of AC mains connections. 
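The circuit construction described above, $J-1$ random Clifford operations closed by the inverse of their product, can be sketched in a few lines. For brevity the sketch composes from a small subset of the single-qubit Clifford group rather than the full 24-element set used in the experiment; the gate matrices and names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

# A subset of single-qubit Clifford-group unitaries, for illustration.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]], dtype=complex)
gates = [I2, X, Y, Z, H, S]

J = 100  # circuit length used in the experiments

# Compose J-1 random operations, then append the inverse of their
# product as C_J, so the circuit implements the identity without error.
U = I2
for _ in range(J - 1):
    U = gates[rng.integers(len(gates))] @ U
circuit_net = U.conj().T @ U  # final gate C_J = U^dagger

print(np.allclose(circuit_net, I2))  # True
```

In the experiment the final inversion gate is likewise drawn from the Clifford group, so the engineered noise acts on it exactly as on the other $J-1$ operations.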
Other strongly correlated slow frequency drifts are often related to changes associated with the ambient temperature of electrical equipment and duty cycle changes during operations, while fast fluctuations are usually caused by electrical noise in components that is insufficiently filtered or intrinsic to the qubit environment (such as TLS noise in superconducting qubits or anomalous heating in ion traps). In general, detuning noise will therefore both have a correlated (slow) component and a \enquote{fast} largely uncorrelated component. For each instance of engineered noise, $n=200$ noise realizations were sampled from a normal distribution $\delta\sim\mathcal{N}(0,\sigma^2)$ with rms $\sigma$. Here, $\delta = (\Delta/\Omega)$ is a fractional detuning expressed by the ratio of the frequency detuning $\Delta$ from the qubit transition frequency near 12.6~GHz normalized by the Rabi frequency $\Omega$ (coupling strength) of a driven rotation. Every combination of circuit and noise realization was repeated $r=220$ times to reduce the impact of quantum projection noise. \section{Dynamically Corrected Gates} \label{sec:DCG} Dynamically corrected gates (DCGs) are implemented by replacing \enquote{primitive} physical rotations with composite sequences comprised of multiple physical rotations \cite{Kabytayev2014}. In particular, we are investigating the \enquote{Compensation for Off-Resonance with a Pulse SEquence} (CORPSE) \cite{Cummins:2000,True} and \enquote{Walsh amplitude modulated filter} (WAMF) \cite{Ball:2014} approaches that abstract target rotations away from the underlying physical operations to a virtual gate level. In both cases, target $\theta_t = \pi$ and $\theta_t = \pi/2$ gates are constructed as 3 segment pulses with the segments' rotation angles $\theta_i$, Rabi frequencies $\Omega_i$ relative to some maximum frequency $\Omega$, and phase angles $\phi_i$ as indicated in the table below. 
\begin{table}[H] \begin{centering} \renewcommand{\arraystretch}{2} \begin{tabular}{|c||c|c|c|} \hline DCG type& ($\theta_1, \Omega_1, \phi_1$) &($\theta_2, \Omega_2, \phi_2$) & ($\theta_3, \Omega_3, \phi_3$) \\ \hline\hline CORPSE & ($2\pi + \theta_t/2 - k , \Omega, 0 $) & ($2\pi - 2k , \Omega, \pi $) & ($\theta_t/2 - k , \Omega, 0 $) \\ \hline WAMF & ($\tfrac{X_0 + X_3}{4} , \Omega, 0 $) & ($\tfrac{X_0 - X_3}{2} , \tfrac{X_0 - X_3}{X_0 + X_3} \Omega, 0 $) & ($\tfrac{X_0 + X_3}{4} , \Omega, 0 $) \\ \hline \end{tabular}\caption{Required parameters to construct a target $\hat{\sigma}_x$ CORPSE and WAMF rotation with target angle $\theta_t$. An additional $\pi/2$ shift in $\phi$ is required for $\hat{\sigma}_y$ rotations. Here, $k = \arcsin{(\tfrac{\sin{(\theta_t/2)}}{2})}$ and for WAMF DCGs, the target rotations $\theta_t = (\tfrac{\pi}{4}, \tfrac{\pi}{2}, \pi)$ have $X_0 = (2\tfrac{1}{4}, 2\tfrac{1}{2}, 3)\pi$ and $X_3 = (0.36, 0.64, 1)\pi$ determined explicitly.}\label{SuppTable:DCGs} \end{centering} \end{table} A schematic of the gates for a target rotation $\theta_t = \pi$ about the $x$-axis is shown in Fig. \ref{SuppFig:DCGs}. \begin{figure}[h] \centering \includegraphics[scale = 1]{FigDCGs.pdf} \caption{Construction of a CORPSE and WAMF DCG for target rotation $\theta_t = \pi$ about the $x$-axis. } \label{SuppFig:DCGs} \end{figure} To ensure that the error suppressing aspects of the DCGs are maintained for all Clifford gates, we implement their identities $\mathbb{I}$ by concatenating an $X_\pi$ rotation and its inverse $-X_\pi$ in the case of CORPSE and WAMF. While this again results in a net zero rotation, effectively identical to the simple wait time of \enquote{primitive} gates, it makes the identity operation first-order insensitive to detuning errors during its operation. 
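As a simple consistency check on the CORPSE parameters in Table \ref{SuppTable:DCGs}: on resonance, the middle segment (phase $\pi$) rotates in the opposite sense, so the three segment angles net to the target angle for any $\theta_t$, independently of $k$. A short sketch of this check (function name is ours):

```python
import math

def corpse_segments(theta_t):
    """Segment rotation angles of a CORPSE gate (phases 0, pi, 0)."""
    k = math.asin(math.sin(theta_t / 2) / 2)
    return (2 * math.pi + theta_t / 2 - k,
            2 * math.pi - 2 * k,
            theta_t / 2 - k)

# The pi-phased middle segment counter-rotates, so on resonance
# theta1 - theta2 + theta3 = theta_t exactly, with the k terms cancelling.
for theta_t in (math.pi, math.pi / 2):
    t1, t2, t3 = corpse_segments(theta_t)
    assert math.isclose(t1 - t2 + t3, theta_t)
```

The extra $2\pi$ windings cancel in the same way; they matter only off resonance, where they provide the first-order suppression of detuning errors.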
\section{Linking Noise to Error Using the Error Vector} Any noisy operation, $\tilde{U}_j$, can be decomposed into an ideal component, $U_j$ and an error component, $\tilde{U}_{\varepsilon,j}$, such that \mbox{$\tilde{U}_j = \tilde{U}_{\varepsilon,j}U_j$}. The error operator is expressed as $\tilde{U}_{\varepsilon,j} = \textrm{exp}\{ \sum_{\alpha=1}^\infty [\vec{\textbf{a}}_j]_\alpha \cdot \vec{\sigma} \}$, where $\alpha$ denotes the order of the so-called Magnus expansion, $\vec{\sigma}$ is the vector of Pauli matrices associated with the operation, and $\vec{\textbf{a}}_j$ is the error vector characterizing the strength and nature (affected quadrature) of the error. Correlations in the error process manifest in its gate-by-gate evolution throughout a quantum circuit. At the level of physical gate rotations (\enquote{primitive} gates), one can observe a direct translation between correlations in an applied detuning error process during a quantum circuit and correlations arising in the magnitude of the error vector for each of the gates in the circuit. Supported by the filter-transfer-function framework \cite{ViolaFFF}, this suggests that primitive gates map noise correlations directly to error correlations. In the main text, we show this mapping by calculating the autocorrelation function of the first-order error vector magnitude for a single randomly composed circuit under one noise instance with varying block correlation length, $\mathcal{M}_\mathrm{n}$. In Figure~\ref{SuppFig:ErrorVect} below, this mapping is seen to be persistent, if not strengthened, even under averaging error vectors over randomly composed circuits and noise realizations. \begin{figure}[h!] 
\centering \includegraphics[scale = 1]{FigACF.pdf} \caption{\textbf{Mapping correlations in a noise process to correlations in circuit errors} Autocorrelation function (ACF) of the first-order error vector magnitude, $\vert \vec{\textbf{a}} \vert$, for the first 100 gates of a circuit comprising $J=1000$ randomly composed primitive Clifford operations under a detuning noise process with fractional detuning $\delta \sim \mathcal{N}(0,\sigma^2)$, which changes value every noise block length $\mathcal{M}_\mathrm{n}$. (A) ACF calculated for a single $J=1000$ gate circuit under one noise realization, and (B) ACF averaged over $k=50$ circuits and $n=50$ noise realizations to illustrate the persistence of the mapping from noise correlations $\mathcal{M}_\mathrm{n}$ to error correlations $\mathcal{M}_\varepsilon$ for primitive gates.} \label{SuppFig:ErrorVect} \end{figure} \section{Modifications to Original Random Walk Model for Randomized Benchmarking} The theoretical model underlying this work was initially presented by Ball et al. in reference \cite{Ball:2016}, wherein the error process studied was an instantaneous phase error, $e^{i\delta\sigma_z}$, occurring after each Clifford operation in a randomly composed quantum circuit, such as those used in Randomized Benchmarking (RB). In this context, the dephasing magnitude was sampled from a zero-mean Gaussian distribution with rms $\sigma$, $\delta\sim\mathcal{N}(0,\sigma^2)$. The key finding of Ball et al. was that, to first order, it is possible to map between the errors occurring throughout the Clifford operation circuit and a walk in 3D Pauli error space, with the net walk length relating to the circuit fidelity. It was found that this error process results in noise-averaged circuit fidelities that are Gamma distributed, $\noiseAve{\mathcal{F} } \sim \Gamma(\alpha,\beta)$.
The shape and scale parameters, $\alpha$ and $\beta$ respectively, can be calculated from first principles using the strength of the error process $\sigma^2$, the circuit length $J$, and the number of noise averages $n$. These parameters describe how the mean and width of the distribution change with noise averaging. The distributions for errors that are constant across a circuit (correlated) and errors that change randomly between sequential gates (uncorrelated) are respectively given by \begin{align} 1 - \langle \mathcal{F} \rangle_{n,\textrm{correlated}} &\sim \Gamma(\alpha = \tfrac{3}{2}, \beta = \tfrac{2}{3}J\sigma^2) \\ 1 - \langle \mathcal{F} \rangle_{n,\textrm{uncorrelated}} &\sim \Gamma(\alpha = \tfrac{3n}{2}, \beta = \tfrac{2}{3n}J\sigma^2) \end{align} where the distribution expectation and variance are given by $\mathbb{E} = \alpha\beta$ and $\mathbb{V} = \alpha\beta^2$. Consequently, we see there is narrowing of the distribution with noise averaging solely for uncorrelated errors. In the following, we present a revised version of this theoretical model linking it to the experiments performed in the present manuscript. \subsection{Model Revision for Survival Probability Measurement} When applying the above theory to experimental results, it is necessary to consider how the measurement protocol differs from the analytic model. The original theory was based around the fidelity of a noisy circuit operation, with net operator $\tilde{\mathcal{S}}$, compared to the ideal circuit, $\mathcal{S} = \mathbb{I}$, \begin{align} \mathcal{F} &= \tfrac{1}{4} \vert \textrm{Tr} (\mathcal{S}^\dagger \tilde{\mathcal{S}}) \vert^2 \nonumber \\ &= \tfrac{1}{4} \vert \textrm{Tr} (\tilde{\mathcal{S}}) \vert^2. \end{align} In our experiment we measure the probability $P(\ket{1})$ of a qubit initially prepared in $\ket{0}$ \emph{not} to return to $\ket{0}$ but to end up in state $\ket{1}$.
As this is a projective measurement onto the $z$-axis of the Bloch sphere, it is insensitive to rotations about the $z$-axis, \emph{i.e.} it is phase-invariant. Consequently, we are insensitive to the component of the Pauli space walk in the $\hat{\sigma}_z$-direction. Indeed, the projective measurements actually probe a 2D projection of the walk onto the $\hat{\sigma}_x,\hat{\sigma}_y$-plane, and the Gamma distribution shape and scale parameters become \begin{align} \langle P(\ket{1}) \rangle_{n,\textrm{correlated}} &\sim \Gamma(\alpha = 1, \beta = \tfrac{2}{3}J\sigma^2) \label{eq:2DCorGamma} \\ \langle P(\ket{1}) \rangle_{n,\textrm{uncorrelated}} &\sim \Gamma(\alpha = n, \beta = \tfrac{2}{3n}J\sigma^2) \label{eq:2DUncorGamma}. \end{align} Further details can be found in the \emph{Supplementary Materials} of \cite{Mavadia:2017}. \subsection{Model Revision for Concurrent Detuning Error Processes} A second alteration to the model emerges from our method of noise engineering. The results presented here study a time-varying or constant frequency detuning during the circuit's execution. Unlike in the original model, this induces multi-axis errors throughout the individual Clifford operations, not just between them. Because $\pi$ and $\pi/2$ rotations span gates of different lengths, they accumulate different amounts of phase from the detuning. As such, the analytic model must now consider gate-dependent errors. Such errors violate the original assumptions of randomized benchmarking, as has recently been highlighted in reference \cite{Wallman:2017}. The adaptation to the theory for noisy Clifford gates is initially presented for two noise processes: (1) detunings that are constant across individual gates but vary randomly between gates, or (2) constant detunings across the entire circuit, giving maximal temporal correlation in the noise.
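The contrast between the correlated and uncorrelated Gamma parameters above follows directly from $\mathbb{E}=\alpha\beta$ and $\mathbb{V}=\alpha\beta^2$, as the short sketch below illustrates; the circuit length and noise strength used are illustrative placeholders, not experimental values.

```python
# Mean/variance of the 2D-projected Gamma distributions for correlated vs
# uncorrelated detuning errors, using E = alpha*beta and V = alpha*beta^2.
J, sigma2 = 100, 1e-4  # illustrative circuit length and noise strength

def gamma_stats(alpha, beta):
    return alpha * beta, alpha * beta ** 2

rows = []
for n in (1, 10, 100):
    m_cor, v_cor = gamma_stats(1, (2 / 3) * J * sigma2)        # correlated: n-independent
    m_unc, v_unc = gamma_stats(n, (2 / (3 * n)) * J * sigma2)  # uncorrelated: narrows
    rows.append((n, m_cor, m_unc, v_cor / v_unc))

for n, m_cor, m_unc, ratio in rows:
    # Means coincide; the uncorrelated variance is smaller by exactly a factor n
    print(n, m_cor, m_unc, ratio)
```

The variance ratio equals $n$ exactly, which is the narrowing-with-averaging signature used throughout this work to separate the two error classes.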
Starting with the standard randomized benchmarking procedure we compile a circuit of randomly composed single qubit Clifford operations $\prod_{j=1}^J \hat{C}_j = \mathbb{I}$ such that in the absence of error the final state will be the same as the prepared state. \noindent The effectively implemented gates, $\tilde{C}_j$, differ from the ideal gates $\hat{C}_j$ by an error map $\Lambda_j$ that satisfies $\tilde{C}_j=\Lambda_j \hat{C}_j$. Then, the circuit is given by \begin{equation} \prod_{j=1}^J \Lambda_j\hat{C}_j=\tilde{S}. \end{equation} \noindent The single qubit Clifford gates are made up of rotations about axes of the Bloch sphere \begin{equation} \hat{R}_{\hat{W}}(\theta)=e^{-i\tfrac{\theta}{2}\hat{W}}, \label{eq:idealrot} \end{equation} where $\hat{W}\in \{\mathbb{I},\hat{\sigma}_x,\hat{\sigma}_y,\hat{\sigma}_z\}$ and $\theta=\pi, \pm\tfrac{\pi}{2}$. \noindent The implemented rotations, with the engineered error, are \begin{equation} \tilde{R}_{\hat{W}}(\theta,\delta)=e^{-i\left(\tfrac{\theta}{2}\hat{W}+\tfrac{\lvert\theta\rvert}{2}\delta \hat{\sigma}_z\right)}, \label{eq:tilderot} \end{equation} except for $\hat{\sigma}_z$ rotations, which are implemented as a passive frame change and hence error-free. Using the standard definition of single qubit Clifford gates \cite{Ball:2016}, there is only one non-$\hat{\sigma}_z$ rotation, hence one error map, per gate.
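The compilation step $\prod_{j=1}^J \hat{C}_j=\mathbb{I}$ can be illustrated with a minimal numerical sketch: build the 24-element single-qubit Clifford group by closure under $\pi/2$ generators, compose $J-1$ random gates, and append the inverting group element. The phase-invariant fingerprint used to index the group is an implementation convenience on our part, not part of the protocol.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def rot(axis, theta):
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * axis

def key(U):
    """Fingerprint of a 2x2 unitary, invariant under global phase."""
    idx = int(np.argmax(np.abs(U.flatten()) > 1e-6))
    V = U * np.exp(-1j * np.angle(U.flat[idx]))
    return tuple(np.round(V.flatten(), 6))

# Build the single-qubit Clifford group (24 elements mod phase) by closure
identity = np.eye(2, dtype=complex)
group = {key(identity): identity}
frontier = [identity]
while frontier:
    nxt = []
    for U in frontier:
        for g in (rot(sx, np.pi / 2), rot(sy, np.pi / 2)):
            W = g @ U
            k = key(W)
            if k not in group:
                group[k] = W
                nxt.append(W)
    frontier = nxt

rng = np.random.default_rng(0)
cliffords = list(group.values())
S = identity
for _ in range(19):               # J-1 random Clifford gates
    S = cliffords[rng.integers(len(cliffords))] @ S
S = group[key(S.conj().T)] @ S    # recovery gate: the Clifford equal to S^dagger
print(round(abs(np.trace(S)) ** 2 / 4, 6))  # -> 1.0, error-free survival
```

Because the recovery gate is itself a Clifford, the error-free circuit composes exactly to the identity up to a global phase, which the trace fidelity ignores.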
\noindent We calculate the error map for the different rotations \begin{subequations} \label{eq:decomposed_all} \begin{align} \Lambda^{(\mathbb{I})}(\pi,\delta)&=\left(1-\tfrac{\pi^2 \delta^2}{8}\right)\mathbb{I}-i\tfrac{\pi\delta}{2}\hat{\sigma}_z+ \mathcal{O}(\delta^3),\\ \Lambda^{(\hat{X})}(\pi,\delta)&= \left(1-\tfrac{\delta ^2}{2}\right)\mathbb{I} -\tfrac{1}{4} i \pi \delta ^2 \hat{\sigma}_x +i \delta \hat{\sigma}_y+ \mathcal{O}(\delta^3),\\ \Lambda^{(\hat{X})}(\pm\tfrac{\pi}{2},\delta)&= \left(1-\tfrac{\delta ^2}{4}\right)\mathbb{I}\pm\tfrac{2-\pi}{8} i \delta ^2\hat{\sigma}_x\pm\tfrac{i \delta }{2}\hat{\sigma}_y-\tfrac{i \delta }{2}\hat{\sigma}_z+ \mathcal{O}(\delta^3),\\ \Lambda^{(\hat{Y})}(\pi,\delta)&=\left(1-\tfrac{\delta ^2}{2}\right)\mathbb{I}-i \delta \hat{\sigma}_x-\tfrac{1}{4} i \pi \delta ^2\hat{\sigma}_y+ \mathcal{O}(\delta^3),\\ \Lambda^{(\hat{Y})}(\pm\tfrac{\pi}{2},\delta)&=\left(1-\tfrac{\delta ^2}{4}\right)\mathbb{I}\mp\tfrac{i \delta }{2}\hat{\sigma}_x\pm\tfrac{2-\pi}{8} i \delta ^2\hat{\sigma}_y-\tfrac{i \delta }{2}\hat{\sigma}_z+ \mathcal{O}(\delta^3), \end{align} \end{subequations} which can be written in the general form \begin{equation} \Lambda_j=\mathbb{I}+i \delta_j \bm{\nu}_j \cdot \bm{\sigma}+ \delta^2(i \bm{\eta}_j\cdot \bm{\sigma} - a_j \mathbb{I}) + \mathcal{O}(\delta^3), \label{eq:genmap} \end{equation} where $\bm{\sigma}$ is a vector of Pauli matrices and the vectors $\bm{\nu}_j$, $\bm{\eta}_j$ and the scalar $a_j$ depend on which error map from \eqn{eq:decomposed_all} is used. \noindent The survival probability averaged over $n$ noise instances is calculated using \begin{equation} 1-\noiseAve{ P(\ket{1}) }=\noiseAve{\lvert\bra{0}\tilde{S}\ket{0}\rvert^2}. \end{equation} We use the method from \cite{Ball:2016} to approximate the circuit.
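The expansion of $\Lambda^{(\hat{X})}(\pi,\delta)$ in \eqref{eq:decomposed_all} can be verified numerically by forming $\Lambda=\tilde{R}\hat{R}^\dagger$ from Eqs.~\eqref{eq:idealrot} and \eqref{eq:tilderot}; the residual against the quoted series should scale as $\delta^3$. This is an independent check we add here, not part of the original derivation.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def expmi(v):
    """exp(-i v.sigma) for a real 3-vector v (half-angles included in v)."""
    a = np.linalg.norm(v)
    n = v / a if a else v
    return np.cos(a) * I2 - 1j * np.sin(a) * (n[0] * sx + n[1] * sy + n[2] * sz)

def residual(delta, theta=np.pi):
    U_tilde = expmi(np.array([theta / 2, 0.0, abs(theta) / 2 * delta]))  # implemented
    U_ideal = expmi(np.array([theta / 2, 0.0, 0.0]))                     # ideal
    Lam = U_tilde @ U_ideal.conj().T
    # Quoted series: (1 - d^2/2) I - (i pi d^2 / 4) sx + i d sy + O(d^3)
    series = (1 - delta ** 2 / 2) * I2 - 1j * np.pi * delta ** 2 / 4 * sx + 1j * delta * sy
    return np.max(np.abs(Lam - series))

print(residual(1e-2), residual(1e-3))  # second is ~1000x smaller: residual is O(delta^3)
```

The same construction reproduces the remaining maps in \eqref{eq:decomposed_all} by changing the rotation axis and angle.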
Each error map can be translated to a step in Pauli space away from the ideal state, with the total, noise-averaged, random walk given by \begin{equation} \noiseAve{\bm{R}}=\tfrac{1}{n}\sum_{k=1}^n\sum_{j=1}^J \delta_{jk} \bm{r}_j. \end{equation} Here $\bm{r}_j$ is obtained from the product of the preceding ideal gates modifying $\bm{\nu}_j$ as $\hat{C}_{1...j-1}\bm{\nu}_j\cdot\bm{\sigma}\hat{C}_{1...j-1}^\dagger=\bm{r}_j\cdot\bm{\sigma}$. \noindent It can be shown \cite{Harris:2018} that the survival probability is given by \begin{equation} 1 - \noiseAve{ P(\ket{1}) }=1-\left(\noiseAve{ \lvert\bm{R}\rvert^2} - \noiseAve{\lvert R_z\rvert^2}+\mathcal{O}(\delta^3)\right), \end{equation} where $R_z$ is the walk along the $\hat{\sigma}_z$-axis in Pauli space. \noindent We calculate the characteristics of the survival probability from the statistics of the walk, weighting the contribution of each gate type by the gate-dependent step \mbox{$1\hat{n}_1, (\tfrac{1}{2}\hat{n}_1 + \tfrac{1}{2}\hat{n}_2), \tfrac{\pi}{2}\hat{n}_1$} for $\pi, \tfrac{\pi}{2}, \mathbb{I}$ gates respectively with $\hat{n}_1,\hat{n}_2 \in \{\hat{\sigma}_x,\hat{\sigma}_y,\hat{\sigma}_z\}$, and get the expectation value \begin{equation} \E{\noiseAve{ P(\ket{1}) }}\approx J \sigma^2 \tfrac{2}{3}\left(\tfrac{1}{2}+\tfrac{\pi^2}{96}\right). \label{eq:meanstdc} \end{equation} For the noise-averaged variance we need to take correlations into account due to the gate-dependent nature of the error maps. The gate dependence gives us a random number of steps along a random axis, which leads to correlations even after averaging over different step lengths.
\noindent When we have uncorrelated errors, we calculate the variance to be \begin{align} \Var{\noiseAve{ P(\ket{1}) }}\approx&\tfrac{J^2 \sigma^4}{n}\left(\tfrac{4}{9}\left(\tfrac{1}{2}+\tfrac{\pi^2}{96}\right)^2+\tfrac{1}{J}\left(3\left(\tfrac{7}{36}+\tfrac{\pi^4}{576}\right)\nonumber \right.\right.\\ &\left.\left.-\tfrac{8}{9}\left(\tfrac{1}{2}+\tfrac{\pi^2}{96}\right)^2\right)+\tfrac{(n-1)}{J}\left(\tfrac{7}{36}+\tfrac{\pi^4}{576}\right.\right.\nonumber \\ &\left.\left.-\tfrac{4}{9}\left(\tfrac{1}{2}+\tfrac{\pi^2}{96}\right)^2\right)\right), \label{eq:varstac} \end{align} noting that in the limit $n\rightarrow\infty$, the variance scaling saturates at a constant $\propto \tfrac{1}{J}$. \noindent For correlated errors we get \begin{align} \Var{\noiseAve{ P(\ket{1}) }}\approx&\tfrac{J^2 \sigma^4}{n}\left(\tfrac{12}{9}\left(\tfrac{1}{2}+\tfrac{\pi^2}{96}\right)^2+\tfrac{1}{J}\left(3\left(\tfrac{7}{36}+\tfrac{\pi^4}{576}\right)\nonumber \right.\right.\\ &\left.\left.-\tfrac{8}{3}\left(\tfrac{1}{2}+\tfrac{\pi^2}{96}\right)^2\right)+(n-1)\left(\tfrac{4}{9}\left(\tfrac{1}{2}+\tfrac{\pi^2}{96}\right)^2+\nonumber \right.\right.\\ &\left.\left.\tfrac{1}{J}\left(\tfrac{7}{36}+\tfrac{\pi^4}{576}-\tfrac{8}{9}\left(\tfrac{1}{2}+\tfrac{\pi^2}{96}\right)^2\right)\right)\right), \label{eq:varstdc} \end{align} again tending towards a constant which, however, now occurs at a significantly smaller number of noise averages than seen previously. Using the revised model, the noise-averaged survival probability distributions under correlated noise remain Gamma distributed, with an updated scale parameter. 
While this is yet to be shown explicitly for the uncorrelated case, we can approximate its behavior in the limit $n<J$ by modifying the distribution in \eqref{eq:2DUncorGamma}, yielding \begin{align} \langle P(\ket{1}) \rangle_{n,\textrm{correlated}} &\sim \Gamma(\alpha = 1, \beta = \tfrac{2}{3}J\sigma^2 (\tfrac{1}{2}+\tfrac{\pi^2}{96}) ) \label{eq:2DCorConGamma} \\ \langle P(\ket{1}) \rangle_{n,\textrm{uncorrelated}} &\sim \Gamma(\alpha = n, \beta = \tfrac{2}{3n}J\sigma^2 (\tfrac{1}{2}+\tfrac{\pi^2}{96}) ) \label{eq:2DUncorOneConGamma}. \end{align} The renormalized Gamma distributions for correlated error processes shown by solid gray lines in the main text Figs. 2C-E were calculated from first principles using \eqref{eq:2DCorConGamma} with no free parameters. The distributions for the uncorrelated error process in red were calculated from an altered version of \eqref{eq:2DUncorOneConGamma}, which was modified for higher bandwidth noise as explored below. We note that there is a deviation between the theory and the experiment for correlated noise at early values of $n$, as shown in main text Fig.~2F. However, crucially, even when we update the model with additional fit factors to account for this early $n$ scaling in main text Fig. 2F, the extracted values of $\sigma_\textrm{L}^2,\sigma_\textrm{S}^2$ for both DCGs in main text Fig. 3C are completely unaltered within the confidence bounds calculated in Section~\ref{sec:statistical_tests} below. \subsection{Higher Bandwidth Uncorrelated Noise Processes} The engineered uncorrelated noise process in this work has a higher bandwidth than that treated in the error model above. The noise was engineered to change stochastically every primitive $\pi/2$ time, leading to noise that took two values in primitive $\pi$ and $\mathbb{I}$ gates, and one value in primitive $\pi/2$ gates.
In CORPSE DCGs, the noise took approximately 8 values in both $\pi$ and $\pi/2$ due to the increased length of the virtual gates and 16 values in (virtual) $\mathbb{I}$ gates, which were constructed as a composite sequence of $X_\pi$ followed by $-X_\pi$. As this work attempted to quantitatively extract error strengths from the variance scaling trends, it was necessary to update the model to account for this increased noise bandwidth relative to the gate length. \noindent We recalculate the error map for the $\pi$ gate using two sequential $\pi/2$ gates, each with a different fractional detuning $\delta_1, \delta_2\sim \mathcal{N}(0,\sigma^2)$, \begin{align} \Lambda^{(\hat{X})}(\pi,\delta_{1,2}) &= (\mathbb{I} + \tfrac{i (\delta_1 + \delta_2)}{2}\hat{\sigma}_y - \tfrac{i (\delta_1 - \delta_2)}{2}\hat{\sigma}_z+ \mathcal{O}(\delta^2)) \nonumber \\ &\equiv (\mathbb{I} + \tfrac{i \delta}{\sqrt{2}}\hat{\sigma}_y - \tfrac{i \delta}{\sqrt{2}}\hat{\sigma}_z+ \mathcal{O}(\delta^2)), \end{align} where $\delta \sim \mathcal{N}(0,\sigma^2)$. This equivalence occurs because $\delta_1,\delta_2$ are independent samples from a Gaussian distribution, meaning their combination is also Gaussian distributed, \begin{align} A\delta_1 \pm B\delta_2 &\sim \mathcal{N}(0,A^2\sigma^2) + \mathcal{N}(0,(\pm B)^2 \sigma^2) \nonumber \\ &= \mathcal{N}(0,(A^2 + B^2)\sigma^2). \label{eq:gaussSum} \end{align} Here $A=B=1$, so we can alternatively express this as \begin{align} \delta_1 \pm \delta_2 &\equiv \sqrt{2}\delta \nonumber \\ &\sim \sqrt{2}\mathcal{N}(0,\sigma^2) \nonumber \\ &= \mathcal{N}(0,2\sigma^2) \end{align} where $\delta \sim \mathcal{N}(0,\sigma^2)$. Therefore, we simply adjust the step length for a $\pi$ gate from 1 along a single axis to $\tfrac{1}{\sqrt{2}}$ along two axes, or 1 between two axes, when the noise bandwidth is increased so that the noise takes two values per gate.
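The combination rule \eqref{eq:gaussSum} is easily sanity-checked by direct sampling; the value of $\sigma$ and the sample count below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, N = 0.1, 200_000
d1 = rng.normal(0, sigma, N)
d2 = rng.normal(0, sigma, N)

# A*d1 +/- B*d2 ~ N(0, (A^2 + B^2) sigma^2); with A = B = 1 the std is sqrt(2)*sigma
print(np.std(d1 + d2), np.std(d1 - d2), np.sqrt(2) * sigma)
```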
Similarly, the $\mathbb{I}$ gate error map can be rewritten as \begin{align} \Lambda^{(\mathbb{I})}(\pi,\delta_{1,2})&=\mathbb{I}-i\tfrac{\pi(\delta_1+\delta_2)}{4}\hat{\sigma}_z+ \mathcal{O}(\delta^2) \nonumber \\ &\equiv \mathbb{I}-i\tfrac{\pi\delta}{2\sqrt{2}}\hat{\sigma}_z+ \mathcal{O}(\delta^2) \end{align} where $\delta \sim \mathcal{N}(0,\sigma^2)$. The effect of these results is to change the gate-dependent step lengths contributing to the statistics of the random walk. For this bandwidth of noise, the gate-dependent step lengths become \mbox{$(\tfrac{1}{\sqrt{2}}\hat{n}_1 + \tfrac{1}{\sqrt{2}}\hat{n}_2), (\tfrac{1}{2}\hat{n}_1 + \tfrac{1}{2}\hat{n}_2), \tfrac{\pi}{2\sqrt{2}}\hat{n}_1$} for $\pi, \tfrac{\pi}{2}, \mathbb{I}$ gates respectively with $\hat{n}_1,\hat{n}_2 \in \{\hat{\sigma}_x,\hat{\sigma}_y,\hat{\sigma}_z\}$. The updated expectation value of the distribution is \begin{equation} \E{\noiseAve{ P(\ket{1}) }}\approx J \sigma^2 \tfrac{2}{3}\left(\tfrac{1}{2}+\tfrac{\pi^2}{192}\right) \end{equation} and the variance for the uncorrelated higher bandwidth error becomes \begin{align} \Var{\noiseAve{ P(\ket{1}) }} &= \tfrac{J^2 \sigma^4}{n}\left(\tfrac{4}{9}\left(\tfrac{1}{2}+\tfrac{\pi^2}{192}\right)^2+\tfrac{1}{J}\left(3\left(\tfrac{1}{6}+\tfrac{\pi^4}{2304}\right)\nonumber \right.\right.\\ &\left.\left.-\tfrac{8}{9}\left(\tfrac{1}{2}+\tfrac{\pi^2}{192}\right)^2\right)+\tfrac{(n-1)}{J}\left(\tfrac{1}{6}+\tfrac{\pi^4}{2304}\right.\right.\nonumber \\ &\left.\left.-\tfrac{4}{9}\left(\tfrac{1}{2}+\tfrac{\pi^2}{192}\right)^2\right)\right). \end{align} Using these results, we update the approximated Gamma distribution for uncorrelated error processes shown in \eqref{eq:2DUncorOneConGamma} to account for this higher bandwidth noise, \begin{align} \langle P(\ket{1}) \rangle_{n,\textrm{uncorrelated}} &\sim \Gamma(\alpha = n, \beta = \tfrac{2}{3n}J\sigma^2 (\tfrac{1}{2}+\tfrac{\pi^2}{192}) ) \label{eq:2DUncorTwoConGamma}.
\end{align} The renormalized Gamma distributions for uncorrelated error processes shown by solid red lines in the main text Figs. 2C-E were calculated from first principles using \eqref{eq:2DUncorTwoConGamma} with no free parameters. To increase this bandwidth to eight noise values in a gate, we study the effect of noise that changes every $\pi/8$ time for a $\pi$ gate and every $\pi/16$ time for a $\pi/2$ gate. In addition, for CORPSE gates, we will also need to consider noise that takes 16 values in an $\mathbb{I}$ gate (equivalent to changing every primitive $\pi/16$ time for a primitive $\mathbb{I}$, which is executed as a wait equivalent to the length of a $\pi$ pulse). The error maps to first order in $\delta$ for primitive $\pi$ gates with $\pi/8$ noise, and $\pi/2$ and $\mathbb{I}$ gates with $\pi/16$ noise can be calculated in terms of $\delta_{1,\dots,8}, \delta_{1,\dots,16}$. These are rewritten with a single $\delta\sim\mathcal{N}(0,\sigma^2)$ using the Gaussian distributed variable relation \eqref{eq:gaussSum}.
\begin{subequations} \begin{align} \Lambda^{(\hat{X})}(\pi,\delta_{1,\dots,8}) &\equiv \mathbb{I} - \tfrac{i}{\sqrt{2}} \left\{ \sqrt{ \left(4 - 2\sqrt{2+\sqrt{2}}\right) } \right\} \delta\hat{\sigma}_y + \tfrac{i}{\sqrt{2}} \left\{ \sqrt{ \left(4 - 2\sqrt{2+\sqrt{2}}\right) } \right\} \delta\hat{\sigma}_z + \mathcal{O}(\delta^2) \nonumber \\ &= \mathbb{I} - 0.390 i \delta \hat{\sigma}_y + 0.390 i \delta \hat{\sigma}_z + \mathcal{O}(\delta^2) \\ \Lambda^{(\hat{X})}(\tfrac{\pi}{2},\delta_{1,\dots,8})&= \mathbb{I} - 0.196 i \delta \hat{\sigma}_y + 0.196 i \delta \hat{\sigma}_z + \mathcal{O}(\delta^2) \\ \Lambda^{(\mathbb{I})}(\pi,\delta_{1,\dots,16})&= \mathbb{I} - \tfrac{i\pi}{32} \sum_{i=1}^{16} \delta_i \,\hat{\sigma}_z + \mathcal{O}(\delta^2) \nonumber \\ &\equiv \mathbb{I} - \tfrac{i\pi}{8} \delta \,\hat{\sigma}_z + \mathcal{O}(\delta^2) \end{align} \label{eq:eightValueErrorMaps} \end{subequations} As with the $\pi/2$ uncorrelated noise, the effect of increasing the bandwidth is to change the gate-dependent step contributions to the random walk. From the error maps in \eqref{eq:eightValueErrorMaps}, the step lengths are found to be \mbox{$(0.390 \hat{n}_1 + 0.390 \hat{n}_2), (0.196 \hat{n}_1 + 0.196 \hat{n}_2), \tfrac{\pi}{8}\hat{n}_1$} for $\pi, \tfrac{\pi}{2}, \mathbb{I}$ gates respectively with $\hat{n}_1,\hat{n}_2 \in \{\hat{\sigma}_x,\hat{\sigma}_y,\hat{\sigma}_z\}$. Finally, before calculating the expectation and variance for CORPSE gates, we need to take into account the relative gate lengths. For primitive gates, an $\mathbb{I}$ gate has the same duration as a $\pi$ gate and a $\pi/2$ gate has half the duration, $\tau_\mathbb{I} = \tau_\pi, \tau_{\pi/2}= \tfrac{1}{2}\tau_\pi$. However, due to the $X_{\pi,C}$ followed by $-X_{\pi,C}$ construction of the CORPSE $\mathbb{I}$, it has \emph{twice} the duration of a single CORPSE $\pi$ gate, and a $\pi/2$ has approximately the same duration, $\tau_{\mathbb{I} ,C}= 2\tau_{\pi, C},$ $\tau_{\pi/2, C}= 0.92 \tau_{\pi,C}$.
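The quoted coefficients $0.390$ and $0.196$, and the nested-radical closed form, can be reproduced by summing each noise segment's first-order contribution in quadrature. The sketch below assumes the detuning is resampled at equal intervals of accumulated rotation angle (a toggling-frame picture on our part); the same formula recovers the two-value case $1/\sqrt{2}$ discussed earlier.

```python
import numpy as np

def step_coefficient(theta_total, n_segments):
    """RMS per-axis first-order step length for a rotation through theta_total
    with the fractional detuning resampled every theta_total/n_segments of
    rotation angle. The sigma_z weight of segment m is the integral of
    cos(theta) over the segment, i.e. a difference of sines; the sigma_y
    weights (differences of cosines) give the same RMS by symmetry."""
    edges = np.linspace(0.0, theta_total, n_segments + 1)
    weights = np.diff(np.sin(edges)) / 2
    return float(np.sqrt(np.sum(weights ** 2)))

closed_form = np.sqrt(4 - 2 * np.sqrt(2 + np.sqrt(2))) / np.sqrt(2)
print(step_coefficient(np.pi, 8), closed_form)  # both ~0.390 (pi gate, 8 noise values)
print(step_coefficient(np.pi / 2, 8))           # ~0.196 (pi/2 gate, 8 noise values)
print(step_coefficient(np.pi, 2))               # ~0.707 = 1/sqrt(2), two-value case
```

Analytically the eight-segment sum collapses to $2\sin(\pi/16)$, which equals the nested radical above exactly.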
As such, to account for their increase in duration relative to a $\pi$ gate, we weight the random walk step contribution from the $\mathbb{I}$ and $\pi/2$ gates by a factor of 2 and $8/(13/3) = 1.85$ respectively. The updated expectation value of the distribution is \begin{align} \E{\noiseAve{ P(\ket{1}) }} &\approx J \sigma^2 \left( \tfrac{1}{36}(2\times\tfrac{\pi}{8})^2 + \tfrac{1}{18} (0.390\sqrt{2})^2 + \tfrac{1}{9} ( 0.390 )^2 + \tfrac{2}{9} (8/(13/3)\times 0.196\sqrt{2})^2 + \tfrac{4}{9} (8/(13/3)\times 0.196)^2 \right) \nonumber \\ &= 0.167 J \sigma^2 \end{align} and the variance for the uncorrelated higher bandwidth error becomes \begin{align} \Var{\noiseAve{ P(\ket{1}) }} &= \tfrac{J^2 \sigma^4}{n} \left( 0.167^2 + \tfrac{1}{J}\left(3\left( 0.041 \right) -2(0.167)^2\right)+ \tfrac{(n-1)}{J} \left(0.041 - 0.167^2\right) \right) \nonumber \\ &= \tfrac{J^2 \sigma^4}{n} 0.028 + \tfrac{0.067}{J}+ 0.013\tfrac{(n-1)}{J}. \end{align} \\ \subsection{Simultaneous Correlated and Uncorrelated Error Processes} To extract the correlated and uncorrelated error strengths present during execution of a quantum circuit, we combine the previous results examining how the variance of the noise-averaged distribution changes with further noise averaging for different error processes. Consider two independent error processes experienced by a quantum circuit with different temporal correlation lengths: one long, $\delta_\textrm{L} \sim \mathcal{N}(0,\sigma_\textrm{L}^2)$, and one short, $\delta_\textrm{S} \sim \mathcal{N}(0,\sigma_\textrm{S}^2)$. The first process is taken to be maximally correlated across the length of a circuit, with block length $\mathcal{M}_\mathrm{n} = J$, whilst the second varies randomly every primitive $\pi/2$ time. This results in two simultaneous random walks in Pauli space, $\vec{\boldsymbol{R}}_\textrm{L} = \delta_\textrm{L} \vec{\boldsymbol{V}}_\textrm{L}$ and $\vec{\boldsymbol{R}}_\textrm{S}$. 
We expand the expression for survival probability using the 2D projection of these vectors in the $\hat{\sigma}_x,\hat{\sigma}_y$-plane in Pauli space, $\vec{\boldsymbol{R}}_\textrm{L,2D}, \vec{\boldsymbol{R}}_\textrm{S,2D}$, \begin{align} 1- \noiseAve{ P(\ket{1}) } &= \noiseAve{ \norm{\vec{\boldsymbol{R}}_\textrm{S,2D} + \delta_\textrm{L}\vec{\boldsymbol{V}}_\textrm{L,2D}}^2 } \nonumber\\ &= \noiseAve{ \norm{\RvecSXY}^2 } + \noiseAve{ \delta_\textrm{L}^2 \norm{\VvecLXY}^2 } + \noiseAve{ 2\delta_\textrm{L} \vec{\boldsymbol{R}}_\textrm{S,2D} \cdot \vec{\boldsymbol{V}}_\textrm{L,2D} } \nonumber \\ &= \noiseAve{\norm{\RvecSXY}^2} + \sigma_\textrm{L}^2 \norm{\VvecLXY}^2 \end{align} using $\noiseAve{\delta_\textrm{L}} = 0$ for $\delta_\textrm{L}\sim\mathcal{N}(0,\sigma_\textrm{L}^2)$. Then, the variance is \begin{align} \Var{1- \noiseAve{ P(\ket{1}) }} &= \Var{ \noiseAve{\norm{\RvecSXY}^2} + \sigma_\textrm{L}^2 \norm{\VvecLXY}^2 } \nonumber \\ &= \Var{ \noiseAve{\norm{\RvecSXY}^2} } + \sigma_\textrm{L}^4 \Var { \norm{\VvecLXY}^2} + 2\sigma_\textrm{L}^2\Cov{\noiseAve{\norm{\RvecSXY}^2} , \norm{\VvecLXY}^2}. \end{align} For primitive gates (no scaling of gate lengths), the expression for error variance scaling under simultaneous error processes becomes \begin{align} \Var{\noiseAve{ P(\ket{1}) }} &= \left\{ \tfrac{J^2 \sigma_\textrm{S}^4}{n}\left( \tfrac{4}{9}\left(\tfrac{1}{2}+\tfrac{\pi^2}{192}\right)^2+\tfrac{1}{J}\left(3\left(\tfrac{1}{6}+\tfrac{\pi^4}{2304}\right) \nonumber \right.\right.\right. \\ &\left. -\tfrac{8}{9}\left(\tfrac{1}{2}+\tfrac{\pi^2}{192}\right)^2\right)+\tfrac{(n-1)}{J}\left(\tfrac{1}{6}+\tfrac{\pi^4}{2304} \right.\nonumber \\ &\left.\left.\left. -\tfrac{4}{9}\left(\tfrac{1}{2}+\tfrac{\pi^2}{192}\right)^2\right)\right) \right\} \\ &+ \left\{ \tfrac{J^2 \sigma_\textrm{L}^4}{n}\left(\tfrac{12}{9}\left(\tfrac{1}{2}+\tfrac{\pi^2}{96}\right)^2+\tfrac{1}{J}\left(3\left(\tfrac{7}{36}+\tfrac{\pi^4}{576}\right)\nonumber \right.\right.\right. \\ &\left.
-\tfrac{8}{3}\left(\tfrac{1}{2}+\tfrac{\pi^2}{96}\right)^2\right)+(n-1)\left(\tfrac{4}{9} \left(\tfrac{1}{2}+\tfrac{\pi^2}{96}\right)^2+\nonumber \right.\\ &\left.\left.\left. \tfrac{1}{J}\left(\tfrac{7}{36}+\tfrac{\pi^4}{576}-\tfrac{8}{9}\left(\tfrac{1}{2}+\tfrac{\pi^2}{96}\right)^2\right)\right)\right) \right\} \\ &+ \left\{ 2J \sigma_\textrm{L}^2 \sigma_\textrm{S}^2 ( (\tfrac{1}{6} + \tfrac{\pi^4}{1152}) - \tfrac{4}{9}(\tfrac{1}{2}+\tfrac{\pi^2}{96})(\tfrac{1}{2}+\tfrac{\pi^2}{192})) \right\}. \end{align} For CORPSE gates, we combine 8$\times$ uncorrelated noise bandwidth calculations above with the original primitive correlated calculations, where the relative detuning contributions have been scaled by $1, 2$ or $8/(13/3)$ for $\pi$, $\mathbb{I}$ and $\pi/2$ gates respectively, yielding \begin{align} \Var{\noiseAve{ P(\ket{1}) }} &= \left\{ \frac{J^2 \sigma_\textrm{S}^4}{n} \left( 0.028 + \frac{0.067}{J}+ 0.013\frac{(n-1)}{J} \right) \right\} \nonumber \\ &+ \left\{ \frac{J^2 \sigma_\textrm{L}^4}{n} \left(3\times1.14^2+\frac{1}{J}(3\times 3.78 - 6\times1.14^2)+(n-1)(1.14^2+ \frac{1}{J}(3.78 - 2\times1.14)) \right) \right\} \nonumber \\ &+ \left\{ 2J \sigma_\textrm{L}^2 \sigma_\textrm{S}^2 \left( 0.318 -1.142\times0.167 \right) \right\}. \end{align} Fitting this result to the mean variance trajectories obtained in the main text found $\sigma_\textrm{S}^2 =7.3\times10^{-3}$, $\sigma_\textrm{L}^2 = 5.6 \times 10^{-6} $ for CORPSE gates and $\sigma_\textrm{S}^2 = 8.6\times10^{-3}$, $\sigma_\textrm{L}^2 = 1.3 \times 10^{-4}$ for WAMF. For the WAMF, we scale the relative detuning contributions by $1, 2$ or $1.57$ for $\pi$, $\mathbb{I}$ and $\pi/2$ gates respectively. Comparing the extracted error to the applied noise strengths \mbox{$\sigma_\textrm{S}^2 = 5.2\times10^{-4}$, $\sigma_\textrm{L}^2 = 2.1 \times 10^{-3} $}, we find a $370\times$ suppression in the correlated component from CORPSE and $16\times$ from WAMF. 
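As an arithmetic cross-check, the quoted suppression factors follow from the ratios of applied to extracted correlated strengths; small differences from the quoted $370\times$ presumably reflect rounding of the fitted inputs.

```python
# Ratio of applied correlated noise strength to extracted correlated error strength
applied_L = 2.1e-3                              # applied sigma_L^2
extracted = {"CORPSE": 5.6e-6, "WAMF": 1.3e-4}  # fitted sigma_L^2 per gate family
for gate, err_L in extracted.items():
    print(gate, round(applied_L / err_L))  # CORPSE -> 375 (quoted ~370), WAMF -> 16
```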
Even taking the largest value for $\sigma_\textrm{L}^2$ after applying CORPSE from the confidence intervals calculated in Section~\ref{sec:statistical_tests} shows a suppression of $254\times$. \newpage \section{Influence of Quantum Projection Noise} Quantum projection noise (QPN) describes the intrinsic uncertainty in qubit measurements due to the binomial nature of quantum state collapse \cite{Itano:1993} and its scaling with the number of samples. The variance of a measurement due to QPN is $\nicefrac{p(1-p)}{r}$, where $p$ is the true state projection along the $z$-axis of the Bloch sphere and $r$ is the number of identical measurements performed. Our work studies variances over distributions of noise-averaged survival probabilities, and consequently it is necessary to demonstrate that we were not limited by QPN bounds. In order to ensure that our results are not measurement artefacts from quantum projection noise, we average each circuit and noise realization combination $r=220$ times. At this number of repetitions, the largest possible projection noise variance is given by $\nicefrac{0.5(1-0.5)}{220} \approx 1\times10^{-3}$. In addition to the worst case QPN, we compare the variance scaling results for the CORPSE DCG under simultaneously applied correlated and uncorrelated noise to the QPN given by the measured survival probabilities. Fig.~\ref{SuppFig:QPN} shows the mean trajectory for the CORPSE variance scaling under the combined noise process presented in main text Fig.~3C in dark blue. The dashed black line gives the worst case QPN and the two other sets of trajectories are calculated directly from the measured probabilities. For these, the QPN was calculated at each $n$ for 100 randomizations of noise realizations to reduce bias and the 100 values are plotted. The lower set of trajectories are divided by $(n\times r)$ rather than just $r$. Our results are well above this lower limit, suggesting that this normalization is the most appropriate way of setting our QPN limit.
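The worst-case bound quoted above is simply the binomial variance at $p=1/2$; the Monte Carlo below checks the $p(1-p)/r$ expression against sampled shot averages (the value of $p$ used is arbitrary).

```python
import numpy as np

def qpn_variance(p, r):
    """Variance of a survival-probability estimate built from r projective shots."""
    return p * (1 - p) / r

r = 220
print(qpn_variance(0.5, r))  # worst case: ~1.1e-3

# Monte Carlo: empirical variance of shot-averaged estimates matches the bound
rng = np.random.default_rng(2)
p = 0.3
estimates = rng.binomial(r, p, size=100_000) / r
print(np.var(estimates), qpn_variance(p, r))  # the two agree to sampling precision
```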
Furthermore, we note that the saturation observed at large values of $n$ is not set by any static QPN bound limiting our measurements. \begin{figure}[h] \centering \includegraphics[scale = 1]{FigQPN.pdf} \caption{\textbf{Quantum projection noise limits for measured survival probabilities with the CORPSE DCG} Comparison of mean CORPSE variance scaling from main text Fig. 3C to QPN variance limits given by $p(1-p)/r$. Dashed line is worst case QPN for $r=220$ when $p = 0.5$. Black lines show additional QPN limits where, for each $n$, $p(1-p)/r$ is calculated for 100 randomizations of noise realizations. The lower line scaling as $1/n$ is divided by $(n\times r)$ rather than $r$. } \label{SuppFig:QPN} \end{figure} \newpage \section{Statistical Analysis of Extracted Error Strengths} \label{sec:statistical_tests} In this work, we attempt to quantify the performance of CORPSE DCGs in suppressing correlated error, claiming a factor of $\sim260\times$ suppression relative to the applied correlated noise strength and the correlated error strength that was experienced by primitive gates under the same applied noise. This is based on the analytic model developed from the random walk framework in \cite{Ball:2016}, which has been modified appropriately for our experimental framework. After applying noise with correlated component strength $\sigma_\textrm{L}^2 = 1.986\times10^{-3}$ and uncorrelated component strength $\sigma_\textrm{S}^2 = 0.517\times10^{-3}$, we extract corresponding error strengths of $\sigma_\textrm{L}^2 = 5.6\times10^{-6}$ and $\sigma_\textrm{S}^2 = 7.2\times10^{-3}$ respectively. We use the Akaike Information Criterion (AIC) \cite{Akaike:1974} to test how well the value $\sigma_\textrm{L}^2 = 5.6\times10^{-6}$ is supported within the analytic framework provided.
This is done by allowing $\sigma_\textrm{S}^2$ to vary freely whilst $\sigma_\textrm{L}^2$ is fixed at values increasing from 0, and the AIC is calculated using the maximum likelihood estimate, $\textrm{RSS}/n$, where RSS is the Residual Sum of Squares from the model. The AIC is given by \begin{equation} \textrm{AIC} = 2k + n\textrm{ln}(\textrm{RSS}/n) \end{equation} where the number of estimated parameters is $k=2$: $\sigma_\textrm{L}^2, \sigma_\textrm{S}^2$. From this, we can calculate the relative likelihood of each possible model $i$ using \begin{equation} \textrm{AIC}_\textrm{Rel} = \textrm{exp}((\textrm{AIC}_\textrm{Min} - \textrm{AIC}_i)/2). \end{equation} The relative likelihood is shown in Fig. \ref{SuppFig:AICBIC}A, and we find a 95\% likelihood for $\sigma_\textrm{L}^2 = (5.6\substack{+1.9 \\ -2.3})\times10^{-6}$. The Bayesian Information Criterion (BIC) \cite{Schwarz:1978} can be derived from the same framework as the AIC, but with an alternative prior. It is calculated in a similar manner, \begin{equation} \textrm{BIC} = \textrm{ln}(n)k + n\textrm{ln}(\textrm{RSS/n}) \end{equation} and shows strong model violation when $\Delta\textrm{BIC} \coloneqq \textrm{BIC} - \textrm{BIC}_\textrm{Min} > 10$. Here, this occurs outside the range $\sigma_\textrm{L}^2 = (5.6\substack{+2.6 \\ -3.2})\times10^{-6}$, as shown in Fig. \ref{SuppFig:AICBIC}B. \begin{figure}[h] \centering \includegraphics[scale = 1]{FigAICBIC.pdf} \caption{\textbf{Statistical analysis of extracted correlated error strength for CORPSE DCG} (A) Relative likelihood derived from the Akaike Information Criterion (AIC) with $k=2$ free parameters shows that varying $\sigma_\textrm{L}^2$ gives a 95\% likelihood bound within the range $\sigma_\textrm{L}^2 =5.6\protect\substack{+1.9 \\ -2.3} \times 10^{-6}$ when applying the model presented in this work.
The dashed line shows the 5\% relative likelihood cutoff, such that all values of $\sigma_\textrm{L}^2$ within the dotted lines lie within the 95\% likelihood bound. (B) Similarly, the Bayesian Information Criterion (BIC) shows strong model violation ($\Delta\textrm{BIC} > 10$) outside the range \mbox{$\sigma_\textrm{L}^2 = (5.6\protect\substack{+2.6 \\ -3.2})\times10^{-6}$}. The dashed line indicates the strong model violation cutoff, with dotted lines showing the corresponding bounds of $\sigma_\textrm{L}^2$.} \label{SuppFig:AICBIC} \end{figure} \clearpage \end{widetext} \end{document}
\section{Ehrenfest equations of motion} Denoting by ${\bf n}_i(\Omega)$ the $i$-th principal axis of the rotor, so that ${\bf n}_k(\Omega) \cdot \mathrm{I}(\Omega) {\bf n}_j(\Omega) = I_k \delta_{kj}$, the components of the angular momentum operator in the body-fixed frame are given by $\widetilde{\op{J}}_k = {\bf n}_k \cdot \textsf{\textbf{J}} = \textsf{\textbf{J}} \cdot {\bf n}_k$ and in the space-fixed frame by $\op{J}_k = {\bf e}_k \cdot \textsf{\textbf{J}}$. They obey the commutation relations $[\op{J}_j,\op{J}_k] = i \hbar \varepsilon_{jk\ell} \op{J}_\ell$, $[\widetilde{\op{J}}_j,\widetilde{\op{J}}_k] = -i \hbar \varepsilon_{jk\ell} \widetilde{\op{J}}_\ell$ and $[\op{J}_j,\widetilde{\op{J}}_k] = 0$. Their commutation relations with the rotation matrix $\mathrm{R}(\op{\Omega})$ can be expressed as \begin{align} \label{eq:commrel} [\op{J}_k, \mathrm{R}(\op{\Omega})] & = \frac{\hbar}{i} {\bf e}_k \times \mathrm{R}(\op{\Omega}) \\ [\widetilde{\op{J}}_k, \mathrm{R}(\op{\Omega}) ] & = \frac{\hbar}{i} {\bf n}_k(\op{\Omega}) \times \mathrm{R}(\op{\Omega}). \end{align} Using these commutators repeatedly, one obtains~(2) from the master equations~(8) and~(10). For illustration, the dynamics of the first moment of the angular momentum operator due to (11) is \begin{eqnarray} \partial_t \langle \op{J}_k \rangle & = & \frac{2 D}{\hbar^2} \left \langle {\bf m}(\op{\Omega}) \cdot \op{J}_k {\bf m}(\op{\Omega}) - \op{J}_k \right \rangle - \frac{i {\it \Gamma}}{2 \hbar} \left \langle {\bf m}(\op{\Omega}) \cdot \op{J}_k {\bf m}(\op{\Omega}) \times \textsf{\textbf{J}} +\textsf{\textbf{J}} \times {\bf m}(\op{\Omega}) \cdot \op{J}_k {\bf m}(\op{\Omega}) \right \rangle + {\mathcal O} \left ( \frac{\hbar^2}{k_{\rm B} T I} \right ). \end{eqnarray} Using Eq.~\eqref{eq:commrel} with ${\bf m}(\op{\Omega}) = \mathrm{R}(\op{\Omega}) {\bf e}_z$, the first term vanishes and the second evaluates to $ - {\it \Gamma} \langle \op{J}_k \rangle$, in accordance with (2).
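As a quick numerical sanity check, the space-fixed commutation relations can be verified in the standard spin-1 representation (with $\hbar = 1$; this is purely illustrative and not part of the derivation):

```python
import numpy as np

# Spin-1 angular momentum matrices (hbar = 1) in the |1,m> basis, m = 1, 0, -1.
s = 1 / np.sqrt(2)
Jx = np.array([[0, s, 0], [s, 0, s], [0, s, 0]], dtype=complex)
Jy = np.array([[0, -1j*s, 0], [1j*s, 0, -1j*s], [0, 1j*s, 0]], dtype=complex)
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)

def comm(a, b):
    return a @ b - b @ a

# [J_j, J_k] = i * eps_{jkl} * J_l for the space-fixed components:
ok = (np.allclose(comm(Jx, Jy), 1j * Jz)
      and np.allclose(comm(Jy, Jz), 1j * Jx)
      and np.allclose(comm(Jz, Jx), 1j * Jy))
```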
The calculation of the second moments follows the same lines. \section{Linear and planar rotor thermal state} In order to determine the stationary state of the linear rotor we consider (10) in the angular momentum eigenbasis $\ket{\ell m}$ and evaluate the matrix elements $M_{\ell m \ell' m'}^{\ell'' m''}$ defined via \begin{equation} \matel{\ell m}{{\mathcal D} \rho_{\rm eq}}{\ell'm'} = \sum_{\ell'' = 0}^\infty \sum_{m'' = -\ell''}^{\ell''} \rho_{\rm eq}^{\ell'' m''} M_{\ell m \ell' m'}^{\ell'' m''}. \end{equation} Here we used that $\rho_{\rm eq}$ is diagonal in the angular momentum basis. The matrix elements $M_{\ell m \ell' m'}^{\ell'' m''}$ can be computed by using the properties of spherical harmonics, \begin{subequations} \begin{align} \textsf{\textbf{J}}_1 \ket{\ell m} & = \frac{\hbar}{2} \left ( c_+ \ket{\ell m+1} + c_- \ket{\ell m-1} \right ), \\ \textsf{\textbf{J}}_2 \ket{\ell m} & = \frac{\hbar}{2i} \left ( c_+ \ket{\ell m+1} - c_- \ket{\ell m-1} \right ), \\ \textsf{\textbf{J}}_3 \ket{\ell m} & = \hbar m \ket{\ell m}, \end{align} \end{subequations} with $c_{\pm} = \sqrt{ \ell (\ell + 1) - m (m \pm 1)}$, as well as the representation of matrix elements in terms of Wigner $3$-j symbols, \begin{align} \matel{\ell m}{Y_{\ell'',m''}(\upbeta,\upalpha)}{\ell' m'} = \sqrt{\frac{(2 \ell +1)(2 \ell' +1)(2 \ell'' +1)}{4 \pi}} \begin{pmatrix} \ell & \ell' & \ell'' \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} \ell & \ell' & \ell'' \\ -m & m' & m'' \end{pmatrix}. \end{align} The latter vanishes unless $m - m' - m'' = 0$ and $\ell + \ell' + \ell''$ is even, providing selection rules for the computation of the matrix elements. These selection rules imply that the off-diagonal elements of $\matel{\ell m }{{\mathcal D}\rho_{\rm eq}}{\ell'm'}$ vanish so that one has for all $\ell$, $m$ \begin{equation} \sum_{\ell''=0}^\infty \sum_{m'' = -\ell''}^{\ell''}\rho_{\rm eq}^{\ell'' m''} M_{\ell m \ell m}^{\ell'' m''} = 0. 
\end{equation} Only a finite number of terms $\rho_{\rm eq}^{\ell'' m''}$ are coupled due to the selection rules. Starting with the equation for $\ell = 0$ and $m = 0$ one can construct the solution iteratively, arriving at Eq.~(13). The same procedure can be used to calculate the stationary solution of the planar rotor. However, in this case one needs only the matrix elements \begin{equation} \matel{m}{\cos \upalpha}{m'} = \frac{1}{2} \left ( \delta_{m,m'+1} + \delta_{m,m'-1} \right ), \end{equation} along with $\op{p}_\alpha \ket{m} = \hbar m \ket{m}$. Again, this yields a set of equations that can be solved by iteration starting from $m=0$. \section{Thermalization of asymmetric rotors} We show that the Gibbs state of the asymmetric rotor is a stationary solution of (8) for large temperatures. Note that the limit of large temperatures, $\hbar^2 / k_{\rm B} T I_{\rm min} \to 0$ with $I_{\rm min}$ the smallest moment of inertia, is equivalent to the semiclassical limit. We first define the transformation \begin{equation} \label{eq:fdef} F(\textsf{\textbf{A}}_k) = e^{-{\mathsf H}/k_{\rm B} T} \textsf{\textbf{A}}_k e^{{\mathsf H}/k_{\rm B} T} = \sum_{n = 0}^\infty \frac{(-k_{\rm B} T)^{-n}}{n!} \left [ {\mathsf H}, \textsf{\textbf{A}}_k \right ]_n, \end{equation} where $[\op{A}, \op{B}]_n = [\op{A},[\op{A},\ldots,[\op{A},\op{B}]\ldots]]$ denotes the $n$-fold commutator. Note that $F(\textsf{\textbf{A}}_k \cdot \textsf{\textbf{A}}_\ell) = F(\textsf{\textbf{A}}_k) \cdot F(\textsf{\textbf{A}}_\ell)$ and $F(\textsf{\textbf{A}}_k^\dagger) \neq F(\textsf{\textbf{A}}_k)^\dagger$.
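The nested-commutator expansion of $F$ can be checked numerically on random Hermitian matrices (a finite-dimensional stand-in for ${\mathsf H}$, with $\beta$ playing the role of $1/k_{\rm B}T$; illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

def herm(n):
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (m + m.conj().T) / 2

def expm_herm(h):
    # Matrix exponential of a Hermitian matrix via eigendecomposition.
    w, v = np.linalg.eigh(h)
    return (v * np.exp(w)) @ v.conj().T

H = herm(4)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
beta = 0.05  # plays the role of 1/(k_B T); small, i.e. high temperature

# Left-hand side: F(A) = exp(-beta*H) A exp(beta*H)
lhs = expm_herm(-beta * H) @ A @ expm_herm(beta * H)

# Right-hand side: truncated series sum_n (-beta)^n / n! * [H, A]_n
rhs = np.zeros_like(A)
term = A.copy()
fact = 1.0
for n in range(12):
    if n > 0:
        term = H @ term - term @ H   # next nested commutator [H, term]
        fact *= n
    rhs = rhs + ((-beta) ** n / fact) * term
```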
With this mapping each summand of the dissipator (8) acting on the Gibbs state can be rewritten as \begin{align} \label{eq:dk} {\mathcal D}_k \frac{e^{-{\mathsf H} /k_{\rm B} T}}{Z} = & \frac{2 \widetilde{D}_k}{\hbar^2} \left ( \textsf{\textbf{A}}_k \cdot \frac{e^{-{\mathsf H} /k_{\rm B} T}}{Z} \textsf{\textbf{A}}_k^\dagger - \frac{1}{2} \textsf{\textbf{A}}_k^\dagger \cdot \textsf{\textbf{A}}_k \frac{e^{-{\mathsf H} /k_{\rm B} T}}{Z} - \frac{1}{2} \frac{e^{-{\mathsf H} /k_{\rm B} T}}{Z} \textsf{\textbf{A}}_k^\dagger \cdot \textsf{\textbf{A}}_k \right ) \nonumber \\ = & \frac{2 \widetilde{D}_k}{\hbar^2}\left [ \textsf{\textbf{A}}_k \cdot F(\textsf{\textbf{A}}_k^\dagger)- \frac{1}{2} \textsf{\textbf{A}}_k^\dagger \cdot \textsf{\textbf{A}}_k - \frac{1}{2} F(\textsf{\textbf{A}}_k^\dagger \cdot \textsf{\textbf{A}}_k) \right ] \frac{e^{-{\mathsf H} /k_{\rm B} T}}{Z}. \end{align} Inserting the expansion \eqref{eq:fdef} into \eqref{eq:dk} and sorting the terms in the square brackets in orders of $1/T$ shows that the zeroth and first order term vanish and, taking the temperature-dependence of the prefactor into account, the remainder decreases at least as $1/T$. \section{Fokker-Planck equation of rigidly connected classical particles} We consider $N$ point particles of mass $m_n$, position ${\bf r}_n$ and momentum ${\bf p}_n$, in an environment of temperature $T$. Denoting the friction and diffusion constant of the $n$-th particle by $\gamma_n$ and $D_n = k_{\rm B} T m_n \gamma_n$, respectively, the Fokker-Planck equation for the total phase space distribution function $f_t({\bf r}_1,\ldots,{\bf r}_N,{\bf p}_1,\ldots,{\bf p}_N)$ reads as \begin{equation} \label{eq:fppp} \partial_t^{\rm nc}f_t = \sum_{n = 1}^N \gamma_n \left [ \nabla_{{\bf p}_n} \cdot \left ({\bf p}_n f_t \right ) + k_{\rm B} T m_n \nabla_{{\bf p}_n}^2 f_t \right ]. \end{equation} This assumes that the diffusion process is isotropic. 
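A minimal one-dimensional check of this Fokker-Planck equation (Euler--Maruyama integration of the equivalent Langevin equation for a single momentum component; parameter values are arbitrary) confirms that the stationary momentum variance equals $m k_{\rm B} T$:

```python
import numpy as np

rng = np.random.default_rng(1)

# One momentum component of one particle: dp = -gamma*p*dt + sqrt(2*D)*dW,
# with D = kB*T*m*gamma as in the text; all parameter values are arbitrary.
m, gamma, kBT = 2.0, 1.0, 0.5
D = kBT * m * gamma
dt, nsteps, nwalkers = 1e-3, 5000, 5000

p = np.zeros(nwalkers)
for _ in range(nsteps):
    p += -gamma * p * dt + np.sqrt(2 * D * dt) * rng.normal(size=nwalkers)

var = p.var()   # stationary variance should approach m*kB*T
```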
We now invoke that the particles are rigidly connected and that their center of mass is fixed at the origin, so that the positions ${\bf r}_n$ are determined by the rotation matrix, ${\bf r}_n = \mathrm{R}(\Omega){\bf r}_n^{(0)}$. One thus obtains for the momenta ${\bf p}_n = m_n \left [ \mathrm{I}^{-1}(\Omega){\bf J} \right ] \times {\bf r}_n$ with ${\bf J} = \sum_n {\bf r}_n \times {\bf p}_n$. Exploiting that \begin{equation} \nabla_{{\bf p}_n} = (\nabla_{{\bf p}_n}\otimes {\bf J})\nabla_{{\bf J}} = - {\bf r}_n \times \nabla_{\bf J} \end{equation} yields from \eqref{eq:fppp} the rotational Fokker-Planck equation (1) with the rigid rotor distribution $h_t(\Omega,{\bf J})$. The corresponding rotational diffusion tensor can thus be identified as \begin{equation} \label{eq:difftens2} \mathrm{D}(\Omega) = k_{\rm B} T \sum_{n = 1}^N m_n \gamma_n \left ( r_n^2 {\mathds 1} - {\bf r}_n \otimes {\bf r}_n \right ). \end{equation} It is related to the friction tensor by $\mathrm{D}(\Omega) = k_{\rm B} T \mathrm{\Gamma}(\Omega) \mathrm{I}(\Omega)$. Note that the eigenvalues of the rotational diffusion tensor \eqref{eq:difftens2} fulfill the inequality $D_i + D_j \geq D_k$ for $(i,j,k)$ permutations of $(1,2,3)$, as can be seen from tracing over \eqref{eq:difftens2} and deducing that \begin{equation} k_{\rm B} T \sum_{n = 1}^N m_n \gamma_n {\bf r}_n \otimes {\bf r}_n = \frac{1}{2} \mathrm{Tr}[ \mathrm{D}(\Omega)] {\mathds 1} - \mathrm{D}(\Omega) \geq 0. \end{equation} This constraint on the possible values of the diffusion coefficients can be relaxed by allowing for directed diffusion in Eq. \eqref{eq:fppp}.
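The eigenvalue inequality $D_i + D_j \geq D_k$ can also be verified numerically for random rigid configurations (arbitrary illustrative weights standing for $m_n \gamma_n$, with $k_{\rm B}T$ set to unity):

```python
import numpy as np

rng = np.random.default_rng(2)

# Rotational diffusion tensor of Eq. (difftens2) for a random rigid body;
# weights stand for m_n*gamma_n and kB*T is set to unity (illustrative).
def diffusion_tensor(weights, positions):
    dmat = np.zeros((3, 3))
    for w, r in zip(weights, positions):
        dmat += w * (np.dot(r, r) * np.eye(3) - np.outer(r, r))
    return dmat

ok = True
for _ in range(200):
    w = rng.uniform(0.1, 1.0, size=5)
    r = rng.normal(size=(5, 3))
    d = np.sort(np.linalg.eigvalsh(diffusion_tensor(w, r)))
    # With d[0] <= d[1] <= d[2], all permutations reduce to this single check:
    ok = ok and (d[0] + d[1] >= d[2] - 1e-10)

# ok stays True: each summand has eigenvalues {r^2, r^2, 0}, and the property
# trace >= 2*(largest eigenvalue) is preserved under matrix addition.
```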
Specifically, replacing the second derivatives $\nabla_{{\bf p}_n}^2$ in the last term by $({\bf n}_n \cdot \nabla_{{\bf p}_n})^2$, so that the (particle- and orientation-dependent) unit vectors ${\bf n}_n$ define the direction of diffusion, results in the same Fokker-Planck equation (1) but with the diffusion tensor \begin{equation} \mathrm{D}(\Omega) = k_{\rm B} T \sum_{n = 1}^N \gamma_n m_n \left ( {\bf n}_n \times {\bf r}_n \right ) \otimes \left ( {\bf n}_n \times {\bf r}_n \right ) \end{equation} and the corresponding friction tensor. Its eigenvalues can take arbitrary positive values, depending on the $m_n$, $\gamma_n$, ${\bf n}_n$, and ${\bf r}_n$. \section{Inversion symmetric particles} The master equation (10) presupposes that the particle-bath interaction is isotropic. An inversion-symmetric particle prepared in a coherent superposition of the opposite orientations ${\bf m}(\Omega)$ and $-{\bf m}(\Omega)$ is predicted to decohere because the localization rate (12) is not zero, even if these orientations are indistinguishable by the environment. Since this symmetry enters only at the quantum level, it cannot affect the semiclassical limit. The dissipator for inversion-symmetric particles can be obtained by generalizing the microscopic derivation of inversion-symmetric angular momentum diffusion [Papendell \emph{et al.}, New J. Phys. {\bf 19}, 122001 (2017)]. The Lindblad operators must then be quadratic in the particle orientation in order to preserve inversion symmetry.
This yields \begin{subequations} \label{eq:dissnLB2} \begin{equation} \label{eq:LBfull2} {\mathcal D} \rho = \frac{D}{\hbar^2} \mathrm{Tr} \left [ \mathrm{\sf B} \rho \mathrm{\sf B}^\dagger - \frac{1}{2} \left \{ \mathrm{\sf B}^\dagger \mathrm{\sf B}, \rho \right \} \right ], \end{equation} where $\mathrm{Tr}(\cdot)$ denotes the matrix trace (not to be confused with the operator trace) and the tensor Lindblad operators are \begin{equation} \label{eq:LB2} \mathrm{\sf B} = {\bf m}(\op{\Omega}) \otimes {\bf m}(\op{\Omega}) - \frac{i \hbar}{2 k_{\rm B} T} {\bf m}(\op{\Omega}) \otimes {\bf m}(\op{\Omega}) \times \mathrm{I}^{-1}(\op{\Omega}) \textsf{\textbf{J}}. \end{equation} \end{subequations} While the first term already appears in the article by Papendell \emph{et al.}, the second results from quantizing the time derivative $\partial_t[{\bf m}(\Omega)\otimes {\bf m}(\Omega)]$. Owing to the matrix trace in \eqref{eq:LBfull2}, the latter can be expressed as ${\bf m}(\Omega) \otimes {\bf m}(\Omega) \times \mathrm{I}^{-1}(\Omega) {\bf J}$ without affecting diffusion and friction. The dissipator \eqref{eq:dissnLB2} preserves inversion symmetry and implies the moment equations of motion (2) as well as the thermalization (3) and (4). In addition, it also leads to the Fokker-Planck equation (1). The $T$-independent contribution of \eqref{eq:LB2} depends only on the orientation operator and thus leads to orientational decoherence and angular momentum diffusion. The corresponding decoherence rate \begin{equation} F(\Omega,\Omega') = \frac{k_{\rm B} T {\it \Gamma} I}{\hbar^2} \vert {\bf m}(\Omega)\times {\bf m}(\Omega')\vert^2, \end{equation} vanishes not only for $\Omega = \Omega'$ but also for superpositions of opposite orientations. The quantum phase space dynamics of the inversion-symmetric planar rotor can be obtained from Eq.~(10) by replacing ${\it \Gamma}$ by ${\it \Gamma}/2$, $D$ by $D/4$ and $m \pm 1$ by $m \pm 2$. \twocolumngrid
\section{Introduction} The Magellanic Clouds have significantly advanced our understanding of galaxy evolution. Owing to their proximity, individual stars can be observed, providing important information about the spatially resolved star formation, and the origin and properties of their stellar populations. The Small Magellanic Cloud (SMC) is a dwarf irregular galaxy located at a distance of $\sim$60.6 kpc \citep{Hilditch05}. Simulations supported by observational evidence suggest that it evolved in tandem with its counterpart -- the Large Magellanic Cloud (LMC), thus sharing a common interaction and star formation history \citep[e.g., see][and references therein]{Besla12}. \citet{Yoshizawa03} performed N-body simulations of the tidal distortions and concluded that the two galaxies should have interacted over the past $\sim$0.2 Gyr. Their results are partially supported by \citet{Harris04}, who studied the spatially resolved star formation history of the SMC and showed that it underwent various periods of enhanced star formation $\sim$2.5, 0.4, and 0.06 Gyr ago. They are also in agreement with \citet{Chiosi06} and \citet{Glatt10}, who suggested that the close interaction between the two Clouds has resulted in the triggering of cluster formation activity. More recently, \citet{Besla07} and \citet{Kallivayalil13} challenged the scenarios where the Magellanic Clouds have already completed several orbits around the Galaxy, using current {\it Hubble Space Telescope} (HST) proper motion measurements; they suggested that the Clouds are in their first orbital passage about the Galaxy. Moreover, \citet{Besla12} studied the interaction history of those galaxies using numerical models constrained by the HST observations and showed that, while they have not previously interacted with the Galaxy, the Magellanic Clouds must have experienced a direct collision some 100--300 Myr ago.
This seems to agree with the findings of \citet{Harris07}, who studied the stellar populations of the Magellanic Bridge -- the tidal stream of neutral gas and stars possibly associated with the interaction of the two galaxies -- and showed that star formation in the Bridge commenced some 200--300 Myr ago. A direct cloud-cloud collision would also explain the existence of a small population of SMC stars -- based on their peculiar kinematics and metallicities -- which were found in the LMC \citep{Olsen11}. In spite of all this progress, the question of whether the evolution of the Magellanic Clouds is driven by internal processes (i.e., the action of bars, morphological/dynamical quenching) or environmental mechanisms (i.e., galaxy interactions) remains open. One would expect that in the case of environmental evolution many of the properties of the two galaxies (e.g., the star formation history) would be correlated. A robust method to explore the formation and interaction histories of nearby galaxies in which individual stars can be resolved (such as the Magellanic Clouds) entails the study of the age distribution of their star clusters. Owing to modern instrumentation, which allows us to estimate their ages and metallicities with high precision -- in contrast with field stars -- star clusters represent unique tools to constrain the star formation history of their host galaxies and to disentangle the special conditions they might have undergone. Despite the plethora of studies of the star clusters in the Magellanic Clouds, the lack of a statistically robust detection method that creates uniform and complete samples (as opposed to the visual identification methods that are usually applied) has posed significant limitations for the systematic study of the star cluster formation history of both galaxies. In \citet{Bitsakis17}, we presented a new fully-automated method to robustly detect and estimate the ages of star clusters in nearby galaxies.
Using statistical analysis on high-resolution maps of the LMC, we obtained a large, uniform sample of star clusters (in the central 49 deg$^{2}$) which we exploited to put constraints on the formation history of that galaxy. A similar analysis is carried out in the current study, using the same method and data surveys, for the SMC. In Section 2, we describe the dataset we use in the current study. Section 3 contains a brief description of the cluster detection and age estimation codes (a more detailed description along with statistical tests can be found in \citealt{Bitsakis17}). The results are presented in Section 4, while in Section 5 we make a comparison of the SMC-LMC star cluster age distributions and derive useful conclusions about their interaction history. Finally, in Section 6 we summarize our findings. Throughout this work we assume a distance modulus to the SMC of 18.91 mag \citep{Hilditch05}. \begin{figure*} \begin{center} \includegraphics[scale=0.3]{fig1.jpg} \caption{(a) The Spitzer/IRAC 3.6$\micron$ \citep{Gordon11}, (b) the GALEX/NUV \citep{Simons14}, and (c) the SWIFT/UVOT \citep{Siegel14} mosaics of the SMC, respectively. The dashed blue box indicates the area covered by MCPS \citep{Zaritsky02}, which was also surveyed by our code. } \label{mcps_coverage} \end{center} \end{figure*} \section{The data} We have made use of archival data of the SMC at various bands. \citet{Simons14} presented the near-ultraviolet mosaic ($\lambda_{\rm eff}$=2275\AA) of that galaxy obtained by the Galaxy Evolution Explorer \citep[GALEX;][]{Martin05}. The median exposure time was 733 seconds, and the 5$\sigma$ depth of point sources varied between 20.8 and 22.7 mags. Although the mosaic covers a region of 63 deg$^{2}$, which contains the SMC bar, wing and tail, there are two sub-regions that were not observed, of $\sim$0.25 and 1 deg diameter, north east and south west from the center, respectively (see Figure~\ref{mcps_coverage}b).
These holes in the coverage were compensated for with the Swift Ultraviolet-Optical Telescope (UVOT) Magellanic Clouds Survey \citep[SUMAC;][]{Siegel14}, which imaged the central 3.8 deg$^{2}$ of the galaxy (Figure~\ref{mcps_coverage}c) with deeper exposures of 3000 s in all three $NUV$ filters of the instrument ($UVW1$, $UVW2$, and $UVM2$). Our infrared data come from the ``Surveying the Agents of a Galaxy's Evolution SMC survey'' \citep[SAGE-SMC;][]{Gordon11}, which mapped the full SMC (30 deg$^{2}$) with both the Infrared Array Camera \citep[IRAC, Figure~\ref{mcps_coverage}a; ][]{Fazio04} and the Multiband Imaging Photometer \citep[MIPS;][]{Rieke04} on board the Spitzer Space Telescope. It produced mosaics at 3.6, 4.5, 5.8, and 8.0$\micron$ with IRAC, and at 24, 70, and 160$\micron$ with MIPS, with integrated exposure times of 63 hours in the IRAC and $\sim$400 hours in the MIPS bands, respectively. Finally, we exploited the photometric information from \citet{Zaritsky02}, who presented the stellar catalog and extinction map of the SMC, as part of the Magellanic Cloud Photometric Survey (MCPS; marked with dashed blue lines in Figure~\ref{mcps_coverage}). They obtained 3.8--5.2 minute exposures of the central 18 deg$^{2}$ of the SMC in the Johnson $U$, $B$, $V$, and Gunn $i$ bands with the Las Campanas Swope Telescope under 1.5 arcsecond seeing conditions. The limiting magnitudes varied, depending on the filter, between 21.5 mag for $U$ and 23.0 mag for $i$. Using DAOPHOT II \citep{Stetson87}, they created a photometric catalog that contains 24.5 million sources over the entire area covered by the MCPS (including the SMC, LMC and the Magellanic Bridge). They also estimated the line-of-sight extinctions to the stars in their catalog and produced an extinction map of the SMC. This was achieved by comparing the observed stellar colors with those derived from the stellar photospheric models of \citet{Lejeune97}.
Thus they estimated the effective temperature ($T_{\rm eff}$) and measured the extinction ($A_{V}$) along the line of sight to each star, adopting a standard Galactic extinction curve. They produced two $A_{V}$ maps, one for hot (12000 K $<$ $T_{\rm eff}$ $\le$ 45000 K) and one for cool (5500 K $<$ $T_{\rm eff}$ $\le$ 6500 K) stars. In Figure~\ref{mcps_coverage}, we present the coverage of MCPS in comparison with that of other surveys we used for the detection of the star clusters; one can see that the central 18 deg$^{2}$ of the SMC are imaged. \section{The cluster detection and age estimation method} \begin{table*} \begin{minipage}{120mm} \begin{center} \caption{SMC star cluster catalog.} \label{tab_clusters} \begin{tabular}{lccccccc} \hline \hline & R.A.(J2000) & Dec(J2000) & Radius & log(Age) & Lower unc. & Upper unc. & Bica et al. (2008) \\ catalog ID & (deg) & (deg) & (deg) & (yr) & (yr) & (yr) & catalog ID \\ \hline SMC-NUV-484 & 14.0765 & -72.4634 & 0.0280 & 7.22 & 6.92 & 7.33 & 343 \\ SMC-M2-287 & 13.0482 & -72.5310 & 0.0065 & 7.99 & 7.77 & 8.01 & 258 \\ SMC-IR1-449 & 13.3365 & -73.1764 & 0.0130 & 7.96 & 7.88 & 8.06 & -- \\ ... & ... & ... & ... & ... & ... & ... & ... \\ \hline \end{tabular} \end{center} {\bf Notes. } \\ The lower and upper uncertainty bounds are estimated at the 16$^{\rm th}$ and 84$^{\rm th}$ percentiles, respectively. (The full version is available online.)\\ \end{minipage} \end{table*} The code we used here to automatically detect and estimate the ages of the SMC star clusters was described in detail in \citet{Bitsakis17}. In summary, the code makes use of the star counts method \citep[see][and references therein]{Schemja10}, which estimates the density of stars in a given region of interest and finds overdensities above some local background threshold ($\Sigma_{\rm det}$).
To define the relation between $\Sigma_{\rm det}$ and the background density we performed Monte-Carlo simulations with artificial star clusters, having both Gaussian as well as uniform overdensity profiles (accounting for both compact and diffuse clusters), projected over various background values. The code is applied on a pixel-map conversion of the original image, where each star is represented by a single pixel. Only stars located in the overdensities are considered and a source detection is applied on the smoothed final image to define the center and radius of each candidate cluster. The method has been proven to be fast and accurate and was initially tested on the LMC with impressive results \citep[see][]{Bitsakis17}, yielding the discovery of 3500 new star clusters that have never been reported before. For the sake of consistency we use the same setup as for the LMC; we run the detection sequence on the ultraviolet (GALEX/NUV, SWIFT/UVM2) and near-infrared mosaics (Spitzer/IRAC 3.6) of the SMC in order to probe different cluster ages (e.g. young clusters are expected to host massive UV-emitting stars, while old clusters are dominated by low-mass stars emitting mostly in the near-IR part of the spectrum). We then use the MCPS catalog to obtain the photometric information of the stellar populations. The detection sequence yields a total of 2219 \emph{candidate} clusters and associations in the corresponding region. \begin{figure*} \begin{center} \includegraphics[scale=0.4]{clusters.jpg} \caption{Examples of clusters from our catalog presented on the Spitzer $IRAC~3.6\micron$ image. The dashed black lines mark the radii, as defined by the star-counts code. (a): cluster SMC-NUV-484, age 16.6$^{+8.2}_{-4.7}$ Myr; (b) SMC-IR1-665, age 48.5$^{+2.9}_{-2.0}$ Myr; (c) SMC-IR1-635, age 186$^{+55}_{-35}$ Myr; (d) SMC-IR1-358, age 512$^{+135}_{-124}$ Myr; (e) SMC-IR1-727, age 845$^{+284}_{-650}$ Myr; and (f) SMC-IR1-270, age 1.07$^{+0.23}_{-0.85}$ Gyr. 
The horizontal and vertical axes correspond, respectively, to R.A. and Declination, measured in degrees (J2000). } \label{fig_clusters} \end{center} \end{figure*} The age estimation algorithm (also presented in \citealt{Bitsakis17}) consists of a modified version of the code of Ram\'irez-Siordia et al. (in prep.). Briefly, this code uses a Bayesian approach to obtain the most likely theoretical isochrone that reproduces the observed CMD of each candidate cluster, while taking into account the cluster star memberships. The set of 80 model isochrones we used here is a byproduct of an independent project by Charlot \& Bruzual (in preparation)\footnote{The Charlot \& Bruzual isochrones are available to the interested user upon request.}, and was produced following the evolutionary tracks of \citet{Chen15} and accounting for the evolution of thermally pulsing asymptotic giant branch (TP-AGB) stars \citep{Marigo13}. The isochrones were calculated for a representative SMC metallicity of [Fe/H]=-0.70 \citep[i.e. $Z$=0.004;][]{Venn99}, and cover the range 6.9 $\le$ log(age) $<$ 9.7 yr. As mentioned above, we also perform field star decontamination. Our code uses a modified version of the method described in \citet{Mighell96}. According to this, the code produces the CMDs of the candidate cluster as well as of its surrounding field stars, and estimates the probability that each candidate star belongs to the cluster. This membership probability is stored in a table containing all the cluster star information and is eventually used during the age estimation process mentioned above. In \citet{Bitsakis17} we showed that the method performs well even in high field star density environments (such as the LMC/SMC bar). Finally, the code discards any candidate cluster with an insignificant number ($n<$20) of stars having high membership probability ($>$60\%), as well as those clusters that could not be fitted by our age estimation code.
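A toy version of the star-counts detection step described above (a simplified stand-in for the actual pipeline of \citealt{Bitsakis17}; the synthetic positions and threshold below are illustrative, not survey values) looks as follows:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic compact cluster on a uniform stellar background (positions in deg).
background = rng.uniform(0, 10, size=(3000, 2))
cluster = rng.normal(loc=[5.0, 5.0], scale=0.05, size=(150, 2))
stars = np.vstack([background, cluster])

# Bin the star positions into a density map (the "pixel map" of the method).
counts, xedges, yedges = np.histogram2d(stars[:, 0], stars[:, 1],
                                        bins=100, range=[[0, 10], [0, 10]])

# Overdensity threshold Sigma_det above the background (here: mean + 5 sigma).
sigma_det = counts.mean() + 5 * counts.std()
overdense = counts > sigma_det

# The strongest overdensity should coincide with the injected cluster at (5, 5).
i, j = np.unravel_index(np.argmax(counts), counts.shape)
center = (0.5 * (xedges[i] + xedges[i + 1]), 0.5 * (yedges[j] + yedges[j + 1]))
```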
To ensure a more accurate age estimation, we perform the CMD fitting in the ($U-V$) versus $V$, ($B-V$) versus $V$, and ($V-i$) versus $i$ bands for each cluster, and then combine the final results using equation 5 from \citet{Bitsakis17}, which takes into account the number of stars included and how well the age is constrained in each fitting. In Figure~\ref{fig_isoc}, we present two examples of the best age estimation in the CMDs of clusters SMC-NUV-484 and SMC-IR1-727. The final catalog contains 1319 \emph{secure} clusters (40\% smaller than the initial \emph{candidate} cluster sample). These clusters are presented in Table~\ref{tab_clusters}; column (1) gives the cluster identifier (it consists of a reference to the band where each cluster was initially detected, i.e., $IR1$ refers to Spitzer/IRAC1, $NUV$ to GALEX/NUV, and $M2$ to SWIFT/UVM2, plus the serial number of the corresponding cluster); columns (2) and (3), respectively, contain the right ascension (R.A.) and declination (Dec.) of the cluster centers, in J2000 decimal equatorial coordinates; column (4) reports the cluster radii; columns (5), (6), and (7) contain, respectively, the best age estimation for each cluster, and its lower and upper uncertainty bounds (derived from the 16$^{\rm th}$ and 84$^{\rm th}$ percentiles of the probability distribution histogram produced by the code). Finally, column (8) contains -- if available -- the corresponding cluster identifier from the catalog of \citet{Bica08}. Some characteristic examples of clusters ordered by increasing age are presented in Figure~\ref{fig_clusters}. \begin{figure*} \begin{center} \includegraphics[scale=0.4]{isoc.jpg} \caption{Examples of the isochrone fitting process in the ($B-V$) versus $V$ field star decontaminated CMDs of the star clusters SMC-NUV-484 and SMC-IR1-727, presented in Fig.~\ref{fig_clusters}. Best-fit isochrones are presented in green, upper and lower uncertainties in magenta and blue, respectively.
} \label{fig_isoc} \end{center} \end{figure*} \section{Results} \begin{figure*} \begin{center} \includegraphics[scale=0.64]{comp_ages.eps} \caption{Comparison of the ages determined from our method ($\rm Age_{\rm current}$) for clusters we have in common with (a) \citet{Rafelski05}, (b) \citet{Glatt10}, and (c) \citet{Chiosi06}. The dashed black lines correspond to the one-to-one correlation, while the dotted red ones are the least-squares fits to the data. The Pearson correlation coefficients (R) are indicated in the upper left corner of each panel.} \label{fig_comp_ages} \end{center} \end{figure*} \subsection{Comparisons with other surveys} We compare our final catalog of star clusters with that of \citet{Bica08}. These authors have reported 515 clusters in the central 18 deg$^{2}$ of the SMC we surveyed, 211 of which (58\%) overlap our sample. In Figure~\ref{fig_comp_ages}, we compare our age estimates with those from other surveys. \citet{Rafelski05} compared the integrated colors of their star clusters, acquired from the MCPS survey, with models of simple stellar populations. Unfortunately, their technique is not able to remove field-star contamination; hence, although these authors performed various tests to ensure the reliability of their estimates, their method can introduce significant biases, especially in high field star density regions (like the SMC bar). Thus, the comparison with their results yields a Pearson R-coefficient of 0.74 (see also Figure~\ref{fig_comp_ages}a). On the other hand, \citet{Glatt10} visually fitted a set of isochrone models to the observed cluster CMDs. Although they used a field star decontamination technique, the large uncertainties introduced by visual identification of the main sequence turn-off are likely the origin of the large scatter between their age estimates and ours, with R=0.82 (see Figure~\ref{fig_comp_ages}b).
Similarly, \citet{Chiosi06} corrected for field star contamination, and used both visual and $\chi^{2}$ minimization methods; they divided the observed and model CMDs in bins of color and magnitude, and minimized their differences. Although we only have 11 clusters in common, the comparison yields R=0.77 (see Figure~\ref{fig_comp_ages}c). Finally, \citet{Parisi14} carefully calculated the ages of a small sample of 15 old SMC clusters using high spatial resolution data from the Very Large Telescope in Chile. For the only cluster we have in common (identified as L17 in their catalog, our SMC-IR1-226), we measure an age of 1.22$^{+0.11}_{-0.40}$ Gyr, which is remarkably similar to their 1.25 Gyr estimate. \subsection{The age distribution of star clusters} \begin{figure} \begin{center} \includegraphics[scale=0.5]{clusters_histogram.eps} \caption{Age distribution of the SMC clusters. The fractions presented here are normalized to the total number of clusters found in that galaxy.} \label{fig_hist_ages} \end{center} \end{figure} In Figure~\ref{fig_hist_ages}, we present the age distribution of star clusters in the SMC. The bin size was optimized using the Freedman-Diaconis rule (bin size 0.136 dex). The main cluster formation event seems to have happened $\sim$240 Myr ago. The decline in the number of star clusters beyond the main peak could be associated with cluster fading \citep[e.g.,][]{Boutloukos03} and/or cluster dissolution due to a variety of mechanisms, such as $(i)$ residual gas expulsion, $(ii)$ two-body relaxation, $(iii)$ tidal heating from disc shocks, and $(iv)$ tidal harassment from giant molecular clouds \citep[see][and references therein]{Baumgardt13}. On the other hand, phenomena like cluster disruption due to gas expulsion after the initial burst of star formation take place in the early stages of cluster formation, and therefore on short time-scales \citep[$\sim$40 Myr for the Magellanic Clouds; see][]{deGrijs09}.
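For reference, the Freedman--Diaconis bin width used above is $2\,\mathrm{IQR}/n^{1/3}$; a minimal sketch (with synthetic log-age values, not our catalog) reads:

```python
import numpy as np

rng = np.random.default_rng(4)

def freedman_diaconis_width(x):
    # Bin width = 2*IQR / n^(1/3) (Freedman & Diaconis 1981).
    q75, q25 = np.percentile(x, [75, 25])
    return 2 * (q75 - q25) / len(x) ** (1 / 3)

# Synthetic log(age) values standing in for the 1319 catalog entries.
log_age = rng.normal(8.4, 0.4, size=1319)
width = freedman_diaconis_width(log_age)
# numpy offers the same rule via np.histogram_bin_edges(log_age, bins='fd').
```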
\begin{figure} \begin{center} \includegraphics[scale=0.5]{clusters_histogram_gauss3.eps} \caption{The three component mixture model (dashed green line), and its individual constituents (solid black lines). The fractions presented here are normalized to the total number of clusters found in that galaxy.} \label{fig_mix} \end{center} \end{figure} \begin{figure*} \begin{center} \includegraphics[scale=0.7]{bar_nobar.eps} \caption{Age distributions of the star clusters found in the SMC bar (left panel) and in the rest of the galaxy (right panel). We also display the three component mixture models in both figures (dashed green lines), and their individual constituents (solid black lines). The fractions presented here are normalized to the total number of clusters found in that galaxy.} \label{fig_hist_ages_bar} \end{center} \end{figure*} Since a star cluster formation event in our data could be represented by a single Gaussian distribution (due to the range of uncertainties in the estimation of the cluster ages), we use a Gaussian mixture model code, NMIX\footnote{Publicly available at \url{https://people.maths.bris.ac.uk/~mapjg/Nmix.}}, to derive the underlying number of such distributions in our data. This method reports the statistically motivated number of Gaussian distributions that can fit a given dataset by implementing the approach of \citet{Richardson97}. In Figure~\ref{fig_mix}, we present the results of the fitting; our cluster age distribution can be successfully reproduced by a three component mixture model (with Bayes factors $>$4.5 between that model and each of the rejected alternatives), with peaks 30, 240, and 680 Myr ago. \citet{Glatt10} have visually identified and proposed two main periods of cluster formation, 160 and 630 Myr ago, as well as a minor event $\sim$50 Myr ago (see Figure 5 of that work); this last event of star formation was also detected by \citet{Harris04}.
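NMIX determines the number of components itself via the reversible-jump MCMC of \citet{Richardson97}; as a rough, hypothetical stand-in (not the NMIX algorithm), a fixed-$k$ expectation-maximization fit of a 1-D Gaussian mixture to un-binned log-ages can be sketched as follows, with illustrative synthetic peaks at the three quoted ages:

```python
import numpy as np

def em_gmm_1d(x, k, n_iter=300):
    """Minimal EM for a k-component 1-D Gaussian mixture on un-binned
    data (NMIX additionally infers k itself via reversible-jump MCMC)."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))  # deterministic init
    sig = np.full(k, x.std())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        dens = np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / sig
        resp = w * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and widths
        n = resp.sum(axis=0)
        w = n / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / n
        sig = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    return w, mu, sig

# synthetic log(age/yr) sample with peaks near 30, 240 and 680 Myr
rng = np.random.default_rng(1)
log_ages = np.concatenate([
    rng.normal(7.48, 0.12, 200),   # ~30 Myr
    rng.normal(8.38, 0.15, 800),   # ~240 Myr
    rng.normal(8.83, 0.12, 320),   # ~680 Myr
])
weights, means, widths = em_gmm_1d(log_ages, k=3)
```

Because the fit acts on the un-binned values, the recovered peak positions do not depend on any histogram binning choice.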
Whereas the 50 and 630 Myr peaks from \citet{Glatt10} are consistent with our secondary cluster formation events, the 160 Myr one is significantly different from our main 240 Myr event. We note here that histogram peaks can also be the result of binning artifacts. This is not the case for our findings, since NMIX fits models on the un-binned data. To test whether binning could be at the origin of the discrepancy with \citet{Glatt10}, we applied the Freedman-Diaconis rule to calculate the bin size for their sample; its value is 0.109 dex. Using this bin size, we produced an updated version of the \citet{Glatt10} histogram, which shows a major formation event 280 Myr ago, with minor ones appearing 20, 100, and 450 Myr ago. This exercise suggests that, in addition to the scatter mentioned in \S4.1, differences in the binning scheme also contribute to the different results obtained by \citet{Glatt10} and in the present work. \subsection{The spatial age distribution of star clusters} To further study the cluster formation history in the SMC, we present in the two panels of Figure~\ref{fig_hist_ages_bar} the age distributions of those clusters located in the bar (left panel) and everywhere else in the galaxy (hereafter referred to as ``outskirts''; right panel). The two distributions display important differences, with a Kolmogorov-Smirnov probability $<$10$^{-5}$ of being drawn from the same parent distribution. In contrast to the bar, which had a major formation event around 200 Myr ago with secondary peaks appearing at 20 and $\sim$800 Myr, the major peak of the outskirts appeared $\sim$270 Myr ago, with secondary ones 40 Myr and 2 Gyr ago. These results are drawn from the 3-component NMIX models, with Bayes factors $>$3.9 (see Figure~\ref{fig_hist_ages_bar}). Although the two major peaks might be associated with the same cluster formation event, it is possible that the bar delayed its cluster formation with respect to the rest of the galaxy.
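A two-sample Kolmogorov-Smirnov comparison of this kind can be sketched as follows; the statistic and a Numerical-Recipes-style asymptotic p-value are computed directly, and the two samples are hypothetical placeholders for the bar and outskirts log-age lists:

```python
import numpy as np

def ks_two_sample(a, b):
    """Two-sample KS statistic and asymptotic p-value (series
    approximation; accurate for moderately large samples)."""
    a, b = np.sort(a), np.sort(b)
    data = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, data, side='right') / len(a)
    cdf_b = np.searchsorted(b, data, side='right') / len(b)
    d = np.max(np.abs(cdf_a - cdf_b))
    ne = len(a) * len(b) / (len(a) + len(b))
    lam = (np.sqrt(ne) + 0.12 + 0.11 / np.sqrt(ne)) * d
    if lam < 0.3:            # series unreliable for tiny lambda
        return d, 1.0
    j = np.arange(1, 101)
    p = 2.0 * np.sum((-1.0) ** (j - 1) * np.exp(-2.0 * j**2 * lam**2))
    return d, min(max(p, 0.0), 1.0)

# hypothetical log(age/yr) samples for the bar and the outskirts
rng = np.random.default_rng(2)
bar = rng.normal(8.30, 0.35, 600)
outskirts = rng.normal(8.45, 0.40, 700)
d_stat, p_value = ks_two_sample(bar, outskirts)
```

A small p-value rejects the hypothesis that the two age lists are drawn from the same parent distribution, as found for the bar and the outskirts in our data.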
Furthermore, the skewness of the outskirts distribution suggests a sudden termination of the cluster formation, contrary to the more continuous formation in the bar. The above results can also be confirmed in Figure~\ref{fig_sfh}, where we present the spatial distribution of clusters of different ages in our sample (the age ranges are as in \citealt{Bitsakis17}). Clusters younger than 100 Myr are solely located in the bar region, while clusters older than 355 Myr mostly populate the outskirts. The bar is also associated with two prominent H{\rm I} supershells \citep{Stanimirovic99}, confirming the recent burst of star formation in that region. Remarkably, starting from the center of the SMC bar, progressively older clusters are located farther out, with only very few old clusters ($>$750 Myr) found in the central region of the galaxy. This result suggests that an outside-in quenching of cluster formation occurred over the past Gyr in the SMC. \begin{figure*} \begin{center} \includegraphics[scale=0.4]{TOTAL_2.jpg} \caption{Spatial age distribution for all the star clusters in our sample (black dots). The coordinates in both axes are in degrees (J2000). From top-left to bottom-right, we present the positions of star clusters with: {\rm Age}$\le$20 Myr, 20$<${\rm Age}$\le$50 Myr, 50$<${\rm Age}$\le$100 Myr, 100$<${\rm Age}$\le$250 Myr, 250$<${\rm Age}$\le$355 Myr, 355$<${\rm Age}$\le$500 Myr, 500$<${\rm Age}$\le$750 Myr, and {\rm Age}$>$750 Myr. } \label{fig_sfh} \end{center} \end{figure*} \section{Discussion: Comparison between the LMC-SMC cluster ages and implications} As presented above, our method is able to create complete, uniform samples of star clusters which allow comparisons between different galaxies. In particular, the use of an identical set-up and data as in \citet{Bitsakis17} secures the robustness of the comparisons between the star cluster properties of the two Magellanic Clouds, namely the SMC and LMC.
We compare the cluster age distributions of the two galaxies, presented in Figure~\ref{fig_hist_ages} of the current work for the SMC and in Figure~8 of \citet{Bitsakis17} for the LMC, and we discuss the implications. The comparison shows that both Clouds display enhanced cluster formation activity in the last 200-to-300 Myr. This is also consistent with the peaks of cluster formation in the bars of both galaxies; this age coincides with the epoch at which \citet{Besla12} estimated that a direct collision occurred between the two Clouds. Yet, owing to large differences in their sizes and masses, the effects of such a collision on the cluster formation history of the two galaxies should have been very different. This is evident in Figure~\ref{fig_age_dens}, where we present the median age distribution in bins of $\sim$0.5 deg$^{2}$ for the LMC (left) and the SMC (right), respectively. It is shown that the star clusters in the SMC bar are younger than those in the LMC bar, where the most recent cluster formation occurred $>$50 Myr ago. In contrast, the SMC bar is experiencing an on-going cluster formation activity, with 8\% of the SMC clusters (14\% of those located in the bar) having ages $<$50 Myr. This also agrees with the findings of \citet{Chiosi06} and \citet{Glatt10} of very recent ($<$20 Myr) cluster formation activity in the SMC. This suggests the presence of cold molecular gas in the central region of that galaxy, as confirmed by \citet{Bolatto11}. \begin{figure*} \begin{center} \includegraphics[scale=0.32]{age_density.jpg} \caption{The spatially binned median age distribution of the LMC (left) and the SMC (right), respectively, overlaid on their Spitzer/IRAC 3.6$\micron$ mosaics. The color-scale covers the range 7.4$\leq$log(age/yr)$<$9.9. The coordinates in both axes are in degrees (J2000).
The bin sizes are 1 deg in RA and 0.5 deg in Dec.} \label{fig_age_dens} \end{center} \end{figure*} The age distributions of the outskirts of both galaxies also show great differences. The SMC contains, on average, clusters older than 300 Myr ($\sim$15\% of them are older than a Gyr), while the LMC contains mostly clusters 150-to-500 Myr old (only 7\% have ages $>$1 Gyr). Despite those differences, both distributions seem to have peaked $\sim$300 Myr ago, suggesting that the aforementioned collision between the two Clouds not only affected their bars, but rather triggered cluster formation on a global scale in those galaxies. The secondary SMC peak at 680 Myr might be associated with the smaller $\sim$500 Myr peak of the LMC. These results would then be in agreement with the 0.6 Gyr star formation enhancement observed by \citet{Harris04,Harris09}, who studied the star formation histories of the two galaxies and, based on orbital simulations available at the time, associated such events with perigalactic passages of the Magellanic Clouds about the Galaxy. This difference in the old vs. young cluster spatial distributions suggests that the SMC may have ceased its star cluster formation in an outside-in fashion. This result is consistent with the findings of \citet{Cignoni13}, who studied the spatially resolved star formation history of six SMC regions and suggested the existence of an age gradient, with all the star formation activity over the past 0.5 Gyr being concentrated in the central region. Such an age gradient has not been reported, however, for clusters older than 1 Gyr \citep[see][]{Parisi14}. This implies that the SMC's interaction with the LMC (or the Galaxy) could have affected (by stripping, shocks, or inflows towards the center) its \emph{outer} gas reservoir, thus preventing it from forming younger star clusters in the outskirts.
\citet{Zhang12} studied the multi-band surface brightness profiles of 34 nearby dwarf irregular galaxies, and found an outside-in shrinking of the star formation that they attributed to environmental effects (i.e., interactions between galaxies). Arguably the LMC, being 50\% more massive than the SMC, did not suffer similar gas loss by galaxy-galaxy interactions, and hence retained its global cluster formation throughout its lifetime. The comparison of the spatial distributions of young clusters ($<$50 Myr) in the Magellanic Clouds is also puzzling. As shown in Figure~\ref{fig_age_dens}, clusters with these ages in the SMC are mostly located at the bar, preferentially at the bar-``arm''\footnote{The ``arms" are here intended as those HI features of the Magellanic Clouds resembling classical spiral arms, although their actual nature is still under debate, as described in the text.} junction points, while in the LMC they lie mostly along its arms. In the case of the LMC, HI arms are found north-east and south-west of the bar \citep{Kim03}, whereas in the SMC they trace an elongated structure located south-east of the bar \citep{Stanimirovic99, Dickey00}. Interestingly, \citet{Ochsendorf17} showed that the most active star forming regions at present in the LMC, namely the 30Dor and N79, are located where the LMC bar joins the HI arms. Such locations are very likely to enhance star formation due to the high concentrations of gas and to shocks induced by the internal dynamics, and very young stars/clusters have been observed there in various other galaxies \citep[e.g.,][]{Beuther17}. The absence of young clusters in the outskirts of the SMC is likely due to the overall scarcity of gas in the last few Myr. The hypothesis of outside-in stripping of the gas in the SMC is also consistent with the cold molecular gas distribution \citep[see][]{Bolatto11}. 
In the SMC, molecular gas is mostly confined to the bar, and indeed its youngest clusters overlap the densest molecular gas in the north-eastern portion of the bar at its intersection with the aforementioned HI feature. Using the Spitzer/MIPS 24$\micron$ images we confirm that the locations of the young clusters coincide with those of the warm dust clouds too. It is plausible that many of those clusters are still embedded in the progenitor clouds, thus explaining their very young ages. Regarding the LMC, since many of the clusters younger than 50 Myr seem to trace both HI arms \citep{Bitsakis17}, we have considered the possibility that star formation there is related to a long-lived spiral density wave not connected to an interaction with the SMC. This hypothesis, however, is disproven by the absence of a corresponding density enhancement in the old stellar disk as traced by the near-IR, as reported by \citet{Marel01} and confirmed by our own multiwavelength analysis. On the other hand, if the LMC bar was excited or enhanced by an interaction with the SMC a few Myr ago, the present-day star formation in the LMC should still be traced back to that interaction, if indeed star formation is triggered by shocks in the bar-arm interface, especially when the pattern speeds of bar and arms are different \citep{Beuther17,MartinezGarcia11}. Our results suggest that, in spite of the asymmetries in the cluster formation histories of the two galaxies, their overall evolution is a combination of both internal and environmental mechanisms. \citet{Harris04, Harris09} suggested that the star formation histories of the Clouds are dominated by correlated -- thus environmental -- mechanisms. Our findings agree with their conclusions that the interactions between the Magellanic Clouds and the Galaxy were predominant in shaping uniquely the star cluster formation history in the Clouds. 
\section{Conclusions} We applied our new method to detect and estimate the ages of star clusters in nearby galaxies \citep[originally presented in][]{Bitsakis17} to the multi-band, high resolution data of the SMC. We applied the same set-up and procedure to analogous data of the two galaxies, and compared the results. Our conclusions are summarized below. \begin{enumerate} \item[(a)] We detect 1319 star clusters in the central 18 deg$^{2}$ of the SMC we surveyed. 1108 of these clusters have never been reported before. \item[(b)] The distribution of cluster ages suggests major star cluster formation $\sim$240 Myr ago. Studying the corresponding distributions of the SMC bar and outskirts, we find that they have significant differences, with the cluster formation peaking in the bar $\sim$200 Myr ago, while for the rest of the galaxy it peaked $\sim$270 Myr ago. Moreover, the skewness of the age distribution in the galaxy outskirts suggests a termination of the cluster formation over the past few Myr. \item[(c)] The spatially resolved age distribution of the star clusters in the SMC suggests that the inner part of the galaxy was formed more recently, and that an outside-in quenching of cluster formation occurred over the past Gyr. \item[(d)] A comparison between the above results and those derived previously for the LMC shows that both galaxies have experienced an intense star cluster formation event $\sim$300 Myr ago, consistent with the direct collision scenario proposed by model simulations. \item[(e)] Most of the youngest clusters in both Magellanic Clouds are found where their bars meet the HI arms (or similar elongated features), suggesting that cluster formation there is triggered by internal dynamical processes. \item[(f)] Our results suggest that the interactions between the Magellanic Clouds are the major driver of their large-scale star cluster formation and overall evolution.
\end{enumerate} \acknowledgments The authors wish to thank the anonymous referee for her/his thorough review and valuable comments that helped significantly improve this article. TB would like to acknowledge support from the CONACyT Research Fellowships program. We gratefully acknowledge support from the program for basic research of CONACyT through grant number 252364. GM acknowledges support from CONICYT, Programa de Astronom\'ia/PCI, FONDO ALMA 2014, Proyecto No 31140024. GB acknowledges support for this work from UNAM through grant PAPIIT IG100115. This research made use of TOPCAT, an interactive graphical viewer and editor for tabular data. IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under cooperative agreement with the National Science Foundation.
\section{Methods} \textbf{Experimental procedure.} To perform our experiments we prepare a Bose-Einstein condensate of $\approx\num{80e3}$ $^{87}\mathrm{Rb}$ atoms, spin polarized in the $\ket{5\mathrm{S}_{1/2}, F=2, m_F=2}$ state, in a crossed optical dipole trap with trapping frequencies $\omega_{x,y,z} = 2\pi\times(\num{61},\num{61},\num{51})\si{Hz}$. We adiabatically transfer these atoms into a blue detuned 3D optical lattice with lattice constants $a_{x,y,z} = (374,374,529)\si{nm}$ and lattice depth $S=\SI{8}{\erec}$ with an exponential ramp with time constant \SI{20}{ms}. The lattice depth is then linearly increased within \SI{50}{\us} to a final depth of $S=\SI{30}{\erec}$, while simultaneously switching off the underlying harmonic trapping potential. The coupling light at \SI{297}{\nm} is generated from a frequency doubled dye laser, and the stated atomic Rabi frequencies are calibrated from a different set of measurements (see supplementary material). It is linearly polarized parallel to the quantization axis of the system. We ramp it to its final power within the same \SI{50}{\us} used to increase the lattice depth. After a variable hold time $\tau$, we switch off the lattice potential as well as the coupling laser instantaneously and take a time-of-flight image of the falling atomic cloud. Additionally, we record the time resolved ion signal during illumination using a small electric field ($E \approx \SI{0.2}{\V\per\cm}$). Ions are created from excited Rydberg states mainly through photoionization from the lattice beams, at a small rate compared to the natural decay or the coupling strength. \textbf{Determination of the on-site interaction.} Since we switch off the underlying trapping potential, the collapse-and-revival dynamics occur on top of Bloch oscillations in the lattice potential. Thus, to obtain the superfluid visibility $\mathscr{V}$ for a given hold time $\tau$, we extract the maximum density of the atomic cloud from a time-of-flight image.
Around this central peak we take the usual approach, defining four boxes containing the superfluid peaks, as well as four boxes rotated by 45 degrees with respect to the first four. We define the visibility $\mathscr{V}$ as the pixel sum from the first set of boxes minus the pixel sum of the second set of boxes, divided by the pixel sum of all eight. A perfect superfluid state would thus have a visibility equal to one, and a completely collapsed state equal to zero. To extract the on-site interaction $U$ we fit the measured visibility using: \begin{equation} \begin{split} \mathscr{V} &\propto \left| \bra{\alpha(t)}\hat a\ket{\alpha(t)} \right|^2 \mathrm{e}^{-\tau/\tau_0}\\ &= \left|\sqrt{\overline n}\exp{(\overline n(\mathrm{e}^{i(-U/\hbar\,\tau+\phi)}-1))}\right|^2 \mathrm{e}^{-\tau/\tau_0}. \end{split} \end{equation} The exponential decay factor accounts for any imperfections and losses in the system. $\overline n$ is the average atom number per site and $\phi$ a phase offset imprinted by the finite length of the ramp to the collapse depth. For all measurements we simultaneously record a reference measurement without coupling to the molecular state, to compensate for day-to-day drifts and alignment imperfections in the lattice beams, and extract the interaction shift $\Delta U = U - U_\mathrm{ref}$. \section{Acknowledgements} We would like to thank M. Eiles for helpful discussions and I. Fabrikant for providing the $e^-$-Rb scattering phase-shifts used to calculate the molecular potential. H.O. acknowledges financial support by the DFG within the SFB/TR 49 and the SFB/TR 185. C.L. acknowledges financial support by the DFG within the SFB/TR 185. O.T. acknowledges financial support by the DFG within the SFB/TR 49 and the MAINZ graduate school. \section{Author Contribution} O.T., C.L. and T.E. performed the experiments. O.T. analysed the data and prepared the manuscript. H.O. supervised the project.
All authors contributed to the data interpretation and manuscript preparation. \widetext \clearpage \begin{center} \textbf{\large Supplementary Material: Experimental Realization of a Rydberg optical Feshbach resonance in a quantum many-body system} \end{center} \setcounter{equation}{0} \setcounter{figure}{0} \setcounter{table}{0} \setcounter{page}{1} \makeatletter \renewcommand{\theequation}{S\arabic{equation}} \renewcommand{\thefigure}{S\arabic{figure}} \section{Rabi frequency calibration} \label{sup:RabiFrequency} To calibrate the Rabi frequency $\Omega$ of the excitation system we investigate the light shift from the excitation light on the $\ket{F=2, m_F=+2}$ ground state. To do so we prepare an atomic cloud in the $\ket{F=1, m_F=+1}$ state and illuminate it with the excitation light blue detuned to the $\ket{F=2, m_F=+2}$ to $\ket{25\mathrm{P}_{3/2}}$ transition, while probing the lower transition with a microwave pulse. From the resonance positions of the $\ket{F=2, m_F=+2}$ state for different intensities and detunings $\Delta$ (in the range of \num{50} to \SI{200}{\MHz}) of the excitation light, we calibrate the Rabi frequency of the transition to the excited state using the far-detuned two-level light shift $\Delta E = \hbar\frac{\Omega^2}{4\Delta}$. \section{Molecular states} To calculate the potential energy curves we follow the procedure from our previous work described in \,\cite{Niederpr2016spinFlip}, including s- and p-wave scattering as well as the ground state hyperfine interaction. This procedure yields the potential energy curves (PECs) in the Born-Oppenheimer approximation. The bound state energies as well as the nuclear wave functions are then obtained using a shooting method.
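The shooting method can be sketched generically: for a trial energy, integrate the radial equation outward (here with Numerov's formula) and bisect on the sign of the solution at the outer boundary. As a self-contained check we use the dimensionless harmonic oscillator $-u'' + x^2 u = E u$, whose even ground state lies at $E=1$, rather than the actual PECs:

```python
import numpy as np

def shoot(energy, x, v):
    """Integrate u'' = (v(x) - E) u outward with the Numerov method,
    starting from even-parity initial conditions, and return u at the
    outer boundary."""
    h = x[1] - x[0]
    f = v - energy
    u = np.zeros_like(x)
    u[0] = 1.0
    u[1] = 1.0 + 0.5 * f[0] * h**2   # even parity: u'(0) = 0
    c = h**2 / 12.0
    for i in range(1, len(x) - 1):
        u[i + 1] = (2.0 * u[i] * (1.0 + 5.0 * c * f[i])
                    - u[i - 1] * (1.0 - c * f[i - 1])) / (1.0 - c * f[i + 1])
    return u[-1]

def bound_state_energy(x, v, e_lo, e_hi, tol=1e-10):
    """Bisect on the sign of u(L): it flips when E crosses an eigenvalue."""
    s_lo = np.sign(shoot(e_lo, x, v))
    while e_hi - e_lo > tol:
        e_mid = 0.5 * (e_lo + e_hi)
        if np.sign(shoot(e_mid, x, v)) == s_lo:
            e_lo = e_mid
        else:
            e_hi = e_mid
    return 0.5 * (e_lo + e_hi)

x = np.linspace(0.0, 6.0, 3001)
e0 = bound_state_energy(x, x**2, 0.5, 2.0)
```

Applied to a PEC, the same bisection on the outer boundary value yields the bound-state energies, and the converged integration gives the nuclear wave function.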
\section{Franck-Condon factor} To calculate the Franck-Condon factor for a transition we approximate the ground state wave function with the ground state of an isotropic harmonic oscillator, neglecting any interaction effects between two atoms in a lattice site and using the geometric mean of the trapping frequencies along the different lattice axes. We can then analytically write the relative ground state wave function of two atoms as: \begin{equation} \begin{split} \Psi_g(R, \Theta, \Phi) &= Y_{00}(\Theta, \Phi) \mathscr{F}_g(R)/R \\ &= \frac{1}{\sqrt{4\pi}} \frac{2}{\pi^{1/4}}\left(\frac{\mu\omega}{\hbar}\right)^{3/4}\mathrm{e}^{-\frac{\mu\omega}{2\hbar}R^2}. \end{split} \end{equation} Using this and the numerically integrated molecular nuclear wave function $\mathscr{F}_\mathrm{mol}(R)$ we evaluate the Franck-Condon factor using \begin{equation} FC = \int dR \mathscr{F}_\mathrm{mol}^*(R) \mathscr{F}_g(R). \end{equation} \section{Equivalence of bound-bound and free-bound transitions in the limit of vanishing lattice potential} \label{sup:equivalenceBoundBoundFreeBound} \textbf{Bound-Bound:} In a simplified two level bound-bound model we can express the change in scattering length as $\Delta a = \frac{\Delta U}{U_0}a_{bg}$, where $\Delta U$ is the shift through the light field, $\Delta U \approx \hbar \frac{\Omega_\mathrm{mol}^2}{4\Delta}$ (for $\Delta\gg\Omega_\mathrm{mol}$), $U_0$ the unperturbed on-site interaction and $a_{bg}$ the background s-wave scattering length. First we evaluate the on-site interaction for a spherically symmetric lattice site in harmonic approximation. We can then express the Wannier function as a Gaussian of the form: \begin{equation} w_0(x) = \left( \frac{m\omega}{\pi\hbar}\right)^{1/4} \exp\left(-\frac{m\omega}{2\hbar}x^2\right) \end{equation} where $\omega$ is the trapping frequency of the potential.
The on-site interaction is then given as \begin{align} U_0 &= \frac{4\pi\hbar^2}{m}a_{bg}\int dxdydz \left|w_0(x)w_0(y)w_0(z)\right|^4\\ &= \frac{4\pi\hbar^2}{m}a_{bg}\left(\frac{m\omega}{2\pi\hbar}\right)^{3/2}. \end{align} To compare the change in scattering length with the free-bound case it is necessary to split the Rabi frequency into several parts. We thus write \begin{align} \Omega_\mathrm{mol} &= \frac{d_\mathrm{mol}E}{\hbar} FC \end{align} where $d_\mathrm{mol}$ is the molecular dipole matrix element, $E$ is the electric field strength of the excitation light and $FC$ is the bound-bound Franck-Condon factor. As before, the relative ground state wave function is approximated as a Gaussian wave function in relative coordinates \begin{equation} \mathscr{F}_g(R) = \frac{2}{\pi^{1/4}}\left(\frac{\mu\omega}{\hbar}\right)^{3/4}R\exp\left(-\frac{\mu\omega}{2\hbar}R^2\right), \label{equ:boundState} \end{equation} with $\mu = m/2$ the reduced mass of the two atoms. We ignore any interaction between the two particles leading to a separation of $\approx a_{bg}$ for $R\rightarrow0$, as it is cumbersome to take into account and adds no further insight into the problem. Putting this together we end up with \begin{equation} \begin{split} \Delta a &= \frac{\Delta U}{U_0} a_{bg}\\ &\approx \frac{d_\mathrm{mol}^2E^2}{4\Delta\hbar} \frac{m}{4\pi\hbar^2} \left(\frac{2\pi\hbar}{m\omega}\right)^{3/2} \left|\int\mathscr{F}_\mathrm{mol}(R)\mathscr{F}_g(R) dR\right|^2\\ &= \frac{d_\mathrm{mol}^2E^2\mu}{2\Delta\hbar^3} \left|\int\mathscr{F}_\mathrm{mol}(R)R\exp\left(-\frac{\mu\omega}{2\hbar}R^2\right) dR\right|^2\\ &\approx \frac{d_\mathrm{mol}^2E^2\mu}{2\Delta\hbar^3} \left|\int\mathscr{F}_\mathrm{mol}(R)R dR\right|^2. \label{equ:boundbound} \end{split} \end{equation} Here, in the last step, we approximated $\exp\left(-\frac{\mu\omega}{2\hbar}R^2\right) \approx 1$ for $\omega \rightarrow 0$.
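The Gaussian integral behind $U_0$ is easy to verify numerically; a short sketch (with a hypothetical trap frequency) checking that $\int |w_0(x)|^4\,dx = \sqrt{m\omega/2\pi\hbar}$ along one Cartesian axis:

```python
import numpy as np

hbar = 1.054571817e-34          # J s
m = 86.909 * 1.66053906660e-27  # 87Rb mass in kg
omega = 2 * np.pi * 30e3        # hypothetical on-site trap frequency

def integrate(y, x):
    """Simple trapezoidal quadrature."""
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

x = np.linspace(-1e-6, 1e-6, 40001)
w0 = (m * omega / (np.pi * hbar)) ** 0.25 * np.exp(-m * omega / (2 * hbar) * x**2)

norm = integrate(w0**2, x)      # should be 1
numeric = integrate(w0**4, x)   # one Cartesian factor of U_0
analytic = np.sqrt(m * omega / (2 * np.pi * hbar))
```

Cubing `analytic` and multiplying by $4\pi\hbar^2 a_{bg}/m$ reproduces the closed form for $U_0$ quoted above.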
\textbf{Free-Bound:} For a free-bound transition, on the other hand, we adapt the method from Ref.\,\cite{Nicholson2015OFR}. Here the interaction shift is given by $\Delta a = l_{opt}\frac{\Delta\gamma}{\Delta^2+\gamma^2/4}$ with $l_{opt}$ the optical coupling strength and $\gamma$ the decay from the excited state. We will evaluate this in the limit $\Delta \gg \gamma$, neglecting the second term in the denominator, leading to $\Delta a = l_{opt} \gamma/\Delta$. The optical coupling strength is given by Fermi's golden rule \begin{equation} l_{opt} = \frac{\Gamma(k)}{2k\gamma} = \frac{1}{2k\gamma}\frac{2\pi}{\hbar}\left|\bra{n} V_{opt} \ket{E} \right|^2 \end{equation} where $\ket{n}$ is the normalized excited state and $\ket{E}$ is an energy normalized free state given by \begin{equation} \braket{R|E} = \sqrt{\frac{2\mu}{\pi\hbar^2k}}\sin(kR). \end{equation} We again have ignored any interaction effects for $R\rightarrow0$, and $V_{opt}$ is the optical coupling potential. We rewrite \begin{equation} \begin{split} \bra{n} V_{opt} \ket{E} &= \frac{\hbar\Omega_\mathrm{mol}}{2} = \frac{d_\mathrm{mol}E}{2} FC\\ &= \frac{d_\mathrm{mol}E}{2} \sqrt{\frac{2\mu}{\pi\hbar^2k}} \int \mathscr{F}_\mathrm{mol}(R) \sin(kR) dR\\ &\approx \frac{d_\mathrm{mol}E}{2} \sqrt{\frac{2\mu k}{\pi\hbar^2}} \int \mathscr{F}_\mathrm{mol}(R)RdR \end{split} \end{equation} where the last step is valid for $k\rightarrow0$, as is the case for ultracold atomic clouds. Putting all this together we end up with \begin{equation} \begin{split} \Delta a &\approx \frac{1}{2k\Delta} \frac{2\pi}{\hbar} \frac{d_\mathrm{mol}^2E^2}{4} \frac{2\mu k}{\pi\hbar^2} \left|\int \mathscr{F}_\mathrm{mol}(R)R dR \right|^2\\ &= \frac{d_\mathrm{mol}^2E^2\mu}{2\Delta\hbar^3} \left|\int\mathscr{F}_\mathrm{mol}(R)R dR\right|^2 \end{split} \end{equation} which is equivalent to equation \ref{equ:boundbound} for a bound-bound description.
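As a numerical sanity check of this equivalence, one can insert a toy molecular wave function (a narrow normalised Gaussian with hypothetical position and width) and verify that both matrix elements approach the common limit $\int \mathscr{F}_\mathrm{mol}(R) R\, dR$ for $\omega \rightarrow 0$ and $k \rightarrow 0$:

```python
import numpy as np

hbar = 1.054571817e-34
mu = 0.5 * 86.909 * 1.66053906660e-27   # reduced mass of two 87Rb atoms

def integrate(y, x):
    """Simple trapezoidal quadrature."""
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

R = np.linspace(1e-10, 200e-9, 40001)
R0, s = 30e-9, 3e-9                     # toy bound-state position and width
Fmol = (2 * np.pi * s**2) ** -0.25 * np.exp(-((R - R0) ** 2) / (4 * s**2))

common = integrate(Fmol * R, R)         # limit entering both expressions

omega = 2 * np.pi * 50.0                # nearly vanishing lattice potential
k = 1.0e4                               # nearly vanishing relative momentum
bound = integrate(Fmol * R * np.exp(-mu * omega / (2 * hbar) * R**2), R)
free = integrate(Fmol * np.sin(k * R), R) / k
```

For these limiting parameter values the bound-bound and free-bound integrals agree with the common limit to well below a per mille, mirroring the analytic argument above.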
\section{Introduction} Flavor-changing neutral current (FCNC) processes are sensitive probes for new physics beyond the Standard Model (SM), since in the SM there are no FCNC processes at the tree level, and they are suppressed further by the Glashow-Iliopoulos-Maiani (GIM) mechanism. One of the FCNC observables, the ${C\hspace{-0.2mm}P}$-violating~ ratio \ensuremath{\varepsilon'/\varepsilon_K}\,in neutral kaon decays into two pions, has attracted attention recently because of a discrepancy between the experimental data and the theoretical predictions based on the first lattice calculation of the hadronic parameters $B_6^{(1/2)}$ and $B_8^{(3/2)}$ by the RBC-UKQCD collaboration~\cite{Bai:2015nea,Blum:2011ng, Blum:2012uk, Blum:2015ywa}.\footnote{ In contrast, the chiral perturbation theory predicts $B_6^{(1/2)} \approx 1.5$, a relatively large value compared with the lattice result, and predicts a value of \ensuremath{\varepsilon'/\varepsilon_K}\,consistent with the measurement~\cite{Pallante:2000hk,Pallante:2001he,Hambye:2003cy,Mullor}.} The next-to-leading order (NLO) prediction for \ensuremath{\varepsilon'/\varepsilon_K}\,has been calculated in Ref.~\cite{Buras:2015yba}, and it has been confirmed by an improved calculation in Ref.~\cite{Kitahara:2016nld}. The latter result is given by \begin{align} \left(\ensuremath{\varepsilon'/\varepsilon_K}\right)^{\mathrm{SM}} = (1.06 \pm 5.07) \times 10^{-4}, \end{align} which deviates from the experimental data~\cite{Batley:2002gn,AlaviHarati:2002ye,Abouzaid:2010ny,Olive:2016xmw} \begin{align} \textrm{Re}\left(\ensuremath{\varepsilon'/\varepsilon_K}\right)^{\mathrm{exp}} = (16.6 \pm 2.3) \times 10^{-4} , \end{align} at the $2.8\,\sigma$ level. The theoretical result, which is much smaller than the data, is supported by analyses in the large-$N_c$ dual QCD approach~\cite{Buras:2015xba,Buras:2016fys}.
Note that improvements of the lattice calculation and independent confirmations of the result by other lattice collaborations are highly important to establish the presence of new physics in \ensuremath{\varepsilon'/\varepsilon_K}. In this paper, we study \ensuremath{\varepsilon'/\varepsilon_K}\,in the minimal supersymmetric standard model (MSSM) by introducing large off-diagonal entries in the trilinear couplings of the down-type squarks to the Higgs boson. The off-diagonal couplings generate gluino contributions to the flavor-changing $Z$ penguin, which affects \ensuremath{\varepsilon'/\varepsilon_K}\,via the $I=2$ amplitude. Although such a scenario has been studied in Ref.~\cite{Tanimoto:2016yfy}, top-Yukawa contributions to $\Delta F = 2$ observables have not been taken into account. In this scenario, \ensuremath{\varepsilon_K}\,receives those contributions from the $Z$ penguin through the renormalization group (RG) running from the new physics scale to the electroweak (EW) scale, and through the matching onto the low-energy FCNC operators at the EW scale~\cite{Endo:2016tnu,Bobeth:2017xry}. They can be comparable in size to the ordinary gluino box contributions. Moreover, since the LHC experiment is pushing up the lower bounds on the squark and gluino masses~\cite{Sirunyan:2017cwe,ATLAS}, the situation changes: larger trilinear couplings are required to explain the $\varepsilon'/\varepsilon_K$ discrepancy. The large off-diagonal trilinear couplings also affect other FCNC observables. We consider constraints on the couplings as well as on other MSSM parameters from the branching ratios of $K_L\to \mu^+\mu^-$, $\bar{B}\to X_s\gamma$ and $\bar{B}\to X_d\gamma$, in addition to \ensuremath{\varepsilon_K}. Furthermore, such large trilinear couplings can make the EW vacuum unstable. Although the vacuum instability was overlooked in Ref.~\cite{Tanimoto:2016yfy}, we investigate the vacuum (meta-)stability condition in detail and show that the constraint is significant.
In Ref.~\cite{Endo:2016aws}, the vacuum condition has been studied in another scenario with large off-diagonal trilinear couplings of the up-type squarks, which bring chargino contributions to the $Z$ penguin. An alternative scenario for the explanation of the \ensuremath{\varepsilon'/\varepsilon_K}\,discrepancy in the MSSM has been proposed in Refs.~\cite{Kitahara:2016otd,Crivellin:2017gks}. The discrepancy in \ensuremath{\varepsilon'/\varepsilon_K}\,requires large ${C\hspace{-0.2mm}P}$-violating~ phases in the off-diagonal trilinear couplings. They also contribute to the branching ratios of $K^+\to\pi^+\nu\bar\nu$ and $K_L\to\pi^0\nu\bar\nu$, the effective branching ratio of $K_S\to \mu^+\mu^-$~\cite{DAmbrosio:2017klp,Chobanova:2017rkj} and the ${C\hspace{-0.2mm}P}\hspace{-0.4mm}~$ asymmetry difference $\Delta A_{\mathrm{CP}}(b\to s\gamma)$. We investigate SUSY effects on these observables in our scenario, and examine whether the effects can be observed at current and/or near-future experiments. This paper is organized as follows. In Section \ref{sec:effL} we summarize the effective Lagrangian together with the RG equations and the one-loop matching conditions that are relevant to our analysis. Top-Yukawa contributions are also explained. In Section \ref{sec:SUSY} we present the gluino contributions associated with the $Z$ penguin. In Section \ref{sec:obs} we explain how each FCNC observable receives gluino contributions. In Section \ref{sec:vacuum} we discuss the constraints from the vacuum stability condition. In Section \ref{sec:analysis} we present our numerical analysis. Our conclusions are drawn in Section~\ref{sec:conclusion}. \section{Effective Lagrangian and top-Yukawa contributions} \label{sec:effL} In this paper, we study flavor-changing processes via the gluino one-loop contributions and the $Z$-boson exchanges. The latter is described by higher dimensional operators in the SM effective field theory (SMEFT), where the gauge invariance is guaranteed.
The effective Lagrangian is defined as \begin{align} \mathcal{L}_{\rm eff} = \mathcal{L}_{\rm SM} + \sum_i \mathcal{C}_i \mathcal{O}_i, \end{align} where the first term on the right-hand side is the SM Lagrangian, and the second one is composed of higher dimensional operators~\cite{Grzadkowski:2010es}. In particular, those relevant to the $\Delta F=1$ $Z$-boson penguin are given by \begin{align} [\mathcal{O}_{HQ}^{(1)}]_{ij} &= (H^\dagger i\overleftrightarrow{D_\mu} H) (\overline{q}_{i} \gamma^\mu q_{j}), \\ [\mathcal{O}_{HQ}^{(3)}]_{ij} &= (H^\dagger i\overleftrightarrow{D^a_\mu} H) (\overline{q}_{i} \tau^a \gamma^\mu q_{j}), \\ [\mathcal{O}_{HD}]_{ij} &= (H^\dagger i\overleftrightarrow{D_\mu} H) (\overline{d}_{i} \gamma^\mu d_{j}). \end{align} Here, $q$ denotes the left-handed SU(2) quark doublets and $d$ the right-handed down-type quark singlets, with quark-flavor indices $i,j$ and an SU(2) index $a$. The Higgs doublet carries a hypercharge $+1/2$, and thus has a vacuum expectation value (VEV), $\langle H \rangle = (0,v/\sqrt{2})^T$, with $v \simeq 246\,\,\textrm{GeV}$ after the EW symmetry breaking (EWSB). The covariant derivative is defined for the Higgs doublet as \beq D_{\mu} = \partial_{\mu} +i g_2 \frac{\tau^a}{2} W^a_{\mu} + i \frac{g_Y}{2} B_{\mu}, \label{eq:covariantdel} \eeq and \beq H^\dagger\overleftrightarrow{D^a_\mu} H \equiv H^{\dagger} \tau^a D_{\mu} H - \left( D_{\mu} H\right)^{\dagger} \tau^a H.
\eeq On the other hand, $\Delta F=2$ processes are described by the following four-Fermi operators, \begin{align} [\mathcal{O}_{QQ}^{(1)}]_{ijkl} &= (\overline{q}_{i} \gamma_\mu q_{j})(\overline{q}_{k} \gamma^\mu q_{l}), \\ [\mathcal{O}_{QQ}^{(3)}]_{ijkl} &= (\overline{q}_{i} \tau^a \gamma_\mu q_{j})(\overline{q}_{k} \tau^a \gamma^\mu q_{l}), \\ [\mathcal{O}_{DD}]_{ijkl} &= (\overline{d}_{i} \gamma_\mu d_{j})(\overline{d}_{k} \gamma^\mu d_{l}), \\ [\mathcal{O}_{QD}^{(1)}]_{ijkl} &= (\overline{q}_{i} \gamma_\mu q_{j})(\overline{d}_{k} \gamma^\mu d_{l}), \\ [\mathcal{O}_{QD}^{(8)}]_{ijkl} &= (\overline{q}_{i} \gamma_\mu T^A q_{j})(\overline{d}_{k} \gamma^\mu T^A d_{l}). \end{align} The Wilson coefficients evolve from the SUSY scale down to the EW one. Let us define their beta functions as \begin{align} b_i = (4\pi)^2 \frac{d\,\mathcal{C}_i}{d \ln\mu}. \end{align} For the $\mathcal{O}_{HQ}$ and $\mathcal{O}_{HD}$ operators, the relevant terms are~(cf.,~Refs.~\cite{Jenkins:2013zja, Jenkins:2013wua, Alonso:2013hga}) \begin{align} [b_{HQ}^{(1)}]_{12} &= 6 Y_t^2 [\mathcal{C}_{HQ}^{(1)}]_{12}, \notag \\ [b_{HQ}^{(3)}]_{12} &= 6 Y_t^2 [\mathcal{C}_{HQ}^{(3)}]_{12}, \label{eq:Yt1} \\ [b_{HD}]_{12} &= 6 Y_t^2 [\mathcal{C}_{HD}]_{12}, \notag \end{align} where $Y_t$ is the top-quark Yukawa coupling. Note that there are no $\mathcal{O}(\alpha_s)$ corrections at the one-loop level. The operators also contribute to $\Delta S=2$ four-quark operators as \begin{align} [b_{QQ}^{(1)}]_{1212} &= \lambda_t Y_t^2 [\mathcal{C}_{HQ}^{(1)}]_{12} + \dots, \notag \\ [b_{QQ}^{(3)}]_{1212} &= -\lambda_t Y_t^2 [\mathcal{C}_{HQ}^{(3)}]_{12} + \dots, \label{eq:Yt2} \\ [b_{QD}^{(1)}]_{1212} &= \lambda_t Y_t^2 [\mathcal{C}_{HD}]_{12} + \dots, \notag \end{align} where $[\lambda_t]_{ij} = V_{ti}^*V_{tj}$ and $\lambda_t = [\lambda_t]_{12}$.
In the first leading logarithm approximation, the Wilson coefficients after the RG running from $\Lambda$ to $\mu$ ($\Lambda > \mu$) are estimated as \begin{align} \mathcal{C}_i(\mu) = \mathcal{C}_i(\Lambda) - \frac{1}{(4\pi)^2} b_i(\Lambda) \ln\frac{\Lambda}{\mu}. \label{eq:Yt3} \end{align} Irrelevant operator mixings and higher-order corrections during the evolution are neglected. In particular, $\mathcal{C}_{QQ}^{(1)}$, $\mathcal{C}_{QQ}^{(3)}$ and $\mathcal{C}_{QD}^{(1)}$ are generated by $\mathcal{C}_{HQ}$ and $\mathcal{C}_{HD}$. After the EWSB, $\mathcal{O}_{HQ}$ and $\mathcal{O}_{HD}$ are matched to the flavor-changing $Z$ couplings through the expansion, \begin{align} H^\dagger i\overleftrightarrow{D_\mu} H &= \frac{g_Z}{2}v^2 Z_{\mu} + G^- i \overleftrightarrow{\partial_\mu} G^+ - g_2 v \left( W^+_{\mu} G^- + W^-_{\mu} G^+ \right) + \dots,\\ H^\dagger i\overleftrightarrow{D_\mu^3} H &= - \frac{g_Z}{2}v^2 Z_{\mu} + G^- i \overleftrightarrow{\partial_\mu} G^+ + \dots \end{align} with $g_Z = \sqrt{g_2^2 + g_Y^2}$, where the terms irrelevant for the matching onto the $\Delta S=2$ operators are omitted. The operators also contribute to $\Delta F=2$ observables through the effective Hamiltonian, \begin{align} \mathcal{H}_{\rm eff} = \sum_{i=1}^5 \mathcal{C}_i \mathcal{O}_i + \sum_{i=1}^3 \mathcal{C}'_i \mathcal{O}'_i + \textrm{H.c.}, \end{align} where the effective operators are \begin{align} [\mathcal{O}_1]_{ij} &= (\bar d_i^\alpha \gamma_\mu P_L d_j^\alpha)(\bar d_i^\beta \gamma^\mu P_L d_j^\beta),\\ [\mathcal{O}_2]_{ij} &= (\bar d_i^\alpha P_L d_j^\alpha)(\bar d_i^\beta P_L d_j^\beta),\\ [\mathcal{O}_3]_{ij} &= (\bar d_i^\alpha P_L d_j^\beta)(\bar d_i^\beta P_L d_j^\alpha),\\ [\mathcal{O}_4]_{ij} &= (\bar d_i^\alpha P_L d_j^\alpha)(\bar d_i^\beta P_R d_j^\beta),\\ [\mathcal{O}_5]_{ij} &= (\bar d_i^\alpha P_L d_j^\beta)(\bar d_i^\beta P_R d_j^\alpha), \end{align} with color indices $\alpha, \beta$.
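As a numerical illustration, the first leading-log evolution of Eq.~\eqref{eq:Yt3} with the top-Yukawa beta functions of Eq.~\eqref{eq:Yt1} can be sketched as follows; the inputs ($Y_t$, $\Lambda$, $\mu$ and the initial coefficient) are illustrative assumptions, not values used in this paper:

```python
import math

def run_leading_log(C_at_Lambda, Y_t, Lambda, mu):
    """First leading-log running of C_HQ / C_HD from Lambda down to mu:
    C(mu) = C(Lambda) - b(Lambda)/(4*pi)^2 * ln(Lambda/mu), with b = 6*Y_t^2*C."""
    b = 6.0 * Y_t**2 * C_at_Lambda
    return C_at_Lambda - b / (4.0 * math.pi)**2 * math.log(Lambda / mu)

# Illustrative numbers: C(Lambda) = 1e-8 GeV^-2, Y_t ~ 0.94,
# Lambda = 5 TeV, mu = m_Z.
C_mZ = run_leading_log(1.0e-8, 0.94, 5000.0, 91.19)
```

For these inputs the running reduces the coefficient by roughly 13\%, showing that the top-Yukawa evolution is a non-negligible effect for multi-TeV SUSY scales.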
In this paper, chirality-flipped operators and their Wilson coefficients are denoted with a prime. At the tree level, the SMEFT operators are matched at the weak scale to these operators as \cite{Aebischer:2015fzz} \begin{align} [\mathcal{C}_1]_{ij}^{(0)} &= - \left( [\mathcal{C}_{QQ}^{(1)}]_{ijij} + [\mathcal{C}_{QQ}^{(3)}]_{ijij} \right),~~~ [\mathcal{C}'_1]_{ij}^{(0)} = - [\mathcal{C}_{DD}]_{ijij}, \label{eq:SMEFT4Q1} \\ [\mathcal{C}_4]_{ij}^{(0)} &= [\mathcal{C}_{QD}^{(8)}]_{ijij}, \label{eq:SMEFT4Q2} \\ [\mathcal{C}_5]_{ij}^{(0)} &= 2[\mathcal{C}_{QD}^{(1)}]_{ijij} - \frac{1}{N_c} [\mathcal{C}_{QD}^{(8)}]_{ijij}, \label{eq:SMEFT4Q3} \end{align} where $N_c=3$ is the number of colors. In addition, these low-energy $\Delta F = 2$ operators are generated by the $\Delta F = 1$ ones in the SMEFT through the one-loop matching at the weak scale~\cite{Aebischer:2015fzz}. The matching conditions for $\mathcal{C}_{HQ}$ and $\mathcal{C}_{HD}$ at the scale $\mu_W$ are approximated as~\cite{Endo:2016tnu, Bobeth:2017xry} \begin{align} [\mathcal{C}_1]_{ij}^{(1)} &= \frac{\alpha [\lambda_t]_{ij}}{\pi s_W^2} \left[ [\mathcal{C}_{HQ}^{(1)}]_{ij}\,I_1(x_t,\mu_W) - [\mathcal{C}_{HQ}^{(3)}]_{ij}\,I_2(x_t,\mu_W) \right], \label{eq:matching1} \\ [\mathcal{C}_5]_{ij}^{(1)} &= -\frac{2\alpha [\lambda_t]_{ij}}{\pi s_W^2} [\mathcal{C}_{HD}]_{ij}\, I_1(x_t,\mu_W), \label{eq:matching2} \end{align} with $x_t = m_t^2/m_W^2$. These results are gauge-independent. The loop functions are defined as \begin{align} I_1(x,\mu) &= \frac{x}{8} \left[ \ln\frac{\mu}{m_W} - \frac{x-7}{4(x-1)} - \frac{x^2-2x+4}{2(x-1)^2}\ln x \right], \\ I_2(x,\mu) &= \frac{x}{8} \left[ \ln\frac{\mu}{m_W} + \frac{7x-25}{4(x-1)} - \frac{x^2-14x+4}{2(x-1)^2}\ln x \right]. \end{align} Here, we discard box contributions, which are suppressed by CKM factors or by $m_{c,u}^2/m_W^2$ in the $\Delta S = 2$ case (see Ref.~\cite{Bobeth:2017xry}).
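For concreteness, the loop functions $I_{1,2}$ can be evaluated directly; a minimal sketch, taking $\mu_W = m_W$ so that the logarithm drops out (the pole masses used for $x_t$ are illustrative):

```python
import math

def I1(x, log_mu_over_mW=0.0):
    """Loop function I_1(x, mu); log_mu_over_mW = ln(mu/m_W)."""
    return x / 8.0 * (log_mu_over_mW
                      - (x - 7.0) / (4.0 * (x - 1.0))
                      - (x**2 - 2.0*x + 4.0) / (2.0 * (x - 1.0)**2) * math.log(x))

def I2(x, log_mu_over_mW=0.0):
    """Loop function I_2(x, mu); log_mu_over_mW = ln(mu/m_W)."""
    return x / 8.0 * (log_mu_over_mW
                      + (7.0*x - 25.0) / (4.0 * (x - 1.0))
                      - (x**2 - 14.0*x + 4.0) / (2.0 * (x - 1.0)**2) * math.log(x))

x_t = (173.0 / 80.4)**2   # m_t^2 / m_W^2 with illustrative pole masses
```

At $\mu_W = m_W$ one finds $I_1(x_t) \approx -0.45$ and $I_2(x_t) \approx 1.6$, so the two $\mathcal{C}_{HQ}$ terms in Eq.~\eqref{eq:matching1} add up with the same effective sign when $[\mathcal{C}_{HQ}^{(1)}]_{12} = [\mathcal{C}_{HQ}^{(3)}]_{12}$.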
The RG equations in Eqs.~\eqref{eq:Yt1} and \eqref{eq:Yt2} and the matching conditions in Eqs.~\eqref{eq:matching1} and \eqref{eq:matching2} are proportional to $Y_t^2$, and hence, we call them the top-Yukawa contributions. \section{SUSY contributions} \label{sec:SUSY} \begin{figure}[t] \begin{center} \subfigure[]{ \raisebox{9.5mm}{\includegraphics[width=0.2\textwidth, bb= 0 0 137 91]{MSSM-box.pdf}} } \subfigure[]{ \includegraphics[width=0.2\textwidth, bb= 0 0 137 130]{MSSM-box-B.pdf} } \subfigure[]{ \includegraphics[width=0.2\textwidth, bb= 0 0 137 130]{MSSM-box-W.pdf} } \subfigure[]{ \includegraphics[width=0.2\textwidth, bb= 0 0 137 130]{MSSM-box-g.pdf} } \subfigure[]{ \raisebox{3mm}{\includegraphics[width=0.2\textwidth, bb= 0 0 137 83]{SMEFT.pdf}} } \subfigure[]{ \includegraphics[width=0.2\textwidth, bb= 0 0 133 95]{SMEFT-B.pdf} } \subfigure[]{ \includegraphics[width=0.2\textwidth, bb= 0 0 131 110]{SMEFT-B-2.pdf} } \subfigure[]{ \includegraphics[width=0.2\textwidth, bb= 0 0 133 95]{SMEFT-W.pdf} } \subfigure[]{ \includegraphics[width=0.2\textwidth, bb= 0 0 133 110]{SMEFT-W-2.pdf} } \subfigure[]{ \includegraphics[width=0.2\textwidth, bb= 0 0 133 110]{SMEFT-g-2.pdf} } \caption{ Feynman diagrams relevant for the matchings onto the operators $[\mathcal{O}_{HD}]_{12}$, where the external gauge bosons are attached to each of the cross marks. Diagrams (a)--(d) are the one-loop gluino contributions, and (e)--(j) are the diagrams in the SMEFT. The diagrams contributing to $[\mathcal{O}_{HQ}^{(1,3)}]_{12}$ are similarly obtained. } \label{fig:diagrams} \end{center} \end{figure} At the one-loop level, $\mathcal{O}_{HQ}$ and $\mathcal{O}_{HD}$ are generated by gluino loops in the MSSM. 
When the squark (quark) flavor is violated by scalar trilinear soft-breaking parameters, the dominant contributions are calculated from Fig.~\ref{fig:diagrams} as \begin{align} [\mathcal{C}_{HQ}^{(1)}]_{12} &= -\frac{\alpha_s}{12\pi}\frac{\cos^2\beta}{m_{\tilde g}^4} (T_D)_{13}^* (T_D)_{23}\, Z(x_{L1}, x_{L2}, x_{R3}), \label{eq:CHQ}\\ [\mathcal{C}_{HQ}^{(3)}]_{12} &= -\frac{\alpha_s}{12\pi}\frac{\cos^2\beta}{m_{\tilde g}^4} (T_D)_{13}^* (T_D)_{23}\, Z(x_{L1}, x_{L2}, x_{R3}),\label{eq:CHQ3} \\ [\mathcal{C}_{HD}]_{12} &= \frac{\alpha_s}{6\pi}\frac{\cos^2\beta}{m_{\tilde g}^4} (T_D)_{31} (T_D)_{32}^*\, Z(x_{R1}, x_{R2}, x_{L3}), \label{eq:CHD} \end{align} with $x_{L(R)i}=m_{\tilde d_{L(R)i}}^2/m_{\tilde g}^2$. Here, $m_{\tilde d_{L(R)i}}$ is the left- (right-) handed squark soft mass for the $i$-th generation, $m_{\tilde g}$ is the gluino mass, and $T_D$ is the scalar trilinear coupling of the down-type squarks. In this paper, the SUSY Les Houches Accord (SLHA) notation~\cite{Skands:2003cj, Allanach:2008qq} is used, and flavor violations are discussed in the basis where the Yukawa matrix of the down-type quark is diagonalized. The Wilson coefficients are set at the SUSY scale.\footnote{ If the trilinear couplings $(T_D)_{13,23,31,32}$ are set at a scale higher than the SUSY scale, the flavor-violating squark soft masses $(m_{\tilde d_{L(R)}})_{12,21}$ are generated via RG corrections. They can be sizable and contribute to the kaon FCNCs when the input scale is much higher than the SUSY scale. } The loop function is defined as \begin{align} Z(x,y,z) =&~ -\frac{x^2\ln x}{(x-1)(x-y)(x-z)^2} + \frac{y^2\ln y}{(y-1)(x-y)(y-z)^2} \notag\\ &~ - \frac{z}{(z-1)(x-z)(y-z)} + \frac{(2xy-yz-xz-xyz+z^3)z\ln z}{(z-1)^2(x-z)^2(y-z)^2}. \end{align} In the limit $y,z \to x$, it becomes \begin{align} Z(x) = \frac{2+3x-6x^2+x^3+6x\ln x}{6x(x-1)^4}. \end{align} Other SUSY contributions are explained in the next section.
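The loop function $Z$ and its degenerate limit can be cross-checked numerically; a minimal sketch (the sample mass ratios are illustrative):

```python
import math

def Z3(x, y, z):
    """Loop function Z(x, y, z) entering C_HQ and C_HD."""
    return (-x**2 * math.log(x) / ((x - 1.0) * (x - y) * (x - z)**2)
            + y**2 * math.log(y) / ((y - 1.0) * (x - y) * (y - z)**2)
            - z / ((z - 1.0) * (x - z) * (y - z))
            + (2.0*x*y - y*z - x*z - x*y*z + z**3) * z * math.log(z)
              / ((z - 1.0)**2 * (x - z)**2 * (y - z)**2))

def Z1(x):
    """Degenerate limit Z(x, x, x)."""
    return (2.0 + 3.0*x - 6.0*x**2 + x**3 + 6.0*x*math.log(x)) / (6.0*x*(x - 1.0)**4)
```

For nearly degenerate arguments the general expression suffers large numerical cancellations, so the closed-form limit is the numerically safe choice in that regime.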
Note that, in the literature, e.g., Ref.~\cite{Bertolini:1990if}, it has been argued that gluino-mediated contributions to the EW penguin are suppressed compared to the other penguins, by assuming that the gluino contributions to the EW penguin are proportional to those to the photon penguin. However, this is not the case in our scenario, where the SU(2$)\times$U(1) symmetry is broken by large scalar trilinear couplings. Such couplings can generate the $Z$ penguin significantly via double-mass insertion contributions, as was pointed out in Ref.~\cite{Colangelo:1998pm} and explicitly shown in this section. \section{Observables} \label{sec:obs} \subsection{$\boldsymbol{\ensuremath{\varepsilon'/\varepsilon_K}}$} The direct ${C\hspace{-0.2mm}P}\hspace{-0.4mm}~$ violation of the $K \to \pi \pi $ decays, \ensuremath{\varepsilon'/\varepsilon_K}, includes the SM and SUSY $Z$-penguin contributions, \begin{align} \left( \ensuremath{\varepsilon'/\varepsilon_K} \right) = \left( \ensuremath{\varepsilon'/\varepsilon_K} \right)^{\rm SM} + \left( \ensuremath{\varepsilon'/\varepsilon_K} \right)^{\rm SUSY}. \end{align} The latter contribution is approximated to be (cf., Ref.~\cite{Buras:2015jaq})\footnote{ Another SUSY contribution is produced from chromomagnetic-dipole diagrams \cite{Masiero:1999ub, Babu:1999xf, Khalil:1999ym, Baek:1999jq, Barbieri:1999ax, Buras:1999da, Baek:2001kc, Chobanova:2017rkj}. The Wilson coefficient is obtained by replacing $b \to s$ and $d_{i} \to d$ in Eq.~\eqref{eq:gluinochromo}. In our analyses, such a contribution is negligible because the squark mixings between the first two generations are assumed to be suppressed.
} \begin{align} \left( \ensuremath{\varepsilon'/\varepsilon_K} \right)^{\rm SUSY} &= - {B_8^{(3/2)}(m_c)}\bigg[ 5.91 \times 10^7\,\textrm{GeV}^2\,{\rm Im} \left([\mathcal{C}_{HQ}^{(1)}]_{12}+[\mathcal{C}_{HQ}^{(3)}]_{12}\right) \notag \\ &\qquad\qquad\qquad~~~ + 1.97 \times 10^8\,\textrm{GeV}^2\,{\rm Im}\,[\mathcal{C}_{HD}]_{12} \bigg], \label{eq:epsPrime} \end{align} where the Wilson coefficients are estimated at the $Z$-boson mass scale, $\mu=m_Z$. By using lattice simulations~\cite{Blum:2011ng, Blum:2012uk, Blum:2015ywa}, $B_8^{(3/2)}(m_c) = 0.76 \pm 0.05$ is obtained~\cite{Buras:2015yba, Buras:2015qea}. Here, \ensuremath{\varepsilon_K}\,in the denominator is evaluated using the experimental value. The right-handed contribution is amplified by $c_W^2/s_W^2 \simeq 3.33$ compared to the left-handed one. Currently, the SM prediction deviates from the experimental result at the $2.8\,\sigma$ level. In this paper, the discrepancy of \ensuremath{\varepsilon'/\varepsilon_K}\,is required to be explained within the $1\,\sigma$ range, \begin{align} 10.0\times 10^{-4} < \left( \ensuremath{\varepsilon'/\varepsilon_K} \right)^{\rm SUSY} < 21.1 \times 10^{-4}, \label{eq:eps_limit} \end{align} where Ref.~\cite{Kitahara:2016nld} is used for the SM prediction at the NLO level. \subsection{$\boldsymbol{\ensuremath{\varepsilon_K}}$} Both the SM and SUSY contribute to the indirect ${C\hspace{-0.2mm}P}\hspace{-0.4mm}~$ violation of the neutral kaon system, \beq \ensuremath{\varepsilon_K} = e^{i \varphi_{\varepsilon}} \left( \ensuremath{\varepsilon_K}^{\rm SM} + \ensuremath{\varepsilon_K}^{\rm SUSY} \right), \eeq where $\varphi_{\varepsilon} = ( 43.51\pm 0.05 )^{\circ}$. $\ensuremath{\varepsilon_K}^{\rm SUSY}$ receives contributions from gluino box diagrams as well as from $\mathcal{C}_{HQ}$ and $\mathcal{C}_{HD}$. In our scenario, although the gluino box contributions are sizable, their dominant contributions arise as dimension-ten operators in the SMEFT.
In order to include them in our formalism, we separately calculate them in the broken phase, where the Higgs VEV is involved.\footnote{ Equations~\eqref{eq:SMEFT4Q1}--\eqref{eq:SMEFT4Q3} are not used for evaluating the gluino box contributions to the $\Delta S=2$ observables. } At the one-loop level, they are obtained as~\cite{Hagelin:1992tc} \begin{align} [\mathcal{C}_1]_{ij} &= \frac{\alpha_s^2}{m_{\tilde g}^2} \mathcal{R}_{ri}^{d*}\mathcal{R}_{rj}^{d} \mathcal{R}_{si}^{d*}\mathcal{R}_{sj}^{d} \left[ \frac{1}{9} B_0(x_r,x_s) + \frac{11}{36} B_2(x_r,x_s) \right], \\ [\mathcal{C}_2]_{ij} &= \frac{\alpha_s^2}{m_{\tilde g}^2} \mathcal{R}_{r,i+3}^{d*}\mathcal{R}_{rj}^{d} \mathcal{R}_{s,i+3}^{d*}\mathcal{R}_{sj}^{d} \left[ \frac{17}{18} B_0(x_r,x_s) \right], \\ [\mathcal{C}_3]_{ij} &= \frac{\alpha_s^2}{m_{\tilde g}^2} \mathcal{R}_{r,i+3}^{d*}\mathcal{R}_{rj}^{d} \mathcal{R}_{s,i+3}^{d*}\mathcal{R}_{sj}^{d} \left[ - \frac{1}{6} B_0(x_r,x_s) \right], \\ [\mathcal{C}_4]_{ij} &= \frac{\alpha_s^2}{m_{\tilde g}^2}\Bigg\{ \mathcal{R}_{ri}^{d*}\mathcal{R}_{rj}^{d} \mathcal{R}_{s,i+3}^{d*}\mathcal{R}_{s,j+3}^{d} \left[ \frac{7}{3} B_0(x_r,x_s) - \frac{1}{3} B_2(x_r,x_s) \right] \notag \\ &~~~~~~~~~~ + \mathcal{R}_{ri}^{d*}\mathcal{R}_{r,j+3}^{d} \mathcal{R}_{s,i+3}^{d*}\mathcal{R}_{sj}^{d} \left[ - \frac{11}{18} B_2(x_r,x_s) \right] \Bigg\}, \\ [\mathcal{C}_5]_{ij} &= \frac{\alpha_s^2}{m_{\tilde g}^2}\Bigg\{ \mathcal{R}_{ri}^{d*}\mathcal{R}_{rj}^{d} \mathcal{R}_{s,i+3}^{d*}\mathcal{R}_{s,j+3}^{d} \left[ \frac{1}{9} B_0(x_r,x_s) + \frac{5}{9} B_2(x_r,x_s) \right] \notag \\ &~~~~~~~~~~ + \mathcal{R}_{ri}^{d*}\mathcal{R}_{r,j+3}^{d} \mathcal{R}_{s,i+3}^{d*}\mathcal{R}_{sj}^{d} \left[ -\frac{5}{6} B_2(x_r,x_s) \right] \Bigg\}, \end{align} at the SUSY scale ($\mu_{\textrm{SUSY}}$) with generation indices $i \neq j$ and $x_r = m_{\tilde d_r}^2/m_{\tilde g}^2$, where $\mathcal{R}^d_{r i }$ for $r = 1,2,\ldots,6$ is the squark rotation matrix defined in the SLHA 
notation~\cite{Skands:2003cj, Allanach:2008qq}. $\mathcal{C}^{\prime}_{1,2,3}$ are obtained by flipping the chirality of $\mathcal{R}_{ri}^{d (*)}$ in $\mathcal{C}_{1,2,3}$. The loop functions are defined as \begin{align} B_0(x,y) &= \frac{x \ln x}{(x-y)(x-1)^2}+\frac{y \ln y}{(y-x)(y-1)^2}+\frac{1}{(x-1)(y-1)}, \\ B_2(x,y) &= \frac{x^2 \ln x}{(x-y)(x-1)^2}+\frac{y^2 \ln y}{(y-x)(y-1)^2}+\frac{1}{(x-1)(y-1)}. \end{align} From $\mu_{\textrm{SUSY}}$ to the hadronic scale, we solve the RG equations at the NLO level \cite{Buras:2001ra} and use the hadronic matrix elements in Ref.~\cite{Garron:2016mva}. Additionally, $[\mathcal{C}_1]_{ij}$ and $[\mathcal{C}_5]_{ij}$ receive the top-Yukawa contributions depending on $\mathcal{C}_{HQ}$ and $\mathcal{C}_{HD}$ as \begin{align} [\mathcal{C}_1]_{ij} &= \frac{\alpha [\lambda_t]_{ij}}{\pi s_W^2} \left[ [\mathcal{C}_{HQ}^{(1)}]_{ij}\,I_1(x_t,\mu_{\rm SUSY}) - [\mathcal{C}_{HQ}^{(3)}]_{ij}\,I_2(x_t,\mu_{\rm SUSY}) \right], \label{eq:epsKtop1} \\ [\mathcal{C}_5]_{ij} &= -\frac{2\alpha [\lambda_t]_{ij}}{\pi s_W^2} [\mathcal{C}_{HD}]_{ij}\, I_1(x_t,\mu_{\rm SUSY}), \label{eq:epsKtop2} \end{align} at the $Z$-boson mass scale. These results are derived as follows: The Wilson coefficients are evolved by solving the RG equations with the beta function \eqref{eq:Yt2} in the first leading logarithm approximation \eqref{eq:Yt3}, and then, matched onto the low-scale operators at the weak scale \eqref{eq:SMEFT4Q1}--\eqref{eq:SMEFT4Q3}. Also, the one-loop matchings, \eqref{eq:matching1} and \eqref{eq:matching2}, are taken into account to include the additional contributions of $\mathcal{C}_{HQ}$ and $\mathcal{C}_{HD}$ at the weak scale (see Ref.~\cite{Bobeth:2017xry}).\footnote{ The results are independent of the matching scale $\mu_W$ by including the one-loop matching conditions. Consequently, the logarithmic function becomes $\ln(\mu_{\rm SUSY}/m_W)$. 
} Equivalently, the same results are reproduced by substituting $\mu_W \to \mu_{\rm SUSY}$ in Eqs.~\eqref{eq:matching1} and \eqref{eq:matching2}. This is because the logarithmic scale dependence of the one-loop matching conditions has the same origin as the one-loop beta functions (see Ref.~\cite{Endo:2016tnu}). Note also that, in Eq.~\eqref{eq:epsKtop1}, the logarithmic dependence of $\mu_{\rm SUSY}$ cancels out because of $[\mathcal{C}_{HQ}^{(1)}]_{12} = [\mathcal{C}_{HQ}^{(3)}]_{12}$ in Eqs.~\eqref{eq:CHQ} and \eqref{eq:CHQ3}. On the other hand, the scale dependence in Eq.~\eqref{eq:epsKtop2} remains, and thus, $[\mathcal{C}_5]_{ij}$ is sensitive to $\mu_{\rm SUSY}$. The SM value is estimated to be \begin{align} \ensuremath{\varepsilon_K}^{\rm SM} = (2.12 \pm 0.18) \times 10^{-3}, \label{eq:epsKSM} \end{align} where the input SM parameters are found in Ref.~\cite{Jang:2017ieg} (cf., Ref.~\cite{Bailey:2015tba}). In particular, the Wolfenstein parameters are determined by the angle-only fit~\cite{Bevan:2013kaa}, and $|V_{cb}|$ obtained from inclusive semileptonic $B$ decays $(\bar{B} \to X_c \ell^{-} \bar{\nu})$ \cite{Amhis:2016xyh} is used.\footnote{ Recently, there have been debates about the systematic uncertainties of the exclusive determinations of $|V_{cb}|$~\cite{Bigi:2017njr,Grinstein:2017nlq,Bernlochner:2017xyx}. } We use lattice results for the $\xi_0$ parameter~\cite{Bai:2015nea}, which parametrizes the absorptive part of long-distance effects, and refrain from relying on the experimental result of \ensuremath{\varepsilon'/\varepsilon_K}, because we consider SUSY contributions to \ensuremath{\varepsilon'/\varepsilon_K}. On the other hand, the experimental result is~(cf., Ref.~\cite{Olive:2016xmw}) \begin{align} |\ensuremath{\varepsilon_K}^{\rm exp}| = (2.228 \pm 0.011) \times 10^{-3}.
\end{align} Therefore, the SUSY contributions are required to be within the range, \begin{align} -0.25 \times 10^{-3} < \ensuremath{\varepsilon_K}^{\rm SUSY} < 0.47 \times 10^{-3}, \end{align} at the $2\,\sigma$ level.\footnote{ In our analysis, the gluino contributions are much less constrained by the mass difference of the neutral kaons, $\Delta M_K$, because hadronic uncertainties are large. } \subsection{$\boldsymbol{K \to \pi\nu\bar\nu}$} The $Z$-penguin contributions affect the decays $K^+ \to \pi^+\nu\bar\nu$ and $K_L \to \pi^0\nu\bar\nu$. They are expressed as \cite{Buras:2015jaq, Buras:2015qea} \begin{align} \mathcal{B}(K^+\to\pi^+\nu\bar\nu) &= \kappa_+ \left[ \left( \frac{{\rm Im}\,X_{\rm eff}}{\lambda^5} \right)^2 + \left( \frac{{\rm Re}\,\lambda_c}{\lambda}P_c(X) + \frac{{\rm Re}\,X_{\rm eff}}{\lambda^5} \right)^2 \right], \\ \mathcal{B}(K_L\to\pi^0\nu\bar\nu) &= \kappa_L \left[ \frac{{\rm Im}\,X_{\rm eff}}{\lambda^5} \right]^2, \end{align} where $\lambda = |V_{us}|$, $\lambda_c = V_{cd}^*V_{cs}$, $\kappa_+ = (5.157 \pm 0.025) \times 10^{-11}(\lambda/0.225)^8$, $\kappa_L = (2.231 \pm 0.013) \times 10^{-10}(\lambda/0.225)^8$, and the charm contribution gives $P_c(X)= (9.39 \pm 0.31)\times 10^{-4} /\lambda^4 + (0.04 \pm 0.02)$. In terms of $\mathcal{C}_{HQ}$ and $\mathcal{C}_{HD}$, $X_{\rm eff}$ is approximated to be (cf., Ref.~\cite{Endo:2016tnu}) \begin{align} {\rm Re}\,X_{\rm eff} &= -4.83\times10^{-4} -5.62\times10^6\,\textrm{GeV}^2\, {\rm Re}\,\mathcal{C}_{H+}, \\ {\rm Im}\,X_{\rm eff} &= 2.12\times10^{-4} +5.62\times10^6\,\textrm{GeV}^2\, {\rm Im}\,\mathcal{C}_{H+}, \label{eq:KLpinn} \end{align} where the first term on the right-hand side of each equation is the SM contribution, and \begin{align} \mathcal{C}_{H+} = [\mathcal{C}_{HQ}^{(1)}]_{12}+[\mathcal{C}_{HQ}^{(3)}]_{12}+[\mathcal{C}_{HD}]_{12}. \end{align} The Wilson coefficients are estimated at the $Z$-boson mass scale.
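The dependence of the branching ratios on $\mathcal{C}_{H+}$ can be made explicit with a short numerical sketch; the CKM input ${\rm Re}\,\lambda_c \simeq -0.22$ is an illustrative assumption, while the remaining numbers are taken from the formulae above:

```python
LAM = 0.225                # lambda = |V_us|
KAPPA_P = 5.157e-11        # kappa_+ at lambda = 0.225
KAPPA_L = 2.231e-10        # kappa_L at lambda = 0.225
PC_X = 9.39e-4 / LAM**4 + 0.04
RE_LAMC = -0.22            # Re(V_cd^* V_cs), illustrative assumption

def br_K_to_pinunu(C_Hplus):
    """B(K+ -> pi+ nu nu) and B(KL -> pi0 nu nu) as functions of
    C_H+ = [C_HQ^(1)]_12 + [C_HQ^(3)]_12 + [C_HD]_12 (complex, GeV^-2, at m_Z)."""
    re_X = -4.83e-4 - 5.62e6 * C_Hplus.real
    im_X = 2.12e-4 + 5.62e6 * C_Hplus.imag
    br_p = KAPPA_P * ((im_X / LAM**5)**2
                      + (RE_LAMC / LAM * PC_X + re_X / LAM**5)**2)
    br_l = KAPPA_L * (im_X / LAM**5)**2
    return br_p, br_l

br_p_sm, br_l_sm = br_K_to_pinunu(0.0j)   # SM point, C_H+ = 0
```

At $\mathcal{C}_{H+}=0$ this reproduces the SM predictions quoted in the text at the few-percent level, the residual difference coming from the simplified CKM input.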
The SM predictions are known to be~\cite{Endo:2016tnu} \begin{align} \mathcal{B}(K^+\to\pi^+\nu\bar\nu)^{\rm SM} &= (8.5 \pm 0.5) \times 10^{-11}, \\ \mathcal{B}(K_L\to\pi^0\nu\bar\nu)^{\rm SM} &= (3.0 \pm 0.2) \times 10^{-11}, \end{align} while the experimental results are \cite{Artamonov:2008qb, Ahn:2009gb} \begin{align} \mathcal{B}(K^+\to\pi^+\nu\bar\nu)^{\rm exp} &= (17.3^{+11.5}_{-10.5}) \times 10^{-11}, \\ \mathcal{B}(K_L\to\pi^0\nu\bar\nu)^{\rm exp} & < 2.6 \times 10^{-8}.~~~[90\%~\mbox{C.L.}] \end{align} These experimental values will be improved in the near future. The NA62 experiment at CERN has already started the physics run and aims to measure $ \mathcal{B}(K^+\to\pi^+\nu\bar\nu)$ with a precision of $10 \%$ relative to the SM prediction~\cite{NA62:2017rwk}. The KOTO experiment at J-PARC aims to measure $\mathcal{B}(K_L\to\pi^0\nu\bar\nu)$ around the SM sensitivity by 2021~\cite{KOTOfuture1,KOTOfuture2}. \subsection{$\boldsymbol{K_L \to \mu^+\mu^-}$} The decay rate of $K_L \to \mu^+\mu^-$, which is a ${C\hspace{-0.2mm}P}$-conserving~ process, is sensitive to the real part of the flavor-changing $Z$ couplings. There are large theoretical uncertainties from a long-distance (LD) contribution. In addition, an unknown sign of $\mathcal{A}\left( K_L \to \gamma \gamma \right)$ conceals a relative sign between the LD and short-distance (SD) amplitudes. One can, therefore, estimate only the SD branching ratio, which is expressed as~\cite{Buras:2015jaq,Gorbahn:2006bm,Bobeth:2013tba} \begin{align} \mathcal{B}(K_L \to \mu^+\mu^-)_{\rm SD} &= \kappa_\mu \left( \frac{{\rm Re}\,\lambda_c}{\lambda}P_c(Y) + \frac{{\rm Re}\,Y_{\rm eff}}{\lambda^5} \right)^2, \end{align} where $\kappa_\mu=(2.01\pm0.02)\times 10^{-9}(\lambda/0.225)^8$, and the charm-quark contribution is $P_c(Y)= (0.115 \pm 0.018)\times (0.225/\lambda)^4$.
Here, $Y_{\rm eff}$ is approximately given as~(cf., Ref.~\cite{Endo:2016tnu}) \begin{align} {\rm Re}\,Y_{\rm eff} &= -3.07\times10^{-4}- 5.62\times10^6\,\textrm{GeV}^2\,{\rm Re}\,\mathcal{C}_{H-}, \end{align} where the first term in the right-hand side is the SM contribution, and \begin{align} \mathcal{C}_{H-} = [\mathcal{C}_{HQ}^{(1)}]_{12}+[\mathcal{C}_{HQ}^{(3)}]_{12}-[\mathcal{C}_{HD}]_{12} . \end{align} The Wilson coefficients are estimated at the $Z$-boson mass scale. The SM value is obtained as~\cite{Endo:2016tnu} \begin{align} \mathcal{B}(K_L \to \mu^+\mu^-)_{\rm SD}^{\rm SM} = (0.83 \pm 0.10) \times 10^{-9}. \end{align} It is challenging to extract the SD contribution from the experimental value. An upper bound is estimated as \cite{Isidori:2003ts} \begin{align} \mathcal{B}(K_L \to \mu^+\mu^-)_{\rm SD}^{\rm exp} < 2.5 \times 10^{-9}. \end{align} Since the constraint is much weaker than the SM uncertainties, we simply impose a bound, \begin{align} -1.81 \times 10^{-10}~(\textrm{GeV})^{-2} < {\rm Re}\,\mathcal{C}_{H-} < 4.85 \times 10^{-11} ~(\textrm{GeV})^{-2}. \end{align} \subsection{$\boldsymbol{K_S \to \mu^+\mu^-}$} The decay, $K_S \to \mu^+\mu^-$, proceeds via LD ${C\hspace{-0.2mm}P}$-conserving~ P-wave and SD ${C\hspace{-0.2mm}P}$-violating~ S-wave processes. Since the decay rate is dominated by the former, whose uncertainty is large, the sensitivity to the imaginary component of the flavor-changing $Z$ couplings is diminished~\cite{Ecker:1991ru,Isidori:2003ts,Mescia:2006jd}. Interestingly, the SD contribution is enhanced through an interference between the $K_L$ and $K_S$ states in the neutral kaon beam~\cite{DAmbrosio:2017klp}. 
The effective branching ratio of $ K_S \to \mu^+ \mu^- $ after including the interference is expressed as (cf., Ref.~\cite{DAmbrosio:2017klp}) \begin{align} \mathcal{B} ( K_S \to \mu^+ \mu^- )_{\rm eff} = \mathcal{B} ( K_S \to \mu^+ \mu^- ) + D \cdot \mathcal{B} ( K_S \to \mu^+ \mu^- )_{\rm int}, \end{align} where the dilution factor $D$ is the initial asymmetry between the numbers of $K^0$ and $\overline{K}{}^0$, \begin{align} D = \left( K^0 - \overline{K}{}^0 \right) / \left( K^0 + \overline{K}{}^0 \right). \end{align} On the right-hand side, the branching ratio is approximated to be \begin{align} \mathcal{B} ( K_S \to \mu^+ \mu^- ) &= 4.99 \times 10^{-12} + 3.30 \times 10^{8}\,\textrm{GeV}^4 \left[ 2.39 \times 10^{-11} \,\textrm{GeV}^{-2} + {\rm Im}\,\mathcal{C}_{H-}\right]^2, \label{eq:KSmmBr} \end{align} where the first and second terms on the right-hand side come from the LD and SD contributions, respectively. Here, the Wilson coefficients are estimated at the $Z$-boson mass scale. On the other hand, the interference contribution is given as \begin{align} \mathcal{B}( K_S \to \mu^+ \mu^- )_{\textrm{int}} = \left\{ \begin{array}{ll} - 7.69\times10^{7}\,\textrm{GeV}^4 \left[ 2.39 \times 10^{-11}\,\textrm{GeV}^{-2} + \textrm{Im}\,\mathcal{C}_{H-} \right] & \\ \qquad\qquad\qquad~~~~ \times \left[ 1.73\times 10^{-9}\,\textrm{GeV}^{-2} - \textrm{Re}\,\mathcal{C}_{H-} \right], & (\eta_\mathcal{A}=+) \\ 7.69\times10^{7}\,\textrm{GeV}^4 \left[ 2.39 \times 10^{-11}\,\textrm{GeV}^{-2} + \textrm{Im}\,\mathcal{C}_{H-} \right] & \\ \qquad\qquad\qquad~~~~ \times \left[ 1.86\times 10^{-9}\,\textrm{GeV}^{-2} + \textrm{Re}\,\mathcal{C}_{H-} \right]. & (\eta_\mathcal{A}=-) \end{array} \right. \label{eq:Breff} \end{align} The Wilson coefficients are estimated at the $Z$-boson mass scale.
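Equations~\eqref{eq:KSmmBr} and \eqref{eq:Breff} can be combined into a short numerical sketch; as a consistency check, setting $\mathcal{C}_{H-}=0$ reproduces the SM values quoted in the text for $D=0$ and for $D=1$, $\eta_\mathcal{A}=-1$:

```python
def br_KS_mumu_eff(C_Hminus, D=0.0, eta_A=-1):
    """Effective B(KS -> mu mu) from Eqs. (KSmmBr) and (Breff);
    C_Hminus is complex, in GeV^-2, evaluated at mu = m_Z."""
    im = 2.39e-11 + C_Hminus.imag
    br = 4.99e-12 + 3.30e8 * im**2            # LD + SD parts
    if eta_A > 0:
        br_int = -7.69e7 * im * (1.73e-9 - C_Hminus.real)
    else:
        br_int = 7.69e7 * im * (1.86e-9 + C_Hminus.real)
    return br + D * br_int

br_D0 = br_KS_mumu_eff(0.0j, D=0.0)
br_D1 = br_KS_mumu_eff(0.0j, D=1.0, eta_A=-1)
```

The two checks agree with the SM predictions of Eqs.~\eqref{eq:KSMUMU_SMD0} and \eqref{eq:KSMUMU_SMD1} to the quoted precision.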
The unknown relative sign between the LD and SD contributions in $K_L \to \mu^+ \mu^-$ gives two different predictions of $\mathcal{B} \left( K_S \to \mu^+ \mu^- \right)_{\rm int}$, which are parametrized by $\eta_\mathcal{A}$ (see Refs.~\cite{Cirigliano:2011ny,DAmbrosio:2017klp}) \begin{align} \eta_\mathcal{A} = \textrm{sgn} \left[ \frac{\mathcal{A}\left( K_L \to \gamma \gamma \right)}{\mathcal{A} \left( K_L \to (\pi^0)^{\ast} \to \gamma \gamma \right)} \right]. \end{align} Here, scalar operator contributions are discarded in the above formulae: they can be significant especially when $\tan \beta$ is large and $m_A$ is small~\cite{Chobanova:2017rkj}. The SM prediction depends on $D$ and $\eta_\mathcal{A}$, which are determined by experiments. For $D=0$, it is obtained as~\cite{Ecker:1991ru,Isidori:2003ts,DAmbrosio:2017klp} \begin{align} \mathcal{B}(K_S \to \mu^+ \mu^- )^{\rm SM} = \left( 5.18 \pm 1.50 \right) \times 10^{-12}, \label{eq:KSMUMU_SMD0} \end{align} while for $D=1$ and $\eta_\mathcal{A}=-1$, the SM prediction becomes~\cite{DAmbrosio:2017klp} \begin{align} \mathcal{B}(K_S \to \mu^+ \mu^- )_{\rm eff}^{\rm SM} = \left( 8.59 \pm 1.50 \right) \times 10^{-12}. \label{eq:KSMUMU_SMD1} \end{align} On the other hand, the current experimental bound based on the LHCb Run-1 result using an integrated luminosity of 3\,fb${}^{-1}$ is~\cite{Aaij:2017tia} \begin{align} \mathcal{B}(K_S \to \mu^+ \mu^- )^{\rm exp} < 0.8 \times 10^{-9}. ~~~[90\%~\mbox{C.L.}] \end{align} The experimental sensitivity is expected to reach $\mathcal{B}(K_S \to \mu^+ \mu^- ) = \mathcal{O}(10^{-11})$ by the end of the LHCb Run-2, and the Run-3 project aims to achieve a sensitivity at the level of the SM prediction \cite{LHCbupgrade}. \subsection{$\boldsymbol{b\to d\gamma}$ and $\boldsymbol{b\to s\gamma}$} In this paper, we consider flavor violations in the scalar trilinear couplings.
They contribute to the decays of $b\to d_i\gamma$ $(d_i=d,s)$ at the one-loop level.\footnote{ They also contribute to the ($CP$-violating) $B_{d,s}$ mixings. In the parameter regions of our interest, gluino box contributions to them are smaller than the current experimental and theoretical uncertainties. Also, the $CP$-violating scalar trilinear couplings can contribute to the electric dipole moments (EDMs) e.g., of the neutron. Since the $CP$ phases are introduced in the flavor off-diagonal components, the gluino contributions to the EDMs satisfy the experimental limits. } The decays are described by the effective Hamiltonian, \begin{align} \mathcal{H}_{\rm eff} = -\frac{4G_F}{\sqrt{2}} [\lambda_t]_{i3} \Big[ \mathcal{C}_{7\gamma} \mathcal{O}_{7\gamma} + \mathcal{C}_{8g} \mathcal{O}_{8g} \Big] + (L\leftrightarrow R), \end{align} where the effective operators are defined as \begin{align} \mathcal{O}_{7\gamma} = \frac{e}{16\pi^2} m_b\, \bar d_i \sigma^{\mu\nu} P_R b\, F_{\mu\nu},~~~ \mathcal{O}_{8g} = \frac{g_3}{16\pi^2} m_b\, \bar d_i \sigma^{\mu\nu} T^a P_R b\, G_{\mu\nu}^a, \end{align} where $e>0$ and $g_3 >0$, and the covariant derivatives for the quark and squark follow the same sign convention as Eq.~\eqref{eq:covariantdel}. 
At the one-loop level, the gluino contributions are obtained as \begin{align} \mathcal{C}_{7\gamma} &= \frac{\sqrt{2}\pi\alpha_s}{4G_F[\lambda_t]_{i3}m_{\tilde g}^2} \bigg[ \mathcal{R}_{ri}^{d*}\mathcal{R}_{r3}^{d} \left(\frac{8}{9}D_1(x_r)\right) - \frac{m_{\tilde g}}{m_b} \mathcal{R}_{ri}^{d*}\mathcal{R}_{r6}^{d} \left(\frac{8}{9}D_2(x_r)\right) \bigg], \\ \mathcal{C}_{8g} &= \frac{\sqrt{2}\pi\alpha_s}{4G_F[\lambda_t]_{i3}m_{\tilde g}^2} \bigg[ \mathcal{R}_{ri}^{d*}\mathcal{R}_{r3}^{d} \left(\frac{1}{3}D_1(x_r) - 3D_3(x_r)\right) \notag \\ &\qquad\qquad\qquad\qquad - \frac{m_{\tilde g}}{m_b} \mathcal{R}_{ri}^{d*}\mathcal{R}_{r6}^{d} \left(\frac{1}{3}D_2(x_r) - 3D_4(x_r)\right) \bigg], \label{eq:gluinochromo} \end{align} where $x_r = m_{\tilde d_r}^2/m_{\tilde g}^2$, and the loop functions are defined to be \begin{align} D_1(x) &= \frac{-x^3+6x^2-3x-2-6x\ln x}{6(1-x)^4}, \\ D_2(x) &= \frac{x^2-1-2x\ln x}{(1-x)^3}, \\ D_3(x) &= \frac{2x^3+3x^2-6x+1-6x^2\ln x}{6(1-x)^4}, \\ D_4(x) &= \frac{3x^2-4x+1-2x^2\ln x}{(1-x)^3}. \end{align} Also, $\mathcal{C}^{\prime}_{7\gamma}$ and $ \mathcal{C}^{\prime}_{8g}$ are obtained by flipping the chirality of $\mathcal{R}_{ri}^{d (*)}$ in $\mathcal{C}_{7\gamma}$ and $ \mathcal{C}_{8g}$, respectively. In the analysis, an approximation formula in Ref.~\cite{Malm:2015oda} is used to estimate the SUSY contributions to the branching ratio of $b\to s\gamma$, where the Wilson coefficients are set at $\mu_b = 4.8$\,\textrm{GeV}. For $\mathcal{B}(\bar B \to X_d \gamma)$, the formula in Refs.~\cite{Hurth:2003dk,Evans:2016lzo} is used, where the SUSY contributions to the Wilson coefficients at the top-mass scale are needed. The latest results of the SM values are~\cite{Misiak:2015xwa} \begin{align} \mathcal{B}(\bar B \to X_s \gamma)^{\rm SM} &= (3.36 \pm 0.23) \times 10^{-4}, \\ \mathcal{B}(\bar B \to X_d \gamma)^{\rm SM} &= (1.73^{+0.12}_{-0.22}) \times 10^{-5}, \end{align} for $E_\gamma>1.6\,{\rm GeV}$. 
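A direct transcription of the dipole loop functions, useful for checking their signs and magnitudes (the sample point $x=2$ is illustrative):

```python
import math

def D1(x):
    return (-x**3 + 6.0*x**2 - 3.0*x - 2.0 - 6.0*x*math.log(x)) / (6.0*(1.0 - x)**4)

def D2(x):
    return (x**2 - 1.0 - 2.0*x*math.log(x)) / (1.0 - x)**3

def D3(x):
    return (2.0*x**3 + 3.0*x**2 - 6.0*x + 1.0 - 6.0*x**2*math.log(x)) / (6.0*(1.0 - x)**4)

def D4(x):
    return (3.0*x**2 - 4.0*x + 1.0 - 2.0*x**2*math.log(x)) / (1.0 - x)**3
```

The chirality-flipping terms proportional to $m_{\tilde g}/m_b$ multiply $D_2$ and $D_4$, which are numerically larger in magnitude than $D_1$ and $D_3$ at moderate $x$, reflecting the enhancement of the trilinear-coupling contributions.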
On the other hand, the experimental results are~\cite{Amhis:2016xyh,delAmoSanchez:2010ae,Crivellin:2011ba} \begin{align} \mathcal{B}(\bar B \to X_s \gamma)^{\rm exp} &= (3.32 \pm 0.15) \times 10^{-4}, \\ \mathcal{B}(\bar B \to X_d \gamma)^{\rm exp} &= (1.41 \pm 0.57) \times 10^{-5}, \end{align} for $E_\gamma>1.6\,{\rm GeV}$. In the analysis, the theoretical prediction including the SM and SUSY contributions is required to be consistent with the experimental result at the $2\sigma$ level. $CP$ violations of $b\to d_i\gamma$ are sensitive to the imaginary parts of flavor-violating scalar trilinear couplings. Long-distance effects tend to spoil the sensitivity~\cite{Benzke:2010tq}. This could be resolved by taking a difference of the ${C\hspace{-0.2mm}P}\hspace{-0.4mm}~$ asymmetries~\cite{Benzke:2010tq}, \begin{align} \Delta A_{\rm CP}(b\to s\gamma) &= A_{\rm CP}(B^-\to X_s^- \gamma)-A_{\rm CP}(\bar B^0\to X_s^0 \gamma) \notag \\ &= 4\pi^2\alpha_s(\mu_b)\,\frac{\widetilde\Lambda_{78}}{m_b}\,{\rm Im} \left[ \frac{\mathcal{C}_{7\gamma}^* \mathcal{C}_{8g} + \mathcal{C}_{7\gamma}^{\prime*} \mathcal{C}'_{8g}}{|\mathcal{C}_{7\gamma}|^2+|\mathcal{C}'_{7\gamma}|^2} \right], \label{eq:DACP} \end{align} where the right-handed contributions are taken into account~\cite{Kagan:1998bh}. The hadronic parameter $\widetilde\Lambda_{78}$ introduces an uncertainty into the analysis and is estimated to be $12\,\textrm{MeV} < \widetilde\Lambda_{78} < 190\,\textrm{MeV}$~\cite{Malm:2015oda}. We take an average value, $\widetilde\Lambda_{78} = 89\,\textrm{MeV}$, in the analysis. The Wilson coefficients include both the SM and SUSY contributions, which are evaluated at the scale $\mu_b = 2\,\textrm{GeV}$. The SM prediction is expected to be strongly suppressed, $\Delta A_{\rm CP}(b\to s\gamma)^{\rm SM} \approx 0$~\cite{Benzke:2010tq}.
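For orientation, Eq.~\eqref{eq:DACP} is straightforward to evaluate once the Wilson coefficients are known. The sketch below is illustrative only: the default values of $\alpha_s(\mu_b)$ and $m_b$ are assumptions of ours (roughly appropriate for $\mu_b\sim 2$ GeV), while $\widetilde\Lambda_{78}=89\,\textrm{MeV}$ follows the choice made above.

```python
import math

def delta_acp(c7, c8, c7p, c8p, alpha_s=0.30, lam78_gev=0.089, mb_gev=4.8):
    """Eq. (DACP); alpha_s(mu_b) and m_b defaults are illustrative guesses,
    lam78_gev is the central Lambda_78 = 89 MeV used in the text."""
    num = (c7.conjugate() * c8 + c7p.conjugate() * c8p).imag
    den = abs(c7)**2 + abs(c7p)**2
    return 4 * math.pi**2 * alpha_s * (lam78_gev / mb_gev) * num / den
```

By construction the asymmetry difference vanishes when all coefficients are relatively real, consistent with the strongly suppressed SM expectation in the absence of new weak phases.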
On the other hand, the experimental result is~\cite{Lees:2014uoa} \begin{align} \Delta A_{\rm CP}(b\to s\gamma)^{\rm exp} = (5.0 \pm 3.9_{\rm stat} \pm 1.5_{\rm syst})\% \label{eq:ExpDAcp} \end{align} from the BaBar experiment. The Belle experiment also published a result on $\Delta A_{\rm CP}(B \to K^*\gamma)$~\cite{Horiguchi:2017ntw}, \begin{align} \Delta A_{\rm CP}(B \to K^*\gamma)^{\rm exp} = (2.4 \pm2.8_{\rm stat} \pm 0.5_{\rm syst})\%. \end{align} The asymmetry of the inclusive decay is expected to be comparable to that of the exclusive mode~\cite{Ishikawa}. Both results are consistent with a null asymmetry difference. Since the uncertainties are large, the SUSY parameters will not be constrained in the region of our interest. In the future, the uncertainty is projected to reach 0.37\% for $\Delta A_{\rm CP}(b\to s\gamma)$ at Belle II with $50\,{\rm ab}^{-1}$~\cite{Sandilya:2017mkb}.\footnote{ Although the experimental uncertainty of the direct ${C\hspace{-0.2mm}P}\hspace{-0.4mm}~$ asymmetry $A_{\rm CP}(b \to s\gamma)$ is also projected to be at the sub-percent level~\cite{Sandilya:2017mkb}, long-distance contributions as well as hadronic uncertainties spoil the SM prediction~\cite{Benzke:2010tq}. } \section{Vacuum stability} \label{sec:vacuum} The Wilson coefficients in Eqs.~\eqref{eq:CHQ}--\eqref{eq:CHD} are enhanced by large off-diagonal trilinear couplings, $\left(T_D\right)_{i3}$ and $\left( T_D\right)_{3i}$ $(i=1,2)$. Such large trilinear couplings tend to generate dangerous charge and color breaking (CCB) global minima in the scalar potential \cite{Park:2010wf}. Hence, they are limited by the vacuum (meta-)stability condition: the lifetime of the EW vacuum must be longer than the age of the Universe. In this section, we investigate the vacuum stability conditions on $\left(T_D\right)_{i3}$ and $\left( T_D\right)_{3i}$.
The vacuum decay rate per unit volume is represented by $\Gamma/V = A \exp \left( - S_E \right)$, where $S_E$ is the Euclidean action of the bounce solution~\cite{Coleman:1977py}. \texttt{CosmoTransitions} 2.0.2~\cite{Wainwright:2011kj} is used to estimate $S_E$ at the semiclassical level. The prefactor $A$ cannot be determined unless radiative corrections are taken into account~\cite{Callan:1977pt, Endo:2015ixx}. We adopt an order-of-magnitude estimation, $A \sim \left( 100\,\textrm{GeV}\right)^4$. By requiring $(\Gamma/V)^{1/4}$ to be smaller than the current Hubble parameter, the lifetime of the EW vacuum becomes longer than the age of the Universe. The condition corresponds to $S_E \gtrsim 400$. In this paper, thermal effects and radiative corrections to the vacuum transitions are neglected. The bounce solution and $S_E$ are determined by the scalar potential. The potential relevant for the vacuum decay generated by $\left(T_D\right)_{13}$ and/or $\left(T_D\right)_{31}$ is \begin{align} V &= \frac{1}{2} m_{11}^2 \, h_d^2 + \frac{1}{2} m_{22}^2 \, h_u^2 - m_{12}^2 \, h_d h_u \notag \\ &~~ + \frac{1}{2} m_{\tilde Q,1}^2 \,\tilde d_L^2 + \frac{1}{2} m_{\tilde Q,3}^2 \,\tilde b_L^2 + \frac{1}{2} m_{\tilde D,1}^2 \,\tilde d_R^2 + \frac{1}{2} m_{\tilde D,3}^2 \,\tilde b_R^2 \notag \\ &~~ + \frac{1}{\sqrt{2}} \left[ \left(T_D\right)_{33} h_d - y_b \mu h_u \right]\tilde b_L \tilde b_R + \frac{1}{\sqrt{2}} \left(T_D \right)_{13} h_d \tilde d_L \tilde b_R + \frac{1}{\sqrt{2}} \left(T_D \right)_{31} h_d \tilde b_L \tilde d_R \notag \\ & ~~ + \frac{1}{4} y_b^2 (\tilde b_L^2 \tilde b_R^2 + \tilde b_L^2 h_d^2 + \tilde b_R^2 h_d^2) \notag \\ &~~ + \frac{1}{24} g_3^2 (\tilde d_L^2 + \tilde b_L^2 - \tilde d_R^2 - \tilde b_R^2)^2 + \frac{1}{32} g_2^2 (h_u^2 - h_d^2 + \tilde d_L^2 + \tilde b_L^2)^2 \notag \\ &~~ + \frac{1}{32} g_Y^2 \left(h_u^2 - h_d^2 + \frac{1}{3}\tilde d_L^2 + \frac{1}{3}\tilde b_L^2 + \frac{2}{3}\tilde d_R^2 + \frac{2}{3}\tilde b_R^2 \right)^2, 
\label{eq:scalarPT} \end{align} where the coefficients are \begin{align} m_{11}^2 &= m_A^2 \sin^2\beta - \frac{1}{2} m_Z^2 \cos 2\beta, \\ m_{22}^2 &= m_A^2 \cos^2\beta + \frac{1}{2} m_Z^2 \cos 2\beta, \\ m_{12}^2 &= \frac{1}{2} m_A^2 \sin 2\beta. \end{align} Here, $h_d$, $h_u$, $\tilde d_L$, $\tilde b_L$, $\tilde d_R$, $\tilde b_R$ are real scalar fields with $\langle h_d \rangle = v \cos\beta $ and $\langle h_u \rangle = v \sin\beta $ at the EW vacuum. In this potential, all coefficients can be rotated to be real by rephasing the fields. The terms proportional to light flavor Yukawas are discarded, because those contributions are negligible. The scalar potential for $\tilde s_{L}$, $\tilde s_{R}$ is obtained by substituting $\tilde d_{L,R}$ $\to $ $\tilde s_{L,R}$, $\left(T_D\right)_{13}$ $\to $ $\left(T_D\right)_{23}$, and $\left(T_D\right)_{31}$ $\to $ $\left(T_D\right)_{32}$. \begin{figure}[t] \begin{center} \includegraphics[width=0.6\textwidth, bb= 0 0 546 343]{vacuum_A.pdf} \caption{ The upper bound on $\left| \left(T_D \right)_{i3} \right|$ for $i=1,2$ from the vacuum stability condition as a function of $m_{\tilde{Q}} $. Here, $\tan \beta =5,$ 10, 30, 50 are taken. The solid lines are in the case of $m_A = m_{\tilde{Q},i} = m_{\tilde{D},3} \equiv m_{\tilde{Q}}$, while the dashed lines represent the decoupling limit of the heavy Higgs multiplets, $ m_A \gg m_{\tilde{Q},i} = m_{\tilde{D},3} \equiv m_{\tilde{Q}}$. } \label{fig:TD13bound} \end{center} \end{figure} Let us first consider the vacuum stability condition when only $\left(T_D\right)_{13}$ is large. 
The scalar potential is simplified to be \begin{align} V &= \frac{1}{2} m_{11}^2 \, h_d^2 + \frac{1}{2} m_{22}^2 \, h_u^2 - m_{12}^2 \, h_d h_u + \frac{1}{2} m_{\tilde Q,1}^2 \,\tilde d_L^2 + \frac{1}{2} m_{\tilde D,3}^2 \,\tilde b_R^2 + \frac{1}{\sqrt{2}} \left(T_D\right)_{13} h_d \tilde d_L \tilde b_R \\ &~~ + \frac{1}{4} y_b^2 \tilde b_R^2 h_d^2 + \frac{1}{24} g_3^2 (\tilde d_L^2 - \tilde b_R^2)^2 + \frac{1}{32} g_2^2 (h_u^2 - h_d^2 + \tilde d_L^2)^2 + \frac{1}{32} g_Y^2 \left(h_u^2 - h_d^2 + \frac{1}{3}\tilde d_L^2 + \frac{2}{3}\tilde b_R^2 \right)^2. \notag \end{align} When $m_A \sim m_{\tilde{Q},1} \sim m_{\tilde{D},3}$, CCB vacua appear around the $h_d$--$\tilde d_L$--$\tilde b_R$ plane. In Fig.~\ref{fig:TD13bound}, the solid lines show upper bounds on $\left| \left(T_D \right)_{13} \right|$ for $\tan \beta = 5$, 10, 30, and 50. We assumed $m_A = m_{\tilde{Q},1} = m_{\tilde{D},3}$. It is shown that the upper bounds are proportional to $m_{\tilde Q}$. Also, the results depend only slightly on $\tan \beta$. This is because the scalar potential is stabilized by the quartic coupling $y_b^2 \tilde b_R^2 h_d^2 \sim \left( 2 m_b^2 /v^2\right) \tan^2 \beta\, \tilde b_R^2 h_d^2 $ when $\tan \beta$ is large. \begin{figure}[t] \begin{center} \includegraphics[width=0.6\textwidth, bb= 0 0 341 216]{vacuum_B.pdf} \caption{The vacuum stability condition on $\left| \left(T_D \right)_{i3} \right|$ for $i=1,2$ as a function of $m_A$. Here, $m_{\tilde{Q},i} = m_{\tilde{D},3} = 10\,\textrm{TeV}$, and $\tan \beta$ = 5 and 30 are taken. } \label{fig:TD13_MA} \end{center} \end{figure} When $m_A$ is larger than $m_{\tilde{Q},1} \sim m_{\tilde{D},3}$, the position of the CCB vacuum approaches a $H$--$\tilde d_L$--$\tilde b_R$ plane, where $H$ includes the SM-like Higgs boson, $H = h_{\rm SM}+v$. In Fig.~\ref{fig:TD13_MA}, the $m_A$ dependence of the upper bound is shown. Here, $\tan \beta =5$ and 30 are taken. We found that the vacuum stability condition is relaxed for large $m_A$. 
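The bounds in these figures rest on the metastability criterion $S_E \gtrsim 400$ quoted above; its origin can be reproduced with a one-line estimate. The sketch below uses an illustrative value of the present Hubble parameter in GeV, which is our own input rather than a number taken from the text.

```python
import math

# Reproduce S_E >~ 400: require (Gamma/V)^(1/4) = A^(1/4) exp(-S_E/4) < H_0,
# with the order-of-magnitude prefactor A ~ (100 GeV)^4.
# H_0 ~ 1.5e-42 GeV is an illustrative value (assumption, not from the text).
A_fourth_root_gev = 100.0
H0_gev = 1.5e-42
S_E_min = 4.0 * math.log(A_fourth_root_gev / H0_gev)
print(S_E_min)  # roughly 4e2
```

The large logarithm makes the threshold insensitive to the precise choice of $A$: varying $A^{1/4}$ by an order of magnitude shifts $S_E^{\min}$ by only about 10.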
In the decoupling limit of the heavy Higgs bosons ($m_A^2 \gg m_Z^2, \alpha \to \beta - \pi/2$), the scalar potential can be expressed by $H$, $\tilde d_L$, and $\tilde b_R$ as \begin{align} V &= - \frac{1}{4} m_Z^2 \cos^2 2\beta\, H^2 + \frac{1}{2} m_{\tilde Q,1}^2 \,\tilde d_L^2 + \frac{1}{2} m_{\tilde D,3}^2 \,\tilde b_R^2 + \frac{1}{\sqrt{2}} \left(T_D\right)_{13} \cos\beta\, H \tilde d_L \tilde b_R \notag \\ &~~ + \frac{1}{4} y_b^2 \tilde b_R^2 H^2 \cos^2\beta + \frac{1}{24} g_3^2 (\tilde d_L^2 - \tilde b_R^2)^2 + \frac{1}{32} g_2^2 (H^2 \cos2\beta - \tilde d_L^2)^2 \notag \\ &~~ + \frac{1}{32} g_Y^2 \left( H^2 \cos2\beta - \frac{1}{3}\tilde d_L^2 - \frac{2}{3}\tilde b_R^2 \right)^2. \label{eq:Vdecoupled} \end{align} The upper bounds on $\left| \left(T_D \right)_{13} \right|$ are shown by the dashed lines in Fig.~\ref{fig:TD13bound}.\footnote{ In this scalar potential, the SM-like Higgs boson is lighter than 125\,\textrm{GeV}. The vacuum stability condition can be evaluated naively by adding top-stop radiative corrections, $\left(g_2^2 + g_Y^2 \right) \delta_H^{(t)} \sin^4 \beta H^4 /8 $,~\cite{Hisano:2010re, Kitahara:2012pb, Kitahara:2013lfa, Carena:2012mw} to Eq.~\eqref{eq:Vdecoupled} in order to achieve the 125\,\textrm{GeV}~SM-like Higgs boson at the EW vacuum. We found that Eq.~\eqref{eq:vacuumfit} is barely changed. Dedicated studies are needed to fully include the radiative corrections (see Ref.~\cite{Endo:2015ixx}). } Again, they are proportional to $m_{\tilde Q}$. In contrast to the case of $m_A \sim m_{\tilde{Q}}$, the result is almost proportional to $\tan \beta$. This is understood from the factor of $\cos \beta$ associated with $\left(T_D\right)_{13}$. 
A fitting formula of the vacuum stability condition in the large $m_A$ limit with $m_{\tilde{Q},1} = m_{\tilde{D},3} \equiv m_{\tilde{Q}}$ is derived as \beq \frac{\left| \left(T_D\right)_{13}\right| }{\tan \beta} \lesssim - 0.186 \,\textrm{TeV}+ 1.675\, m_{\tilde{Q}}, \label{eq:vacuumfit} \eeq where the phase of $\left(T_D\right)_{i3}$ is taken into account. This formula works well for $m_{\tilde{Q}} > 1\,\textrm{TeV}$. Let us next turn on $\left(T_D\right)_{23}$ in addition to $\left(T_D\right)_{13}$. The scalar trilinear term becomes \begin{align} V \supset \frac{1}{\sqrt{2}} \left[ \left(T_D\right)_{13} \tilde d_L + \left(T_D\right)_{23} \tilde s_L \right] \tilde b_R h_d. \end{align} Here, $\left(T_D\right)_{13,23}$ are taken to be real by rephasing the scalar fields. By mixing $\tilde d_L$ and $\tilde s_L$, one can obtain \begin{align} V \supset \frac{1}{\sqrt{2}} \left[\left(T_D\right)_{13}^2 + \left(T_D\right)_{23}^2\right]^{1/2}\, \tilde d_L' \tilde b_R h_d, \end{align} where $\tilde d_L = \tilde d_L' \cos\theta - \tilde s_L'\sin\theta$ and $\tilde s_L = \tilde d_L' \sin\theta + \tilde s_L'\cos\theta$ with $\tan\theta = \left(T_D\right)_{23}/\left(T_D\right)_{13}$. When $m_{\tilde Q,1}^2 = m_{\tilde Q,2}^2 \equiv m_{\tilde Q}^2$, the scalar potential of $\tilde d_L'$ is obtained from that of $\tilde d_L$ by substituting $\left(T_D\right)_{13} \to \left[\left(T_D\right)_{13}^2 + \left(T_D\right)_{23}^2\right]^{1/2}$ as well as $\tilde d_L \to \tilde d_L'$. Therefore, the vacuum stability condition \eqref{eq:vacuumfit} is extended to be \beq \frac{ \sqrt{|\left(T_D\right)_{13}|^2 + |\left(T_D\right)_{23}|^2} }{\tan \beta} \lesssim - 0.186 \,\textrm{TeV}+ 1.675\, m_{\tilde{Q}}, \label{eq:vacuumfit2} \eeq where the phases of $\left(T_D\right)_{13,23}$ are taken into account appropriately. 
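The fitting formulas \eqref{eq:vacuumfit} and \eqref{eq:vacuumfit2} are simple enough to encode directly. A minimal sketch (masses in TeV; complex $T_D$ entries are allowed since only their moduli enter the bound):

```python
import math

def vacuum_stable(td13, td23, tan_beta, m_sq_tev):
    """Eq. (vacuumfit2): valid for m_Q > 1 TeV with decoupled heavy Higgses.
    td13, td23 are in TeV and may be complex; only their moduli matter."""
    bound_tev = -0.186 + 1.675 * m_sq_tev
    return math.hypot(abs(td13), abs(td23)) / tan_beta <= bound_tev
```

For example, with $\tan\beta = 1$ and $m_{\tilde Q} = 1$ TeV, the single-coupling bound evaluates to about $1.49$ TeV.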
The formula is valid when $m_{\tilde{Q}} \equiv m_{\tilde{Q},1} = m_{\tilde{Q},2} = m_{\tilde{D},3} > 1\,\textrm{TeV}$ and $m_A$ is decoupled.\footnote{ We have validated the formula \eqref{eq:vacuumfit2} explicitly by analyzing the bounce action of the scalar potential of $H$, $\tilde d_L$, $\tilde s_L$, and $\tilde b_R$. } When only $\left(T_D\right)_{31}$ is large, the potential becomes \begin{align} \label{eq:potential2} V &= \frac{1}{2} m_{11}^2 \, h_d^2 + \frac{1}{2} m_{22}^2 \, h_u^2 - m_{12}^2 \, h_d h_u + \frac{1}{2} m_{\tilde Q,3}^2 \,\tilde b_L^2 + \frac{1}{2} m_{\tilde D,1}^2 \,\tilde d_R^2 + \frac{1}{\sqrt{2}} \left(T_D\right)_{31} h_d \tilde b_L \tilde d_R \\ &~~ + \frac{1}{4} y_b^2 \tilde b_L^2 h_d^2 + \frac{1}{24} g_3^2 (\tilde b_L^2 - \tilde d_R^2)^2 + \frac{1}{32} g_2^2 (h_u^2 - h_d^2 + \tilde b_L^2)^2 + \frac{1}{32} g_Y^2 \left( h_u^2 - h_d^2 + \frac{1}{3}\tilde b_L^2 + \frac{2}{3}\tilde d_R^2 \right)^2. \notag \end{align} By repeating the above procedure, one can obtain quantitatively the same fitting formula for $\left(T_D\right)_{3i}$ as Eq.~\eqref{eq:vacuumfit2}, \beq \frac{ \sqrt{|\left(T_D\right)_{31}|^2 + |\left(T_D\right)_{32}|^2} }{\tan \beta} \lesssim - 0.186 \,\textrm{TeV}+ 1.675\, m_{\tilde{Q}}, \label{eq:vacuumfit3} \eeq where $m_{\tilde{Q}} \equiv m_{\tilde{Q},3} = m_{\tilde{D},1} = m_{\tilde{D},2} > 1\,\textrm{TeV}$ and $m_A$ is decoupled. \section{Numerical analysis} \label{sec:analysis} In this section, we study gluino contributions to \ensuremath{\varepsilon'/\varepsilon_K}\,via the $Z$ penguin. They are enhanced by large scalar trilinear couplings as shown in Sec.~\ref{sec:SUSY}. Since $(T_D)_{13,23,31,32}$ are complex variables, there are 8 degrees of freedom. For simplicity, we restrict the parameter space such that two of $(T_D)_{13,23,31,32}$ are real. 
When $(T_D)_{23,32}$ are real, we checked that wide parameter regions that could explain the discrepancy of \ensuremath{\varepsilon'/\varepsilon_K}\,are excluded by $\mathcal{B}(\bar B \to X_{d,s} \gamma)$. Therefore, we consider the cases when $(T_D)_{13,31}$ are real. The scalar trilinear couplings are parameterized as \begin{align} [(T_D)_{13},(T_D)_{23},(T_D)_{31},(T_D)_{32}] = [\gamma_L,\alpha_L + i \beta_L,\gamma_R,\alpha_R + i \beta_R], \label{eq:TDset} \end{align} where $\alpha_i$, $\beta_i$ and $\gamma_i$ are real parameters. Then, one obtains (see Sec.~\ref{sec:SUSY}) \begin{align} {\rm Im}\,[\mathcal{C}_{HQ}^{(1,3)}]_{12} &\propto -{\rm Im}\,\left[(T_D)_{13}^* (T_D)_{23}\right] = -\beta_L \gamma_L, \\ {\rm Im}\,[\mathcal{C}_{HD}]_{12} &\propto +{\rm Im}\,\left[(T_D)_{31} (T_D)_{32}^* \right]= -\beta_R \gamma_R. \end{align} The $L$ variables contribute to the left-handed Wilson coefficients, and the $R$ variables to the right-handed ones. In order to evaluate the observables, we scan the whole parameter region of $\alpha_i$, $\beta_i$, and $\gamma_i$ where the vacuum stability conditions are satisfied.\footnote{ We checked that the constraint from $\mathcal{B}(K_L\to\mu^+\mu^-)$ is weaker than the other constraints in the parameter region of our interest. } When $\beta_L\gamma_L > 0$ and $\beta_R\gamma_R > 0$, the SUSY contribution to \ensuremath{\varepsilon'/\varepsilon_K}\,is maximized, because the left-handed contribution, $\mathcal{C}_{HQ}$, constructively interferes with the right-handed one, $\mathcal{C}_{HD}$. In this case, $\mathcal{B}(K_L\to\pi^0\nu\bar\nu)$ cannot exceed the SM prediction, because positive $\beta_L\gamma_L$ and $\beta_R\gamma_R$ tend to decrease the branching ratio, as can be seen from Eq.~\eqref{eq:KLpinn}. We consider this case in Sec.~\ref{Sec:posiposi}. In contrast, \ensuremath{\varepsilon'/\varepsilon_K}\, cannot be accommodated with the result \eqref{eq:eps_limit} for $\beta_L\gamma_L < 0$ and $\beta_R\gamma_R < 0$. 
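With the parameterization of Eq.~\eqref{eq:TDset}, the combinations feeding the imaginary parts can be verified directly; the sample numbers below are arbitrary placeholders.

```python
# Parameterization of Eq. (TDset); alpha_i, beta_i, gamma_i are real.
aL, bL, gL = 0.3, 0.2, 0.5   # arbitrary sample values
aR, bR, gR = 0.1, 0.4, 0.7
TD13, TD23 = complex(gL), complex(aL, bL)
TD31, TD32 = complex(gR), complex(aR, bR)

im_HQ = -(TD13.conjugate() * TD23).imag   # -Im[(T_D)_13^* (T_D)_23]
im_HD = +(TD31 * TD32.conjugate()).imag   # +Im[(T_D)_31 (T_D)_32^*]
# Both combinations reduce to -beta*gamma, as quoted in the text,
# up to floating-point rounding.
```

The signs make the classification below transparent: constructive interference requires both $\beta_L\gamma_L$ and $\beta_R\gamma_R$ positive.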
When either $\beta_L\gamma_L$ or $\beta_R\gamma_R$ is negative, the discrepancy of \ensuremath{\varepsilon'/\varepsilon_K}\, can also be explained. Because the right-handed contribution to \ensuremath{\varepsilon'/\varepsilon_K}\,is larger than the left-handed one, $\beta_R\gamma_R > 0$ is favored to amplify \ensuremath{\varepsilon'/\varepsilon_K}. At the same time, $\mathcal{B}(K_L\to\pi^0\nu\bar\nu)$ can be enhanced and may exceed the SM value. Hence, we consider the case when $\beta_L\gamma_L < 0$ and $\beta_R\gamma_R > 0$ in Sec.~\ref{Sec:negaposi}. Before proceeding to the analysis, let us summarize the assumptions on the model parameters. Since the vacuum stability condition is relaxed by large $m_A$, the heavy Higgs bosons are supposed to be decoupled. The squark masses are set to be degenerate, $m_{\tilde{Q}} \equiv m_{\tilde{Q},1} = m_{\tilde{Q},2} = m_{\tilde{Q},3} = m_{\tilde{D},1} = m_{\tilde{D},2} = m_{\tilde{D},3}$, for simplicity. The Higgsino mass parameter is also set equal to $m_{\tilde Q}$, though the dependences of the observables on it are weak. We take $\tan\beta=5$, though the following results are insensitive to this choice, because the observables as well as the vacuum stability condition depend on it dominantly through the combination $T_D\cos\beta$. \subsection{$\boldsymbol{\beta_L\gamma_L > 0}$ and $\boldsymbol{\beta_R\gamma_R > 0}$} \label{Sec:posiposi} In Fig.~\ref{fig:gluinog12}, the maximal values of the SUSY contributions to \ensuremath{\varepsilon'/\varepsilon_K}\,are shown for $\beta_L\gamma_L > 0$ and $\beta_R\gamma_R > 0$ as a function of $m_{\tilde{Q}}$. There is a peak structure for each line. In smaller squark mass regions, the maximal value is determined by $\mathcal{B}(\bar B \to X_d \gamma)$. 
Defining the squark mixing parameter, $\delta_D = (T_D)_{ij}v\cos\beta /m_{\tilde Q}^2$, the SUSY contributions to \ensuremath{\varepsilon'/\varepsilon_K}\,depend on it as $( \ensuremath{\varepsilon'/\varepsilon_K} )^{\rm SUSY} \sim \delta_D^2$, whereas those to $\mathcal{B}(\bar B \to X_d \gamma)$ scale as $\sim \delta_D/m_{\tilde Q}$, where $m_{\tilde g} \sim m_{\tilde Q}$ is assumed. Thus, the maximal value of \ensuremath{\varepsilon'/\varepsilon_K}\,increases as $m_{\tilde Q}$ becomes larger. In larger squark mass regions, the maximal value is determined by \ensuremath{\varepsilon_K}, $\mathcal{B}(\bar B \to X_s \gamma)$ and the vacuum stability condition as well as $\mathcal{B}(\bar B \to X_d \gamma)$. In particular, the gluino box contribution to \ensuremath{\varepsilon_K}\,depends on $\delta_D$ as $\sim \delta_D^4/m_{\tilde Q}^2$, whereas the SUSY contributions via $\mathcal{C}_{HQ}$ and $\mathcal{C}_{HD}$ are not suppressed by $m_{\tilde Q}$, i.e., behave as $\sim \lambda_t \delta_D^2 / m_Z^2$. When $m_{\tilde Q}$ is small, the latter contribution can be sufficiently canceled by the former one. However, as $m_{\tilde Q}$ increases, the cancellation becomes weaker in the parameter region allowed by the other constraints. Hence, the bounds on the trilinear couplings become more severe in order to satisfy the constraint of \ensuremath{\varepsilon_K}. Consequently, the maximal value of \ensuremath{\varepsilon'/\varepsilon_K}\,decreases. In the figures, $\gamma_i/\beta_i$ or $m_{\tilde{g}}/m_{\tilde{Q}}$ is also varied. On the black line, $\gamma_R/\beta_R=\gamma_L/\beta_L=1$ and $m_{\tilde{g}}/m_{\tilde{Q}}=1$ are chosen. In the left plot, $\gamma_R/\beta_R=\gamma_L/\beta_L=0.6, 0.8, 1.2$ with $m_{\tilde{g}}/m_{\tilde{Q}}=1$ from left to right of the red lines. On the other hand, $m_{\tilde{g}}/m_{\tilde{Q}}=1.8, 1.4, 0.8$ with $\gamma_R/\beta_R=\gamma_L/\beta_L=1$ from left to right of the green lines in the right plot. 
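The power counting for the left side of the peak can be made explicit in a toy scaling. This is purely schematic and all normalizations are arbitrary: saturating the $\mathcal{B}(\bar B \to X_d \gamma)$ constraint $\delta_D/m_{\tilde Q} \lesssim c$ gives $\delta_D^{\max} \propto m_{\tilde Q}$, so the maximal $(\ensuremath{\varepsilon'/\varepsilon_K})^{\rm SUSY} \sim \delta_D^2$ grows quadratically with $m_{\tilde Q}$.

```python
# Toy scaling for the left side of the peak (all prefactors arbitrary).
c = 0.05  # illustrative strength of the b -> d gamma bound

def max_epsp_susy(m_sq):
    delta_max = c * m_sq        # delta_D / m_Q < c, saturated
    return delta_max**2         # (eps'/eps)^SUSY ~ delta_D^2

# Quadratic growth with the squark mass:
ratios = [max_epsp_susy(m) / max_epsp_susy(1.0) for m in (1.0, 2.0, 3.0)]
print(ratios)  # approximately [1.0, 4.0, 9.0]
```

On the right side of the peak this growth is cut off by \ensuremath{\varepsilon_K}\,and the vacuum stability condition, producing the peak structure described above.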
The maximum value increases when $\gamma_i/\beta_i$ is small and $m_{\tilde{g}}/m_{\tilde{Q}}$ is large. Also, it is found that the current discrepancy of \ensuremath{\varepsilon'/\varepsilon_K}\,can be explained if the squark mass is smaller than $5.6~\mbox{TeV}$. \begin{figure}[t!] \begin{center} \includegraphics[scale=0.5, bb= 0 0 360 363]{plot-epoe-1.pdf}\hspace{4mm} \includegraphics[scale=0.5, bb= 0 0 360 369]{plot-epoe-2.pdf} \caption{ The maximal gluino contributions to \ensuremath{\varepsilon'/\varepsilon_K}\,as a function of $m_{\tilde{Q}}$. The parameters are $\gamma_R/\beta_R=\gamma_L/\beta_L=1$ and $m_{\tilde{g}}/m_{\tilde{Q}}=1$ on the black line. In the left plot, $\gamma_R/\beta_R=\gamma_L/\beta_L=0.6, 0.8, 1.2$ with $m_{\tilde{g}}/m_{\tilde{Q}}=1$ from left to right of the red lines. In the right plot, $m_{\tilde{g}}/m_{\tilde{Q}}=1.8, 1.4, 0.8$ with $\gamma_R/\beta_R=\gamma_L/\beta_L=1$ from left to right of the green lines. } \label{fig:gluinog12} \end{center} \end{figure} \subsection{$\boldsymbol{\beta_L\gamma_L < 0}$ and $\boldsymbol{\beta_R\gamma_R > 0}$} \label{Sec:negaposi} \begin{figure}[t!] \begin{center} \includegraphics[scale=0.5, bb= 0 0 360 351]{plot-KLpinn-1.pdf}\hspace{4mm} \includegraphics[scale=0.5, bb= 0 0 360 369]{plot-KLpinn-2.pdf} \caption{ The maximum value of $\mathcal{B}(K_L\to\pi^0\nu\bar\nu)$ normalized by the SM prediction as a function of $m_{\tilde Q}$. Here, $( \ensuremath{\varepsilon'/\varepsilon_K} )^{\rm SUSY} = 10.0 \times 10^{-4}$ is fixed. The parameters are $\gamma_R/\beta_R = -\gamma_L/\beta_L = 1$ and $m_{\tilde g}/m_{\tilde Q} = 1$ on the black line. In the left plot, $\gamma_R/\beta_R = -\gamma_L/\beta_L = 0.6, 0.8, 1.2$ with $m_{\tilde g}/m_{\tilde Q} = 1$ from left to right of the red lines. In the right plot, $m_{\tilde g}/m_{\tilde Q} = 1.8, 1.4, 0.8$ with $\gamma_R/\beta_R = -\gamma_L/\beta_L = 1$ from left to right of the green lines. 
} \label{fig:KLpinn} \end{center} \end{figure} We study other observables while keeping the SUSY contribution to \ensuremath{\varepsilon'/\varepsilon_K}\,sizable for $\beta_L\gamma_L < 0$ and $\beta_R\gamma_R > 0$. The SUSY parameters are determined to achieve $( \ensuremath{\varepsilon'/\varepsilon_K} )^{\rm SUSY} = 10.0 \times 10^{-4}$, where the current discrepancy between the experimental and SM values is explained at the $1\sigma$ level. In Fig.~\ref{fig:KLpinn}, $\mathcal{B}(K_L\to\pi^0\nu\bar\nu)$ is maximized for given $m_{\tilde Q}$. One finds a peak structure for each line. On the left side of the peak, the parameters are constrained by $\mathcal{B}(\bar B \to X_d \gamma)$. If the soft masses are too small, \ensuremath{\varepsilon'/\varepsilon_K}\,cannot be sufficiently large. On the right side, the constraints from \ensuremath{\varepsilon_K}\,and $\mathcal{B}(\bar B \to X_s \gamma)$ become relevant. When SUSY particles are very heavy, the SUSY contribution to \ensuremath{\varepsilon_K}\,via $\mathcal{C}_{HQ}$ and $\mathcal{C}_{HD}$ cannot be sufficiently canceled by the gluino box contribution in the parameter region allowed by the other constraints. One can see that $\mathcal{B}(K_L\to\pi^0\nu\bar\nu)$ can be larger than the SM value. This result contrasts with the case when $\beta_L\gamma_L > 0$ and $\beta_R\gamma_R > 0$. In the figures, $\gamma_i/\beta_i$ or $m_{\tilde g}/m_{\tilde Q}$ is also varied. On the black line, $\gamma_R/\beta_R = -\gamma_L/\beta_L = 1$ and $m_{\tilde g}/m_{\tilde Q} = 1$ are chosen. In the left plot, $\gamma_R/\beta_R = -\gamma_L/\beta_L = 0.6, 0.8, 1.2$ with $m_{\tilde g}/m_{\tilde Q} = 1$ from left to right of the red lines. On the other hand, $m_{\tilde g}/m_{\tilde Q} = 1.8, 1.4, 0.8$ with $\gamma_R/\beta_R = -\gamma_L/\beta_L = 1$ from left to right of the green lines in the right plot. In both plots, the peak positions depend on the setup. 
The maximum value increases when $|\gamma_i/\beta_i|$ is small and/or $m_{\tilde g}/m_{\tilde Q}$ is large. It is found that $\mathcal{B}(K_L\to\pi^0\nu\bar\nu)$ can be about 1.5 times larger than the SM prediction. Such a branching ratio could be discovered at the future KOTO experiment. \begin{figure}[t] \begin{center} \includegraphics[scale=0.5, bb= 0 0 360 371]{plot-KPpinn-1.pdf}\hspace{4mm} \includegraphics[scale=0.5, bb= 0 0 360 370]{plot-KPpinn-2.pdf} \caption{ The maximum value of $\mathcal{B}(K^+\to\pi^+\nu\bar\nu)$ normalized by the SM prediction as a function of $m_{\tilde Q}$. Here, $( \ensuremath{\varepsilon'/\varepsilon_K} )^{\rm SUSY} = 10.0 \times 10^{-4}$ is fixed. The parameters are $\gamma_R/\beta_R = -\gamma_L/\beta_L = 1$ and $m_{\tilde g}/m_{\tilde Q} = 1$ on the black line. In the left plot, $\gamma_R/\beta_R = -\gamma_L/\beta_L = 0.6, 0.8, 1.2$ with $m_{\tilde g}/m_{\tilde Q} = 1$ from left to right of the red lines. In the right plot, $m_{\tilde g}/m_{\tilde Q} = 1.8, 1.4, 0.8$ with $\gamma_R/\beta_R = -\gamma_L/\beta_L = 1$ from left to right of the green lines. } \label{fig:KPpinn} \end{center} \end{figure} Next, $\mathcal{B}(K^+\to\pi^+\nu\bar\nu)$ is maximized for given $m_{\tilde Q}$ in Fig.~\ref{fig:KPpinn}. The branching ratio depends on $\mathcal{C}_{HQ}$ and $\mathcal{C}_{HD}$ similarly to the case of $\mathcal{B}(K_L\to\pi^0\nu\bar\nu)$. Hence, it can be larger than the SM prediction when either $\beta_L\gamma_L$ or $\beta_R\gamma_R$ is negative. The real components of $\mathcal{C}_{HQ}$ and $\mathcal{C}_{HD}$ contribute to this ratio, in contrast to the case of $\mathcal{B}(K_L\to\pi^0\nu\bar\nu)$ and \ensuremath{\varepsilon'/\varepsilon_K}. Consequently, the peak structure in Fig.~\ref{fig:KLpinn} disappears. The maximal value tends to decrease as $m_{\tilde Q}$ increases. It is enhanced when $|\gamma_i/\beta_i|$ is small and $m_{\tilde g}/m_{\tilde Q}$ is large. 
The maximal value can be about 1.6--1.7 times larger than the SM prediction. The deviation could be measured in the current NA62 experiment. \begin{figure}[t] \begin{center} \includegraphics[scale=0.5, bb= 0 0 360 356]{plot-DACP-1.pdf}\hspace{4mm} \includegraphics[scale=0.5, bb= 0 0 360 374]{plot-DACP-2.pdf} \caption{ The maximum value of $\Delta A_{\rm CP}(b\to s\gamma)$ as a function of $m_{\tilde Q}$. Here, $( \ensuremath{\varepsilon'/\varepsilon_K} )^{\rm SUSY} = 10.0 \times 10^{-4}$ is fixed. The parameters are $\gamma_R/\beta_R = -\gamma_L/\beta_L = 1$ and $m_{\tilde g}/m_{\tilde Q} = 1$ on the black line. In the left plot, $\gamma_R/\beta_R = -\gamma_L/\beta_L = 0.6, 0.8, 1.2$ with $m_{\tilde g}/m_{\tilde Q} = 1$ from left to right of the red lines. In the right plot, $m_{\tilde g}/m_{\tilde Q} = 1.8, 1.4, 0.8$ with $\gamma_R/\beta_R = -\gamma_L/\beta_L = 1$ from left to right of the green lines. } \label{fig:DAcp} \end{center} \end{figure} Let us also comment on the ${C\hspace{-0.2mm}P}$-violating~ observable, $\Delta A_{\rm CP}(b\to s\gamma)$. In the analysis, since the ${C\hspace{-0.2mm}P}$-violating~ phases arise in $(T_D)_{23}$ and $(T_D)_{32}$, the asymmetry can be sizable. In Fig.~\ref{fig:DAcp}, the maximum value of $\Delta A_{\rm CP}(b\to s\gamma)$ is shown as a function of $m_{\tilde Q}$. Here, $( \ensuremath{\varepsilon'/\varepsilon_K} )^{\rm SUSY} = 10.0 \times 10^{-4}$ is fixed. On the black line, $\gamma_R/\beta_R = -\gamma_L/\beta_L = 1$ and $m_{\tilde g}/m_{\tilde Q} = 1$ are chosen. In the left plot, the trilinear coupling is varied as $\gamma_R/\beta_R = -\gamma_L/\beta_L = 0.6, 0.8, 1.2$ with $m_{\tilde g}/m_{\tilde Q} = 1$ from left to right of the red lines. In the right plot, the gluino mass is set as $m_{\tilde g}/m_{\tilde Q} = 1.8, 1.4, 0.8$ with $\gamma_R/\beta_R = -\gamma_L/\beta_L = 1$ from left to right of the green lines. 
It is found that the asymmetry is enhanced especially when $|\gamma_i/\beta_i|$ is small, because smaller ratios lead to larger $(T_D)_{23}$ and $(T_D)_{32}$ to achieve $( \ensuremath{\varepsilon'/\varepsilon_K} )^{\rm SUSY} = 10.0 \times 10^{-4}$. Also, when $m_{\tilde g}/m_{\tilde Q}$ is small, the asymmetry becomes large. The ${C\hspace{-0.2mm}P}\hspace{-0.4mm}~$ asymmetry can be as large as 14\% for $\gamma_R/\beta_R = -\gamma_L/\beta_L = 0.6$. We also find that $\Delta A_{\rm CP}(b\to s\gamma)$ is likely to be positive when it is enhanced in our scenario. Such an asymmetry seems to be large enough to be measured at Belle II with $50\,{\rm ab}^{-1}$.\footnote{ Although a part of the parameter regions seems to be constrained by the current experimental result \eqref{eq:ExpDAcp}, the theoretical uncertainty is large, and thus, we have not employed this limit. } \begin{figure}[t] \begin{center} \includegraphics[scale=0.5, bb= 0 0 360 360]{plot-KSmm-1.pdf}\hspace{4mm} \includegraphics[scale=0.5, bb= 0 0 360 369]{plot-KSmm-2.pdf} \caption{The effective branching ratio of $ K_S \to \mu^+ \mu^-$ is shown. Here, $D=1$ and $\eta_\mathcal{A}=-1$ are chosen. The model parameters are the same as those in Fig.~\ref{fig:KLpinn}. Here, $( \ensuremath{\varepsilon'/\varepsilon_K} )^{\rm SUSY} = 10.0 \times 10^{-4}$. The parameters are $\gamma_R/\beta_R = -\gamma_L/\beta_L = 1$ and $m_{\tilde g}/m_{\tilde Q} = 1$ on the black line. In the left plot, $\gamma_R/\beta_R = -\gamma_L/\beta_L = 0.6, 0.8, 1.2$ with $m_{\tilde g}/m_{\tilde Q} = 1$ from left to right of the red lines. In the right plot, $m_{\tilde g}/m_{\tilde Q} = 1.8, 1.4, 0.8$ with $\gamma_R/\beta_R = -\gamma_L/\beta_L = 1$ from left to right of the green lines. } \label{fig:KSmm} \end{center} \end{figure} Finally, we study the SUSY contribution to $K_S \to \mu^+\mu^-$ as a function of $m_{\tilde Q}$. It is enhanced when the sign of the left-handed contribution is opposite to that of the right-handed one. 
Such a setup is realized in this subsection. In Fig.~\ref{fig:KSmm}, the effective branching ratio of $K_S \to \mu^+ \mu^-$ is shown. Here, the dilution factor $D=1$ and the relative sign $\eta_\mathcal{A}=-1$ are chosen as a reference case.\footnote{ In the case of $D=0$, we find that the branching ratio $\mathcal{B}(K_S \to \mu^+ \mu^-)$ in Eq.~\eqref{eq:KSmmBr} does not deviate sizably from the SM value \eqref{eq:KSMUMU_SMD0}. } Since the interference term is almost independent of the real component of $\mathcal{C}_{H-}$ in the parameter regions of our interest, $\mathcal{B}\left( K_S \to \mu^+ \mu^- \right)_{\rm eff}$ is determined once $( \ensuremath{\varepsilon'/\varepsilon_K} )^{\rm SUSY}$ and $\mathcal{B}(K_L\to\pi^0\nu\bar\nu)$ are given. Therefore, in Fig.~\ref{fig:KSmm}, we take the same $\alpha_i$, $\beta_i$ and $\gamma_i$ as those in Fig.~\ref{fig:KLpinn}, which maximize $\mathcal{B}(K_L\to\pi^0\nu\bar\nu)$. It is found that $\mathcal{B}\left( K_S \to \mu^+ \mu^- \right)_{\rm eff}$ is enhanced especially when $|\gamma_i/\beta_i|$ is small. The effective branching ratio can be $1.9 \times 10^{-11}$, which is larger than the SM prediction \eqref{eq:KSMUMU_SMD1}. Such a branching ratio might be measured by the end of the LHCb Run-2, and it is large enough to be detected at the LHCb Run-3 \cite{LHCbupgrade}. \section{Conclusions} \label{sec:conclusion} In this paper, we studied ${C\hspace{-0.2mm}P}\hspace{-0.4mm}~$ violations in neutral kaon decays in the MSSM scenario where non-minimal flavor mixings and ${C\hspace{-0.2mm}P}$-violating~ phases reside in the trilinear scalar couplings of the down-type squarks. We calculated SUSY contributions that are induced by one-loop diagrams involving gluinos and squarks, and evaluated their effects on flavor observables. We took the top-Yukawa contributions to $\Delta S = 2$ observables into account. 
Considering constraints from the vacuum stability and the measurements of \ensuremath{\varepsilon_K}, $\mathcal{B}(K_L\to \mu^+\mu^-)$, $\mathcal{B}(\bar{B}\to X_s\gamma)$ and $\mathcal{B}(\bar{B}\to X_d\gamma)$, we searched for the allowed parameter regions of the trilinear coupling parameters and investigated possible effects on \ensuremath{\varepsilon'/\varepsilon_K}, $\mathcal{B}(K_L\to \pi^0\,\nu\,\bar{\nu})$, $\mathcal{B}(K^+\to \pi^+\,\nu\,\bar{\nu})$, $\mathcal{B}(K_S\to\mu^+\,\mu^-)_{\rm eff}$ and $\Delta A_{\mathrm{CP}}(b\to s\,\gamma)$. We found that the difference between the measured value and the SM prediction of \ensuremath{\varepsilon'/\varepsilon_K}\,can be explained by the gluino-mediated $Z$-penguin contribution to the $s\to d$ transition amplitude for squark masses smaller than $5.6\,\textrm{TeV}$. In addition, $\mathcal{B}(K_L\to \pi^0\,\nu\,\bar{\nu})$ and $\mathcal{B}(K^+\to \pi^+\,\nu\,\bar{\nu})$ can be enhanced by about $50\,\%$ and $70\,\%$ relative to the SM values, respectively. It is also shown that $\mathcal{B}(K_S\to\mu^+\,\mu^-)_{\rm eff}$ and $\Delta A_{\mathrm{CP}}(b\to s\,\gamma)$ are significantly enhanced. The deviations from the SM predictions of these observables can be probed in near-future experiments such as KOTO, NA62, LHCb and Belle II. Since the pattern of the deviations is closely related to the structure of the trilinear coupling matrix in the model, the measurements would provide us with important clues to explore flavor structures in physics beyond the SM. \vspace{1em} \noindent {\it Acknowledgements}: We are grateful to J.~A. Evans and D. Shih for helping us to compare our numerical results for some of the FCNC observables to outputs from the FormFlavor code~\cite{Evans:2016lzo}. We would also like to thank A.~Ishikawa for valuable comments about $\Delta A_{\rm CP}(b \to s \gamma) $ in the Belle experiment. This work was supported by JSPS KAKENHI No.~16K17681 (M.E.), 16H03991 (M.E.), 16H06492 (K.Y.) and 17K05429 (S.M.).
\section{Introduction} \label{sec_1} In this work we deal with categorical (ordinal) variables collected in a contingency table and we propose a model able to capture different kinds of independence relationships involving ordinal variables. Different models have been proposed in the literature with the aim of describing (in)dependence relationships among the variables, focusing either on the independence or on the dependence structure. We will refer to the Hierarchical Multinomial Marginal Models (HMMMs) of \cite{bartolucci2007}, which investigate the dependence structure among a set of variables. The HMMMs are specified by a set of marginal distributions together with a set of interactions defined within different marginal distributions. Particular cases of these models are the classical Log-Linear models, the Marginal models of \cite{bergsma2002}, and the Multivariate Logistic models of \cite{glonek1995}. In particular, in this work we take advantage of the possibility of using different interactions that remain meaningful when we deal with ordinal variables, \cite{cazzaro2014}. Furthermore, we will focus on the relationships among a set of categorical (ordinal) variables from the perspective of testing, simultaneously, marginal, conditional and context-specific (CS) independencies. The first two are well known; the CS independence, instead, is a conditional independence which holds only in a subspace of the outcome space. For instance, given 3 variables $X_1$, $X_2$ and $X_3$, we may have $X_1\perp X_2|X_3=1$ and $X_1\not\perp X_2|X_3\neq1 $. It is interesting to study this kind of independence as it allows us to focus on the modality(ies) which really discriminate and affect the connection between two variables. Finally, we propose a graphical representation of all the considered independencies by taking advantage of graphical models.
As a matter of fact, graphical models revolve around a system of independencies among a set of variables, and their strong benefit lies in the notable visual tool that easily represents even complex systems of relationships. Different graphical models exist in the literature; see \cite{lauritzen1996graphical}, \cite{whittaker2009} and \cite{wermuth2004} for an overview. Here, we start by considering a Chain Graphical (CG) model, known as type IV, see \cite{drton2009}, adapting it according to our aims. The CG model of type IV is a natural representation of regression models where there are \textit{purely} response variables, \textit{purely} covariates and \textit{mixed} variables (that are covariates for some variables and responses for others). For this reason the CG model of type IV takes advantage of the so-called \textit{Multivariate Regression Markov properties}, see \cite{sadeghi2016}. In this work, for the first time, we then integrate the CS independence in a CG model. The CS independence, from the graphical model point of view, was discussed in \cite{boutilier1996}, \cite{hojsgaard2004} and \cite{nyman2016} among others. In particular, Nyman generalized the Graphical model with the so-called ``Stratified'' Graphical model. Here we propose the ``Stratified'' Chain Graphical (SCG) model of type IV. Furthermore, by considering the regression model represented by the CG model, the study of CS independence offers the possibility of reducing the number of parameters in complicated models. The paper is organized as follows. First, we give an overview of the HMMMs, with special attention to the representation of CS independence via HMM models, in Section \ref{sec_2}. In this section, we obtain the same results as \cite{nyman2016} by using a different approach concerning the variables coded with baseline logits.
It is worthwhile to note that the known results in the literature are limited to the classical log-linear models. Furthermore, in Subsections \ref{sub_loc} and \ref{sub_con}, we provide, as a new result, how it is possible to define CS independence by using parameters appropriate for ordinal variables. Section \ref{sec_3} proposes the Stratified Chain Graphical (SCG) model as a generalization of the CG model of type IV. Here the Markov properties for a SCG model are provided and the admissible SCG models are discussed. In Section \ref{sec_4} we show how to parametrize a SCG model of type IV through a parametrization based on HMMM parameters. Here the original contributions are multiple. Starting from the Regression Chain Graphical model of \cite{marchetti2011}, we introduce the possibility of using parameters suitable for ordinal variables, instead of the parameters based on \textit{baseline} logits. Furthermore, we provide the connection between the SCG model of type IV, discussed in Section \ref{sec_3}, and the HMMMs of Section \ref{sec_2}. Finally, in Section \ref{sec_5} some applications to a real dataset on the innovation status of small and medium Italian firms are shown. The conclusion is reported in Section \ref{sec_6}. All the proofs of the theorems are collected in Appendix A in order to keep the paper flowing. \section{Hierarchical multinomial marginal parametrization for context-specific independence} \label{sec_2} Let us consider $q$ categorical variables $\mathcal{Q}=(X_1,\dots,X_q$) taking values $(i_1,$ $\dots,$ $i_q)$ in the contingency table $\mathcal{I}=(I_1\times \dots \times I_q)$. Thus, the generic variable $X_j$ takes values in $\left\{1,\dots,I_j\right\}$. The Hierarchical Multinomial Marginal Model (HMMM), introduced by \cite{bartolucci2007}, is used here in order to describe marginal, conditional and CS independence statements also when we deal with ordinal variables.
The HMMMs use parameters, henceforth HMM parameters, that generalize the canonical log-linear parameters by considering also the marginal distributions and possibly different codings for the logits of the variables, see \cite{cazzaro2014}. In particular, within a given set of variables $A$, the cells $i_A$ and $i^*_A$ represent, respectively, the $i$-th and the reference modalities of the variables in $A$, depending on the type of logits assigned to the variables on which the parameter is based. In a \textit{baseline}, \textit{local} and \textit{continuation} logit, $i_{A}$ is the $i$-th modality of each variable, $\cap_{j\in A} {i_j}$. On the other hand, the index $i^*_A$ in the \textit{baseline} logit is $\cap_{j\in A} {I_j}$, in the \textit{local} logit is $\cap_{j\in A}\left({i_j+1}\right)$ and in the \textit{continuation} logit is $\cap_{j\in A} \sum_{s_j\geq i_j+1 }{s_j} $. Higher order parameters are obtained as contrasts of logits and preserve the type of coding. Within a given marginal distribution $\mathcal{M}\subseteq \mathcal{Q}$, let us consider the marginal probabilities $\pi_{\mathcal{M}}$, obtained by summing the joint probabilities over the variables $\mathcal{Q}\backslash \mathcal{M}$.
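As a purely illustrative numerical sketch (the probability vector below is an arbitrary assumption, not data from the paper), the three univariate logit codings can be computed for one 3-level variable as follows:

```python
import numpy as np

# Arbitrary marginal probabilities for one 3-level ordinal variable.
p = np.array([0.2, 0.3, 0.5])

# Baseline logit: reference (last) level against level i.
baseline = [np.log(p[-1] / p[i]) for i in range(2)]
# Local logit: adjacent level i+1 against level i.
local = [np.log(p[i + 1] / p[i]) for i in range(2)]
# Continuation logit: all the levels above i against level i.
continuation = [np.log(p[i + 1:].sum() / p[i]) for i in range(2)]
```

Note that, with 3 levels, the \textit{local} and \textit{continuation} logits coincide at the next-to-last level, since only one higher level remains.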
Considering the HMM parameters as contrasts among the logarithms of probabilities of disjoint subsets of cells, they are characterized by the set $\mathcal{L}$, $\mathcal{L}\subseteq \mathcal{M}$, of variables involved and by the marginal distribution $\mathcal{M}$ where they are defined, having the following form:\\ \begin{equation} \eta_{\mathcal{L}}^{\mathcal{M}}(i_{\mathcal{L}}|i^{**}_{\mathcal{M}\backslash\mathcal{L}})=\sum_{\mathcal{J}\subseteq \mathcal{L}}(-1)^{|\mathcal{L}\backslash\mathcal{J}|}\log \pi_{\mathcal{M}}\left(i_{\mathcal{L}\backslash\mathcal{J}},i^*_{ \mathcal{J}},i^{**}_{\mathcal{M}\backslash\mathcal{L}}\right) \label{eq_parametri} \end{equation} where $\mathcal{M}\subseteq \mathcal{Q}$ denotes the marginal table $\mathcal{I}_{\mathcal{M}}$ where the parameter is defined; $\mathcal{L}\subseteq \mathcal{M}$ is the subset of variables to which the parameter refers and $i^{**}_A$ is an arbitrary cell, here the last modality $I_A$. Note that $i^{**}_{\mathcal{M}\backslash\mathcal{L}}$ selects the levels of the conditioning variables. In this context, as already mentioned, the reference cell involved in the $i^{**}$ indexes will always be the last one, so, following this convention, we can simply denote: \begin{equation} \eta_{\mathcal{L}}^{\mathcal{M}}(i_{\mathcal{L}}|i^{**}_{\mathcal{M}\backslash\mathcal{L}}) =\eta_{\mathcal{L}}^{\mathcal{M}}(i_{\mathcal{L}}). \label{eq_parametri_short} \end{equation} Note that for each $\mathcal L$ the parameter $\eta_{\mathcal{L}}^{\mathcal{M}}(I_{\mathcal{L}})$ is trivially zero whatever the coding of the variables. Note also that in the framework of HMMMs conditional independencies among variables can be tested by constraining appropriate HMM parameters to be zero.
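For concreteness, formula (\ref{eq_parametri}) with \textit{baseline} logits and $\mathcal{M}$ equal to the full table can be sketched in code (a hedged illustration with 0-based indices and a randomly generated table; the helper name \texttt{eta\_baseline} is ours, not the paper's):

```python
import numpy as np
from itertools import chain, combinations

def eta_baseline(pi, L, i_L):
    """HMM parameter eta_L^M(i_L) for baseline logits: alternating sum of
    log probabilities, where the variables of J and all variables outside L
    are fixed at the reference (last) modality."""
    ref = tuple(n - 1 for n in pi.shape)                  # 0-based reference cell
    subsets = chain.from_iterable(combinations(L, r) for r in range(len(L) + 1))
    val = 0.0
    for J in subsets:
        cell = list(ref)
        for var, lev in zip(L, i_L):
            if var not in J:                              # variables in L\J keep level i_L
                cell[var] = lev
        val += (-1) ** (len(L) - len(J)) * np.log(pi[tuple(cell)])
    return val

rng = np.random.default_rng(0)
pi = rng.dirichlet(np.ones(9)).reshape(3, 3)              # random 3x3 joint table
# The bivariate parameter reduces to the log odds ratio against the last
# row and column of the table.
direct = np.log(pi[0, 0] * pi[2, 2] / (pi[0, 2] * pi[2, 0]))
assert abs(eta_baseline(pi, (0, 1), (0, 0)) - direct) < 1e-12
```

For two variables the value thus agrees with the contrast of baseline logits described above.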
For instance, given three variables $X_1$, $X_2$ and $X_3$, in order to represent the conditional independence $X_1\perp X_2|X_3$ we have that $\eta^{123}_{12}(i_{12})=\eta^{123}_{123}(i_{123})=0$ for any $i_{12}\in \mathcal{I}_{12}$ and $i_{123}\in \mathcal{I}_{123}$, where the numbers $1$, $2$ and $3$ in the parameters refer to the variables $X_1$, $X_2$ and $X_3$, respectively. \cite{bergsma2002} and \cite{bartolucci2007} proved that the above mentioned parameters provide a parameterization of the full joint probability function $\pi_{\mathcal{Q}}$ if and only if the properties of hierarchy and completeness are satisfied. These two properties ensure the smoothness of the parametrization, which implies the existence of the maximum likelihood estimate. \begin{example} \label{ex_logit} Let us consider two variables $X_1$, $X_2$ collected in a $3\times3$ contingency table.
Table \ref{tab_logit} reports the parameters (\ref{eq_parametri}) according to the different codings: \scriptsize \begin{table}[h] \begin{tabular}{l|ccc} type &$\eta_{1}^{12}(i_1)$ & $\eta_{2}^{12}(i_2)$ & $\eta_{12}^{12}(i_1i_2)$ \\ \hline baseline & $\log\left(\frac{\pi_{33}}{\pi_{i_13}}\right)$ & $\log\left(\frac{\pi_{33}}{\pi_{3i_2}}\right)$ & $\log\left(\frac{\pi_{i_1i_2}\pi_{33}}{\pi_{i_13}\pi_{3i_2}}\right)$ \\ &&&\\ local & $\log\left(\frac{\pi_{(i_{1}+1)3}}{\pi_{i_13}}\right)$ & $\log\left(\frac{\pi_{3(i_{2}+1)}}{\pi_{3i_2}}\right)$ & $\log\left(\frac{\pi_{i_1i_2}\pi_{(i_{1}+1)(i_{2}+1)}}{\pi_{(i_{1}+1)i_2}\pi_{i_1(i_{2}+1)}}\right)$ \\ &&&\\ cont & $\log\left(\frac{\sum_{i_1'>i_1}\pi_{(i_1')3}}{\pi_{i_13}}\right)$ & $\log\left(\frac{\sum_{i_2'>i_2}\pi_{3(i_2')}}{\pi_{3i_2}}\right)$ & $\log\left(\frac{\pi_{i_1i_2}\sum_{i_1'>i_1,i_2'>i_2}\pi_{(i_{1}')(i_{2}')}}{\sum_{i_1'>i_1}\pi_{(i_1')i_2}\sum_{i_2'>i_2}\pi_{i_1(i_2')}}\right)$ \\ &&&\\ \end{tabular} \caption{Different codings for logits and contrasts of logits.} \label{tab_logit} \end{table} \end{example} \normalsize The classical log-linear model is a particular case of HMMM where the parameters are all based on the \textit{baseline} logit and there is only one marginal set, equal to the joint distribution $\mathcal{M}=\mathcal{Q}$. \cite{nyman2016} provide the condition to define a CS independence in classical log-linear models. Next we derive the same condition, in a new way, for the CS independencies in HMMMs with HMM parameters based on the baseline logit. Let us suppose we want to define a CS independence among the variables in the marginal set $\mathcal{M}$.
Thus, by collecting the variables of the marginal set $\mathcal{M}$ in three subsets, say $A$, $B$ and $C$, we are interested in defining the following statement \begin{equation} A\perp B| (C=i'_C), \qquad i'_C\in \mathcal{K}\\ \label{nic1} \end{equation} where $A\cup B\cup C=\mathcal{M}$, and $i'_C$ is the vector of certain modalities of the variables in $C$, taking values in $\mathcal{K}\subset\mathcal{I}_C$, for which the conditional independence holds. \begin{theorem} \label{T1_baseline} The CS independence in formula (\ref{nic1}) holds if and only if the HMM parameters, based on baseline logits, satisfy the following constraints \begin{equation} \sum_{c \in \mathcal{P}(C) }(-1)^{|C\backslash c|}\eta_{vc}^{\mathcal{M}}(i_v \cap i'_c)=0 \qquad i_v\in \mathcal{I}_v \qquad i'_c \in \left(\mathcal{K} \cap \mathcal{I}_c\right), \label{eq.teo1} \end{equation} $\forall v\in \mathcal{V}=\left\{a\cup b:\,a\in\mathcal{P}(A)\setminus\emptyset,\ b\in \mathcal{P}(B)\setminus \emptyset \right\}$, that is, for every $v$ containing at least one variable of $A$ and at least one variable of $B$, where $\mathcal{P}(\cdot)$ denotes the power set. \end{theorem} Example \ref{ex_2.2} shows step by step how to obtain the constraints in formula (\ref{eq.teo1}). \begin{example} \label{ex_2.2}Let us consider four variables collected in the marginal table $\mathcal{I}_\mathcal{M}$ of dimension $3\times3\times3\times3$ and let us consider the CS independence $X_1\perp X_2|(X_3X_4)=(1,1)$.
The HMM parameter $\eta^{1234}_{1234}(1111)$ based on the \textit{baseline logit} can be decomposed as follows \[ \begin{array}{lll} \eta^{1234}_{1234}(1111)&=&\log\left(\frac{\pi_{3333}\pi_{1133}\pi_{1313}\pi_{3113}\pi_{1331}\pi_{3131}\pi_{3311}\pi_{1111}}{\pi_{1333}\pi_{3133}\pi_{3313}\pi_{1113}\pi_{3331}\pi_{3111}\pi_{1311}\pi_{1131}}\right)=\\ &\\ &=&\log\left(\frac{\pi_{3333}\pi_{1133}\pi_{1313}\pi_{3113}}{\pi_{1333}\pi_{3133}\pi_{3313}\pi_{1113}}\right)+ \log\left(\frac{\pi_{3333}\pi_{1133}\pi_{1331}\pi_{3131}}{\pi_{1333}\pi_{3133}\pi_{3331}\pi_{1131}}\right)+\\ &\\ &&- \log\left(\frac{\pi_{3333}\pi_{1133}}{\pi_{1333}\pi_{3133}}\right)+ \log\left(\frac{\pi_{3311}\pi_{1111}}{\pi_{3111}\pi_{1311}}\right)=\\ &\\ &=&(-1)^{|\{4\}|+1}\eta^{1234}_{123}(111)+(-1)^{|\{3\}|+1}\eta^{1234}_{124}(111)+\\ &&\\ &&(-1)^{|\{3,4\}|+1}\eta^{1234}_{12}(11)+(-1)^{|\{3,4\}|}\eta^{1234}_{12}(11|11). \end{array} \] From the CS independence we have that $\eta^{1234}_{12}(11|11)=0$ and, by moving the remaining terms to the left-hand side, we get: \[ \eta^{1234}_{1234}(1111)-\eta^{1234}_{123}(111)-\eta^{1234}_{124}(111)+\eta^{1234}_{12}(11)=0 \] The same equivalence holds for $\eta^{1234}_{1234}(1211)$, $\eta^{1234}_{1234}(2111)$ and $\eta^{1234}_{1234}(2211)$.
Note that, for the CS independence $ X_1\perp X_2|(X_3X_4) =(1,3)$, the constraints involving the variable $X_4$ at its last modality are zero by definition, so formula (\ref{eq.teo1}) becomes \[ \begin{array}{lll} -\eta^{1234}_{123}(111)+\eta^{1234}_{12}(11)=0\\ \\ -\eta^{1234}_{123}(121)+\eta^{1234}_{12}(12)=0\\ \\ -\eta^{1234}_{123}(211)+\eta^{1234}_{12}(21)=0\\ \\ -\eta^{1234}_{123}(221)+\eta^{1234}_{12}(22)=0\\ \end{array} \] or, more compactly, \[-\eta^{1234}_{123}(i_{123})+\eta^{1234}_{12}(i_{12})=0\] where $i_{123}\in\left\{(111),(121),(211),(221)\right\}$ and $i_{12}\in\left\{(11),(12),(21),(22)\right\}$. \end{example} \begin{remark} \label{rem_1}If in the CS statement in formula (\ref{nic1}) $\mathcal{K}=\mathcal{I}_C$, then the constraints in formula (\ref{eq.teo1}) describe the conditional independence $A\perp B|C$.\\ \end{remark} From Remark \ref{rem_1} it follows that the CS independence $A\perp B|D(C=i'_C)$, for $i'_C\in \mathcal{K}$, matches the CS independence $A\perp B|(DC=i'_{DC})$, where $i'_{DC}=i'_D\cap i'_C$, for $i'_C\in \mathcal{K}$ and $i'_D\in \mathcal{I}_D$. Henceforth, this situation will be described as $A\perp B|(DC)=(*,i'_C)$ where the asterisk denotes that we refer to all modalities. \begin{remark} \label{rim_df} Given a CS statement as in formula (\ref{nic1}), the number of constraints imposed on a saturated log-linear model is $\left[\left(\prod_{j\in(A\cup B)}I_j\right)-1\right] \times |\mathcal{K}|$. \end{remark} As mentioned before, the aim of this work is to provide a model able to represent CS independence statements while also accounting for ordinal variables. When we deal with ordinal variables, \textit{baseline} logits are no longer appropriate. The \textit{local}, \textit{continuation} or \textit{reverse} approaches are more suitable. The following subsections deal with these logits.
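Before moving to the ordinal codings, the baseline-logit constraint of Theorem \ref{T1_baseline} can be checked numerically on a toy $3\times3\times3$ table. The following is a minimal sketch under our own construction (the slice $X_3=1$ is built as a product, so $X_1\perp X_2|X_3=1$ holds, while the other slices are arbitrary; indices are 0-based and the helper name is ours):

```python
import numpy as np
from itertools import chain, combinations

def eta_baseline(pi, L, i_L):
    """Baseline-logit HMM parameter: alternating sum of log probabilities,
    reference cell = last modality of every variable, 0-based indices."""
    ref = tuple(n - 1 for n in pi.shape)
    subsets = chain.from_iterable(combinations(L, r) for r in range(len(L) + 1))
    val = 0.0
    for J in subsets:
        cell = list(ref)
        for var, lev in zip(L, i_L):
            if var not in J:
                cell[var] = lev
        val += (-1) ** (len(L) - len(J)) * np.log(pi[tuple(cell)])
    return val

rng = np.random.default_rng(1)
pi = rng.random((3, 3, 3))
pi[:, :, 0] = np.outer(rng.random(3), rng.random(3))   # X1 indep. of X2 in slice X3 = 1
pi /= pi.sum()

# Constraint of Theorem 1 for C = {X3}, K = {1}, on the interaction
# parameters involving both X1 and X2: eta_123(i1, i2, 1) - eta_12(i1, i2) = 0.
for i1 in range(2):
    for i2 in range(2):
        lhs = (eta_baseline(pi, (0, 1, 2), (i1, i2, 0))
               - eta_baseline(pi, (0, 1), (i1, i2)))
        assert abs(lhs) < 1e-10
```

The alternating sum indeed vanishes for every non-reference pair $(i_1,i_2)$, while nothing is required of the other slices.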
\subsection{Constraints on HMM parameters based on \textit{local} logit} \label{sub_loc} Suppose that the conditioning set in (\ref{nic1}) is composed only of ordinal variables and that we use parameters based on \textit{local} logits to code them; then the CS independence can be described by Theorem \ref{T2_local}. \begin{theorem} \label{T2_local} The CS independence in formula (\ref{nic1}) holds if and only if the HMM parameters based on \textit{local} logits satisfy the following constraints \begin{equation} \sum_{c \in \mathcal{P}(C) }(-1)^{|C\backslash c|} \sum_{i_c \geq i'_c }\eta_{vc}^{\mathcal{M}}(i_{vc})=0 \label{nic2} \end{equation} $\forall v\in \mathcal{V}$, where $\mathcal{V}= \left\{a\cup b:\,a\in\mathcal{P}(A)\setminus\emptyset,\ b\in \mathcal{P}(B)\setminus \emptyset \right\}$, $i_{vc}=i_v \cap i_c$, $\forall i_v\in \mathcal{I}_v$ and $\forall i'_c \in \left(\mathcal{K} \cap \mathcal{I}_c\right)$. \end{theorem} \begin{example} \label{ex_local1} Let us consider the case of three variables collected in a $2\times2\times4$ contingency table.
To represent the CS independence $X_1\perp X_2|X_3=2$, where all the variables are coded with the \textit{local} approach, we consider the decomposition in formula (\ref{nic2}): \[ \begin{array}{ll} e^{\left(\left(\eta^{123}_{123}(112)+\eta^{123}_{123}(113)\right)-\eta^{123}_{12}(11)\right)}&=\left(\frac{\pi_{223}\pi_{113}\pi_{122}\pi_{212}}{\pi_{123}\pi_{213}\pi_{222}\pi_{112}}\right)\left(\frac{\pi_{224}\pi_{114}\pi_{123}\pi_{213}}{\pi_{124}\pi_{214}\pi_{223}\pi_{113}}\right)\left(\frac{\pi_{124}\pi_{214}}{\pi_{224}\pi_{114}}\right)=\\ &\\ &=\frac{\pi_{122}\pi_{212}}{\pi_{222}\pi_{112}} \end{array} \]\\ which becomes equal to $1$ when the CS independence holds; thus the $\log$ of the previous fraction is equal to $0$. \end{example} So far we have considered the CS independence as in formula (\ref{nic1}), but when we deal with ordinal variables a more interesting specification of CS independence is \begin{equation} A\perp B| C\geq i'_C, \qquad i'_C\in \mathcal{K} \label{nic_cs_1} \end{equation} or \begin{equation} A\perp B| C\leq i'_C, \qquad i'_C \in \mathcal{K} \label{nic_cs_2} \end{equation} where in this case the class $\mathcal{K}$ is composed of only one cell $i'_C$ and the CS independence must hold for all modalities of the variables in $C$ greater (lower) than or equal to the cell in $\mathcal{K}$. Obviously, if the constraints in Theorem \ref{T2_local} are satisfied for each $i_C\geq i'_C$ ($i_C\leq i'_C$), then (\ref{nic_cs_1}) (or (\ref{nic_cs_2})) holds too. But in the case of \textit{local} parameters, there is an easier way to define the CS independence in formula (\ref{nic_cs_1}), as shown in Corollary \ref{cor1}.
\begin{corol} \label{cor1} The CS independence in formula (\ref{nic_cs_1}) holds if and only if the HMM parameters based on \textit{local} logits satisfy the following constraints: \begin{equation} \eta^{\mathcal{M}}_{vc}(i_{vc})=0 \qquad i_{vc}=i_v \cap i_c \qquad i_c \geq i'_c \qquad i'_c \in \left(\mathcal{K} \cap \mathcal{I}_c\right) \qquad i_v\in \mathcal{I}_v \label{eq_cor1} \end{equation} $\forall v\in \mathcal{V}$, where $\mathcal{V}= \left\{a\cup b:\,a\in\mathcal{P}(A)\setminus\emptyset,\ b\in \mathcal{P}(B)\setminus \emptyset \right\}$, and $\forall c\in \mathcal{P}(C)$ with $c \neq \emptyset$. \end{corol} \begin{example} \label{ex_local_2} From Example \ref{ex_local1}, let us consider the marginal set $\mathcal{M}=(X_1, X_2, X_3)$. The CS independence $X_1\perp X_2 |X_3\geq 2$ holds if \[ \begin{array}{lll} \eta^{123}_{12}(11)=0 &\quad \eta^{123}_{123}(112)=0&\quad \eta^{123}_{123}(113)=0.\\ \end{array}\] \end{example} \subsection{Constraints on parameters based on \textit{continuation} logit} \label{sub_con} As shown in Table \ref{tab_logit}, the parameters based on \textit{continuation} logits also involve sums of probabilities. This makes it impossible to write explicit constraints defining the CS independence of formula (\ref{nic1}). However, since this kind of parametrization is adopted when the variables are ordinal, it is also helpful to consider the particular cases displayed in formulas (\ref{nic_cs_1}) and (\ref{nic_cs_2}).
In this subsection we deal with these cases.\\ \begin{theorem} \label{teo3} The CS independence in formula (\ref{nic_cs_1}) holds if and only if the HMM parameters based on \textit{continuation} logits satisfy the following constraints: \begin{equation} \eta^{\mathcal{M}}_{vc}(i_{vc})=0 \qquad i_{vc}=i_v \cap i_c \qquad i_c \geq i'_c \qquad i'_c \in \left(\mathcal{K} \cap \mathcal{I}_c\right) \qquad i_v\in \mathcal{I}_v \label{eq_cor1bis} \end{equation} $\forall v\in \mathcal{V}$, where $\mathcal{V}= \left\{a\cup b:\,a\in\mathcal{P}(A)\setminus\emptyset,\ b\in \mathcal{P}(B)\setminus \emptyset \right\}$, and $\forall c\in \mathcal{P}(C)$ with $c \neq \emptyset$. \end{theorem} \begin{example} \label{ex_continuation} Let us consider the situation described in Example \ref{ex_local_2}, but with parameters based on \textit{continuation} logits. The parameters involved in Theorem \ref{teo3} are $\eta_{12}^{123}(11)$, $\eta_{123}^{123}(112)$ and $\eta_{123}^{123}(113)$. In particular, the first is \[ \eta_{12}^{123}(11)=\log\left(\frac{\pi_{114}\pi_{224}}{\pi_{124}\pi_{214}}\right). \] Note that $X_1\perp X_2|X_3\geq 2$ implies $X_1\perp X_2|X_3=4$; hence the previous parameter is equal to zero. For the second parameter, we have: \[ \eta_{123}^{123}(112)=\log\left(\frac{\left(\pi_{223}+\pi_{224}\right)\left(\pi_{113}+\pi_{114}\right)\left(\pi_{122}\right)\left(\pi_{212}\right)}{\left(\pi_{123}+\pi_{124}\right)\left(\pi_{213}+\pi_{214}\right)\left(\pi_{222}\right)\left(\pi_{112}\right)} \right). \] Since the variable $X_3$ appears only at the modalities $2$, $3$ and $4$, for which the CS independence holds, this parameter is also null. The third parameter vanishes in the same way.
\end{example} \begin{remark} When we are interested in defining a CS independence as expressed in formula (\ref{nic_cs_2}), we can proceed analogously by first sorting the modalities of the variable of interest in descending order. This corresponds to the \textit{reverse continuation} coding of the variable. \end{remark} Thus, if, for instance, we are interested in checking whether a CS independence between two variables holds when the population is young or adult, as opposed to old, we can sort the modalities of the variable \textit{Age} in the reverse order $\left\{Old, Adult, Young\right\}$ and then consider the CS independence in formula (\ref{nic_cs_1}).\\ In general, we can decide to code the variables heterogeneously, with different kinds of logits, in order to suit the nature of the variables. However, as shown in this section, the constraints required to define CS independence statements depend on the type of logits used to code the variables in the conditioning set. Here we present an example in order to show how to apply the different theorems when we deal with variables coded with different types of logits. \begin{example} \label{ex_misto} Let us consider a marginal set $\mathcal{M}$ composed of 4 variables collected in a $2\times 2\times 4\times4$ contingency table $\mathcal{I}_\mathcal{M}$. We code the variables with \textit{baseline}, \textit{baseline}, \textit{local} and \textit{continuation} logits, respectively. We are interested in checking the CS independence $X_1\perp X_2| X_3X_4\geq (2,2)$, which means that the CS independence must hold when the variables $X_3$ and $X_4$ assume, respectively, the values $X_3\geq 2$ and $X_4\geq 2$, that is, the levels $\left\{(2,2);(2,3);(3,2);(3,3)\right\}$.
In this case, noting that the variables in the conditioning set are coded with the local and the continuation logits, the results of Corollary \ref{cor1} and Theorem \ref{teo3} imply that the following parameters, involving the conditioning variables with values greater than or equal to $(2,2)$, have to be zero, as indeed they are: \[ \begin{array}{ll} \eta_{1234}(1122)=&\log\left(\frac{\left(\pi_{1122}\right)\left(\pi_{2222}\right)\left(\pi_{2132}\right)\left(\pi_{2123}+\pi_{2124}\right)\left(\pi_{1232}\right)\left(\pi_{1223}+\pi_{1224}\right)\left(\pi_{1133}+\pi_{1134}\right)\left(\pi_{2233}+\pi_{2234}\right)}{\left(\pi_{2122}\right)\left(\pi_{1222}\right)\left(\pi_{1132}\right)\left(\pi_{1123}+\pi_{1124}\right)\left(\pi_{2232}\right)\left(\pi_{2223}+\pi_{2224}\right)\left(\pi_{2133}+\pi_{2134}\right)\left(\pi_{1233}+\pi_{1234}\right)}\right)=0\\ &\\ \eta_{1234}(1132)=&\log\left(\frac{\left(\pi_{1132}\right)\left(\pi_{2232}\right)\left(\pi_{2142}\right)\left(\pi_{2133}+\pi_{2134}\right)\left(\pi_{1242}\right)\left(\pi_{1233}+\pi_{1234}\right)\left(\pi_{1143}+\pi_{1144}\right)\left(\pi_{2243}+\pi_{2244}\right)}{\left(\pi_{2132}\right)\left(\pi_{1232}\right)\left(\pi_{1142}\right)\left(\pi_{1133}+\pi_{1134}\right)\left(\pi_{2242}\right)\left(\pi_{2233}+\pi_{2234}\right)\left(\pi_{2143}+\pi_{2144}\right)\left(\pi_{1243}+\pi_{1244}\right)}\right)=0\\ &\\ \eta_{1234}(1123)=&\log\left(\frac{\left(\pi_{1123}\right)\left(\pi_{2223}\right)\left(\pi_{2133}\right)\left(\pi_{2124}\right)\left(\pi_{1233}\right)\left(\pi_{1224}\right)\left(\pi_{1134}\right)\left(\pi_{2234}\right)}{\left(\pi_{2123}\right)\left(\pi_{1223}\right)\left(\pi_{1133}\right)\left(\pi_{1124}\right)\left(\pi_{2233}\right)\left(\pi_{2224}\right)\left(\pi_{2134}\right)\left(\pi_{1234}\right)}\right)=0\\ &\\
\eta_{1234}(1133)=&\log\left(\frac{\left(\pi_{1133}\right)\left(\pi_{2233}\right)\left(\pi_{2143}\right)\left(\pi_{2134}\right)\left(\pi_{1243}\right)\left(\pi_{1234}\right)\left(\pi_{1144}\right)\left(\pi_{2244}\right)}{\left(\pi_{2133}\right)\left(\pi_{1233}\right)\left(\pi_{1143}\right)\left(\pi_{1134}\right)\left(\pi_{2243}\right)\left(\pi_{2234}\right)\left(\pi_{2144}\right)\left(\pi_{1244}\right)}\right)=0\\ &\\ \eta_{123}(113)=&\log\left(\frac{\left(\pi_{2134}\right)\left(\pi_{1234}\right)\left(\pi_{1144}\right)\left(\pi_{2244}\right)}{\left(\pi_{1134}\right)\left(\pi_{2234}\right)\left(\pi_{2144}\right)\left(\pi_{1244}\right)}\right)=0\\ &\\ \eta_{124}(112)=&\log\left(\frac{\left(\pi_{2142}\right)\left(\pi_{1242}\right)\left(\pi_{1143}+\pi_{1144}\right)\left(\pi_{2243}+\pi_{2244}\right)}{\left(\pi_{1142}\right)\left(\pi_{2242}\right)\left(\pi_{2143}+\pi_{2144}\right)\left(\pi_{1243}+\pi_{1244}\right)}\right)=0\\ &\\ \eta_{124}(113)=&\log\left(\frac{\left(\pi_{2143}\right)\left(\pi_{1243}\right)\left(\pi_{1144}\right)\left(\pi_{2244}\right)}{\left(\pi_{1143}\right)\left(\pi_{2243}\right)\left(\pi_{2144}\right)\left(\pi_{1244}\right)}\right)=0\\ &\\ \eta_{12}(11)=&\log\left(\frac{\left(\pi_{1144}\right)\left(\pi_{2244}\right)}{\left(\pi_{2144}\right)\left(\pi_{1244}\right)}\right)=0 \end{array}\] The same holds for the remaining modalities of $X_1X_2$. \end{example} \section{Stratified Chain Graphical models of type IV} \label{sec_3} In this section we deal with Chain Graphical models, so a brief review of these tools is necessary.\\ Formally, a \textit{Chain Graph} (CG) is a graph $G=\left\{V,E\right\}$, that is, a collection of vertices and edges, with both directed and undirected arcs in $E$ and without any directed or semi-directed cycle. Two vertices linked by an undirected arc are \textit{adjacent}. Given a set $A$ of vertices, the \textit{neighbour} set of $A$, $nb(A)$, is the set of vertices adjacent to at least one vertex in $A$.
The \textit{neighbourhood}, $Nb(A)$, adds $A$ itself to the neighbour set: $Nb(A)=nb(A)\cup A$. A set $A$ is called \textit{non connected} if there is no path linking all the vertices in the set. The set of vertices from which directed arcs start, all pointing to $A$, is called the \textit{parent} set, $pa_G(A)$. A CG is characterized by the so-called \textit{chain components}, denoted by $T_1,\dots,T_s$, where the vertices are partitioned according to the following conventions. Vertices linked by undirected arcs must belong to the same component and vertices linked by directed arcs must belong to different components. The set of components from which at least one directed arc points to the component $T_h$ is called the parent component, $pa_D(T_h)$. Finally, the \textit{non descendant} set of the component $T_h$, $nd(T_h)$, is composed of the components that cannot be reached from $T_h$ by a directed path.\\ The Chain Graphical Model (CGM) is a model of conditional and marginal independencies represented by a CG, where the variables are represented by vertices and the relationships between variables by arcs. This kind of model is useful when the analysed variables follow an inherent explicative order, such that some variables are explicative of other variables, which can in turn be explicative for other ones. Thus, the partition of the vertices into components comes naturally according to the variables that the vertices represent.\\ As shown by \cite{drton2009}, there are different rules to extract a list of independencies between variables from a CG. These rules are called Markov Properties and characterize 4 types of CGM. In this work we take advantage of the CGM of type IV, also known as the multivariate regression Markov properties, \cite{sadeghi2014}.
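The graph terminology just introduced can be made concrete with a short sketch. The vertex and edge sets below are an assumed reconstruction of the CG of Figure \ref{nicolussi:fig1} (a), chosen for illustration only; the two loops apply the pairwise reading of the type IV rules to recover the independencies listed in Example \ref{ex_CG4}:

```python
# Assumed encoding of the CG of Figure 1(a); the edge lists below are an
# illustrative reconstruction, not data taken from the paper.
components = [{1, 2}, {3, 4, 5}]            # chain components T1, T2
undirected = {(1, 2), (3, 5), (4, 5)}       # edges within components
directed = {(1, 3), (1, 4), (2, 4)}         # arcs from T1 into T2

def pa(v):
    """Parent set pa_G(v): tails of the directed arcs pointing to v."""
    return {a for (a, b) in directed if b == v}

def nb(v):
    """Neighbours of v: vertices joined to v by an undirected edge."""
    return {a if b == v else b for (a, b) in undirected if v in (a, b)}

pa_T2 = {1, 2}                              # parent component of T2
indeps = []
# Pairwise reading of C2): non-adjacent vertices of T2, given pa_D(T2).
for g in sorted(components[1]):
    for d in sorted(components[1]):
        if g < d and d not in nb(g):
            indeps.append((g, d, pa_T2))
# Pairwise reading of C3): g vs each non-parent in pa_D(T2), given pa_G(g).
for g in sorted(components[1]):
    for d in sorted(pa_T2 - pa(g)):
        indeps.append((g, d, pa(g)))
# indeps lists 3 _|_ 4 | {1,2}, 3 _|_ 2 | {1}, 5 _|_ 1 and 5 _|_ 2,
# i.e. the pairwise form of 3 _|_ 4|12, 3 _|_ 2|1 and 5 _|_ 12.
```

This is only a sketch of the bookkeeping; the formal statement of the type IV Markov properties is given next.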
\begin{defi} Given a CG, the Markov Properties of type IV to extract a list of conditional and marginal independencies are: \begin{equation} \begin{array}{lll} \textsf{C1)}\quad &T_h\perp nd(T_h)|pa_D(T_h),\quad &h=1,\dots,s;\\ \textsf{C2)}\quad & A\perp T_h\backslash Nb(A)|pa_D(T_h), \quad & h=1,\dots,s, \qquad A\subset T_h;\\ \textsf{C3)}\quad &A\perp pa_D(T_h)\backslash pa_G(A)|pa_G(A), \quad & h=1,\dots,s, \qquad A\subset T_h.\\ \end{array} \label{MP_IV} \end{equation} \end{defi} Note that this type of CGM identifies the independencies between variables involved in the same component as marginal independencies. \\ \begin{example} Let us consider the CG in Figure \ref{nicolussi:fig1}, where we can recognize two components: $T_1=\left(1, 2\right)$ and $T_2=\left(3,4,5 \right)$. By applying the Markov Properties in (\ref{MP_IV}), focusing on Figure \ref{nicolussi:fig1} (a), we get the following list of independencies: $3\perp 4|12$, $3\perp 2|1$ and $5\perp 12$. \label{ex_CG4} \end{example} In order to take into account the CS independencies, we propose the Stratified Chain Graphical Models (SCGMs) as an extension of the Stratified Graphical Models (SGMs) proposed by \cite{nyman2016}. Similarly to SGMs, we denote the CS independencies through labelled arcs, called \textit{strata}, $S$. Example \ref{ex_stratum} briefly shows how to interpret the \textit{stratum} in the SCGM before the technical explanation below. \begin{example} \label{ex_stratum} In Figure \ref{nicolussi:fig1} (b), we have the labelled arc between the nodes $3$ and $4$, which reports the modality $i_1'$ of the variable $1$. This \textit{stratum} stands for $3 \perp 4|12=(i_1',*)$, where the asterisk denotes that the independence holds for any modality of $2$. \end{example} \begin{figure}[!ht] \begin{center} \begin{minipage}[t]{0.45\textwidth} \begin{center} \includegraphics[width=4cm]{cgm_n.pdf}\\ (a) \end{center} \end{minipage} \begin{minipage}[t]{0.45\textwidth} \begin{center} \includegraphics[width=4cm]{scgm_n.pdf}\\ (b) \end{center} \end{minipage} \caption{\label{nicolussi:fig1} CG (on the left) and SCG with the stratum $\mathcal{K}^{12}_{34}=\left\{(i_1',*)\right\}$ (on the right), both with components $T_1=\left(1,2\right)$ and $T_2=\left(3,4,5 \right)$.} \end{center} \end{figure} Formally, a SCG is defined by three sets: the set of vertices $V$, the set of edges $E$ and the set of strata $S$, which denotes the labelled arcs. In particular, each element of $S$ is a stratum $\mathcal{K}^C_{\gamma,\delta}=\left\{i'_C:\gamma\perp \delta |C=i'_C\right\}$, which refers to a pair of vertices $(\gamma,\delta)$ and reports the list of modalities of the variables in $C$ for which the arc is missing (and the CS independence holds). Since we use the SCGM as an extension of the CGM of type IV, the set $C$ is always contained in or equal to the set of parents of the vertices $(\gamma, \delta)$.
\deleted[id=Fe]{Note that in Figure \ref{nicolussi:fig1} (b) if the arc between $3$ and $4$ was missed, we have the independence $3\perp 4|12$, however on the labelled arc are reported only the modalities of the variable $1$, this means that any modality of $2$ produces an independence, thus is unnecessary to write it.} \begin{defi} Given a SCG, the \added[id=Fe]{stratified} Markov properties, which extract a list of conditional, marginal and CS independencies, are \begin{equation} \begin{array}{llll} \textsf{C1)}\quad &T_h\perp nd(T_h)|pa_D(T_h),\quad &h=1,\dots,s;&\\ \textsf{CS2)}\quad &\gamma\perp \delta |pa_D(T_h)=i'_{pa_D(T_h)},\quad &i'_{pa_D(T_h)}\in\mathcal{K}_{\gamma,\delta}^{pa_D(T_h)} \qquad &\text{if}\quad \gamma,\delta \in T_h;\\ \textsf{CS3)}\quad &\gamma\perp \delta |pa_G(\gamma)=i'_{pa_G(\gamma)}, \quad &i'_{pa_G(\gamma)}\in\mathcal{K}_{\gamma,\delta}^{pa_G(\gamma)} \qquad &\text{if}\quad \gamma\in T_h,\ \delta\in pa_D(T_h)\backslash pa_G(\gamma), \\ \end{array} \label{SMP_IV} \end{equation} where rule \textbf{C1)} is equal to rule \textbf{C1)} in formula (\ref{MP_IV}), while \textbf{CS2)} and \textbf{CS3)} generalize the remaining rules in formula (\ref{MP_IV}). \end{defi} \deleted[id=Fe]{In the conditional set of both \textbf{CS2)} and \textbf{CS3)}, we have that $S_{\gamma,\delta}\subseteq pa_D(T_h)$ and $S_{\gamma,\delta}\subseteq pa_G(\gamma)$. } \added[id=Fe]{ When, in the conditioning set of both \textbf{CS2)} and \textbf{CS3)}, the \textit{stratum} $\mathcal{K}_{\gamma,\delta}^{C}$ coincides with $\mathcal{I}_C$, we are dealing with a conditional independence and the \textit{stratum} is unnecessary.
In this case we recover the ``pairwise'' Markov properties} for a CGM of type IV \added[id=Fe]{as listed in formula (2) of} \cite{marchetti2011} \added[id=Fe]{, which are equivalent to the ones in formula (\ref{MP_IV}).} \deleted[id=Fe]{On the other hand, when any variable $c$ in $C$ assumes a subset of modalities of $\mathcal{I}_c$, the asterisk in the CS independencies come from the \textbf{CS2)} and \textbf{CS3)}, drops out.}\\ Graphically, a \textit{stratum} can be an undirected labelled arc \added[id=Fe]{(the case of \textbf{CS2)})} or a directed labelled arc \added[id=Fe]{(the case of \textbf{CS3)})}. \deleted[id=Fe]{, but the variables in the \textit{stratum} belong only to the parent set.} However, not every possible \textit{stratum} is admissible in the SCG model. Let us consider, for instance, the graph in Figure \ref{no_admit}: we have the conditional independence $3\perp 2| 1$ but, at the same time, the CS independence $3\perp 1|2=i_2'$. The conditional independence declares that the variable $2$ does not affect variable $3$ for any modality of $1$; the CS independence, however, affirms that the variable $2$ discriminates the relationship between $1$ and $3$, and thus has some effect on the variable $3$. \cite{nyman2016} dealt with this situation and, in their Theorem 2, they give the condition for the existence of a stratum, which is summarized in the following remark. \begin{figure}[h] \centering \includegraphics[width=4cm]{ndsg_n.pdf} \caption{SCG with components $T_1=\left(1,2\right)$ and $T_2=\left(3\right)$ with a non representable \textit{stratum}.} \label{no_admit} \end{figure} \begin{remark}\label{rem_stratum} Given a SCG with at least one stratum $\mathcal{K}_{\gamma,\delta}^{C}\not\equiv \mathcal{I}_{C}$, the variables in $C$ must be adjacent to, or parents of, both $\gamma$ and $\delta$.
\end{remark} \begin{example} \label{ex_stratum2} \added[id=Fe]{(\textit{continuation of Example \ref{ex_stratum}})} In Figure \ref{nicolussi:fig1} (b), we have the stratum $\mathcal{K}^{12}_{34}=\left\{(i_1',*)\right\}$. Thus, by applying \textbf{CS2)}, we obtain the statement $3\perp 4|12=(i_1',*)$. Note that the variable $1$ involved in the stratum belongs to both $pa_G(3)$ and $pa_G(4)$, so Remark \ref{rem_stratum} holds. \end{example} \section{Regression model with context specific independencies} \label{sec_4} In Section \ref{sec_2} the main results about the HMM parametrization are discussed, while in Section \ref{sec_3} a graphical model for different kinds of independencies is presented. In this section we connect these two models and we show how to parametrize a SCGM through a HMMM.\\ As mentioned in Section \ref{sec_3}, the approach of CGMs seems natural to explain the effect of some variables (covariates) on a set of dependent variables that can in turn be covariates for other dependent variables. Thus, it is appropriate to collect the variables in the components according to this purpose and, focusing on each component $T_h$, we consider as covariates of $T_h$ the variables in $pa_D(T_h)$. The CGM of type IV allows us to simplify the regression statements by using a marginal approach for the variables in the same component, as shown by \cite{marchetti2011}. Here we want to improve this \replaced[id=Fe]{multivariate regression model based on SCGMs}{Chain Regression model} by considering ordinal variables coded by \textit{local} logits and then by simplifying the regression equations thanks to the CS independencies.
As shown in \cite{marchetti2008}, \cite{rudas2010} and \cite{Nicolussi2013}, the CGM of type IV can be parametrized by using the HMMMs with the appropriate hierarchical marginal sets $\mathcal{H}=\left\{\mathcal{H}_1,\mathcal{H}_2\right\}$, where \begin{equation} \begin{array}{ll} \mathcal{H}_1=&\left\{(pa_D(T_h)\cup A),\,h=1,\dots,s;\,A\subseteq T_h\right\}\\ \mathcal{H}_2=&\left\{(nd(T_h)\cup T_h),\,h=1,\dots,s \right\}. \end{array} \label{marginal} \end{equation} These two classes must be put together in $\mathcal{H}$ so that if $j< i$ then $\mathcal{M}_i \not\subseteq \mathcal{M}_j$. Then, focusing on each group of dependent variables, we define the HMM parameters (\ref{eq_parametri_short}) evaluated in each conditional distribution given the covariates. That is, for each set of dependent variables $A\subseteq T_h$, we define the parameters $\eta^{A\cup pa_D(T_h)}_{A}(i_A|i_{pa_D(T_h)})$ evaluated at any value $i_{pa_D(T_h)}\in\mathcal{I}_{pa_D(T_h)}$ of the covariates $pa_D(T_h)$. All these parameters can be expressed as combinations of regression parameters as follows. \begin{defi} \label{def_3} Given a SCGM, the \textit{regression parameters} are given by \begin{equation} \eta^{A\cup pa_D(T_h)}_{A}(i_A|i_{pa_D(T_h)})= \sum_{t\subseteq pa_D(T_h)} \beta_{t}^{A}(i_t) \qquad \forall h=1,\dots,s\qquad A\subseteq T_{h}. \label{eq:regression_param} \end{equation} \end{defi} \begin{theorem} \label{teo:param} The parameters $\beta_{t}^{A}(i_t)$ in the regression model (\ref{eq:regression_param}) are given in terms of the HMM parameters based on \textit{baseline} or \textit{local} logits by \begin{equation} \label{formula} \beta^A_{t}(i_t)=(-1)^{|pa_D(T_h)\backslash t|}\eta^{(A\cup pa_D(T_h))}_{tA}(i_{tA}|i^{*}_{pa_D(T_h)\backslash t}) \end{equation} $\forall t\subseteq pa_D(T_h)\neq \emptyset$ \added[id=Fe]{and where $i_{tA}=i_t\cap i_A$}. \end{theorem} \begin{example} Let us consider the CGM in Figure \ref{nicolussi:fig1} (a) where there are two components.
The first is composed of the purely dependent variables $3$, $4$ and $5$, while the second contains the covariates $1$ and $2$. Thus, according to formula (\ref{marginal}), the marginal sets are $\left\{(123),(124),\right.$ $\left.(125),(1234),(1235),(1245),(12345)\right\}$. By focusing on the dependent variable $4$, we can express the regression model as follows \[ \eta^{124}_{4}(i_4|i_{12})=\beta_{\emptyset}^{4}+\beta_{1}^{4}(i_1)+\beta_{2}^{4}(i_2)+\beta_{12}^{4}(i_{12}) \] $\forall i_{12}\in \mathcal{I}_{12} $ and $i_4\in\left\{1,\dots,I_4-1\right\}$, because when $i_4= I_4$ the parameter is zero by definition. By applying Corollary \ref{cor1_APP} in Appendix \ref{appendix_1}, we see that the $\beta$ parameters are \[ \begin{array}{lll} \beta^4_{\emptyset}&=&\eta^{(124)}_{4}(i_{4}|i^{*}_{12})\\ \beta^4_{1}(i_1)&=&-\eta^{(124)}_{14}(i_{14}|i^{*}_{2})\\ \beta^4_{2}(i_2)&=&-\eta^{(124)}_{24}(i_{24}|i^{*}_{1})\\ \beta^4_{12}(i_{12})&=&\eta^{(124)}_{124}(i_{124}). \end{array} \] \end{example} The parameters in formula (\ref{eq:regression_param}) are able to explain the relationships between the variables in $T_h\cup pa_D(T_h)$ for each $h=1,\dots,s$. The remaining relationships, between variables belonging to disjoint components, can be described by the HMM parameters \begin{equation} \eta^{T_h\cup nd(T_h)}_{AB}(i_{AB}) \label{mix} \end{equation} \added[id=Fe]{where $h=1,\dots,s$, $A\subseteq T_h$ and $B\subseteq nd(T_h)$ such that $B\cap (nd(T_h)\backslash pa_D(T_h))\neq \emptyset$}. \begin{theorem}The regression parameters in formula (\ref{eq:regression_param}) and the HMM parameters in formula (\ref{mix}) are a 1:1 function (a reparametrization) of the HMM parameters $\eta_{\mathcal{L}}^{\mathcal{M}}$, $\forall \mathcal{L}\in\mathcal{P}(\mathcal{Q})$ and $\forall \mathcal{M}\in\mathcal{H}$. \label{parametrizz} \end{theorem} Now let us consider the SCGM as presented in Section \ref{sec_3}.
The previous considerations about the parametrization still hold, and the following theorem explains how to constrain the HMM parameters according to the SCGM. \begin{theorem} \label{regression_constraints} A SCGM that obeys the \added[id=Fe]{stratified} Markov properties of type IV in (\ref{SMP_IV}) can be parametrized as follows: \begin{itemize} \item[i)]Each (C1) holds iff the HMM parameters in formula (\ref{mix}) are equal to zero;\\ \item[ii)] Each (CS2) holds iff the regression parameters in formula (\ref{eq:regression_param}) satisfy $\eta^{\mathcal{L}\cup pa_D(T_h)}_{\mathcal{L}}(i_{\mathcal{L}}|i'_{pa_D(T_h)})=0$, $\forall \mathcal{L}$ non-connected set such that $\{\gamma, \delta\}\subseteq \mathcal{L}$, when $i'_{pa_D(T_h)} \in \mathcal{K}_{\gamma,\delta}^{pa_D(T_h)}$;\\ \item[iii)]Each (CS3) holds iff the regression parameters in formula (\ref{eq:regression_param}) satisfy $\sum_{t\subseteq C }\beta^{\gamma}_{t}(i'_{t})=0$, where $i'_t\in \mathcal{K}^{pa_G(\gamma)}_{\gamma,\delta}\cap \mathcal{I}_t$. \\ \end{itemize} \end{theorem} \begin{example} Let us consider the CGM in Figure \ref{nicolussi:fig1} (a) where there are two components. The first is composed of the purely dependent variables $3$, $4$ and $5$, while in the second there are the pure covariates $1$ and $2$. Then the following parameters fully describe the relationships among the 5 variables.
\small \[ \begin{array}{lll} \eta^{123}_{3}(i_3|i_{12})=&\beta_{\emptyset}^{3}+\beta_{1}^{3}(i_1),\; \quad &\forall i_3\in \mathcal{I}_3-1,\, \forall i_{12}\in \mathcal{I}_{12}\\ &\\ \eta^{124}_{4}(i_4|i_{12})=&\beta_{\emptyset}^{4}+\beta_{1}^{4}(i_1)+\beta_{2}^{4}(i_2)+\beta_{12}^{4}(i_{12}), &\forall i_4\in \mathcal{I}_4-1,\, \forall i_{12}\in \mathcal{I}_{12} \\ &\\ \eta^{125}_{5}(i_5|i_{12})=&\beta_{\emptyset}^{5},\; &\forall i_5\in \mathcal{I}_5-1,\, \forall i_{12}\in \mathcal{I}_{12} \\ &\\ \eta^{1235}_{35}(i_{35}|i_{12})=&\beta_{\emptyset}^{35}+\beta_{1}^{35}(i_1)+\beta_{2}^{35}(i_2)+\beta_{12}^{35}(i_{12}), &\forall i_{35}\in \mathcal{I}_{35}-1,\, \forall i_{12}\in \mathcal{I}_{12} \\ &\\ \eta^{1245}_{45}(i_{45}|i_{12})=&\beta_{\emptyset}^{45}+\beta_{1}^{45}(i_1)+\beta_{2}^{45}(i_2)+\beta_{12}^{45}(i_{12}), &\forall i_{45}\in \mathcal{I}_{45}-1,\, \forall i_{12}\in \mathcal{I}_{12} \\ \end{array} \] \end{example} \section{Application} \label{sec_5} In this section we study the relationships among a set of variables by using the regression model with CS independencies as presented in Section \ref{sec_4}. First, we collect the variables into components according to their nature and to the regression models that we want to study. \\ Several graphical models are tested and, for each of them, the likelihood ratio test $G^2$ is carried out. The $G^2$ compares the model under investigation with the saturated (unconstrained) one; under the null hypothesis the $G^2$ follows the $\chi^2$ distribution with $df$ equal to the difference between the numbers of free parameters in the two models. We reject all models with a \textit{p-value} lower than $0.05$. Among the non-rejected models, we choose the one with greatest Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC).\\ Since testing all possible models, particularly when handling CS independencies, is computationally expensive, we implement a three-step procedure to achieve the best SCGM of type IV.
First, we carry out an exploratory phase where we test all CGMs with only one missing arc, in order to have an overview of the weakest relationships. Then, we consider as \textit{reduced} model the CGM without the arcs that led to a \textit{p-value} greater than $0.05$ in the previous step. Starting from the \textit{reduced} model, we add back the removed arcs one by one, and we choose the CGM with greatest AIC and BIC. \\ A further simplification of the CGM is obtained by evaluating the model with the highest-order parameters constrained to zero.\\ Finally, once the best CGM is obtained, we move on to a further simplification by testing CS independencies in place of the conditional ones that led to rejection of the model. \subsection{Innovation Study Survey 2010-2012} In this section we apply the proposed model to a real dataset. Our aim is to build a chain regression model that studies the effect of innovation in some aspects of the enterprise's life on revenue growth, without omitting the main features of the enterprise. Thus, we collect the following variables from the survey on the innovation status of small and medium Italian enterprises during the period 2010--2012 \cite{ISTAT}. First, as pure response we consider the \textit{revenue growth} variable in 2012, \textbf{GROW} (Yes, No), henceforth denoted as variable \textbf{1}. Then, as mixed variables, we take into account the innovation through three dichotomous variables referring to the period 2009--2012: \textit{innovation in products or services or production line or investment in R\&D}, \textbf{IPR} (Yes, No), \textit{innovation in organization system}, \textbf{IOR} (Yes, No) and \textit{innovation in marketing strategies}, \textbf{IMAR} (Yes, No), henceforth denoted as variables \textbf{2, 3} and \textbf{4}, respectively.
Finally, the role of pure covariates is entrusted to variables concerning the firm's features in 2009--2012: the \textit{main market (in revenue terms)}, \textbf{MRKT} (A= Regional, B= National, C= International), the \textit{percentage of graduate employers}, \textbf{DEG} (1= $0\%\vdash10\%$, 2= $10\%\vdash50\%$, 3=$50\%\vdash100\%$) and the \textit{enterprise size}, \textbf{DIM} (1= Small, 2= Medium), henceforth denoted as variables \textbf{5, 6} and \textbf{7}, respectively. The survey covers $18697$ firms, collected in a $2\times2\times2\times2\times3\times3\times2$ contingency table. \\ In order to analyse this dataset, we build a chain graph with three components according to the nature of the variables: in the first component we collect the firm's feature variables (5,6,7), in the second component the innovation variables (2,3,4) and in the third component the revenue growth variable (1). \\ In the exploratory phase, we tested the independencies associated with all CGMs of type IV with only one missing arc on the associated HMMM. Thus, according to formula (\ref{marginal}), we considered the following marginal sets $\left\{(5,6,7);\,(2,5,6,7);\,(3,5,6,7);\,(4,5,6,7);\,(2,3,5,6,7);\,(2,4,5,6,7);\,\right.$ $\left.(3,4,5,6,7);\, (2,3,4,5,6,7);\,(1,2,3,4,5,6,7)\right\}$. The parameters associated with the dichotomous variables were based on the \textit{baseline} logits, while the variables with three modalities were coded with the \textit{local} logits. We found three eligible conditional independencies: \begin{description} \item[\textbf{(a)}] $1\perp 2|34567$, \item[\textbf{(b)}]$1\perp 4|23567$, \item[\textbf{(c)}] $1\perp 6|23457$. \end{description} By testing the combinations of these independencies, whose results are reported in Table \ref{tab_CGM}, we choose the HMMM characterized by \textbf{(b)} and \textbf{(c)}, reported in the third row of Table \ref{tab_CGM}, since it is the only model with a \textit{p-value} greater than $0.05$.
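The two logit codings used above can be illustrated on a toy probability vector. Here a common convention is assumed — baseline logits contrast each category with the last one, local logits compare adjacent categories; the exact reference category used in the analysis may differ — so this is only an illustrative sketch:

```python
import math

def baseline_logits(p):
    # baseline logits: each category contrasted with a reference (here the last)
    return [math.log(p[i] / p[-1]) for i in range(len(p) - 1)]

def local_logits(p):
    # local (adjacent-category) logits, natural for ordinal variables
    return [math.log(p[i + 1] / p[i]) for i in range(len(p) - 1)]

p = [0.2, 0.3, 0.5]  # toy distribution for a three-modality variable such as DEG
print([round(x, 3) for x in baseline_logits(p)])  # [-0.916, -0.511]
print([round(x, 3) for x in local_logits(p)])     # [0.405, 0.511]
```

Both codings produce $I-1$ parameters per variable; for dichotomous variables the two coincide up to sign conventions, which is why the distinction only matters for the three-modality covariates.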
\\ \begin{table} \centering \begin{tabular}{lrrrrr} \hline Independencies & $G^2$ & $df$ & \textit{p-value} & AIC & BIC \\ \hline \textbf{(a)}, \textbf{(b)}& 139.74 & 108 & 0.02 & -220.26 & 1190.24 \\ \textbf{(a)}, \textbf{(c)}& 168.57 &120 &0.00 &-167.42 &1149.04\\ \textbf{(b)}, \textbf{(c)} & 141.34 & 120 & \textbf{0.09} & -194.66 & 1121.81 \\ \textbf{(a)}, \textbf{(b)} ,\textbf{(c)}& 180.97 & 132 & 0.00 & -131.03 & 1091.40 \\ \hline \end{tabular} \caption{HMMMs combining the three independencies \textbf{(a)} $1\perp 2|34567$, \textbf{(b)} $1\perp 4|23567$ and \textbf{(c)} $1\perp 6|23457$.} \label{tab_CGM} \end{table} However, from the exploratory phase there are some clues that an independence between variables $1$ and $2$ could hold. Thus, besides \textbf{(b)} and \textbf{(c)}, we also took into account the independence \textbf{(a)}, and we tested all possible CS independencies originating from it. The preferred model is described by the conditional independencies \textbf{(b)} and \textbf{(c)} and by the CS independence $1\perp2|34567=(1,*,3,*,1)$, that is, when there is no innovation in \textbf{IOR}, the innovation \textbf{IMAR} assumes any modality, the firm works in an international market, the percentage of graduate employers is arbitrary and the firm is small. For this model we have \texttt{df=121}, \texttt{Gsq=141.83}, \texttt{p-val=0.09}, \texttt{AIC=-192.17}, \texttt{BIC=1116.46}.\\ The SCGM associated to this model is displayed in Figure \ref{fig.SGCM_IV}. Note that, in the stratum, the conditioning variables $4$ and $6$ are not set to a specific modality because they do not satisfy the condition for a CS independence summarized in Remark \ref{rem_stratum}.
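The comparisons above rest on the $\chi^2$ approximation of the likelihood ratio statistic $G^2$. A minimal sketch of the p-value computation (using the Wilson--Hilferty normal approximation to the $\chi^2$ upper tail, so the figures are approximate; it reproduces the order of magnitude of the reported values but is not the software used for the analysis):

```python
import math

def lr_pvalue(gsq, df):
    # approximate upper-tail chi^2 probability via the Wilson-Hilferty transform
    z = ((gsq / df) ** (1 / 3) - (1 - 2 / (9 * df))) / math.sqrt(2 / (9 * df))
    return 0.5 * math.erfc(z / math.sqrt(2))

# retained model (independencies (b) and (c)): G^2 = 141.34 on 120 df
print(round(lr_pvalue(141.34, 120), 2))  # close to the reported p-value of 0.09
```

A model is rejected when this p-value falls below $0.05$; for instance, the combination \textbf{(a)}, \textbf{(b)}, \textbf{(c)} with $G^2=180.97$ on $132$ df gives a p-value well below the threshold.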
\\ \begin{figure} \centering \includegraphics[width=0.4\linewidth]{SGCM_IV2012_2} \caption{SCGM of type IV with components $T_1=(5,6,7)$, $T_2=(2,3,4)$ and $T_3=(1)$.} \label{fig.SGCM_IV} \end{figure} By looking at the SCGM in Figure \ref{fig.SGCM_IV} as a regression chain model, we can distinguish two regression structures: the first with the dependent variable \textbf{GROW} and all the others as covariates, and the second with the innovation variables as dependent and the feature variables as covariates. The regression parameters of the two regression models are reported in Tables \ref{tab_1_par} and \ref{tab_2_par}, respectively. In particular, in Table \ref{tab_1_par} we have the regression parameters of the only dependent variable $1$; thus, the parameters are logits concerning the variable $1$ (probability of no revenue growth against probability of revenue growth) evaluated in all the possible conditioning distributions. Generally speaking, when the parameters in Table \ref{tab_1_par} are less than zero, it means that the fitted probability of having a revenue growth is greater than the probability of not having it. In Table \ref{tab_1_par}, the conditioning distributions where the difference between the two probabilities achieves high values (greater than 10 in absolute value) are highlighted in bold. The cells where this disparity takes large negative values are those where the conditioning distribution of the variables $2,3,4,5,6,7$ takes the values $(1,2,1,2,2,1)$ or $(1,1,1,2,2,1)$; on the other hand, the large positive disparities are in the cells $(1,1,1,2,1,1)$ and $(1,1,1,1,1,1)$.
\begin{table}[ht] \centering \begin{tabular}{rr|rr|rr|rr} $i^*_{234567}$ & $\eta^{\mathcal{M}}_{1}$ & $i^*_{234567}$ & $\eta^{\mathcal{M}}_{1}$ & $i^*_{234567}$ & $\eta^{\mathcal{M}}_{1}$ & $i^*_{234567}$ & $\eta^{\mathcal{M}}_{1}$ \\ \hline 222332 & -0,4796 & 221222 & -2,4468 & 222331 & -0,3584 & 221221 & -10,841 \\ 122332 & -0,4483 & 121222 & -11,5921 & 122331 & -0,3141 & \textbf{121221} & \textbf{-32,7425} \\ 212332 & -0,1707 & 211222 & -2,071 & 212331 & 0,8602 & 211221 & -4,9049 \\ 112332 & 0,6803 & 111222 & -10,9506 & 112331 & 3,0365 & \textbf{111221} & \textbf{-27,4244} \\ 221332 & -0,7988 & 222122 & 0,4845 & 221331 & -1,8311 & 222121 & -0,1683 \\ 121332 & -2,102 & 122122 & 1,8001 & 121331 & -3,9632 & 122121 & -2,3828 \\ 211332 & -0,5672 & 212122 & 0,3546 & 211331 & 0,1332 & 212121 & 0,7241 \\ 111332 & -1,8448 & 112122 & 3,435 & 111331 & 0,6237 & 112121 & 0,2428 \\ 222232 & -0,7417 & 221122 & -1,4924 & 222231 & -0,7225 & 221121 & -9,1679 \\ 122232 & -1,0322 & 121122 & -2,4818 & 122231 & -1,3533 & \textbf{121121} & \textbf{-16,5698} \\ 212232 & 0,8408 & 211122 & -4,8401 & 212231 & 3,7374 & \textbf{211121} & \textbf{ -11,2031} \\ 112232 & 3,7735 & 111122 & -5,8998 & 112231 & 9,8352 & \textbf{111121} & \textbf{-14,355} \\ 221232 & -1,6234 & 222312 & -0,4355 & 221231 & -3,7989 & 222311 & -0,3504 \\ 121232 & -5,3967 & 122312 & -0,6114 & \textbf{121231} & \textbf{-9,8567} & 122311 & -0,2496 \\ 211232 & 0,6047 & 212312 & -0,2184 & 211231 & 4,2275 & 212311 & 1,2512 \\ 111232 & -0,7702 & 112312 & 0,5931 & 111231 & 7,1064 & 112311 & 5,1493 \\ 222132 & 0,0546 & 221312 & -1,3541 & 222131 & 0,7491 & 221311 & -3,6898 \\ 122132 & 1,3823 & 121312 & -4,2052 & 122131 & 2,9369 & 121311 & -7,1286 \\ 212132 & 1,0984 & 211312 & -1,3974 & 212131 & 3,4766 & 211311 & -0,3392 \\ 112132 & 4,7429 & 111312 & -4,6389 & \textbf{112131} & \textbf{11,8076} & 111311 & 2,7044 \\ 221132 & -0,8322 & 222212 & -0,5193 & 221131 & -2,8954 & 222211 & -0,2307 \\ 121132 & -0,1691 & 122212 & -0,156 & 121131
& -1,3416 & 122211 & 1,8533 \\ 211132 & -0,5679 & 212212 & 1,1386 & 211131 & -0,1813 & 212211 & 6,581 \\ 111132 & 2,3733 & 112212 & 5,9737 & \textbf{ 111131} & \textbf{12,1091} & \textbf{ 112211} & \textbf{22,5741} \\ 222322 & -0,2048 & 221212 & -2,3837 & 222321 & -0,5180 & 221211 & -6,0503 \\ 122322 & -0,7378 & 121212 & -6,7821 & 122321 & -3,719 & 121211 & -9,1221 \\ 212322 & -0,4051 & 211212 & -0,4052 & 212321 & -0,0053 & 211211 & 7,674 \\ 112322 & -0,5112 & 111212 & -0,7885 & 112321 & -4,0535 & \textbf{111211} & \textbf{29,7031} \\ 221322 & -0,9409 & 222112 & 0,5425 & 221321 & -4,3148 & 222111 & 1,0062 \\ 121322 & -4,5919 & 122112 & 2,4386 & \textbf{ 121321} & \textbf{-13,8434} & 122111 & 4,9567 \\ 211322 & -2,1517 & 212112 & 1,6583 & 211321 & -4,0426 & 212111 & 4,3403 \\ 111322 & -7,5744 & 112112 & 7,265 & \textbf{111321} & \textbf{-16,6193} & \textbf{ 112111} & \textbf{20,1193} \\ 222222 & -0,4359 & 221112 & -1,6496 & 222221 & -2,1184 & 221111 & -6,8449 \\ 122222 & -2,3083 & 121112 & -1,1323 & \textbf{ 122221} & \textbf{-10,308} & 121111 & -2,5094 \\ 212222 & 0,6317 & 211112 & -2,2924 & 212221 & 1,4343 & 211111 & -3,1236 \\ 112222 & 2,0696 & 111112 & 2,6188 & 112221 & -5,3214 & \textbf{ 111111} & \textbf{ 26,1675} \\ \end{tabular} \caption{Regression parameters concerning the dependent variable 1 with covariates $2,3,4,5,6,7$. The $i^*_{234567}$ in the table refers to the conditioning distribution at which the parameter concerning the dependent variable $1$ is evaluated, as in formula (\ref{eq:regression_param}), i.e. $\eta^{\mathcal{V}}_{1}(i_1=2|i^*_{234567})$ where $\mathcal{V}=\{1,2,3,4,5,6,7\}$. } \label{tab_1_par} \end{table} In Table \ref{tab_2_par} we report the regression parameters concerning the combinations of the three dependent variables $2$, $3$ and $4$. In particular, from the 2nd to the 4th column there are the parameters associated with the single variables; thus these parameters are logits.
In columns 5 to 7 there are the contrasts of logits, and in the last column there are the third-order parameters associated with the variables $2,3,4$. In the first group of columns there is a prevalence of positive parameters, which highlights a trend where the probability of making any innovation is lower than the probability of not making it, whatever the conditioning distribution is. In columns 5 to 7 there are the pairwise comparisons between the different kinds of innovation. Where the parameters are negative, as in the columns of $\eta_{23}$ and $\eta_{34}$, the probability of concordance between the two innovations considered (i.e. innovation in both aspects or no innovation in both aspects) is lower than the probability of discordance. The opposite case occurs in the column of $\eta_{24}$. \begin{table}[ht] \centering \begin{tabular}{r|rrrrrrr} $i^{*}_{567}$ & $\eta_{2}^{2567}$ & $\eta_{3}^{3567}$ & $\eta_{4}^{4567}$ & $\eta_{23}^{23567}$ & $\eta_{24}^{24567}$ & $\eta_{34}^{34567}$ & $\eta_{234}^{234567}$ \\ \hline 332 & 0,6831 & 0,5360 & -0,1749 & -1,7706 & -1,9050 & -1,5165 & 0,9079 \\ 232 & 0,6275 & 0,5631 & -0,0295 & -1,8384 & -1,3781 & -1,7466 & 0,6467 \\ 132 & 0,2105 & 0,5281 & -0,0161 & -1,8344 & -1,0737 & -1,7653 & 0,4587 \\ 322 & 1,2485 & 0,6050 & -0,0662 & -1,9450 & -1,2532 & -1,5024 & -1,1644 \\ 222 & 1,6045 & 0,6567 & 0,0426 & \textbf{-2,0638} & 0,3596 & -2,0162 & -2,8816 \\ 122 & 1,3508 & 0,7910 & 0,3162 & -1,9630 & 0,6518 & -1,5455 & -3,1860 \\ 312 & 0,7214 & -0,0583 & -0,4054 & -1,8619 & -1,8553 & -1,5649 & -0,5257 \\ 212 & 1,3267 & 0,3062 & 0,1200 & -1,9385 & -0,8871 & -1,5361 & -1,4836 \\ 112 & 1,0679 & 0,3998 & 0,4696 & -1,8852 & -0,2482 & -1,5967 & -1,4493 \\ 331 & 0,3001 & 0,1707 & -0,0851 & -1,3967 & -0,9175 & -1,7610 & -1,6513 \\ 231 & 0,2883 & 0,3485 & 0,5260 & -1,7401 & 0,8988 & -2,2774 & -3,8131 \\ 131 & 0,3271 & 0,7048 & 0,7575 & -0,8155 & 1,2579 & -2,0448 & -4,1997 \\ 321 & 1,5258 & 0,7076 & 0,2295 & -1,5524 & 0,7126 & -2,2999 & -7,6228 \\
221 & 2,1658 & 1,0819 & 1,0864 & \textbf{-2,2763} & \textbf{4,9992} & \textbf{-3,9665} & \textbf{-15,2039} \\ 121 & \textbf{2,8376 }& \textbf{1,8068 }& 1,4223 & -0,7067 & \textbf{5,1590} & \textbf{-2,7446} & \textbf{-15,6388} \\ 311 & 1,0988 & 0,0431 & 0,0358 & -1,4920 & 0,2885 & -2,0071 & -5,4161 \\ 211 & 2,6186 & 1,1211 & 1,1570 & -2,0384 & 3,9245 & -2,2946 & -10,4366 \\ 111 & \textbf{2,8229} & \textbf{1,7590} & \textbf{2,2578} & -0,6703 & 4,7713 & -2,1910 & -10,4318 \\ \end{tabular} \caption{Regression parameters concerning the dependent variables $2,3,4$ with covariates $5,6,7$. The $i^*_{567}$ in the table refers to the conditioning distribution at which the parameters concerning the dependent variables $2,3,4$ are evaluated, as in formula (\ref{eq:regression_param}), i.e. $\eta^{\mathcal{M}}_{A}(i_{A}|i^*_{567})$ for $A\subseteq \{2,3,4\}$. } \label{tab_2_par} \end{table} In conclusion, the output of this application shows only some of the insights that can be derived from models of this kind. For instance, once the model is fitted, it can be used to forecast the values of the dependent variables given the covariates; moreover, by looking at the regression parameters it is possible to define a strategy of where to invest. The possibilities are several, depending on the aim of the analysis; the HMMM parameters (not listed here) can also be used to study the relationships among the variables. \section{Conclusions} \label{sec_6} In this work we provide several results in the environment of context-specific independencies. First, we focus on the problem of handling ordinal variables, for which it is more useful to adopt parameters based on the \textit{local} or \textit{continuation} logits compared to the classical ones based on the \textit{baseline} logits. In this case, we not only confirm the results on \textit{baseline} logits provided in \cite{nyman2016}, albeit within marginal models, but we also provide the results for the \textit{local} and \textit{continuation} parameters.
\\ Further, we focus on the problem of the graphical representation. We take advantage of the well-known relationships between the HMMMs and the chain graphical models, in particular of type IV, and we extend the so-called stratified graphs to this case.\\ Finally, we show the advantage of using CS independencies in the chain regression models of \cite{marchetti2011}, where the CS independencies can simplify the models.\\ The application shows a small part of the potential of this work.
\section{Introduction} In recent years there has been a great interest in the construction of discrete dynamical systems with given properties (see for example \cite{bib:eich93,bib:EHHW98,bib:HBM17,bib:ost10,bib:OPS10,bib:OS10degree,bib:OS10length}) both for applications (see for example \cite{bib:BW05,bib:chou95,bib:eich91,bib:eich92,bib:GPOS14, bib:NS02, bib:NS03, bib:TW06,bib:winterhof10}) and for the purely mathematical interest that these objects have (see for example \cite{bib:eich91,bib:EMG09,bib:ferraguti2016existence,bib:FMS16,bib:FMS17,bib:GSW03}). This paper deals with the problem of finding discrete dynamical systems which can be new candidates for pseudorandom number generation. Let us denote the set of natural numbers by $\mathbb{N}$. Given a finite set $S$, a sequence $\{a_m\}_{m\in \mathbb{N}}$ of elements in $S$ is said to have \emph{full orbit} if for any $s\in S$ there exists $m\in \mathbb{N}$ such that $a_m=s$. Let $q$ be a prime power, $\mathbb{F}_q$ be the finite field of cardinality $q$, and $n$ be a positive integer. In this paper we produce maps $\psi:\mathbb{F}_q^n \rightarrow \mathbb{F}_q^n$ such that \begin{itemize} \item the sequences $\{\psi^m(0)\}_{m\in \mathbb{N}}$ have full orbit (whenever this property is verified, we say that the map $\psi$ is \emph{transitive}), \item the sequences constructed from $\psi$ have nice discrepancy bounds, analogous to those constructed from an Inversive Congruential Generator (ICG), \item they are very inexpensive to iterate: if $n>1$ they are asymptotically less expensive than an ICG for the same bitrate. \end{itemize} In addition, such maps can be described using quotients of degree one polynomials. From a purely theoretical point of view related to the full orbit property, one of the reasons why such constructions are interesting is that one cannot build transitive affine maps (i.e. 
of the form $x\mapsto Ax+b$, with $A$ an invertible $n\times n$ matrix and $b$ an $n$-dimensional vector) unless either $n=1$ and $q$ is prime, or $n=2$ and $q=2$ (see Theorem \ref{affine_transitivity_theorem}). For $n=1$ our construction covers the well-studied case of the ICG, for which we obtain easy proofs of classical facts (see for example Remark \ref{remarkICGfullorbit}). In fact, we fit the theory of full orbit sequences in a much wider context, where tools from projective geometry can be used to establish properties of the sequences produced with our method (see for example Proposition \ref{theorem_uniformity}). Let us now summarise the results of the paper. The main tool we use to construct full orbit sequences is the notion of fractional jump of projective maps, which is described in Section \ref{affine_jumps}. With such a notion we are able to produce maps in the affine space which can be guaranteed to be transitive when they are fractional jumps of transitive projective maps. In Section \ref{transitivity_projective} we characterise transitive projective maps using the notion of projective primitivity for polynomials (see Definition \ref{projectively_primitive_polynomial}). In Section \ref{uniformity} we show that whenever our sequences come from the iterations of transitive projective automorphisms, they behave quite uniformly with respect to proper projective subspaces (i.e. not many consecutive elements in the sequence can lie in a proper subspace of the projective space). This fact (and in particular Proposition \ref{theorem_uniformity}) will allow us in Section \ref{explicit} to give an explicit description of the fractional jump of a transitive projective map, finally leading to the new explicit constructions of full orbit sequences promised earlier. In turn, such a description and the theory developed in Section \ref{transitivity_projective} allow us to prove the discrepancy bounds of Theorem \ref{thm:discrepancy} in Section \ref{discrepancy}.
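For $n=1$, the full orbit property of an ICG can be checked exhaustively over a small prime field. The sketch below (in Python, with toy parameters $p=7$, $a=b=1$ chosen for illustration, not taken from the paper) iterates $\psi(x)=a\,x^{-1}+b$ with the usual convention $0^{-1}:=0$:

```python
p, a, b = 7, 1, 1  # toy parameters; full-period pairs (a, b) exist for every prime p

def inv(x):
    # inversion in F_p, with the standard ICG convention 0^{-1} := 0
    return pow(x, p - 2, p) if x else 0

def psi(x):
    return (a * inv(x) + b) % p

orbit, x = [], 0
for _ in range(p):
    orbit.append(x)
    x = psi(x)

# {psi^m(0)} visits every element of F_7, i.e. the sequence has full orbit
assert set(orbit) == set(range(p))
print(orbit)  # [0, 1, 2, 5, 4, 3, 6]
```

After $p$ steps the sequence returns to $0$ and repeats, so transitivity of $\psi$ on $\mathbb{F}_p$ is equivalent to this single orbit having length $p$.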
In Section \ref{computation} we show the computational advantage of our approach compared to the classical ICG one. Finally, we include some conclusions which summarise the results of the paper. \subsection*{Notation} Let us denote the set of natural numbers by $\mathbb{N}$, and the ring of integers by $\mathbb{Z}$. For a commutative ring with unity $R$, let us denote by $R^*$ the group of invertible elements of $R$. We denote by $\mathbb{F}_q$ the finite field of cardinality $q$, which will be fixed throughout the paper, and by $\overline{\mathbb{F}}_q$ an algebraic closure of $\mathbb{F}_q$. Given an integer $n \geq 1$, we often denote the $n$-dimensional affine space $\mathbb{F}_q^n$ by $\mathbb{A}^n$. The $n$-dimensional projective space over the finite field $\mathbb{F}_q$ is denoted by $\mathbb{P}^n$. Also, we denote by $\mathbb{G}\mathrm{r} (d, n)$ the set of $d$-dimensional projective subspaces of $\mathbb{P}^n$. We denote by $\mathbb{F}_q[x_1,...,x_n]$ the ring of polynomials in $n$ variables with coefficients in $\mathbb{F}_q$. For a polynomial $a \in \mathbb{F}_q[x_1,...,x_n]$ we denote by $\deg a$ its total degree, which we will simply call its degree. Also, for $b\in\mathbb{F}_q[x_1,\dots,x_n]$ we let $V(b)$ denote the set of points $x\in \mathbb{A}^n$ such that $b(x)=0$. We denote by $\mathrm{GL}_n (\mathbb{F}_q)$ the general linear group over the field $\mathbb{F}_q$, i.e. the group of $n \times n$ invertible matrices with entries in $\mathbb{F}_q$, and by $\mathrm{PGL}_{n+1} (\mathbb{F}_q)$ the group of automorphisms of $\mathbb{P}^{n}$. Recall that $\mathrm{PGL}_{n+1} (\mathbb{F}_q)$ can be identified with the quotient group $\mathrm{GL}_{n+1} (\mathbb{F}_q) / \mathbb{F}_q^*\mathrm{Id}$, where $\mathbb{F}_q^*\mathrm{Id}$ is just the subgroup of nonzero scalar multiples of the identity matrix $\mathrm{Id}$. Given a matrix $M \in \mathrm{GL}_{n+1} (\mathbb{F}_q)$, we denote by $[M]$ its class in $\mathrm{PGL}_{n+1} (\mathbb{F}_q)$. 
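The identification of $\mathrm{PGL}_{n+1}(\mathbb{F}_q)$ with $\mathrm{GL}_{n+1}(\mathbb{F}_q)/\mathbb{F}_q^*\mathrm{Id}$ is easy to make concrete in code. The following minimal Python sketch (our illustration, with a hypothetical helper name, restricted to a prime field $\mathbb{F}_p$) computes a canonical representative of a class $[M]$ by rescaling so that the first nonzero entry of the matrix equals $1$; two matrices define the same element of $\mathrm{PGL}_{n+1}(\mathbb{F}_p)$ exactly when their canonical forms coincide.

```python
# Canonical representative of [M] in PGL_{n+1}(F_p), p prime: rescale M so
# that its first nonzero entry (in row-major order) equals 1.  All scalar
# multiples of M then map to the same tuple of tuples.
def pgl_canonical(M, p):
    entries = [e % p for row in M for e in row]
    lead = next(e for e in entries if e)   # first nonzero entry of M mod p
    inv = pow(lead, p - 2, p)              # inverse in F_p via Fermat's little theorem
    return tuple(tuple(e * inv % p for e in row) for row in M)
```

For instance, over $\mathbb{F}_7$ the matrix with rows $(1,2),(3,4)$ and its scalar multiple with rows $(3,6),(2,5)$ produce the same canonical form.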
Let $X$ be either $\mathbb{A}^n$ or $\mathbb{P}^n$. We will say that a map $f : X \rightarrow X$ is \emph{transitive}, or equivalently that it \emph{acts transitively on $X$}, if for any $x, y \in X$ there exists an integer $i \geq 0$ such that $y = f^i (x)$. Equivalently, $f$ is transitive if and only if for any $x \in X$ the sequence $\{ f^m(x)\}_{m \in \mathbb{N}}$ has full orbit, that is, $\{ f^m (x) \, : \, m \in \mathbb{N} \} = X$. A map $f:\mathbb{A}^n\rightarrow \mathbb{A}^n$ is said to be affine if there exist $A\in \mathrm{GL}_n(\mathbb{F}_q)$ and $b\in \mathbb{F}_q^n$ such that $f(x)=Ax+b$ for any $x\in \mathbb{A}^n$. Let $G$ be a group acting on a set $S$. The orbit of an element $s\in S$ will be denoted by $\mathcal O(s)$. For any element $g\in G$, let us denote by $o(g)$ the order of $g$ in $G$. We write $f\ll g$ or $f=O(g)$ to mean that for some positive constant $C$ it holds that $|f|\le Cg$. The notation $f\ll_\delta g$ or $f=O_\delta(g)$ means the same, but now the constant $C$ may depend on the parameter $\delta$. For any real vector $\mathbf{h}=(h_1,\dots, h_n)$, we write $\|\mathbf{h}\|_{\infty}= \max\{|h_j|\, : \, j\in \{1,\dots, n\}\}$. Finally, for any prime $p$ and any $z \in \mathbb{Z}$ we write $e_p (z) = \exp (2 \pi i z / p)$. \section{Fractional jumps} \label{affine_jumps} Fix the standard projective coordinates $X_0, \ldots, X_n$ on $\mathbb{P}^n$, and the canonical decomposition \begin{equation} \label{decomposition} \mathbb{P}^n = U \cup H, \end{equation} where \begin{align*} U &= \set{[X_0: \ldots: X_n] \in \mathbb{P}^n \, : \, X_n \neq 0}, \\ H &= \set{[X_0: \ldots: X_n] \in \mathbb{P}^n \, : \, X_n = 0}. \end{align*} There is a natural isomorphism from the affine $n$-dimensional space onto $U$ given by \begin{equation} \label{pi_definition} \pi : \mathbb{A}^n \xrightarrow{\sim} U, \quad (x_1, \ldots, x_n) \mapsto [x_1: \ldots: x_n: 1]. \end{equation} Let now $\Psi$ be an automorphism of $\mathbb{P}^n$.
We give the following definitions: \begin{definition} For $P \in U$, the \emph{fractional jump index of $\Psi$ at $P$} is \begin{equation*} \mathfrak{J}_P = \min \set{k \geq 1 \, : \, \Psi^k (P) \in U}. \end{equation*} \end{definition} \begin{remark} The fractional jump index $\mathfrak{J}_P$ is always finite, as it is bounded by the order of $\Psi$ in $\mathrm{PGL}_{n+1} (\mathbb{F}_q)$. \end{remark} \begin{definition} The \emph{fractional jump of $\Psi$} is the map \begin{equation*} \psi : \mathbb{A}^n \rightarrow \mathbb{A}^n, \quad x \mapsto \pi^{-1} \Psi^{\mathfrak{J}_{\pi(x)}} \pi (x). \end{equation*} \end{definition} Roughly speaking, the purpose of defining this new map is to avoid the points which are mapped outside $U$ via $\Psi$. This is done simply by iterating $\Psi$ until the iterate $\Psi^k(\pi (x))$ ends up again in $U$. In this definition, $\pi$ is simply used to obtain the final map defined over $\mathbb{A}^n$ instead of $U$. A priori, one of the issues here is that a global description of the map might be difficult to compute, as in principle it depends on each point $x\in \mathbb{A}^n$. It is interesting to see that this does not happen in the case in which $\Psi$ is transitive on $\mathbb{P}^n$: in fact, we will show in Section \ref{explicit} that there always exists a set of indices $I$, a disjoint covering $\set{U_i}_{i \in I}$ of $\mathbb{A}^n$, and a family $\set{f^{(i)}}_{i \in I}$ of rational maps of degree $1$ on $\mathbb{A}^n$ such that \begin{enumerate} \item[i)] $|I| \leq n+1$, \item[ii)] $f^{(i)}$ is well-defined on $U_i$ for every $i \in I$, \item[iii)] $\psi (x) = f^{(i)} (x)$ if $x \in U_i$. \end{enumerate} That is, $\psi$ can be written as a multivariate linear fractional transformation on each $U_i$. In addition, for any fixed $i\in I$, all the denominators of the components of $f^{(i)}$ will be equal. \begin{example} \label{inversive} Let $n=1$.
For $a\in \mathbb{F}_q^*$, $b\in \mathbb{F}_q$, and \begin{equation*} \Psi ([X_0: X_1]) = [b X_0 + a X_1: X_0] \end{equation*} we recover the inversive congruential generator. In fact, the fractional jump index of $\Psi$ is given by \begin{equation*} \mathfrak{J}_P = \begin{cases} 1, & \text{if } P \neq [0: 1], \\ 2, & \text{if } P = [0: 1], \end{cases} \end{equation*} and $\Psi^2 ([0: 1]) = [b: 1]$. Therefore, the fractional jump $\psi$ of $\Psi$ is defined on the covering $\set{U_1, U_2}$, where $U_1 = \mathbb{A}^1 \setminus \set{0}$ and $U_2 = \set{0}$, by \begin{equation*} \psi (x) = \begin{cases} \frac{a}{x} + b, & \text{if } x \neq 0, \\ b, & \text{if } x = 0. \end{cases} \end{equation*} The inversive sequence is then given by $\set{\psi^m (0)}_{m \in \mathbb{N}}$, which has full orbit under suitable assumptions on $a$ and $b$ (see for example \cite[Lemma FN]{bib:chou95}). \end{example} \begin{remark} \label{remark_transitivity_affine_jump} Let $\Psi$ be an automorphism of $\mathbb{P}^n$ and let $\psi$ be its fractional jump. It is immediate to see that if $\Psi$ acts transitively on $\mathbb{P}^n$ then $\psi$ acts transitively on $\mathbb{A}^n$. \end{remark} For the case of $n = 1$, the next proposition shows that the notions of transitivity for $\Psi$ and for its fractional jump $\psi$ are actually equivalent, under the additional assumption that $\Psi$ sends a point of $U$ to a point of $H$ (which is equivalent to asking that the induced map on $\mathbb{A}^1$ is not affine). \begin{proposition} \label{transitivity_affine_jump_P1} Let $\Psi$ be an automorphism of $\mathbb{P}^1$ and let $\psi$ be its fractional jump. Assume that $\Psi$ sends a point of $U$ to the point at infinity. Then, $\Psi$ acts transitively on $\mathbb{P}^1$ if and only if $\psi$ acts transitively on $\mathbb{A}^1$.
\end{proposition} \begin{proof} As already stated in Remark \ref{remark_transitivity_affine_jump}, if $\Psi$ is transitive on $\mathbb{P}^1$ then $\psi$ is obviously transitive on $\mathbb{A}^1$. Conversely, assume that $\psi$ is transitive on $\mathbb{A}^1$. Consider the decomposition $\mathbb{P}^1 = U \cup H$ of $\mathbb{P}^1$ as in \eqref{decomposition}. Since $n = 1$, we have $H = \set{P_0}$, for $P_0 = [1 : 0]$. Since there exists $P_1 \in U$ such that $\Psi (P_1) = P_0$, we have that $\Psi^2 (P_1) = \Psi (P_0) \in U$, as otherwise the point $P_0$ would have two preimages under $\Psi$, which is not possible as $\Psi$ is an automorphism, and so in particular a bijection. We have to prove that given $P, Q \in \mathbb{P}^1$ there exists an integer $i\geq 0$ such that $Q = \Psi^i (P)$. Assume that $P$ and $Q$ are distinct, as otherwise we can simply set $i = 0$. We distinguish three cases: $P, Q \in U$; $P = P_0$; or $Q = P_0$. In the first case, the claim follows by the transitivity of $\psi$, since each iteration of $\psi$ corresponds to one or more iterations of $\Psi$. In the second case, we reduce to the first case by considering $\Psi (P_0) \in U$ in place of $P$. In the third case, by the first case there exists an integer $i \geq 0$ such that $\Psi^i (P) = P_1$, and then $\Psi^{i+1} (P) = P_0$. \end{proof} One can actually prove that affine transformations of $\mathbb{A}^n$ are never transitive, unless restrictive conditions on $q$ and $n$ apply. The result that follows will not be used in the rest of the paper, but it provides additional motivation for the study of fractional jumps of projective maps, and for completeness we include its proof. \begin{theorem} \label{affine_transitivity_theorem} There is no transitive affine transformation of $\mathbb{A}^n$ unless $n = 1$ and $q$ is prime, or $q = 2$ and $n = 2$; in both exceptional cases explicit examples exist. \end{theorem} \begin{proof} For convenience of notation, in this proof we will identify the points of $\mathbb{A}^n$ with column vectors in $\mathbb{F}_q^n$. Let us first deal with the pathological cases.
For $n=1$ it is trivial to observe that $x\mapsto x+1$ has full orbit if and only if $q$ is prime. For $n=2$ and $q=2$, we get by direct check that the map \begin{equation*} \varphi \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \quad \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \in \mathbb{F}_2^2, \end{equation*} has full orbit. Let $\varphi$ be an affine transformation of the $n$-dimensional affine space over $\mathbb{F}_q$. Then, by definition there exist $A \in \mathrm{GL}_n (\mathbb{F}_q)$ and $b \in \mathbb{F}_q^n$ such that \begin{equation*} \varphi (x) = A x + b, \quad x \in \mathbb{F}_q^n. \end{equation*} Assume by contradiction that $\varphi$ is transitive, so that the order $o (\varphi)$ of $\varphi$ is $q^n$, since a transitive bijection of $\mathbb{F}_q^n$ is a single cycle of length $q^n$. Denote by $p$ the characteristic of $\mathbb{F}_q$. We first prove that the order $o (A)$ of $A$ in $\mathrm{GL}_n (\mathbb{F}_q)$ is $q^n / p$. Then we will show how this leads to a contradiction. Let $j$ be the smallest integer such that \begin{equation*} \varphi^j (x) = x + c, \quad \text{for all }x \in \mathbb{F}_q^n, \end{equation*} for some $c \in \mathbb{F}_q^n$. As \begin{equation} \label{explicit_affine} \varphi^j (x) = A^j x + \sum_{i = 0}^{j-1} A^i b, \quad x \in \mathbb{F}_q^n, \end{equation} we get $o(A) = j$. If $c = 0$, then $o(\varphi) = j = o (A) \leq q^n - 1$, so that $\varphi$ cannot be transitive. We then have $c \neq 0$. By \eqref{explicit_affine}, we get $\varphi^{j p} = \mathrm{Id}$, therefore $o(\varphi)\mid jp$. We now prove that $o(\varphi) = jp$. Write $o(\varphi) = j s + r$, with $0 \leq r < j$. Then, we have \begin{align*} \varphi^{j s + r} (x) &= \varphi^r (x) + s c \\ &= A^r x + v, \quad x \in \mathbb{F}_q^n, \end{align*} for a suitable $v \in \mathbb{F}_q^n$.
Since $\varphi^{j s + r} = \mathrm{Id}$, we get that $A^r x + v = x$ for all $x \in \mathbb{F}_q^n$, and so we must have $r = 0$ and $v=0$. It follows that $\varphi^{j s} (x) = x + s c = x$ for all $x \in \mathbb{F}_q^n$, which gives $p \mid s$, as $c \neq 0$, so that we get $p \leq s$, and then $o(\varphi) = j s \geq jp$. Therefore we conclude that $o (\varphi) = jp$. As $\varphi$ is assumed to be transitive, we have that $j p = q^n$, and so $o (A) = j = q^n / p$. Essentially, what we have proved up to now is that, if such a transitive affine map $\varphi(x)=Ax+b$ exists, then it must have the property that $o (A) = q^n / p$. Let $\mu_A (T) \in \mathbb{F}_q [T]$ be the minimal polynomial of $A$. By the fact that $o (A) = q^n / p$ we get \begin{equation*} \mu_A (T) \mid T^{q^n / p} - 1 = (T-1)^{q^n / p}. \end{equation*} Then, $\mu_A (T) = (T - 1)^d$, for some $d \leq n$, as the degree of the minimal polynomial is less than or equal to the degree of the characteristic polynomial by Cayley-Hamilton. From basic ring theory, one gets that the order of $A$ in $\mathrm{GL}_n (\mathbb{F}_q)$ is equal to the order of the class $\overline{T}$ of $T$ in the quotient ring $(\mathbb{F}_q[T] / (\mu_A (T)))^* = (\mathbb{F}_q[T] / ((T - 1)^d))^*$. Let us now assume $q^n / p^2 \geq n$. In this case we have \begin{equation*} \overline{T}^{q^n / p^2} = (\overline{T}-1)^{q^n / p^2} + 1 = 1, \end{equation*} as $q^n / p^2 \geq n \geq d$. Therefore, $o (A)=o(\overline T)\leq q^n/p^2 < q^n / p$ from which the contradiction follows. Therefore we can restrict to the case $q^n / p^2 < n$. It is easy to see that this inequality forces $q=p$: in fact if $q=p^k$ and $k\geq 2$, then $q^n/p^2=p^{kn-2}\geq p^{2n-2}\geq 4^{n-1}\geq n$. Therefore, the only uncovered cases are in correspondence with the solutions of $p^{n-2}<n$, which consist only of the following: $n=3$ and $p=2$, or $n=1$ and $p$ any prime, or $n=2$ and $p$ any prime. 
For $n=3$ and $p=2$ an exhaustive computation shows that there is no transitive affine map. Also, we already know that in the case $n=1$ and $p$ any prime we have such a transitive map, as this is one of the pathological cases. For the case $n=2$ we argue as follows. Let \[\varphi(x)=Ax+b\] be such a transitive affine map. Clearly $A\in \mathrm{GL}_2(\mathbb{F}_p)$ must be different from the identity matrix, as otherwise $\varphi$ cannot have full orbit. So the minimal polynomial of $A$ is different from $T-1$. On the other hand, as shown above, the minimal polynomial of $A$ equals $(T-1)^d$ for some $d\leq n$. Since $n=2$ and $d\neq 1$, we have that $d=2$. In $\mathrm{GL}_2(\mathbb{F}_p)$ having minimal polynomial $(T-1)^2$ forces a matrix to be conjugate to a single Jordan block of size $2$ with eigenvalue $1$, hence there exists $C\in \mathrm{GL}_2(\mathbb{F}_p)$ such that \[CAC^{-1}= \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}.\] Let us now consider again the map $\varphi$. Clearly, $\varphi$ is transitive if and only if the map $\widetilde \varphi=C\varphi C^{-1}$ is. For any $x\in \mathbb{F}_p^2$ we have that $\widetilde \varphi(x)=C(AC^{-1}x+b)$. Therefore the map $\widetilde \varphi$ can be written as \[\widetilde \varphi\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}=\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} r \\ s \end{pmatrix},\] for some $r,s\in \mathbb{F}_p$. We will now prove that, for $p$ odd, $\widetilde \varphi^p\begin{pmatrix} r \\ s\end{pmatrix}=\begin{pmatrix} r \\ s \end{pmatrix}$, so that $\widetilde\varphi$ (and hence $\varphi$) cannot be transitive, as the orbit starting from $c:=\begin{pmatrix} r \\ s\end{pmatrix}$ visits at most $p$ points.
\begin{align*} \widetilde \varphi^p\begin{pmatrix} r \\ s\end{pmatrix}&=\sum^{p}_{i=0} \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}^ic \\ &= c+\sum^p_{i=1}\begin{pmatrix} 1 & i \\ 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} r \\ s \end{pmatrix} \\ &= c+\sum^p_{i=1}\begin{pmatrix} r+is \\ s \end{pmatrix}=c+ \begin{pmatrix} s\sum^p_{i=1}i \\ 0 \end{pmatrix} \end{align*} But now the sum $\sum^p_{i=1}i$ is different from zero in $\mathbb{F}_p$ if and only if $p= 2$. Therefore, such a transitive map could exist only for $p=n=2$. Since we already provided such an example of a transitive map, the proof of the theorem is now concluded. \end{proof} \section{Transitive actions via projective primitivity} \label{transitivity_projective} In this section we characterise transitive projective automorphisms. \begin{definition} \label{projectively_primitive_polynomial} A polynomial $\chi(T) \in \mathbb{F}_q [T]$ of degree $m$ is said to be \emph{projectively primitive} if the two following conditions are satisfied: \begin{enumerate} \item[i)] $\chi(T)$ is irreducible over $\mathbb{F}_q$, \item[ii)] for any root $\alpha$ of $\chi(T)$ in $\mathbb{F}_{q^m} \cong \mathbb{F}_q[T] / (\chi(T))$ the class $\overline{\alpha}$ of $\alpha$ in the quotient group $G = \mathbb{F}_{q^m}^* / \mathbb{F}_q^*$ generates $G$. \end{enumerate} \end{definition} \begin{remark} Note that if a polynomial $\chi(T)\in \mathbb{F}_q[T]$ of degree $m$ is primitive, i.e. it is irreducible and any of its roots in $\mathbb{F}_{q^m} \cong \mathbb{F}_q[T] / (\chi(T))$ generates the multiplicative group $\mathbb{F}_{q^m}^*$, then it is obviously projectively primitive. The class of projectively primitive polynomials is in general larger than the class of primitive polynomials: for example take the polynomial $\chi(T)=T^3+T+1\in \mathbb{F}_5[T]$. One can check that this polynomial is irreducible but is not primitive. In fact, let $\alpha$ be a root of $\chi(T)$.
Since $\alpha^{62}=1$ in $\mathbb{F}_5[T] / (\chi(T))\cong \mathbb{F}_{5^3}$, the order of $\alpha$ divides $62$, so $o(\alpha)\neq 5^3-1=124$ and $\chi(T)$ is not primitive. On the other hand $G=\mathbb{F}_{5^3}^* / \mathbb{F}_5^*$ has prime cardinality equal to $|G|=(5^3-1)/(5-1)=31$ and $\overline{\alpha}\neq 1\in G$. It follows immediately that $\overline \alpha$ has to be a generator of $G$. \end{remark} \begin{remark} Let $M, M' \in \mathrm{GL}_{n+1} (\mathbb{F}_q)$ be such that $[M] = [M']$ in $\mathrm{PGL}_{n+1} (\mathbb{F}_q)$, and let $\chi_M (T), \chi_{M'} (T) \in \mathbb{F}_q [T]$ be their characteristic polynomials. It is immediate to see that $\chi_M (T)$ is projectively primitive if and only if $\chi_{M'} (T)$ is projectively primitive. \end{remark} We are now ready to give a full characterisation of transitive projective automorphisms on $\mathbb{P}^n$. \begin{theorem} \label{transitivity_characterisation} Let $\Psi$ be an automorphism of $\mathbb{P}^n$. Write $\Psi = [M] \in \mathrm{PGL}_{n+1} (\mathbb{F}_q)$ for some $M \in \mathrm{GL}_{n+1} (\mathbb{F}_q)$. Then, $\Psi$ acts transitively on $\mathbb{P}^n$ if and only if the characteristic polynomial $\chi_M (T) \in \mathbb{F}_q [T]$ of $M$ is projectively primitive. \end{theorem} \begin{proof} For simplicity of notation, set $N = |\mathbb{P}^n| = (q^{n+1}-1) / (q-1)$. Assume that $\chi_M (T)$ is projectively primitive. We now prove that for any $P \in \mathbb{P}^n$ we have that $\Psi^k (P) \neq P$ for $k \in \set{1, \ldots, N -1}$. Suppose by contradiction that there exists $P_0 \in \mathbb{P}^n$ such that $\Psi^k (P_0) = P_0$ for some $k \in \set{1, \ldots, N-1}$. Let $v_0 \in \mathbb{F}_q^{n+1} \setminus \set{0}$ be a representative of $P_0$. Then, there exists $\lambda \in \mathbb{F}_q^*$ such that \begin{equation*} M^k v_0 = \lambda v_0.
\end{equation*} This means that $v_0$ is an eigenvector for the eigenvalue $\lambda$ of $M^k$, which implies that $\lambda = \alpha^k$ for some root $\alpha$ of $\chi_M (T)$ in $\mathbb{F}_{q^{n+1}}$. But now, the class $\overline{\alpha}^k$ of $\alpha^k$ in $G = \mathbb{F}_{q^{n+1}}^* / \mathbb{F}_q^*$ is $\overline{\lambda} = \overline{1}$, contradicting the hypothesis that $\overline{\alpha}$ generates $G$. Hence every orbit of $\Psi$ consists of $N$ distinct points, i.e. all of $\mathbb{P}^n$, and $\Psi$ is transitive. Conversely, assume that $\Psi$ is transitive, so that for any $P \in \mathbb{P}^n$ we have that $\Psi^k (P) \neq P$ for $k \in \set{1, \ldots, N -1}$. Let $\alpha$ be a root of $\chi_M(T)$ in its splitting field and $h$ be a positive integer such that $\mathbb{F}_{q^h}\cong\mathbb{F}_q(\alpha)$. Clearly $1\leq h\leq n+1$. We also have that $\alpha \neq 0$ as $\det M \neq 0$, and $\alpha \notin \mathbb{F}_q^*$ as otherwise $M v_0 = \alpha v_0$ for some eigenvector $v_0 \in \mathbb{F}_q^{n+1} \setminus \set{0}$ for the eigenvalue $\alpha$, so that $\Psi (P_0) = P_0$ for $P_0$ the class of $v_0$ in $\mathbb{P}^n$, in contradiction with the fact that $\Psi$ is transitive. Let $d$ be the order of the class $\overline{\alpha}$ of $\alpha$ in $\mathbb{F}_{q^h}^* / \mathbb{F}_q^*$. Then, there exists $\lambda \in \mathbb{F}_q^*$ such that $\alpha^d = \lambda$. Now, $\alpha^d$ is an eigenvalue of $M^d$, and so $M^d v_1 = \lambda v_1$ for some eigenvector $v_1\in \mathbb{F}_q^{n+1} \setminus \set{0}$ for the eigenvalue $\alpha^d$. Thus, $\Psi^d (P_1) = P_1$ for $P_1$ the class of $v_1$ in $\mathbb{P}^n$, and so $d=N$ by the transitivity of $\Psi$. Therefore we have $(q^{n+1}-1)/(q-1)=N=d\leq (q^{h}-1)/(q-1)$. Now, since $h\leq n+1$, this forces $h = n+1$, so that $\chi_M$ is irreducible, which together with $d=N$ gives projective primitivity for $\chi_M$, as we wanted.
\end{proof} \begin{remark}\label{remarkICGfullorbit} When $n = 1$, our approach gives immediately the criterion to get maximal period for inversive congruential generators, see for example \cite[Lemma FN]{bib:chou95}. To see this, set $\Psi$ and $\psi$ as in Example \ref{inversive}, so that $\set{\psi^m (0)}_{m \in \mathbb{N}}$ is an inversive sequence. By Proposition \ref{transitivity_affine_jump_P1}, transitivity of $\Psi$ and $\psi$ are equivalent. If $\chi (T) = T^2 - a T - b$ is irreducible, then $\psi$ acts transitively on $\mathbb{A}^1$ if and only if the class $\overline{\alpha}$ of a root $\alpha$ of $\chi(T)$ in $G = \mathbb{F}_{q^{2}}^* / \mathbb{F}_q^*$ generates $G$, which is itself equivalent to the fact that $\alpha^{q-1}$ has order $q+1$ in $\mathbb{F}_{q^2}^*$ (which is in fact the condition given in \cite[Lemma FN]{bib:chou95}). \end{remark} \section{Subspace Uniformity} \label{uniformity} In this section we show that sequences associated to iterations of transitive projective maps behave ``uniformly'' with respect to subspaces, i.e. not too many consecutive points can lie in the same projective subspace of $\mathbb{P}^n$. More precisely, we have the following: \begin{proposition} \label{theorem_uniformity} Let $\Psi$ be a transitive automorphism of $\mathbb{P}^n$. For any $P \in \mathbb{P}^n$ and any $d \in \set{1, \ldots, n-1}$ there is no $W \in \mathbb{G}\mathrm{r} (d, n)$ such that $\Psi^i (P) \in W$ for all $i \in \set{0, \ldots, d+1}$. \end{proposition} \begin{proof} Suppose by contradiction that there exists a projective subspace $W$ of dimension $d$ such that there exists $P\in \mathbb{P}^n$ such that $\Psi^i (P) \in W$ for all $i\in \set{0, \ldots, d+1}$. Let $W'$ be the subspace of $\mathbb{F}_q^{n+1}$ whose projectification is $W$, and let $v \in \mathbb{F}_q^{n+1} \setminus \set{0}$ be a representative for $P$. Let also $M\in \mathrm{GL}_{n+1}(\mathbb{F}_q)$ be a representative for $\Psi$. 
Consider now the smallest integer $h$ such that $M^h v$ is linearly dependent on $\{M^i v \, : \, i\in \{0,\dots, h-1\}\}$ over $\mathbb{F}_q$. Since $M^iv$ is contained in $W'$ for any $i\in \{0,\dots, d+1\}$, and $W'$ has dimension $d+1$, we have that $h$ is at most $d+1$. Therefore, $M^hv$ can be rewritten in terms of lower powers of $M$, which in turn forces the span of $\{ M^iv \, : \, i\in \{0,\dots, h-1\}\}$ over $\mathbb{F}_q$ to be an invariant space for $M$ of dimension $h$. It follows that the characteristic polynomial of $M$ has a non-trivial factor of degree $h\leq d+1$. Since $d+1\leq n$, this factor is proper, and we have the claim, as by Theorem \ref{transitivity_characterisation} the characteristic polynomial of $M$ has to be irreducible. \end{proof} \begin{remark} This result is optimal with respect to $d$, as for any set $S$ of $d+2$ points of $\mathbb{P}^n$ there always exists a $W\in \mathbb{G}\mathrm{r} (d+1, n)$ containing $S$. \end{remark} Fix the canonical decomposition $\mathbb{P}^n = U \cup H$ as in \eqref{decomposition}. \begin{corollary} \label{bound_affine_jump_index} Let $\Psi$ be a transitive automorphism of $\mathbb{P}^n$. For any $P \in U$ the fractional jump index $\mathfrak{J}_P$ of $\Psi$ at $P$ is bounded by $n+1$. \end{corollary} \begin{proof} Assume by contradiction $\mathfrak{J}_P \geq n+2$. Then, setting $P' = \Psi (P)$ we get $\Psi^i (P') \in H$ for all $i \in \set{0, \ldots, n}$. But we have $H \in \mathbb{G}\mathrm{r} (n-1, n)$, and so this violates Proposition \ref{theorem_uniformity}. \end{proof} \section{Explicit description of fractional jumps} \label{explicit} Let $\Psi$ be an automorphism of $\mathbb{P}^n$. In this section we will give an explicit description of the fractional jump $\psi$ of $\Psi$.
First of all, fix homogeneous coordinates $X_0, \ldots, X_n$ on $\mathbb{P}^n$, fix the canonical decomposition $\mathbb{P}^n = U \cup H$ as in \eqref{decomposition} and the map $\pi$ as in \eqref{pi_definition}, and write $\Psi\in \mathrm{PGL}_{n+1} (\mathbb{F}_q)$ as \begin{equation*} \Psi = [F_0: \ldots: F_n], \end{equation*} where each $F_j$ is a homogeneous polynomial of degree $1$ in $\mathbb{F}_q [X_0, \ldots, X_n]$. Fix now affine coordinates $x_1, \ldots, x_n$ on $\mathbb{A}^n$, and for each $j \in \set{1, \ldots, n}$ set \begin{equation} \label{rational_functions} f_j (x_1, \ldots, x_n) = \frac{F_{j-1} (x_1, \ldots, x_n, 1)}{F_n (x_1, \ldots, x_n, 1)}. \end{equation} Let $K = \mathbb{F}_q (x_1, \ldots, x_n)$ be the field of rational functions on $\mathbb{A}^n$. Then, \eqref{rational_functions} defines elements $f_j \in K$ for $j \in \set{1, \ldots, n}$, and $f_\Psi = (f_1, \ldots, f_n) \in K^n$. In turn this process defines a map \begin{equation} \label{map_pgl} \imath : \mathrm{PGL}_{n+1} (\mathbb{F}_q) \rightarrow K^n, \quad \Psi \mapsto f_\Psi. \end{equation} It is easy to see that this map is well-defined and for any element $f=(f_1,\dots,f_n)$ in the image of $\imath$ all the denominators of the $f_j$'s are equal. It also holds that $\imath(\Psi\circ \Phi)=\imath(\Psi)\circ \imath(\Phi)$, where the composition in $K^n$ is defined in the obvious way, i.e. by substituting the components of $\imath (\Phi)$ for the variables of $\imath(\Psi)$. Let us go back to $f_\Psi = (f_1, \ldots, f_n) \in K^n$ for a fixed automorphism $\Psi$. For any $i \geq 1$, let us define $f^{(i)} = \imath(\Psi^{i})$. For each $f^{(i)}$ and for each $j\in \{1,\dots,n\}$, write the $j$-th component of $f^{(i)}$ as \begin{equation*} f^{(i)}_j = \frac{a^{(i)}_j}{b^{(i)}_j}, \quad \text{for } a^{(i)}_j, b^{(i)}_j \in \mathbb{F}_q [x_1, \ldots, x_n]. \end{equation*} As we already observed, for fixed $i\geq 1$ all the $b^{(i)}_j$'s are equal, so we can set $b^{(i)}=b^{(i)}_1$.
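Computationally, $\psi$ can also be evaluated pointwise straight from a representative matrix $M$ of $\Psi$, following the definition of the fractional jump: lift $x$ through $\pi$, apply $M$ until the last homogeneous coordinate is nonzero, and normalise it to $1$. A minimal Python sketch over a prime field (the function names are ours, not notation from the paper):

```python
# Evaluate the fractional jump psi of Psi = [M] at x in F_p^n: iterate M on
# pi(x) = [x_1 : ... : x_n : 1] until the last homogeneous coordinate is
# nonzero again, then normalise that coordinate to 1.
def mat_vec(M, v, p):
    return [sum(r[j] * v[j] for j in range(len(v))) % p for r in M]

def fractional_jump(M, x, p):
    v = mat_vec(M, list(x) + [1], p)   # Psi(pi(x))
    while v[-1] == 0:                  # iterate while still in the hyperplane H
        v = mat_vec(M, v, p)
    inv = pow(v[-1], p - 2, p)         # inverse of the last coordinate in F_p
    return tuple(c * inv % p for c in v[:-1])
```

With the matrix $M$ of the example at the end of this section (over $\mathbb{F}_{101}$), `fractional_jump(M, (64, 22), 101)` returns $(63, 78)$, and iterating from the origin for $101^2$ steps visits every point of $\mathbb{A}^2$, in accordance with Remark \ref{remark_transitivity_affine_jump}.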
Define now \begin{align*} V_0 &= \mathbb{A}^n, \\ V_i &= \bigcap_{k = 1}^i V(b^{(k)}), \quad \text{for } i \geq 1. \end{align*} These sets will be the main ingredient in the definition of the covering mentioned in Section \ref{affine_jumps}. The following result characterises the $V_i$'s in terms of the position of the first few iterates of $\Psi$. \begin{lemma} \label{characterisation_vanishing_loci} Let $x \in \mathbb{A}^n$, and $P = \pi (x) \in U$. Then, $x \in V_i$ if and only if $\Psi^k (P) \in H$ for $k \in \set{1, \ldots, i}$. \end{lemma} \begin{proof} By definition, $x \in V_i$ if and only if $x \in V(b^{(k)})$ for every $k \in \set{1, \ldots, i}$, which means $b^{(k)} (x) = 0$ for every $k \in \set{1, \ldots, i}$. Now, $b^{(k)} (x) = 0$ if and only if the last component of $\Psi^k (P)$ is zero, which is equivalent to the condition $\Psi^k (P) \in H$. \end{proof} \begin{definition} Define the \emph{absolute fractional jump index $\mathfrak{J}$ of $\Psi$} to be the quantity \begin{equation*} \mathfrak{J} = \max \set{\mathfrak{J}_P \, : \, P \in U}. \end{equation*} \end{definition} When $\Psi$ is transitive, Corollary \ref{bound_affine_jump_index} ensures that $\mathfrak{J} \leq n+1$. We will now show that the absolute jump index equals the number of non-empty $V_i$'s. \begin{proposition} \label{absolute_jump_index} We have that \begin{equation*} \min \set{i \in \mathbb{N} \, : \, V_i = \emptyset} = \mathfrak{J}. \end{equation*} \end{proposition} \begin{proof} Set $i_0 = \min \set{i \in \mathbb{N} \, : \, V_i = \emptyset}$. In order to show that $i_0 \leq \mathfrak{J}$, it is enough to prove that $V_{\mathfrak{J}} = \emptyset$. Assume that there exists $x \in V_{\mathfrak{J}}$. Then, if $P = \pi (x)$, we have by Lemma \ref{characterisation_vanishing_loci} that $\Psi^j (P) \in H$ for $j \in \set{1, \ldots, \mathfrak{J}}$, and so the jump index $\mathfrak{J}_P$ must be strictly greater than $\mathfrak{J} $, a contradiction.
Conversely, in order to show that $\mathfrak J \leq i_0$, it is enough to prove that $V_{\mathfrak J -1}\neq \emptyset$. To do so, take $P_0 \in U$ for which $\mathfrak{J}_{P_0} = \mathfrak{J}$. Then $\Psi^k(P_0)\in H$ for any $k\in \{1,\dots, \mathfrak{J}-1\}$. Let $x_0 = \pi^{-1} (P_0)$. Then, by Lemma \ref{characterisation_vanishing_loci} we have $x_0 \in V_{\mathfrak{J}-1}$. \end{proof} Define now \begin{equation*} U_i = V_{i-1} \setminus V_i, \quad \text{for } i \in \set{1, \ldots, \mathfrak{J}}. \end{equation*} Thus, for $I = \set{1, \ldots, \mathfrak{J}}$, the family $\set{U_i}_{i \in I}$ is a disjoint covering of $\mathbb{A}^n$ and each $f^{(i)}$ is a rational map of degree $1$ on $\mathbb{A}^n$. Also, we observe that by construction $f^{(i)}$ is well-defined on $U_i$, so that the fractional jump is defined as \begin{equation*} \psi (x) = f^{(i)} (x), \quad \text{if } x \in U_i. \end{equation*} To clarify this construction, we now give an explicit description of a fractional jump over $\mathbb{A}^2$. \begin{example} Let $q = 101$ and $n = 2$. Consider the automorphism of $\mathbb{P}^2$ defined by \begin{align*} \Psi([X_0: X_1: X_2]) &= [F_0 : F_1 : F_2] \\ &= [X_0 + 2 X_2: 3 X_1+4 X_2: 4X_0 + 2 X_1 + 3 X_2]. \end{align*} Notice that \begin{equation*} M = \begin{pmatrix} 1 & 0 & 2 \\ 0 & 3 & 4 \\ 4 & 2 & 3 \end{pmatrix} \end{equation*} is a representative of $\Psi$ in $\mathrm{GL}_3 (\mathbb{F}_{101})$. The characteristic polynomial $\chi_M (T) \in \mathbb{F}_{101} [T]$ of $M$ is given by \begin{equation*} \chi_M (T) = T^3 - 7 T^2 - T + 23, \end{equation*} which is irreducible over $\mathbb{F}_{101}$. Now, as \begin{align*} \frac{q^{n+1}-1}{q-1} &= \frac{101^3-1}{101-1} \\ &= 10303 \end{align*} is prime, any irreducible polynomial of degree $3$ in $\mathbb{F}_{101}[T]$ is projectively primitive. By Theorem \ref{transitivity_characterisation} we have that $\Psi$ is transitive on $\mathbb{P}^2$.
Since $n=2$ and $\Psi$ is transitive, by Proposition \ref{absolute_jump_index} and the definition of the $U_i$'s we know that the fractional jump of $\Psi$ will be defined using at most $U_1,U_2,U_3$. As in \eqref{rational_functions}, we consider the rational functions \begin{align*} f_1(x_1, x_2) &= \frac{x_1+2}{4x_1+2 x_2 +3}, \\ f_2(x_1, x_2) &= \frac{3x_2+4}{4x_1+2 x_2 +3} \end{align*} in $\mathbb{F}_{101} (x_1, x_2)$, and set $f = (f_1, f_2) \in \mathbb{F}_{101}(x_1, x_2)^2$. Given the definition of $f$, we have \begin{align*} V_1 &= V(4x_1+2 x_2 +3), \\ U_1 &= \mathbb{A}^2 \setminus V_1. \end{align*} Let now $f^{(1)} = f$ and $f^{(2)} = f \circ f = (f_1^{(2)}, f_2^{(2)}) \in \mathbb{F}_{101}(x_1, x_2)^2$, where \begin{align*} f_1^{(2)}(x_1, x_2) &= f_1 (f_1 (x_1, x_2), f_2 (x_1, x_2)) \\ &= \frac{9x_1 + 4x_2 + 8}{16 x_1 + 12 x_2 +25}, \\ f_2^{(2)}(x_1, x_2) &= f_2 (f_1 (x_1, x_2), f_2 (x_1, x_2)) \\ &= \frac{16 x_1 + 17 x_2 + 24}{16 x_1 + 12 x_2 +25}. \end{align*} Define \begin{align*} V_2 &= V_1 \cap V(16 x_1 + 12 x_2 +25) = \set{(64, 22)}, \\ U_2 &= V_1 \setminus V_2. \end{align*} Finally, let $f^{(3)} = f \circ f \circ f = (f_1^{(3)}, f_2^{(3)}) \in \mathbb{F}_{101}(x_1, x_2)^2$, where \begin{align*} f_1^{(3)}(x_1, x_2) &= f_1 (f_1^{(2)} (x_1, x_2), f_2^{(2)} (x_1, x_2)) \\ &= \frac{41 x_1 + 28 x_2 - 43}{15 x_1 - 15 x_2 - 47}, \\ f_2^{(3)}(x_1, x_2) &= f_2 (f_1^{(2)} (x_1, x_2), f_2^{(2)} (x_1, x_2)) \\ &= \frac{11 x_1 - 2 x_2 - 30}{15 x_1 - 15 x_2 - 47}, \end{align*} and $U_3 =V_2= \set{(64, 22)}$, since $V_3=V_2 \cap V(15x_1-15x_2-47)=\emptyset$. By construction, $\mathbb{A}^2 = U_1 \cup U_2 \cup U_3$, and therefore we are ready to describe the fractional jump $\psi$ of $\Psi$ as \begin{equation*} \psi (x_1, x_2) = \begin{cases} f^{(1)} (x_1, x_2), & \text{if }(x_1, x_2) \in U_1, \\ f^{(2)} (x_1, x_2), & \text{if }(x_1, x_2) \in U_2, \\ f^{(3)} (x_1, x_2), & \text{if }(x_1, x_2) \in U_3.
\end{cases} \end{equation*} Notice that $f^{(3)} (64, 22) = (63, 78)$, and so $\psi (x_1, x_2) = (63, 78)$ if $(x_1, x_2) \in U_3 = \set{(64, 22)}$. \end{example} \section{The discrepancy of fractional jump sequences}\label{discrepancy} In the context of pseudorandom number generation, it is of interest to say something about the distribution of a sequence. A statistic that is of particular interest is the discrepancy of a sequence, of which we recall the definition below. The goal of this section is to show that for sequences generated by fractional jumps one can prove the same discrepancy bounds as for the sequences generated by the ICG. For simplicity, we let $q=p$ be prime. We assume the set $ \mathbb{F}_p\cong \mathbb{Z}/p\mathbb{Z}$ to be represented by $\{0,1,\dots, p-1\}\subseteq \mathbb{Z}$ as in \cite{shparlinski10}. For $x\in \mathbb{F}_p$ we then write $\frac{x}{p}$ for the corresponding element in $\frac{1}{p}\mathbb{Z}\subseteq \mathbb{R}$. For a sequence \begin{equation*} \Gamma = \{(\gamma_{m,0},\dots,\gamma_{m,s-1})\}_{m=0}^{N-1} \end{equation*} of $N$ points in $[0,1)^s$, for $s\in \mathbb{N}$, the \emph{discrepancy} of $\Gamma$ is defined by \begin{equation*} D_\Gamma = \sup_{B\subseteq [0,1)^s} \bigg|\frac{T_\Gamma(B)}{N}-|B|\bigg|, \end{equation*} where the supremum is taken over boxes $B$ of the form \begin{equation*} B = [\alpha_1,\beta_1)\times \dots \times [\alpha_s,\beta_s) \subseteq [0,1)^s, \end{equation*} and $T_\Gamma(B)$ denotes the number of points of $\Gamma$ which lie inside $B$. For a sequence $\{u_m\}_{m \in \mathbb{N}}$ of points in $\mathbb{F}_p$ the main interest lies in bounding the discrepancy of the sequence \[ \Big(\frac{u_{m}}{p},\frac{u_{m+1}}{p},\dots,\frac{u_{m+s-1}}{p}\Big)_{m=0}^{N-1}\] for $s\ge 1$. In the case of a sequence generated by an ICG such a bound was given in \cite{shparlinski10}. 
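The example above is easy to check by machine. The following sketch (a Python transcription of the maps given in the example; the variable names are ours) verifies both the value $f^{(3)}(64,22)=(63,78)$ and the full-orbit property of $\psi$ on $\mathbb{F}_{101}^2$.

```python
# Numerical check of the example: the fractional jump of Psi on F_101^2.
# The linear forms are transcribed from the example; the full-orbit property
# is guaranteed by transitivity and re-checked here by brute force.
p = 101

def inv(a):
    # inversion in F_p via Fermat's little theorem
    return pow(a, p - 2, p)

# (numerators, common denominator) of f^(1), f^(2), f^(3)
maps = [
    ((lambda x, y: x + 2,                lambda x, y: 3 * y + 4),
     lambda x, y: 4 * x + 2 * y + 3),
    ((lambda x, y: 9 * x + 4 * y + 8,    lambda x, y: 16 * x + 17 * y + 24),
     lambda x, y: 16 * x + 12 * y + 25),
    ((lambda x, y: 41 * x + 28 * y - 43, lambda x, y: 11 * x - 2 * y - 30),
     lambda x, y: 15 * x - 15 * y - 47),
]

def psi(x, y):
    """Fractional jump: apply f^(i) on U_i, i.e. the first iterate whose
    denominator does not vanish at (x, y)."""
    for nums, den in maps:
        d = den(x, y) % p
        if d != 0:
            di = inv(d)
            return tuple(n(x, y) * di % p for n in nums)
    raise AssertionError("unreachable: the U_i cover the affine plane")

assert psi(64, 22) == (63, 78)  # the value computed in the example

# psi has a single orbit through all of F_p^2 (full orbit sequence)
point, seen = (0, 0), set()
for _ in range(p * p):
    seen.add(point)
    point = psi(*point)
assert point == (0, 0) and len(seen) == p * p
```

Note that the case distinction mirrors the definition of the $U_i$: a point lies in $U_2$ exactly when the denominator of $f^{(1)}$ vanishes but that of $f^{(2)}$ does not, and the single point of $U_3$ is handled by $f^{(3)}$.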
The goal of this section is to extend the results in \cite{shparlinski10} to give discrepancy bounds for full orbit sequences generated by fractional jumps also in the case where the dimension $n$ satisfies $n>1$. Given a fractional jump $\psi:\mathbb{F}_p^n\to \mathbb{F}_p^n$ and an initial value $x\in \mathbb{F}_p^n$ we define the sequence $\{ \mathbf{u}_m (x) \}_{m \in \mathbb{N}}$ of points in $\mathbb{F}_p^n$ by setting $\mathbf{u}_0(x) = x$ and \begin{equation*} \mathbf{u}_m(x) = \psi^{m}(x), \quad \text{for } m \geq 1. \end{equation*} We also define the \emph{snake sequence} $\{ v_m (x) \}_{m \ge 1}$ of points in $\mathbb{F}_p$ by setting \[ (v_{kn+1}(x),v_{kn+2}(x),\dots,v_{(k+1)n}(x)) = \mathbf{u}_{k}(x), \quad \text{for } k \in \mathbb{N}.\] Let $D_{s,\psi}(N;x)$ denote the discrepancy of the sequence \[ \Big(\frac{v_{m+1}}{p},\frac{v_{m+2}}{p},\dots,\frac{v_{m+s}}{p}\Big)_{m=0}^{N-1}\] and let $D^n_{s,\psi}(N;x)$ denote the discrepancy of the sequence \[ \Big(\frac{\mathbf{u}_m}{p},\frac{\mathbf{u}_{m+1}}{p},\dots,\frac{\mathbf{u}_{m+s-1}}{p}\Big)_{m=0}^{N-1}.\] Note that in the first case the individual points of the sequence lie in $\mathbb{F}_p^s$, while in the second case the points of the sequence lie in $\mathbb{F}_p^{ns}$. Our main result for the discrepancy $D_{s,\psi}(N;x)$ is a direct generalization of \cite[Theorem 4]{shparlinski10}, which deals with the discrepancy of a sequence generated by an ICG. We also provide the analogous bounds for the $n$-dimensional discrepancy $D_{s,\psi}^n(N;x)$. \begin{theorem}\label{thm:discrepancy} Let $\Psi$ be a transitive automorphism of $\mathbb{P}^n$ and let $\psi$ be its fractional jump. 
Then for any integer $s\ge 1$ and any real $\Delta >0$, for all but $O(\Delta p^n)$ initial values $x\in \mathbb{F}_p^n$ it holds that \[ D_{s,\psi}(N;x) \ll_{s,n} (\Delta^{-2/3}N^{-1/3}+ p^{-1/4}\Delta^{-1})(\log N)^s \log p\] and \[ D_{s,\psi}^n(N;x) \ll_{s,n} (\Delta^{-2/3}N^{-1/3}+ p^{-1/4}\Delta^{-1})(\log N)^{sn} \log p\] for all $N$ with $1\le N\le p^n$. \end{theorem} The proof of Theorem \ref{thm:discrepancy} follows the same lines as the proof of \cite[Theorem 4]{shparlinski10}, but with Lemma \ref{lem:2nd-moment} below extending \cite[Lemma 1]{shparlinski10} to $n>1$. In the proofs we will make use of the Koksma--Sz\"{u}sz inequality as well as the Bombieri--Weil bound. \begin{theorem}[{\cite[Theorem 1.21]{drmota}}]\label{thm:koksma} For any integer $H\ge 1$, the discrepancy $D_\Gamma$ of the sequence $\Gamma = (\gamma_{m,0},\dots,\gamma_{m,s-1})_{m=0}^{N-1}$ satisfies \[ D_\Gamma \ll \frac{1}{H} + \frac{1}{N}\sum_{0<\| \mathbf{h} \|_\infty \le H} \frac{1}{\rho(\mathbf{h})} \bigg| \sum_{m=0}^{N-1} \exp\Big( 2\pi i \sum_{j=0}^{s-1} h_j \gamma_{m,j} \Big)\bigg|, \] where $\rho(\mathbf{h}) = \prod_{j=0}^{s-1} \max\{|h_j|,1\}$ for $\mathbf{h}=(h_0,\dots, h_{s-1})\in \mathbb{Z}^s$. \end{theorem} \begin{theorem}[{\cite[Theorem 2]{moreno}}]\label{thm:weil} Let $f/g$ be a rational function over $\mathbb{F}_p$ with $\deg(f)>\deg(g)$. Suppose that $f/g$ is not of the form $h^p-h$, where $h$ is a rational function over $\overline{\mathbb{F}}_p$. Then \[ \bigg|\sum_{\substack{x\in \mathbb{F}_p:\\g(x)\ne 0}} e_p\left( \frac{f(x)}{g(x)} \right) \bigg| \le (\deg(f) + v-1)p^{1/2},\] where $v$ is the number of distinct roots of $g$ in $\overline{\mathbb{F}}_p$. \end{theorem} We will also need to use the explicit description of $\psi$ given in Section \ref{explicit} to describe powers of $\psi$, which is done in the next lemma. \begin{lemma}\label{lem:explicit} Let $\Psi$ be a transitive automorphism of $\mathbb{P}^n$ and let $\psi$ be its fractional jump. 
Then there are polynomials $a^{(i)}_j, b^{(i)} \in \mathbb{F}_p [x_1, \ldots, x_n]$ of degree at most $1$, for $i\in\{1,\dots,p-1\}$ and $j\in\{1,\dots,n\}$, with $b^{(i)}$ not identically constant, and such that \[ \psi^i_j(x) = \frac{a^{(i)}_j(x)}{b^{(i)}(x)}, \quad \text{for } x\not\in \bigcup_{k=1}^i V(b^{(k)}),\] where $\psi^i_j (x)$ denotes the $j$-th component of $\psi^i (x)$. \end{lemma} \begin{proof} The functions $a_j^{(i)}, b^{(i)}$ are defined as in Section \ref{explicit}. Indeed, recall that there is a set $U_1$ and there is a rational map \[ f^{(1)}=\Big(\frac{a^{(1)}_1}{b^{(1)}},\dots,\frac{a^{(1)}_n}{b^{(1)}}\Big) \] of degree $1$ such that \[ \psi(x) = f^{(1)}(x),\quad \text{for } x\in U_1 = \mathbb{F}_p^n\setminus V(b^{(1)}).\] For $i\in \{1,\dots,p-1\}$ and $j\in\{1,\dots,n\}$, define the maps $a_j^{(i)},b^{(i)}$ by iterating the map $f^{(1)}$, that is \[ f^{(i)} = (f^{(1)})^i = \left(\frac{a^{(i)}_1}{b^{(i)}},\dots,\frac{a^{(i)}_n}{b^{(i)}}\right),\quad i\in\{1,\dots,p-1\}.\] Let us notice that in Section \ref{explicit} the function $f^{(i)}$ was used to describe the map $\psi$ on the set $U_i$. In this section we are instead using $f^{(i)}$ to describe the $i$-th iterate of $\psi$ on the set $U_1\cap \psi^{-1}(U_1) \cap \cdots \cap \psi^{-i+1}(U_1)$. In particular, Section \ref{explicit} made use of $f^{(i)}$ for $i\in\{1,\dots,\mathfrak{J}\}$, but here we instead use $f^{(i)}$ on the range $i\in \{1,\dots,p-1\}$. To see that $b^{(i)}$ is not identically constant for $i \in \set{1, \ldots, p-1}$ we need to show that $f^{(i)} = \imath(\Psi^i)$, where $\imath$ is the map in \eqref{map_pgl}, is not affine. Let $M\in \mathrm{GL}_{n+1}(\mathbb{F}_p)$ be such that $\Psi = [M] \in \mathrm{PGL}_{n+1}(\mathbb{F}_p)$. Suppose by contradiction that $\imath(\Psi^i)$ is affine for some $i \in \set{1, \ldots, p-1}$.
Since $M$ has irreducible characteristic polynomial by Theorem \ref{transitivity_characterisation}, we have that $\mathbb{F}_p[M]$ is a field and in turn that $\mathbb{F}_p[M^i]$ is a subfield, therefore the minimal polynomial of $M^i$ is irreducible. Now, since $\imath(\Psi^i)$ is assumed to be affine, we have that $\Psi^i(H)=H$, which in turn forces $M^i$ to fix a proper subspace of $\mathbb{F}_p^{n+1}$. This directly implies that the (irreducible) minimal polynomial of $M^i$ cannot be equal to the characteristic polynomial of $M^i$, and therefore it must have degree $d<n+1$. We know, again by Theorem \ref{transitivity_characterisation}, that $[M]$ is a generator of the quotient group $\mathbb{F}_p[M]^*/ \mathbb{F}_p^*$ as $\Psi$ is transitive, but $[1]=[M^i]^{(p^d-1)/(p-1)}= [M]^{i (p^d-1)/(p-1)}$, so $(p^{n+1}-1)/(p-1)\mid i (p^d-1)/(p-1)$, which forces $i\geq (p^{n+1}-1)/(p^d-1)\geq p$, a contradiction. \end{proof} We are now ready to prove the technical heart of the argument. \begin{lemma}\label{lem:technical} Let $\Psi$ be a transitive automorphism of $\mathbb{P}^n$ and let $\psi$ be its fractional jump. Then for any integers $j_0, s\ge 1$, $d\le (p-1)n-s$ and $\mathbf{h} \in \mathbb{F}_p^s\setminus \{0\}$ it holds that \[ \Big|\sum_{x \in \mathbb{F}_p^n} e_p\Big( \sum_{j=0}^{s-1}h_j(v_{j_0+d+j}(x)-v_{j_0+j}(x))\Big)\Big| \le 3\Big(\frac{s+d}{n}+1\Big)p^{n-1}+4\Big(\frac{s}{n}+1\Big)p^{n-1/2}. \] \end{lemma} \begin{proof} Observe first that the result is trivial for $s \ge p^{1/2}n$, so assume that $s\le p^{1/2}n$. Let $r = \min\{j: h_j\ne 0\}$, $s'=s-r$ and $h_j'=h_{j+r}$ for $j\in\{0,\dots,s'-1\}$. Let $m= \floor{(j_0+r)/n}$. Since $\psi$ is a bijection, we can make the substitution $x'=\psi^m (x)$ and sum over $x'\in\mathbb{F}_p^n$ in place of summing over $x\in \mathbb{F}_p^n$.
Then, we get $v_{i}(x')=v_{mn+i}(x) $, and so \begin{align*} \sum_{x\in\mathbb{F}_p^n} e_p\Big(\sum_{j=0}^{s-1} h_j(v_{j_0+d+j}(x)-v_{j_0+j}(x))\Big) &= \sum_{x'\in\mathbb{F}_p^n} e_p\Big(\sum_{j=0}^{s'-1} h_{j}'(v_{j_1+d+j}(x')-v_{j_1+j}(x'))\Big)\\ &=\sum_{x\in\mathbb{F}_p^n} e_p\Big(\sum_{j=0}^{s'-1} h_{j}'(v_{j_1+d+j}(x)-v_{j_1+j}(x))\Big), \end{align*} for some $j_1$ with $1\le j_1 \le n$, where in the last equality we have simply relabeled the summation index $x'$ to $x$, which we do for simplicity of notation. Notice that in this way we have that $h_0' \ne 0$, which was the entire point of shifting the sum. As $d\le (p-1)n-s$ we have $j_1+j \le j_1+d+j \le pn-1 < pn$. This means that $v_{j_1+d+j} = \psi_k^i(x)$ for some $i< p$ and some $k\in\{1,\dots, n\}$. An analogous statement also holds for $v_{j_1+j}$, i.e. $v_{j_1+j} = \psi_{k'}^{i'}(x)$ for some $i'< p$ and some $k'\in\{1,\dots, n\}$. We can therefore apply Lemma \ref{lem:explicit} to write \[v_{in+j}(x) = \psi_j^i(x) = \frac{a_j^{(i)}(x)}{b^{(i)}(x)},\quad x\not\in \bigcup_{k=1}^i V(b^{(k)}),\] for $i\in\{1,\dots,\floor{\frac{n+d+s-1}{n}}\}$, $j\in \{1,\dots,n\}$, and for $i=0$ we clearly have \[ v_{j}(x) = x_j,\] for $j \in \{1,\dots,n\}$. Since we want to estimate the sum \[\Big|\sum_{x \in \mathbb{F}_p^n} e_p\Big( \sum_{j=0}^{s'-1}h_j'(v_{j_1+d+j}(x)-v_{j_1+j}(x))\Big)\Big|,\] we consider for fixed $\tilde{x} = (x_1,\dots,x_{j_1-1},x_{j_1+1},\dots,x_n)\in \mathbb{F}_p^{n-1}$ the inner sum \[G(x_{j_1};\tilde{x})= \sum_{j=0}^{s'-1} h_j'(v_{j_1+d+j}(x)-v_{j_1+j}(x))\] as a function of the variable $x_{j_1}$. Since we want to apply Theorem \ref{thm:weil} to $G$ for fixed $\tilde x$ (and considered as a univariate rational function of $x_{j_1}$) we first need to give a nice description of $G$ outside a certain set.
We can do that outside of the set \[E=\bigcup_{i=1}^{\floor{(n+d+s-1)/n}}V(b^{(i)}).\] In fact, one may write \[G(x_{j_1};\tilde{x})= \frac{a(x_{j_1};\tilde{x})}{b(x_{j_1};\tilde{x})} = \frac{\tilde{a}(x_{j_1};\tilde{x})}{b(x_{j_1};\tilde{x})} - h_0' x_{j_1} + c(\tilde{x}),\] where $a,\tilde{a},b$ are polynomials and $c(\tilde{x})$ is constant with respect to $x_{j_1}$. In order to apply Theorem \ref{thm:weil} to the sum over $x_{j_1}$, we need to check that the conditions of the theorem are verified apart from a small set $F$ of $\tilde x$'s, whose size we can estimate. To begin with we check that $\deg(a) = \deg(b)+1$. This follows immediately if either $\deg(\tilde{a})\le \deg(b)$, or if $\deg(\tilde{a})=\deg(b)+1$ and the leading coefficient in $\tilde{a}/b$ does not cancel the term $-h_0'x_{j_1}$. By considering the possible powers of $\psi$ that can appear in the definition of $G$, we see that \[b(x_{j_1};\tilde{x}) = \prod_{i\in I} b^{(i)}(x),\] with the product taken over a set $I$ of $i$ satisfying \begin{equation} i \in \left[ \frac{j_1}{n}-1,\frac{j_1+s'-1}{n} \right)\cup\left[ \frac{j_1+d}{n}-1,\frac{j_1+d+s'-1}{n} \right) \subseteq [0, p) \label{eq:ugly} \end{equation} and such that the coefficient of $x_{j_1}$ is nonzero in $b^{(i)}$. Since $\tilde{a}/b$ was defined by a linear sum of rational functions of degree $1$, it follows that $\deg(\tilde{a}) \le \deg(b) + 1$. If $\deg(\tilde{a}) = \deg(b)+1$, the coefficient of the highest order term in $\tilde{a}$ is of the form a constant times $\prod_{i\in J} b^{(i)}(x)$ for some set $J$ of $i$ satisfying \eqref{eq:ugly} and such that $b^{(i)}$ does not depend on $x_{j_1}$. Think of this coefficient as a polynomial in $\tilde{x}$. In particular, there are no more than $2 \Big(\frac{s'}{n}+1 \Big)$ values of $i$ satisfying \eqref{eq:ugly}, and therefore the coefficient is equal to $h_0'$ for at most $2 \Big( \frac{s'}{n}+1 \Big) p^{n-2}$ values of $\tilde{x}$.
We can therefore define a set $F \subseteq \mathbb{F}_p^{n-1}$ with \[ |F| \le 2\Big(\frac{s'}{n}+1\Big)p^{n-2},\] and such that $\deg(a) = \deg(b) + 1$~for $\tilde{x} \not \in F$. Finally, for $\tilde{x}\not\in F$ we want to check that $G$ is not of the form $h^p-h$ for some rational function $h$ over $\overline{\mathbb{F}}_p$. Assume therefore that in fact $a/b = h^p-h$ for some rational function $h = h_1/h_2$, where $h_1$ and $h_2$ are coprime. Then ${h_2^pa = h_1^p b - h_1bh_2^{p-1}}$, and so in particular $h_2^p | b$. Note that \[ \deg(b) \le 2(s'/n+1) < p \] since we initially assumed that $s\le p^{1/2}n$, and so $h_2$ must be constant. This gives $a = b(c_1h_1^{p} - c_2h_1)$ for some constants $c_1,c_2$. But then $\deg(a) -\deg(b)$ is a multiple of $p$, contradicting $\deg(a) = \deg(b)+1$. Combining all of this we may apply Theorem \ref{thm:weil} to the sum over $x_{j_1}$ to conclude that whenever $\tilde{x}\not\in F$ it holds that \[\bigg|\sum_{x_{j_1}} e_p\Big(\frac{a(x_{j_1};\tilde{x})}{b(x_{j_1};\tilde{x})}\Big)\bigg| \le 4\Big(\frac{s}{n}+1\Big)p^{1/2},\] where the sum is taken over values $x_{j_1}$ where $b\ne 0$. 
For $\tilde{x}\in F$ we have the trivial bound \[\bigg|\sum_{x_{j_1}} e_p\Big(\frac{a(x_{j_1};\tilde{x})}{b(x_{j_1};\tilde{x})}\Big)\bigg| \le p.\] Finally, these bounds together with the union bound \[ |E| \le \sum_{i=0}^{\floor{(n+d+s-1)/n}} |V(b^{(i)})| \le \left(\frac{s+d}{n}+1\right)p^{n-1}\] and the triangle inequality give \begin{align*} \Big|\sum_{x\in\mathbb{F}_p^n} e_p(G(x_{j_1};\tilde{x}))\Big| &\le |E| + \Big|\sum_{x\not\in E} e_p\Big(\frac{a(x_{j_1};\tilde{x})}{b(x_{j_1};\tilde{x})}\Big)\Big| \\ &\le |E| + p|F| + \Big|\sum_{\tilde{x}\not\in F}\sum_{\substack{x_{j_1}:\\ x\not\in E}} e_p\Big(\frac{a(x_{j_1};\tilde{x})}{b(x_{j_1};\tilde{x})}\Big)\Big|\\ &\le \Big(\frac{s+d}{n}+1\Big)p^{n-1} + 2p\Big(\frac{s}{n}+1\Big)p^{n-2} + 4\Big(\frac{s}{n}+1\Big)p^{1/2}p^{n-1}\\ &\le 3\Big(\frac{s+d}{n}+1\Big)p^{n-1}+4\Big(\frac{s}{n}+1\Big)p^{n-1/2}. \end{align*} \end{proof} We now need an additional ancillary result, which will be used in the proof of the main theorem. \begin{lemma}\label{lem:2nd-moment} Let $\Psi$ be a transitive automorphism of $\mathbb{P}^n$ and let $\psi$ be its fractional jump. Then for any integers $j_0, s \geq 1$ and $K$ with $1\le K \le p^n$, and any $\mathbf{h}\in \mathbb{F}_p^s\setminus \{0\}$ one has \[ \sum_{x\in \mathbb{F}_p^n} \bigg| \sum_{k=0}^{K-1} e_p\Big(\sum_{j=0}^{s-1}h_jv_{j_0+j+k}(x)\Big)\bigg|^2 \ll_{s,n} Kp^n + K^2p^{n-1/2}. \] \end{lemma} \begin{proof} We divide into the two cases $K\le p^{1/2}$ and $K> p^{1/2}$. In the first case we have \begin{align*} \sum_{x\in \mathbb{F}_p^n} \bigg| \sum_{k=0}^{K-1} e_p\Big(\sum_{j=0}^{s-1}h_jv_{j_0+j+k}(x)\Big)\bigg|^2 = \sum_{x\in\mathbb{F}_p^n} \sum_{m,l=0}^{K-1}e_p\Big(\sum_{j=0}^{s-1}h_j (v_{j_0+j+m}(x)-v_{j_0+j+l}(x))\Big)\\ \le Kp^n + 2\sum_{d=1}^{K-1}\sum_{m=0}^{K-1-d}\bigg|\sum_{x\in\mathbb{F}_p^n} e_p\Big(\sum_{j=0}^{s-1} h_j(v_{j_0+m+j+d}(x)-v_{j_0+m+j}(x))\Big)\bigg| , \end{align*} where we have split into the cases $m=l$ and $m\ne l$. 
Applying Lemma \ref{lem:technical} to the innermost sum when $d\le (p-1)n-s$, and applying the trivial bound for the $O_{s,n}(1)$ remaining values of $d$ then gives that this is \begin{align*} &\ll_{s,n} K p^n + \sum_{d=1}^{K-1} (K-d)\left(\frac{d}{n}p^{n-1} +(s/n+1)p^{n-1/2}\right) + p^n\\ &\ll_{s,n} Kp^n + K^3p^{n-1} + K^2p^{n-1/2}. \end{align*} As the middle term never dominates for the considered range of $K$, we are done in this case. In the second case, split the sum over $k$ into at most $K/M+1$ intervals of length $M=p^{1/2}$. On each interval we bound the sum as in the first case, and so by Cauchy--Schwarz it follows that \begin{align*} \sum_{x\in \mathbb{F}_p^n} \bigg|\sum_{k=0}^{K-1} e_p\Big(\sum_{j=0}^{s-1}h_jv_{j_0+j+k}(x)\Big)\bigg|^2 &\ll_{s,n} \left(\frac{K^2}{M^2}+1\right)(Mp^n+M^3p^{n-1}+M^2p^{n-1/2})\\ &\ll_{s,n} K^2p^{n-1/2}. \end{align*} \end{proof} We are now ready to prove the main theorem. \begin{proof}[Proof of Theorem \ref{thm:discrepancy}] Apply Theorem \ref{thm:koksma} with $H = \floor{N/2}$ to get \begin{equation} D_{s,\psi}(N;x) \ll \frac{1}{N}+\frac{1}{N}\sum_{0<\| \mathbf{h} \|_\infty \le N/2} \frac{1}{\rho(\mathbf{h})}\bigg| \sum_{m=0}^{N-1} e_p\Big(\sum_{j=0}^{s-1}h_j v_{m+j}(x)\Big)\bigg|. \label{eq:bound1} \end{equation} Let $k\geq 1$ be an integer. Observe that if $k > N-1$, we have that \[\bigg|\sum_{m=0}^{N-1}e_p\Big(\sum_{j=0}^{s-1}h_j v_{m+j}(x)\Big)-\sum_{m=0}^{N-1}e_p\Big(\sum_{j=0}^{s-1} h_j v_{m+j+k}(x)\Big)\bigg| \leq 2N \leq 2k,\] if $k \leq N-1$, since the two sums in $m$ overlap in all but $2k$ terms, we have that \begin{align*} &\bigg|\sum_{m=0}^{N-1}e_p\Big(\sum_{j=0}^{s-1}h_j v_{m+j}(x)\Big)-\sum_{m=0}^{N-1}e_p\Big(\sum_{j=0}^{s-1} h_j v_{m+j+k}(x)\Big)\bigg| \\ & = \bigg|\sum_{m=0}^{k-1}e_p\Big(\sum_{j=0}^{s-1}h_j v_{m+j}(x)\Big)-\sum_{m=N-k}^{N-1}e_p\Big(\sum_{j=0}^{s-1} h_j v_{m+j+k}(x)\Big)\bigg| \\ &\leq 2k. 
\end{align*} Therefore, for any integer $K \geq 1$ it holds that \begin{equation*} \begin{split} K\bigg|\sum_{m=0}^{N-1}e_p\Big(\sum_{j=0}^{s-1}h_j v_{m+j}(x)\Big) \bigg| &\le \bigg|\sum_{k=0}^{K-1} \sum_{m=0}^{N-1} e_p\Big(\sum_{j=0}^{s-1}h_j v_{m+j+k}(x)\Big) \bigg|+\sum_{k=0}^{K-1} 2k\\ &\le \sum_{m=0}^{N-1} \bigg|\sum_{k=0}^{K-1} e_p\Big(\sum_{j=0}^{s-1}h_j v_{m+j+k}(x)\Big)\bigg| + O(K^2). \end{split} \end{equation*} Combining this with \eqref{eq:bound1}, and noting that $\sum_{0<\| \mathbf{h} \|_\infty \le H} \frac{1}{\rho(\mathbf{h})}\ll (\log H)^s$, then gives \begin{equation} D_{s,\psi}(N;x) \ll \frac{K}{N}(\log N)^s + \frac{1}{N}R(N,K,x) \label{eq:bound4} \end{equation} where \begin{equation} R(N,K,x) = \frac{1}{K}\sum_{0<\| \mathbf{h} \|_\infty \le N/2} \frac{1}{\rho(\mathbf{h})}\sum_{m=0}^{N-1}\bigg|\sum_{k=0}^{K-1}e_p\Big(\sum_{j=0}^{s-1}h_jv_{m+j+k}(x)\Big)\bigg|. \label{eq:bound2} \end{equation} We now average over initial values $x$. By Cauchy--Schwarz one has \[\left( \sum_{x\in\mathbb{F}_p^n}\bigg|\sum_{k=0}^{K-1}e_p\Big(\sum_{j=0}^{s-1}h_jv_{m+j+k}(x)\Big)\bigg| \right)^2 \le p^n \sum_{x\in\mathbb{F}_p^n} \bigg|\sum_{k=0}^{K-1} e_p\Big(\sum_{j=0}^{s-1}h_j v_{m+j+k}(x)\Big)\bigg|^2. \] Inserting this and the bound from Lemma \ref{lem:2nd-moment} into \eqref{eq:bound2} gives \begin{equation} \sum_{x \in \mathbb{F}_p^n} R(N,K,x) \ll_{s,n} Np^n (K^{-1/2}+p^{-1/4})(\log N)^s. \label{eq:bound3} \end{equation} Now, let $N_j = 2^j$ and $K_j = \ceil{\Delta^{-2/3}N_j^{2/3}}$ for $j \in \set{ 0,1,\dots,\ceil{\log_2 p^n}}$. Let $\Omega_j\subseteq \mathbb{F}_p^n$ be the set of $x$ for which \[ R(N_j,K_j,x) \ge C_{s,n}\Delta^{-1}N_j(K_j^{-1/2}+p^{-1/4}) (\log N_j)^s \log p^n,\] where $C_{s,n}$ is the implied constant in \eqref{eq:bound3}. By \eqref{eq:bound3} we must have that $|\Omega_j| \le \Delta p^n/\log p^n$.
Setting $\Omega = \cup_j \Omega_j$ we then have $|\Omega| \le \Delta p^n$, and for $x \not\in \Omega$ it holds that \begin{equation} R(N_j,K_j,x) \le C_{s,n} \Delta^{-1}N_j(K_j^{-1/2}+p^{-1/4}) (\log N_j)^s \log p^n \label{eq:bound5} \end{equation} for all $j \le \ceil{\log_2 p^n}$. Given $N$ such that $1\le N \le p^n$, take $\nu\in\mathbb{N}$ such that $N_{\nu-1}\le N < N_{\nu}$. By \eqref{eq:bound4} we have \[ D_{s,\psi}(N;x) \ll \frac{K_\nu}{N_\nu}(\log N_\nu)^s + \frac{1}{N_\nu}R(N_\nu,K_\nu,x),\] and so for $x \not\in \Omega$ it holds that \[ D_{s,\psi}(N;x) \ll (\Delta^{-2/3}N^{-1/3} + p^{-1/4}\Delta^{-1})(\log N)^s \log p^n\] by \eqref{eq:bound5}, completing the first bound in the theorem. For $D_{s,\psi}^n(N;x)$ we also apply Theorem \ref{thm:koksma} with $H = \ceil{N/2}$ to get \begin{equation*} D_{s,\psi}^n(N;x) \ll \frac{1}{N}+\frac{1}{N}\sum_{0< \| \mathbf{h} \|_\infty \le N/2} \frac{1}{\rho(\mathbf{h})}\Big| \sum_{m=0}^{N-1} e_p\Big(\sum_{j=0}^{s-1}\mathbf{h}_j\cdot \mathbf{u}_{m+j}(x)\Big)\Big|, \end{equation*} where now $\mathbf{h} = (\mathbf{h}_0,\dots,\mathbf{h}_{s-1})$ and $\mathbf{h}_j = (h_{j,1},\dots,h_{j,n})$ for $j \in \set{0,\dots,s-1}$. Observe that \[ \sum_{j=0}^{s-1} \mathbf{h}_j\cdot \mathbf{u}_{m+j}(x) = \sum_{j=0}^{s-1}\sum_{i=1}^{n} h_{j,i}v_{(m+j)n+i}(x) = \sum_{k=1}^{sn}h_k' v_{mn+k}(x), \] where $h_k' = h_{j,i}$ if $k = nj+i$, with $1\le i \le n$ and $0\le j \le s-1$. We may therefore bound all sums exactly as before, with the only difference being that $sn$~replaces $s$. \end{proof} \section{The computational complexity of fractional jump sequences} \label{computation} Let $\Psi$ be a transitive automorphism of $\mathbb{P}^n$, and let $\psi$ be its fractional jump. We now want to establish the computational complexity of computing the $m$-th term of the sequence $\{ \psi^m (0) \}_{m \in \mathbb{N}}$.
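The cost structure analysed in this section can be made concrete with a minimal sketch (the function name and the fallback behaviour are illustrative, not from the paper): one step on $U_1$ amounts to evaluating $n+1$ linear forms, performing a single inversion in $\mathbb{F}_p$, and multiplying the $n$ numerators by the inverted common denominator.

```python
# Sketch of one fractional-jump step on U_1. Since all components of f^(1)
# share the same denominator, the step needs exactly ONE inversion in F_p
# plus n multiplications by the inverted denominator.

def jump_step(x, A, b, p):
    """One step x -> f^(1)(x) in F_p^n.
    A: n rows of n+1 coefficients (numerator linear forms, constant term last);
    b: n+1 coefficients of the common denominator linear form."""
    den = (sum(c * xi for c, xi in zip(b[:-1], x)) + b[-1]) % p
    if den == 0:
        # x lies on V(b^(1)): the fractional jump falls back to a higher
        # iterate f^(i), which happens for a vanishing fraction of steps
        raise ValueError("x not in U_1")
    inv_den = pow(den, p - 2, p)  # the single inversion per step
    return tuple(
        (sum(c * xi for c, xi in zip(row[:-1], x)) + row[-1]) * inv_den % p
        for row in A
    )
```

For instance, with the coefficients of $f^{(1)}$ from the example over $\mathbb{F}_{101}$, `jump_step((0, 0), [[1, 0, 2], [0, 3, 4]], [4, 2, 3], 101)` returns `(68, 35)`, i.e. $(2/3,\,4/3)$ in $\mathbb{F}_{101}^2$.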
In particular, in this section we will show that computing a term of our sequence is less expensive than computing a term of a classical inversive sequence of the same bit size. Fix notation as in Section \ref{explicit}. For simplicity, let us restrict to the case in which $q$ is prime. Let us first deal with the regime in which $q$ is large (which is the regime in which we got the discrepancy bounds in Section \ref{discrepancy}), so that most of the computations will be performed for points in $U_1$. If one chooses $\Psi$ in such a way that the coefficients of the $F_j$'s are small (this is possible for example by taking $\Psi$ as the companion matrix of a projectively primitive polynomial with small coefficients), so that also the coefficients of the $f_j^{(1)}$'s are small, the multiplications by such coefficients cost essentially the same as additions. Therefore the computational cost of computing the $m$-th term of the sequence (given the $(m-1)$-th term) is reduced to the cost of computing $n$ multiplications in $\mathbb{F}_q$ and one inversion in $\mathbb{F}_q$ (as all the denominators of the $f_j^{(1)}$'s are equal). Let $M(q)$ (resp. $I(q)$) denote the cost of one multiplication (resp. inversion) in $\mathbb{F}_q$. The total cost in bit operations of computing a single term of the sequence is then \[C^{\text{new}}(q,n)=n M(q)+ I(q).\] Using the fast Fourier transform for multiplications \cite{bib:SchStr71} and the extended Euclidean algorithm for inversion \cite{bib:schonhage71} one gets \begin{align*} M(q) &= O(\log(q)\log\log(q) \log \log \log (q)), \\ I(q) &= O(M(q)\log\log(q)). \end{align*} Let us compare this complexity with the complexity of computing the $m$-th term of an inversive sequence of the form $x_{m}=a/x_{m-1}+b$ over $\mathbb{F}_p$. The correct analogue is obtained when $q^n$ has roughly the same bit size as $p$.
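The comparison can be sketched numerically under the stated cost model. The sketch below suppresses all constants (so the numbers are only indicative) and assumes, consistently with the comparison, that the ICG step of matching bit size is dominated by a single inversion modulo $p$.

```python
import math

# Heuristic bit-operation costs with constants suppressed, as functions of
# the bit size of the modulus:
#   M(q) ~ log q * loglog q * logloglog q   (Schoenhage--Strassen)
#   I(q) ~ M(q) * loglog q                  (fast extended Euclid)
# C_new(q, n) = n*M(q) + I(q);  C_old is one inversion at bit size n*log q.

def mul_cost(bits):
    b = max(bits, 4)
    return b * math.log2(b) * math.log2(math.log2(b))

def inv_cost(bits):
    return mul_cost(bits) * math.log2(max(bits, 4))

def cost_ratio(log_q, n):
    """Indicative C_new/C_old when q^n and p share the bit size n*log_q."""
    return (n * mul_cost(log_q) + inv_cost(log_q)) / inv_cost(n * log_q)
```

With these stand-in constants the ratio is already below $1$ for moderate parameters (e.g. a $64$-bit $q$ with $n=2$) and shrinks further as $n$ grows, in line with the bound $\frac{1}{\log\log q}+\frac{1}{n}$ derived below.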
If one chooses $a,b$ small, one obtains that the complexity of computing $x_m$ is essentially the complexity of computing only one inversion modulo $p$, which is \[C^{\text{old}}(p)=O(\log(p)[\log \log(p)]^2 \log \log \log(p)).\] Now, since $q,n$ are chosen in such a way that $q^n$ has roughly the same bit size as $p$, we can write $C^{\text{old}}(q,n)=C^{\text{old}}(p)$. It is easy to see that up to a positive constant we have \[\frac{C^{\text{new}}(q,n)}{C^{\text{old}}(q,n)}\leq \frac{1}{\log \log q}+\frac{1}{n},\] which goes to zero as $n$ and $q$ grow. It is also interesting to see that with our construction we have the freedom to choose $q$ relatively small and $n$ large (so that again one gets $q^n\sim p$). In this case one can see that ${C^{\text{new}}(q,n)}/{C^{\text{old}}(q,n)}$ goes to zero as $([\log(n)]^2\log \log(n))^{-1}$. If one tries to do something similar with an ICG (i.e. reducing the characteristic but keeping the size of the field large), one would anyway have to compute an inversion in $\mathbb{F}_{q^n}$ which costs $O(n^2)$ $\mathbb{F}_q$-operations, see \cite[Table 2.8]{bib:MVOV96}, while in our case we would only need to invert one element and multiply $n$ elements in $\mathbb{F}_q$, which costs $(n+1)$ $\mathbb{F}_q$-operations. \section{Conclusion and further research} Using the theory of projective maps, we provided a general construction for full orbit sequences over $\mathbb A^n$. Our theory generalises the standard construction for the inversive congruential generators. Let us summarise the properties of fractional jump sequences obtained in this paper: \begin{itemize} \item We completely characterise the full orbit condition for such sequences (Theorem \ref{transitivity_characterisation}). \item In dimension $1$ they cover the theory of ICG sequences. 
\item In dimension greater than $1$, they are automatically full orbit whenever $(q^{n+1}-1)/(q-1)$ is a prime number, which is something that can never occur in the case of ICG sequences, as $2$ always divides $q+1$ when $q$ is odd. \item In any dimension, they enjoy the same discrepancy bound as the ICG, so they appear to be a good source of pseudorandomness both when one desires a one-dimensional sequence of pseudorandom elements and when one desires a stream of $n$-dimensional pseudorandom points (Theorem \ref{thm:discrepancy}). \item They are very inexpensive to compute: for $n>1$ computations are asymptotically quicker than those for an ICG sequence, as described in Section \ref{computation}. The moral reason for this is that at each step the ICG generates a $1$-dimensional pseudorandom point with exactly one inversion in $\mathbb{F}_q$; our construction, on the other hand, generates at each step a point of $\mathbb{F}_q^n$ (again, using only one inversion). \end{itemize} Some research questions arising are the following. \begin{enumerate} \item As our bound on the discrepancy holds for any transitive non-affine fractional jump sequence, can one build special fractional jump sequences having strictly better discrepancy bounds than those of the ICG? \item What happens if one replaces the finite field with a finite ring? Can we extend the fractional jump construction to this case? \item Can the notion of fractional jump be extended to more general objects such as quasi-projective varieties and produce competitive behaviours as in the projective space setting? \end{enumerate} \section*{Acknowledgments} The authors would like to thank Violetta Weger for checking the preliminary version of this manuscript. The third author gratefully acknowledges the support of the Swiss National Science Foundation, grant number 171248. \bibliographystyle{abbrv}
\section{Introduction} The enhancement seen in atomic hyperfine structure experiments for electric quadrupole (E2) moments \cite{sc35} over predictions for a configuration formed by only one or a few single particle orbitals has triggered intensive research on its consequences for nuclear structure: Besides the pressure exerted by the individual nucleons \cite{ra50}, a theory of collective rotational motion as the origin of low energy $2^+$-states observed in even nuclei was derived \cite{bo53}. Successful spin predictions for low lying levels in odd nuclei also became possible through the introduction of two oscillator frequencies for each main nuclear shell; this interplay between nucleonic motion and nuclear distortion was studied in nuclei away from magic shells possessing large equilibrium deformation. A possible breaking of axial symmetry in less deformed heavy nuclei was mentioned in these reviews, but the observation of additional $2^+$-levels was preferentially attributed to strongly excited dynamic shape changes along the symmetry axis ($\beta$-vibration) or perpendicular to it ($\gamma$-vibration) \cite{bo75}. An alternative explanation of the various quadrupole excitations by a rotation of a more static non-axial deformation was given \cite{df58}, but not pursued by many other groups. The controversy between static and dynamic triaxiality is still discussed intensively, albeit the relation of nuclear deformation to the Jahn-Teller effect, well established as the cause of symmetry breaking in crystals \cite{ja37, re84}, may challenge the axial approximation as well as the spherical one. A possible reason for the often reported apparent nuclear axiality in Hartree-Fock type variational calculations was shown \cite{ha84} to be related to the order of the angular momentum projection relative to the variational procedure (PAV vs. VAP), and was hence questioned.
In this paper we want to report on a survey in various fields of nuclear physics performed with the aim to test the possible influence of a breaking of axial symmetry on the analysis and interpretation of experimental results. We start with a short review of recent theoretical work and then we list experimental findings in heavy nuclei along the valley of stability which may give a hint on their symmetry. \section{Axial symmetry in heavy nuclei} \subsection{Recent theoretical work} The importance of triaxiality in nearly all heavy nuclei was also shown \cite{de10} by recent HFB-calculations using the Gogny D1S interaction, constrained to the selected values of Z and N and combined with the generator coordinate method. They use a triaxial oscillator basis with the product ${\omega_0}^3 ={\omega_x}{\omega_y}{\omega_z}$, where $\hbar\omega_0$ is obtained through minimization of the HFB energy. At this point $R_0$ and $R_p$ are defined as the radius parameters of the equivalent mass and charge ellipsoids, respectively, and the half axes are given by $R_i = \frac{\omega_0}{\omega_i}\,R_p$. Unfortunately different formulae have been proposed to convert half-axis values to deformation parameters which relate to observables. In a series of tests we determined that the axis lengths calculated from the formulae in refs. \cite{hi53, ku72, de10} differ by less than $2\,\%$ for heavy nuclei when using identical deformations $\beta$ and $\gamma$; the relation to the convention based on spherical harmonics \cite{bo75} is more complicated as the latter is not volume-conserving. Following a suggestion \cite{an94}, we investigated a possible relation between the two deformation parameters for nuclei in the valley of stability \cite{gr17} and found a quite surprising correlation extending over the full range of deformation.
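Assuming the half axes scale as $R_i = (\omega_0/\omega_i)R_p$, the constraint $\omega_0^3=\omega_x\omega_y\omega_z$ makes the deformed ellipsoid volume-conserving; a quick numerical check (the frequencies and radius below are arbitrary illustrative values):

```python
# Volume conservation of the triaxial half axes R_i = (omega_0/omega_i)*R_p,
# given omega_0^3 = omega_x * omega_y * omega_z: the product of the three
# half axes then equals R_p^3, the cube of the spherical radius parameter.
wx, wy, wz = 8.0, 9.5, 11.0            # illustrative oscillator frequencies
w0 = (wx * wy * wz) ** (1.0 / 3.0)     # geometric mean, fixed by the product
Rp = 6.0                               # fm, illustrative radius parameter

Rx, Ry, Rz = (w0 / wi * Rp for wi in (wx, wy, wz))
assert abs(Rx * Ry * Rz - Rp ** 3) < 1e-9   # ellipsoid volume is conserved
assert Rz < Rx                              # stiffer axis (larger omega) is shorter
```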
The tabulated values from a recent Hartree-Fock-Bogolyubov calculation \cite{de10} were used; we repeat that this calculation is constrained to $A$ and $Z$ (CHFB) and is combined with the generator coordinate method (GCM) covering the full range of deformation. Assuming only $R_\pi$-invariance it predicts for many nuclei non-zero triaxiality $\langle\gamma\rangle \neq 0$, and in some cases the predicted standard deviation does not include $\gamma = 0$, {\em i.e.} $\cos(3\gamma) = 1$. As was pointed out \cite{be07}, HFB-calculations tend to overpredict intrinsic electric quadrupole moments $Q_0$ for nuclei near closed shells, as they do not fully account for the very deep mean field potential there. Thus, for nuclei only $\delta$ nucleons away from a shell, a reduction factor of $0.4 + \delta/20$ is applied to the $\beta$-deformation \cite{de10} for $\delta\leq 10$, and this expression is also used in our approach to the GR shapes described later. We calculate correction factors for protons as well as neutrons and the larger of the two is taken; as an example we quote the resulting reduction of the predicted \cite{de10} $\beta$-values by 40, 30, 20 and $10\,\%$ for the isotopes $^{148-154}$Sm. We thus use the tabulated deformation values \cite{de10} for $\cos 3\gamma$ and the corrected $\beta$ to obtain $Q_0$; in Fig.~1 they are plotted against each other. \begin{figure}[htb] \centerline{% \includegraphics[width=8cm]{APPcQ}} \caption{Plot of $\cos 3\gamma$ vs. $Q_0$ as obtained from CHFB calculations \cite{de10}; the data as well as the eye-guide (blue dash) indicate a rise in the axiality with increasing quadrupole deformation.} \label{Fig.1} \end{figure} A rather clear correlation is obvious \cite{an94}, but the apparent tendency towards triaxiality for nuclei with small $Q_0$ (like the isotopes of Pb) may be considered surprising.
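The quoted reductions for $^{148-154}$Sm can be reproduced from the $0.4+\delta/20$ prescription, counting $\delta$ for these isotopes as the neutron distance from the $N=82$ shell closure (our reading of the numbers; the proton distance from the nearest magic number exceeds $10$, so no proton correction applies):

```python
# Reproduce the quoted beta-deformation reductions for 148-154 Sm from the
# shell-proximity prescription: factor = 0.4 + delta/20 for delta <= 10,
# with delta here the neutron distance from the nearest magic number.
Z, magic = 62, (50, 82, 126)

for A, quoted_reduction_percent in [(148, 40), (150, 30), (152, 20), (154, 10)]:
    N = A - Z
    delta = min(abs(N - m) for m in magic)   # N = 86..92 -> delta = 4, 6, 8, 10
    factor = 0.4 + delta / 20 if delta <= 10 else 1.0
    assert round((1 - factor) * 100) == quoted_reduction_percent
```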
It becomes less confusing when the rotation invariant $Q_0^3 \cos 3\gamma$ is used as ordinate, representing the dependence of the axiality on the quadrupole moment; it would tend more strongly towards zero for non-deformed nuclei. Rotation invariants were originally introduced \cite{ku72} for $^{152}$Ba, a nucleus with intermediate deformation, but they are of general value: the relation of the observable B(E2) to the deformation $\beta$ may vary with increasing spin \cite{ri82}, in contrast to such invariants. This is very helpful especially in the analysis of experimental multiple Coulomb excitation data \cite{cl86, sr11}. \subsection{Ground state masses of heavy nuclei} Whereas nuclear theory studies other than the one mentioned above \cite{de10} often still prefer to assume axial symmetry {\em ad hoc}, we actually question that an experimental proof of such symmetry exists, and probably no heavy nucleus strictly obeys it, in apparent resemblance to what was found for tetrahedral crystals by Jahn and Teller in 1937 \cite{ja37} and only much later observed and demonstrated to also apply to the sphericity and axiality of nuclei \cite{re84}. The suggested analogy is very reasonable, as nuclei are three-dimensional objects like crystals, and hence the axial description in two dimensions must be an approximation which has to be justified whenever used. A global test was performed with the extended Thomas-Fermi plus Strutinsky integral (ETFSI) method, generalized to include a possible triaxiality of the nuclear shape \cite{du00}. In that paper ground state masses are calculated for 36 randomly selected nuclides from the valley of stability and tested for a lowering as compared to an axial approximation; such a lowering was found for all of them.
Although this effect turns out to be rather small, the results are at variance with various predictions as reviewed in \cite{mo08}; the latter also presents a table of the results obtained with the Finite-Range Liquid-Drop Model (FRLDM). Unfortunately this table only lists nuclei for which the calculated decrease in energy due to triaxiality is equal to or larger than 0.01 MeV, and this is misleading, as it may be misunderstood as a proof of the axiality of the others. The sensitivity of the ground state mass calculated in the FRLDM to triaxiality is weak, but axial symmetry may well be broken even if its effect is below the calculational accuracy. \subsection{Level energies and transition rates} In a recent study \cite{na17} deviations between the FRLDM and $\gamma$-values from gamma-decay spectra for $^{116-118}$Ru - obtained from fission product spectroscopy - were pointed out: the measured ``signature'' splitting of the yrast bands, when compared with Triaxial Projected Shell Model (TPSM) calculations, shows the need for large, nearly constant, triaxial deformations near 30 degrees, which differ considerably from the FRLDM predictions. Similar discrepancies with theoretical values were reported \cite{me74} already long ago: in three odd nuclei close to $^{208}$Pb the observed level energies agree well with a calculation based on a single-j nucleon coupled to an asymmetric rotor with $\gamma$ clearly different from an axial rotor prediction. The authors suggest that triaxial minima probably are more pronounced in heavy nuclei than predicted by existing calculations. Later on various groups investigated the possibility of triaxiality in various nuclides by extracting information on $\gamma$ from spectroscopic observables: in one case \cite{wu96} it was determined for 25 deformed heavy nuclei with $150 < A < 238$ from transition energies as well as from B(E2)-values.
As reasonable agreement between the two was found, the authors state to have demonstrated quantitatively that the rms shape of all these nuclei is triaxial. Another group \cite{zh99} has pursued similar ideas covering a ``broad range of nuclei from Z = 50-82, including nondeformed, deformed, and $\gamma$-soft nuclei''. They first use results from an analysis of experimental data based on the Davydov model \cite{df58}, which describes the rotation of a triaxial rigid body: $\gamma_E$ is taken from energy ratios, and $\gamma_{(BE2)}$ as well as $\gamma_{(br)}$ are derived from the B(E2)-values or their ratios, taken from three low-lying 2$^+$ states. These are compared against each other and in a second step also with $\gamma_Q$ obtained from fits to experimental data using the interacting boson approximation (IBA). The authors find that the agreement or correlation in the pairwise comparisons of $\gamma$-values is reasonable; they also state that a rotation-invariant approach provides an approximate validation of the extraction of empirical triaxiality values from the Davydov model. But they apparently wonder - in contrast to the work mentioned before - if its inherent rigidity concerning axial asymmetry creates a bias toward static triaxiality in such a study. Recently work was published \cite{be10} where ground-state deformation parameters $\beta$ and $\gamma$ for stable Kr, Xe, Ba, and Sm isotopes were calculated using the eigenvalues of corresponding IBA-1 calculations, and the resulting modifications of the equilibrium deformations as taken from LDM compilations \cite{mo08} were listed. With the exception of two well deformed nuclei, $\gamma$-values between $17^{\circ}$ and $30^{\circ}$ were found, and this may be considered an indication of triaxiality, be it static or dynamic.
We consider this a clear indication to look for other observables of relevant sensitivity, which is expected to be best for not strongly deformed nuclei. \subsection{Coulomb excitation and reorientation} The B(E2)-values and quadrupole moments of nuclei with given intrinsic deformation can be derived microscopically as solutions of the cranked HFB equations \cite{ri82}. They are similar to those calculated for a rigid triaxial rotor \cite{df58} with the same deformation; the two observables depend differently on the intrinsic axial asymmetry. Thus it is an evident strategy to measure both observables in one nucleus with sufficient accuracy to get direct information on its possibly broken axial symmetry. Unfortunately the experimental values reported for Q($2^+$) have rather large uncertainties, related to the difficulties of Coulomb excitation reorientation measurements. But in the case of $^{204}$Pb and $^{206}$Pb the B(E2)-values, and hence their influence on the determination of $\gamma$, are very small, such that the respective paper \cite{jo78} gives values of $\gamma=43(8)^{\circ}$ and $33(6)^{\circ}$, clearly indicating static triaxiality with a tendency toward oblateness. This is a very interesting result with respect to the frequently made assumption that near-magic nuclei are rigidly spherical. In the combination of measurements of Coulomb excitation yields obtained with projectiles of different Z, and in their correlated analysis respecting the effect of rotation invariants, the extraction of the deformation parameters $\beta$ and $\gamma$ becomes less ambiguous. This was demonstrated \cite{cl86, wu95} for a large number of heavy nuclei, and the analysis even allowed an estimate of the quantum-mechanical zero-point oscillation of the derived values.
Such an investigation of nuclei with $180<A<200$ ``clearly elucidates the smooth transition from prolate strongly-deformed shapes to less deformed triaxial shapes that have considerable softness to triaxial vibrations'', as stated by the authors. Even for 25 rather well deformed nuclei intrinsic E2 matrix elements have been deduced from measured interband E2 matrix elements between the ground and $\gamma$ band \cite{wu96, sr11}. After correcting for the angular-momentum dependence of the coupling between rotational and intrinsic motion, centroids for the possibly fluctuating triaxiality are obtained, and these correlate well with the values obtained from the analysis of excitation energies based on rigid triaxiality \cite{df58}. \subsection{Collective enhancement of level densities} The statistical model of nuclear reactions derives the exit-channel phase space from the density of levels in the produced nuclei. A first estimate \cite{be37} was derived from the assumption that nuclear excitation can be modelled like a Fermi gas. The importance of rotational modes at low energy - well known from spectroscopy - led to modifications \cite{er58, gi65}. A generalisation to non-axial deformation is clearly indicated in view of the various band structures found in nuclear spectroscopy experiments; it was proposed to include it in a group-theoretical approach \cite{bj74} which handles the possible symmetries and their breaking. The Fermi gas state density, valid in the body-fixed reference frame, is well defined when the level density parameter $\tilde{a}$ from nuclear matter and a critical temperature $t_{ph-tr}$ from Fermi gas theory are applied. The parity-independent level density observable in the laboratory is obtained for every spin I of an even nucleus by multiplication with a factor depending on the shape symmetry, which is $(2I+1)/4$ if nothing but invariance under a rotation by $\pi$ is assumed.
It is larger if axial deformation is assumed, and even more so in the triaxial case \cite{bj74}. A spin cut-off term assures the correction for the rotational energy, and only shell and pairing effects have to be known for a prediction of level densities once the shape symmetry is known. A sensitive test of this prescription is possible just above the neutron capture energy $S_n$ from the capture resonance spacings observed by neutron time of flight. For spin-0 target nuclei it was carried out by us recently \cite{gr14, gr17} and the results are depicted in Fig.~2, which also shows the reduction predicted under the assumption of axial or spherical symmetry. The discrepancies for $A>140$ may be related to octupolar deviations from $R_\pi$-symmetry. \begin{figure}[htb] \centerline{%
\includegraphics[width=8cm]{APPllSn}}
\caption{Plot of the level density near $S_n$ \cite{ca74} vs. A for even-odd nuclei in the valley of stability, shown as black bars. The drawn blue curve shows the result of our parameter-free prediction assuming only the symmetry related to $R_\pi$, a rotation by $180^{\circ}$. The dashed (red) curve visualizes the effect of the axial symmetry approximation and the lowest (dashed green) curve was calculated without any collective enhancement \cite{be37} and Thomas-Fermi spin dispersion \cite{je52}.} \label{Fig.2} \end{figure} The strong change with A was found to be related to the shell correction reducing the ground state mass as compared to LDM or Thomas-Fermi predictions; effects of nucleon pairing play an increasing role with decreasing excitation energy $E_x$ and lead to a steepening of the slope of the level density vs. $E_x$ below the transition to a quasi-superfluid nuclear phase.
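To make the role of the symmetry factor concrete, the following minimal sketch combines a textbook Bethe Fermi-gas state density (not the exact prescription used in our fits) with the $(2I+1)/4$ factor for the $R_\pi$-only case; all parameter values are purely illustrative:

```python
import math

def bethe_state_density(E, a):
    """Schematic Bethe Fermi-gas state density (E in MeV, a in 1/MeV)."""
    return (math.sqrt(math.pi) / 12.0
            * math.exp(2.0 * math.sqrt(a * E)) / (a**0.25 * E**1.25))

def lab_level_density(E, a, I):
    """Laboratory level density for spin I of an even nucleus, assuming nothing
    but R_pi symmetry: multiply the intrinsic density by (2I + 1)/4.
    (The axial and, even more, the triaxial enhancements are larger.)"""
    return (2.0 * I + 1.0) / 4.0 * bethe_state_density(E, a)

# Illustrative values near S_n ~ 8 MeV for a mid-heavy nucleus (a ~ 15/MeV);
# I = 1/2 corresponds to s-wave capture resonances on a spin-0 target.
rho_half = lab_level_density(8.0, 15.0, I=0.5)
```

The spin dependence enters only through the symmetry factor, so ratios of laboratory densities at fixed excitation energy are independent of the Fermi-gas parameters.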
As we obtain the absolute normalization from the assumption that a Fermi gas prescription based on a nuclear matter Fermi energy is valid down to the phase transition point, we get a parameter-free formula, and we do not consider $\tilde{a}$ a freely adjustable ``level density parameter''. Some ambiguity remains in the actinide region, where (1) the shell correction is difficult to fix and (2) the symmetry under a rotation by $\pi$ might be broken. But aside from that we can state that the breaking of axial symmetry is clearly indicated by the experimental data, although they do not allow a determination of the absolute value of $\gamma$. \subsection{Splitting of giant resonances} The splitting of the Isovector Giant Dipole Resonance (IVGDR) is proposed as an indicator of axial deformation in many nuclear physics textbooks, such as the one by A. Bohr and B. Mottelson \cite{bo75}; there a local fit to the experimental data for the five isotopes regarded was performed, and an independent adjustment of the apparent width appeared clearly superior. In the work of our group, recently reviewed \cite{ju17, gr18}, a more rigorous approach was pursued by applying the old idea that the width is mainly determined by the resonance energy, without a local adjustment for each isotope possibly dependent on $A$ and $Z$. Here the modification accounting for triaxiality \cite{bu91} was helpful to obtain a consistent fit for 23 nuclei (as examples) in a wide range of mass number A under the assumption of broken axial symmetry, with $\Gamma_i = c_w E_i^{1.6}$. This triple Lorentzian procedure \cite{ju10,er10,gr18} (TLO) enables a restriction to only one global fit parameter $c_w = 0.045(3)$, valid for all nuclei regarded, to parametrize the IVGDR width; actually this energy dependence also describes the width variation within one nucleus with its three pole energies taken from the three oscillator frequencies predicted \cite{de10} by the HFB calculation.
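The single-parameter width prescription is easy to state in code; a minimal sketch of the TLO rule $\Gamma_i = c_w E_i^{1.6}$ with the global value $c_w = 0.045$ (the pole energies below are hypothetical):

```python
def tlo_widths(pole_energies_MeV, c_w=0.045):
    """Widths Gamma_i = c_w * E_i**1.6 for the three Lorentzians (TLO).
    c_w = 0.045(3) is the single global fit parameter quoted in the text."""
    return [c_w * E**1.6 for E in pole_energies_MeV]

# Hypothetical IVGDR pole energies (MeV) of a triaxial heavy nucleus:
gammas = tlo_widths([12.3, 14.8, 16.1])
# Larger pole energy -> larger width, with no per-isotope adjustment.
assert gammas[0] < gammas[1] < gammas[2]
```

No per-nucleus width parameter appears: once $c_w$ is fixed globally, the three widths follow from the three pole energies alone.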
Actually our respective prediction for ${}^{208}$Pb agrees with the spreading width predicted in \cite{do72}, so that we could even disregard our fit for $c_w$; the energy-integrated absorption cross section is also fixed, as we require agreement with the TRK sum rule. The central resonance energy $E_0$ is fixed via the LDM \cite{my77} and an adjusted effective nucleon mass $m_{\rm eff} c^2$=800 MeV; other fits to the data only involve the peak widths, which will now be discussed. In Fig.~3 results for two neighboring nuclei are shown to visualize that for ${}^{150}$Sm we clearly need three poles when we use the same width parameter as in $^{152}$Sm; only for the latter a reasonable fit is also possible with the axial approximation \cite{ca09}, as is well known \cite{df58} for strongly deformed nuclei. A modification resulting from sampling the IVGDR shapes according to the variances given \cite{de10} for the CHFB calculation does not influence that conclusion. \begin{figure}[htb] \centerline{%
\includegraphics[width=12.5cm]{APPSm}}
\caption{Plot of the photoneutron cross section \cite{ca74} for $^{150}$Sm (a) and $^{152}$Sm (b) together with the TLO sum of three Lorentzians (drawn curve) with the $E_i$ indicated as black bars. The dashed (purple) curves visualize the effect of shape sampling \cite{zh09, er10, gr17}.} \label{Fig.3} \end{figure} \subsection{Photon strength and neutron capture} The radiative capture of fast neutrons by heavier nuclei plays an important role in considerations for advanced nuclear systems, and it is of interest also for cosmic nucleosynthesis. To test the combination of the present ansatz for the photon strength with the one for the level density - both allowing for a breaking of axial symmetry - a comparison on an absolute scale of predicted with measured average radiative widths is possible.
A sum over the decay channels to all bound states $J_b$ which can be reached from the capture resonances $J_r$ by photons of energy $E_\gamma = E_r - E_b$, multiplied by their density $\rho(E_b,J_b)$, leads to an effective averaging and to a maximum sensitivity of the product at rather low photon energies. As shown by us earlier \cite{sc12, gr17}, the impact of the photon strength on radiative neutron capture cross sections peaks in the region below $E_\gamma \cong 5$ MeV, and low-energy modes may have some effect, as may an irregular $A$ and $Z$ dependence of $\Gamma_\gamma$ or modifications of its slope vs. $E_\gamma$ \cite{ca09}. In our TLO approach the sole variation of $\Gamma_i$ with the pole energies $E_i$ avoids such problems by a strict implementation of the TRK sum rule: at least within the valley of stability a good agreement is obtained on an absolute scale for neutron capture in the range of unresolved resonances. This is depicted in Fig.~4 for the Maxwellian averaged cross sections compiled recently \cite{di10} for $\langle E_n \rangle = 30$~keV. An essential feature here is our global ansatz for the spreading width, which fixes the important tail of the E1 resonance; it depends on the IVGDR pole energies only, and their dependence on the exact deformation parameters is nearly unimportant. But the broken axial symmetry has a large influence on the absolute value of the density of levels reached in the capture process. Thus the good agreement with the experimental data, as seen in the figure, can be considered a clear support of our preference for non-axiality. \begin{figure}[htb] \centerline{
\includegraphics[width=8cm]{APPMx}}
\caption{Plot of the Maxwellian averaged neutron capture cross section, shown as black dots \cite{di10}, for even-even nuclei together with the TLO ansatz using three Lorentzians combined with the discussed parameter-free level density prediction (drawn curve), versus $A$ in the valley of stability.
} \label{Fig.4} \end{figure} For very neutron-rich nuclei with their small $S_n$ the situation may become more complex, and experimental tests may eventually become possible at newly available radioactive beam facilities. \section{Conclusion} Various spectroscopic findings presented over the years \cite{me74, ri82, cl86, an94, wu96, na17} indicated triaxiality for a number of heavy nuclei. Admitting the breaking of axial symmetry in accord with CHFB calculations \cite{de10} clearly improves a global description of Giant Dipole Resonance (IVGDR) shapes by a triple Lorentzian (TLO), introduced recently \cite{ju08, ju10, gr11} and discussed in some detail in this paper. The three parts add up to the TRK sum rule when theoretical predictions for the $A$-dependence of the pole energies from the droplet model \cite{my77} and spreading widths based on one-body dissipation \cite{bu91} are used. The consideration of broken spherical and axial symmetry – for low as well as for increasing excitation energy – even when only weak, allows for a surprising reduction of the number of free fit parameters in two fields: the photon strength function and our novel approach to the density of low-spin states populated by neutron capture. Thus a combination of the ansatz for the photon strength and the one for level densities leads to a surprisingly good prediction of Maxwellian averaged capture cross sections for more than $100$ spin-0 target nuclei with $A>50$. They, as well as resonance spacing data, are well described by a global ansatz with all parameters adjusted in advance and independently of the respective quantities. The triple Lorentzian (TLO) fit to IVGDRs is global, as it has only one free parameter – an effective nucleon mass – adjusted simultaneously to many resonance energies.
The width can be taken from a HFB calculation \cite{do72} for $^{208}$Pb and adjusted to other $A$ and $Z$ only indirectly via the $E_i$; the strength integrated over the IVGDR follows the TRK sum rule, which hence is always fulfilled. We stress that our good representation of level densities and IVGDR shapes, together with the earlier multiple Coulomb excitation data from the Rochester-Warsaw collaboration, clearly hints at a rigidity of the triaxiality, especially for nuclei with intermediate deformation. Other data, like the chiral effects in odd nuclei as well as the collective enhancement of level densities, had already proven to be at variance with the often made assumption of axial deformation in heavy nuclei. \input{Kazirefs171107.tex} \end{document}
\section{Introduction}\label{sec:intro} The connections that have been uncovered between gravity and thermodynamics -- with the fundamental intervention of quantum effects -- have been widely regarded as clues leading to a long-sought theory of quantum gravity. In the work \cite{ACQUAVIVA2017317} we propose a new picture that might help solve some of the many open issues of such a theory. Our approach starts from re-considering the bound established by Bekenstein \cite{Bekenstein1981}, who showed that the entropy $S$ of any physical system contained in a volume $V$, including the volume itself, is supposed to be bounded from above by the value of the Bekenstein-Hawking entropy $S_{\BH}$ of a black hole whose event horizon coincides with the boundary of $V$~\cite{Bousso1999}: \begin{equation}\label{eq:BekensteinBound} S \leq S_{\BH} = \frac{1}{4} \frac{\pd V}{\ell^2_P} \, , \end{equation} where $\ell_{P}$ is the Planck length. The generality of the original bounds for ordinary matter posited by Bekenstein is the subject of intense investigation and debate \cite{Bekenstein2014}. Nonetheless, it is widely accepted that the bound is saturated for black holes. Following the thermodynamic spirit, it is then natural to look for a connection between such an entropy bound and the ensemble of microstates of some fundamental degrees of freedom of the system. The number of degrees of freedom $N$ of a quantum physical system is defined as the number of bits of information necessary to describe the generic state of the system. In other words, $N$ is the logarithm of the dimension $\cal{N}$ of the Hilbert space of the quantum system. In the extreme case of a black hole ${\cal N} = e^{S_{\BH}}$. Hence, Eq.
\ref{eq:BekensteinBound} means {\it a)} that in nature the information contained in any volume $V$ cannot exceed 1 bit every 4 Planck areas of the boundary of $V$ and {\it b)} that only in the extreme case of a black hole are the hypothetical fundamental degrees of freedom most excited (see, e.g., \cite{Bekenstein2003}). The fundamental entities characterized by such degrees of freedom cannot fully coincide with the particles customarily thought of as elementary: if it were so, the bound would be saturated with ordinary matter; moreover, gravity must be included in the counting of the fundamental degrees of freedom, precisely because the saturation happens in the extreme black hole case. Although nothing at this stage can be said about the nature of the fundamental constituents, their behaviour needs to be such that the emergent picture at our scales is that of quantum field theory (QFT) acting on a continuum classical spacetime. However, an assumption that can be made in this context regards the character of their dynamics: does it comply with unitarity or not? Given the central requirement of unitary evolution for the description of quantum systems at our scales, we assume that such a feature is preserved down to the fundamental level. Now, the fact that both gravity and quantum fields should contribute to the counting of degrees of freedom implies the possibility of a {\it sharing} of such degrees of freedom between them. At the same time, according to a statistical-mechanical picture, one can expect, in general, different microstates which give rise to the same classical geometry. These configurations would yield different numbers of degrees of freedom available for the quantum fields. Consequently, even though we assume unitary evolution on the fundamental level, the rearrangement of the fundamental degrees of freedom would lead to an entanglement between the emergent geometry and the emergent fields \cite{ACQUAVIVA2017317}.
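To give a feeling for the scales involved in the holographic counting, the bound can be evaluated numerically; a back-of-the-envelope sketch for a solar-mass Schwarzschild black hole, with rounded SI constants (assumed values, so only the order of magnitude matters):

```python
import math

# Rounded SI constants (assumed values, adequate for an order-of-magnitude check).
G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34
l_P_sq = hbar * G / c**3            # Planck length squared, ~2.6e-70 m^2
M_sun = 1.989e30                    # solar mass in kg

r_s = 2.0 * G * M_sun / c**2        # Schwarzschild radius, ~3 km
area = 4.0 * math.pi * r_s**2       # horizon area = boundary of V
S_BH = area / (4.0 * l_P_sq)        # Bekenstein-Hawking entropy (in nats)
bits = S_BH / math.log(2.0)         # 1 bit per 4*ln(2) Planck areas

# Dimension of the fundamental Hilbert space: N = exp(S_BH), with S_BH ~ 1e77.
assert 1e76 < S_BH < 1e78
```

The enormous value of $S_{\BH}$ makes explicit why the fundamental degrees of freedom cannot be the ordinary field quanta: no ordinary matter configuration in the same volume comes close to saturating this count.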
It is worth stressing at this point that the idea that gravity is an emergent phenomenon arising from more fundamental degrees of freedom is certainly not new and goes back to Sakharov \cite{Sakharov1967,Visser2002}. Presently there exist many particular models describing how gravity could emerge and the common feature of these models is to consider some kind of underlying discrete lattice representing the mutual interactions between fundamental elements. The fact that crystals with defects can give rise to effective non-Euclidean geometries has been employed in the cosmological ``world crystal model'' \cite{Kleinert1987}; it was proposed in \cite{VanRaamsdonk2010} how the classical properties of the space-time might emerge from the quantum entanglement between the actual fundamental degrees of freedom and a specific model along these lines has been proposed recently in \cite{Cao2016}, with the interesting possibility of recovering the ER=EPR conjecture \cite{maldacena2013,jensen2013}; finally, in quantum graphity \cite{Konopka2006,Konopka2008} the fundamental degrees of freedom and their interactions are represented by a complete graph with dynamical structure (for more approaches see, e.g., \cite{Baez1999,Ambjorn2006,Lombard2016,Oriti2014,Rastgoo2016,Requardt2015,Trugenberger2016}). At the same time, nonequivalent descriptions of the same underlying dynamics are a built-in characteristic of QFT \cite{Haag1992}, both in its relativistic \cite{Dirac1966} and nonrelativistic regimes \cite{Umezawa1982} (e.g., in condensed matter). The quantum vacuum has in fact a rich structure with nonequivalent sectors or ``phases'' \cite{Milloni2013}. Such structure is understood in QFT as due to the infinite number of degrees of freedom and/or to a nontrivial topology of the system, such as the presence of topological defects \cite{Blasone2011}. 
On the mathematical level these features are the manifestation of the failure of the Stone-von Neumann theorem \cite{Neumann1931,Hall2013} that holds only for quantum mechanical systems with finite degrees of freedom and trivial topology \cite{Bogolubov2012}. Such failure leads to the existence of different, unitarily inequivalent representations of the field algebra. That is, for a given dynamics, one should expect several different Hilbert spaces representing different phases of the system with distinct physical properties and distinct excitations deemed as elementary in the given phase \footnote{In fact, the concepts of elementary and collective excitations are interchangeable in theories where electromagnetic duality is at play \cite{Montonen1977,Castellani2016}.} \cite{Umezawa1993}, but whose general character is that of the quasiparticles of condensed matter \cite{Landau8,Landau9}. Examples of emergent behaviours in condensed matter are the Cooper pairs of type II superconductors \cite{BSC1957,Altland2010} and the more recently discovered quasiparticles of graphene \cite{CastroNeto2009}. In the latter case, massless Dirac quasiparticles emerge from the dynamics of electrons propagating on carbon honeycomb lattices and give rise to a continuum relativistic-like (2+1)-dimensional field theory on a pseudo-Riemannian geometry \footnote{In fact, in the case of graphene, geometries can indeed be seen as emergent \cite{Iorio2011,Iorio2015}. Inspired by the fact that different arrangements of the carbon atoms can give rise to the same emergent spacetime geometry, in our model we take into account the possibility that the same emergent geometry can be realized through different arrangements of the fundamental degrees of freedom. These microscopic arrangements are indistinguishable at our low energies.}. Similarly one finds examples in the context of black hole physics. 
Indeed, the vacuum of a freely falling observer in Schwarzschild's spacetime can be seen, by a static observer, as a coherent state of Cooper-like pairs similar to that of a superconductor \cite{Israel1992}. The Hawking radiation itself is related to the existence of distinct elementary excitations in the two frames (see the original derivation by Hawking \cite{Hawking1974,Hawking1976} and also \cite{Israel1976,Iorio2004}). \section{Information loss}\label{sec:info} It is clear from the considerations above that, even without specifying the nature (symmetries, type of interaction, etc.) of the hypothetical fundamental constituents, it is possible to obtain model-independent conclusions based on the validity of the holographic bound and the preservation of unitary evolution at the fundamental level. At this point we can extract an important consequence for the process of black hole evaporation and the resulting information-loss issue. In the standard scenario assuming unitary evolution, the information contained in the collapsing matter is scrambled inside the black hole, but is eventually fully released during evaporation. This paradigm of information conservation is manifested by the so-called ``Page curve'' \cite{Page1993b} (see also \cite{Harlow2016,Chakraborty2017}) which describes the complete information retrieval in the Hawking radiation at the final stage of the black hole evaporation. On the other hand, a loss of information -- in the sense of evolution of a pure state into a mixed state -- can have two causes. The first one is that the laws of quantum theory are indeed violated in some regimes. The second one is that only some subsystem of the universe is accessible, hence there will always be a residual entanglement of the subsystem with the inaccessible parts \cite{Wald-Unruh2017}. We do not consider the first possibility, rather we suggest that part of the total system is always hidden in the following sense. 
In the emergent picture, the probability that the fundamental degrees of freedom after the complete evaporation reorganize just like before the collapse leading to the black hole is inversely proportional to the number of their possible nonequivalent rearrangements. Therefore, even if one demands the dynamics of the fundamental constituents to be unitary and even if the geometries before the formation of the black hole and after its evaporation are the same, the emerging quantum fields will be in general different (i.e., will live in different Hilbert spaces). Hence, one expects that the entanglement between the geometry and the quantum fields due to the reshuffling of fundamental degrees of freedom could lead to a loss of information in the Hawking radiation \cite{ACQUAVIVA2017317}. \section{Model of black hole evaporation}\label{sec:model} Our goal at this point is to construct a simple kinematical model which mimics the evaporation of the black hole while taking into account the conceptual framework presented above. We consider the following idealized scenario, see \cite{ACQUAVIVA2017317}. Initially, there is a quantum field (in an almost flat space) which collapses and eventually forms a black hole of mass $M_0$. The black hole starts to evaporate in a discrete way: for simplicity we assume that each emitted quantum of the field has the same energy $\eps$, so that $M_0 = N_{\max} \,\eps$ for some integer $N_{\max}$. At the end of the evaporation, the space becomes almost flat again and the field is in the excited state with $N_{\max}$ quanta. The formal assumptions behind such scenario are the following: \begin{enumerate} \item There exists a \emph{fundamental Hilbert space} $\HH$ describing the fundamental degrees of freedom of the total system (geometry and fields). 
Since we focus on a finite region accessible to a generic observer and big enough to contain the black hole at initial time and the emitted radiation at a later time, $\HH$ is considered finite-dimensional due to the holographic bound; \item For a specific observer at low-energy scales, the states of $\HH$ appear as classical \emph{spatial} geometry and quantum fields propagating on it. \item There are states in $\HH$ which represent the same classical geometry but are microscopically different. \item During the unitary evolution there is an exchange of the number of degrees of freedom between the fields and geometry. \end{enumerate} In order to make connection with low-energy physics, in this model we introduce a space of classical geometries representing spatial slices of space-time containing a black hole of a given mass $M^{(a)} = a\,\eps$. That is, we introduce an orthonormal set of states \begin{equation}\label{eq:ga state} \ket{ g^{(a)} }, \qquad a = 0, 1, \dots N_G - 1, \end{equation} where $N_G$ is therefore the number of geometries allowed in our model. For convenience, we introduce the \emph{Hilbert space of classical geometries} $\HHG$ as the linear span of the states (\ref{eq:ga state}) and define the ``mass operator'' ${\MM}$ by \begin{equation} {\MM} \ket{g^{(a)}} = M^{(a)} \ket{g^{(a)}} \equiv \eps\,a\,\ket{g^{(a)}}. \end{equation} An operator of this kind should represent the possibility of measuring geometric properties of the space, such as the three-dimensional metric, as seen by a specific observer: for simplicity, here we can restrict to cases where the geometry is determined by one macroscopic quantity (the mass, for the purpose at hand). 
The assumption that the geometry of the space is a result of some coarse-graining procedure associated with a specific observer means there is some mapping $\PPG: \HH \mapsto \HHG$ which assigns to a microscopic state in $\HH$ the corresponding classical geometry or an appropriate superposition of such geometries. This is analogous to the {\it emergence map} recently introduced in \cite{doi:10.1142/S0218271817430131}. Similarly, we shall assume the existence of some mapping $\PPF:\HH \mapsto \HHF$ which extracts the ``field content'' of a state in $\HH$. Here $\HHF$ can be, e.g.,\ an appropriate Hilbert (Fock) space representing the states of the fields; more concrete definitions will depend on the particular theory of quantum gravity. Schematically, the states of the fundamental Hilbert space $\HH$ can be interpreted as states with some classical geometry via the mapping $\PPG$ and with some state of the quantum field via the mapping $\PPF$: \begin{center} \begin{tikzpicture} \draw (0,0) node {$\ket{\psi}\in\HH$}; \draw[->] (-0.5,-0.3)--+(-1.5,-0.5) node[pos=0.75,above=3pt] {\scriptsize $\PPG$}; \draw[->] (0.3,-0.3)--+(1.5,-0.5) node[pos=0.75,above=3pt] {\scriptsize $\PPF$}; \draw (-2, -1.1) node { $\ket{g^{(a)}}\in\HHG$}; \draw (2, -1.1) node { $\ket{\phi} \in \HHF$}; \end{tikzpicture} \end{center} After introducing these mappings, one can label the states in $\HH$ by the values of the coarse-grained quantities, i.e., $\ket{\psi}=\ket{g^{(a)}, \phi}$. For simplicity we assume that any state of $\HH$ can be interpreted in such a way, although in reality this is much more complicated: classical geometries are expected to be very special superpositions of basis states with no classical analogues. Since we are not building a specific model of quantum gravity, we ignore this complication.
On the other hand, one can argue that among the states corresponding to definite classical geometries one can choose a subset of (sufficiently distinct) states which are approximately orthogonal and consider only a subspace of $\HH$ generated by this approximately orthonormal set. In \cite{Page1993b} Page considers a splitting of the Hilbert space representing the states of the field into ``inside'' and ``outside'' parts with respect to the horizon of the black hole. We wish instead to implement the idea that the geometry and its fundamental degrees of freedom must be brought into the picture, so that one should split the fundamental space $\HH$ into a direct product of ``geometrical'' and ``field'' part. However, for our argument it is essential to entertain the possibility that the distribution of the microscopic degrees of freedom between the geometry and the fields is not fixed and can change during the evolution of the system. We assume now that the fundamental Hilbert space $\HH$ can be split into a direct sum of the subspaces $T_{(i)}$, \begin{equation} \HH = \bigoplus_{i=1}^{N_T} T_{(i)}, \qquad \dim \HH = N_T \, N, \end{equation} where each $T_{(i)}$ has a fixed dimension $N$ and consists of states with some specific distribution of the degrees of freedom between the geometry and the fields, so that $N_T$ is the number of different available distributions. By assumption, each $T_{(i)}$ has a structure \begin{equation} T_{(i)} = \HH_{\mathrm{G}}^{p_i} \otimes \HH_{\mathrm{F}}^{q_i}, \qquad p_i\,q_i = N, \end{equation} where $\HH_{\mathrm{G}}^{p_i}$ ($\HH_{\mathrm{F}}^{q_i}$) is a Hilbert space of dimension $p_i$ ($q_i$) representing possible microscopic states of the geometry (fields). Considering a generic state $\ket{\psi}\in\HH$, we define its normalized projection $\ket{\psi}_{i}$ onto the subspace $T_{(i)}$. Then the state of the field is described by the density matrix ${\rho}_{(i)}$ defined by tracing over the degrees of freedom of the geometry. 
The corresponding entanglement entropy will be denoted by \begin{equation}\label{eq:S-i} S_{(i)} = - \Tr_{\HH_{\mathrm{F}}^{q_i}} {\rho}_{(i)} \ln {\rho}_{(i)} . \end{equation} The latter represents the entanglement entropy between the geometry and the fields for a given microscopic arrangement $\ket{\psi}_{i}$ of the fundamental degrees of freedom. The expected value of the entanglement between the fields and the geometrical degrees of freedom will be \begin{equation}\label{eq:S avg} \langle S \rangle = \sum_i p_{(i)} S_{(i)} , \end{equation} where $p_{(i)}$ is the probability of finding the system in the state with the specific arrangement $T_{(i)}$; such arrangements are indistinguishable for the observer. In order to explicitly compute the average entanglement entropy, we specialize to a simplified scenario which nevertheless captures the essence and consequences of the procedure: we assume that only two arrangements are possible ($N_T = 2$) and that both arrangements admit the same family of classical geometries (\ref{eq:ga state}). Let us fix the number of degrees of freedom for each type of arrangement to $N = 1500$ and let us set \begin{eqnarray} T_{(1)} & = & \HH_{\mathrm{G}}^{30} \otimes \HH_{\mathrm{F}}^{50}, \quad p_1 \times q_1 = 30 \times 50 , \nonumber \\ T_{(2)} & = & \HH_{\mathrm{G}}^{60} \otimes \HH_{\mathrm{F}}^{25}, \quad p_2 \times q_2 = 60 \times 25 . \end{eqnarray} Then we have $\dim \HH = 3000$. Finally, we assume that the maximal mass $M_0$ of the black hole is split into $N_G = 30$ quanta. That is, for a black hole of mass $M^{(a)}=a\,\eps$ there is exactly one state in $\HH_{\mathrm{G}}^{30}$ which is mapped to a state $\ket{g^{(a)}}$ by $\PPG$, while in $\HH_{\mathrm{G}}^{60}$ there are two such states. More details on the construction and the calculation of the entanglement entropy in this framework can be found in \cite{ACQUAVIVA2017317}. 
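For concreteness, the average entanglement entropy $\langle S \rangle$ in this two-arrangement toy model can be estimated numerically. The following Python sketch is an illustration of ours (not part of the model's formal development): a Haar-random global state stands in for the generic state $\ket{\psi}$, each $S_{(i)}$ is obtained from the Schmidt coefficients of the normalized projection onto $T_{(i)}$, and the results are weighted by the projection probabilities $p_{(i)}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy-model arrangements (geometry x field): T_(1) = 30 x 50, T_(2) = 60 x 25.
arrangements = [(30, 50), (60, 25)]
dim_total = sum(p * q for p, q in arrangements)   # dim H = 3000

def field_entropy(block, p, q):
    """Entanglement entropy S_(i) of the field: trace out the p geometry
    degrees of freedom of a normalized state in H_G^p (x) H_F^q."""
    schmidt = np.linalg.svd(block.reshape(p, q), compute_uv=False)
    probs = schmidt**2                             # Schmidt spectrum
    probs = probs[probs > 1e-15]
    return float(-np.sum(probs * np.log(probs)))

# A generic (Haar-random) normalized state in H = T_(1) (+) T_(2).
psi = rng.normal(size=dim_total) + 1j * rng.normal(size=dim_total)
psi /= np.linalg.norm(psi)

avg_S, start = 0.0, 0
for p, q in arrangements:
    block = psi[start:start + p * q]
    start += p * q
    weight = float(np.sum(np.abs(block)**2))       # probability p_(i)
    avg_S += weight * field_entropy(block / np.sqrt(weight), p, q)

print(avg_S)
```

For the dimensions $30\times 50$ and $60\times 25$ used here, a random state gives $\langle S\rangle \approx 3$ nats, close to the Page average $\ln m - m/(2n)$ for each block (with $m \le n$ the two factor dimensions).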
At the beginning of the evaporation process, let the black hole have its maximal mass $M_0 = (N_G -1) \eps$ and let there be vacuum outside the black hole. That does not necessarily mean that the Hilbert space for the field has dimension 1, as it is in Page's case; indeed, in our model we have chosen the dimension to be either 50 or 25, depending on the microscopic arrangement. However, since there is only one vacuum state in both arrangements, the field is disentangled from the geometry and we have $\avg{S}=0$. Hence, our starting point coincides with the starting point of Page. Now, as the black hole starts to evaporate, we assume that the state in $\HH$ evolves continuously and unitarily; however we take ``snapshots'' of the system when the expected values of the mass of the black hole and of the number of particles are, respectively, \begin{equation} \avg{M} = (N_G-1-k)\,\eps \quad {\rm and} \quad \avg{n} = k \,, \end{equation} where $k = 0, 1, \dots, N_G-1$. \begin{figure} \centering \includegraphics[width=1 \textwidth]{modified-page} \caption{Entanglement entropy during evaporation in the quasi-particle picture. Entanglement entropy here is a function of the mass of the black hole, decreasing during the process of evaporation. When the evaporation starts, that is when $M=M_0$ and $\avg{S}=0$, this curve exactly corresponds to the Page curve. Nonetheless, the end point corresponds to a dramatically different scenario: at the end of the evaporation the entanglement entropy stays finite, due to the unavoidable entanglement between geometry and fields. This lack of purity of the final state is due to the presence of more than one possible microscopic realization of the same emergent geometry. One should not underestimate this effect on account of the small deviation from zero obtained in the toy model. In fact, first, even a small deviation of $\avg{S}$ from the pure state value signals a dramatic departure from the information-conserved scenario.
Second, departing from the toy model, hence allowing for more microscopic realizations of the same macroscopic geometry, would in general increase the final deviation.} \label{fig:page-modified} \end{figure} An increase in $k$ corresponds to a decrease of the mass of the black hole and to an increase in the number of expected particles of the field, so that, under the assumption of a dynamical evaporation process, $k$ can be taken as a discrete evolution parameter. In Fig.\ \ref{fig:page-modified} we show the numerical result for the entanglement entropy as a function of decreasing black hole mass. Although the curve starts at the point $(M_0,0)$, which coincides with the origin of the Page curve, at the final stage of the evaporation the entanglement entropy does not go back to zero. Such a deviation from a pure final state is due to the residual entanglement between geometry and field, and it can be traced back to the presence of more than one possible microscopic realization of the same emergent geometry. It is clear that allowing for more microscopic realizations of the same macroscopic geometry would in general increase the final deviation of $\avg{S}$ from the pure state value. \section{Conclusions}\label{sec:conc} In \cite{ACQUAVIVA2017317}, elaborating on the fact that the number of degrees of freedom that determine the state of a system in a compact volume $V$ is bounded from above by the Bekenstein-Hawking entropy of a black hole with horizon area $\partial V$ led us to entertain the possibility that such fundamental degrees of freedom should describe the state of both fields {\it and} geometry contained in that volume. The immediate consequence of such a statement is that fields and geometry should be regarded as emergent phenomena at ordinary energy scales.
In this picture the particles of the Standard Model are analogous to quasiparticles, arising together with the classical background geometry from the interactions between the fundamental degrees of freedom. We have not provided a framework for such dynamical emergence (see \cite{Cao2016,maldacena2013,jensen2013,Konopka2006,Baez1999,Ambjorn2006,Lombard2016,Oriti2014,Rastgoo2016,Requardt2015,Trugenberger2016} for some examples along these lines), and we assumed that the unitary evolution is preserved down to the fundamental level, although we are currently investigating whether such ``fundamental unitarity'' indeed makes sense \cite{vonNeumannANDBekenstein}. We can then safely conclude that the evolution inevitably leads to a {\it reshuffling} of the fundamental degrees of freedom, and this is reflected at the emergent level as an entanglement between quantum fields and geometry. In order to investigate the consequences of such a scenario, we provided a kinematical framework that allows us to address the issue of information loss in the context of black hole evaporation. Through a simple toy model of evaporation we have shown how the entanglement between fields and geometry can lead, after the evaporation is completed, to an average loss of the initial information. We claim that such a modification of the original Page curve should be regarded as a common feature of any theory of quantum gravity in which both the spacetime geometry and the quantum fields propagating on it are emergent features of an underlying fundamental and unitary theory. \section*{Acknowledgments} The authors acknowledge financial support from the Czech Science Foundation (GA\v{C}R), grant no.\ 14-07983S (A.I.), and grant no.\ 17-16260Y (G.A., M.S.), and are indebted to Georgios Luke\v{s}-Gerakopoulos for many fruitful and inspirational discussions, and to Pavel Krtou\v{s} for critical remarks.
\section{Introduction} An unbiased line survey is an effective method for understanding chemical compositions as well as excitation conditions in the ISM. So far, many line survey observations toward characteristic sources in our Galaxy have been carried out using radio telescopes. For example, line surveys toward massive star forming regions were reported in Sgr B2 (Cummins et al. 1986; Friedel et al. 2004) and Ori-KL (Johansson et al. 1984; Turner 1989; 1991; Crockett et al. 2010). Results for the low-mass starless core TMC-1 (cyanopolyyne peak) (Kaifu et al. 2004; Kalenskii et al. 2004) and the evolved star IRC+10216 (Avery et al. 1992; Kawaguchi et al. 1995; Cernicharo et al. 2010) were also published. Ultra compact HII regions in W51 and W3 (Bell et al. 1993; Helmich \& van Dishoeck 1997), and the shocked molecular cloud L1157 B1 (Sugimura et al. 2011; Yamaguchi et al. 2012) were observed. Moreover, Takekawa et al. (2014) reported the molecular composition toward the Galactic circumnuclear disk (CND) region and Sgr A*. The previous work showed that line surveys are useful not only in obtaining molecular abundances of known tracers, but also in discovering new chemical probes. Over the last 10 years or so, targets for line surveys have expanded beyond nearby Galactic sources to sources with weaker emission in some external galaxies, thanks to new instruments. These new instruments include low noise receivers with superconducting devices, wide band intermediate frequency chains, and wide band and high resolution spectrometers for large single-dish telescopes such as the Nobeyama Radio Observatory (NRO) 45-m telescope\footnote{The 45-m radio telescope is operated by Nobeyama Radio Observatory, a branch of the National Astronomical Observatory of Japan.} (e.g. Nakajima et al. 2008) and the Institut de Radioastronomie Millim\'{e}trique (IRAM) 30-m telescope (Carter et al. 2012).
A pioneering spectral line survey toward the nearby galaxy NGC 253 was carried out by Mart\'{i}n et al. (2006) with the IRAM 30-m telescope in the 2-mm band. To date, a number of unbiased line surveys at mm/sub-mm wavelengths have been reported towards external galaxies using single-dish telescopes (e.g. Naylor et al. 2010; van der Werf et al. 2010; Aladro et al. 2011b, 2013, 2015; Snell et al. 2011; Costagliola et al. 2011; Kamenetzky et al. 2011; Spinoglio et al. 2012; Davis et al. 2013; Watanabe et al. 2014). Previous extragalactic line surveys were reviewed by Mart\'{i}n (2011). As a result, about 60 molecular species have been identified in nearby external galaxies (a list is available in CDMS\footnote{The Cologne Database for Molecular Spectroscopy (CDMS) can be accessed at http://www.astro.uni-koeln.de/cdms/.}; M\"{u}ller et al. 2005), and it is now possible to study molecular abundances and chemical reactions in external galaxies. In fact, some groups have suggested that it is possible to diagnose power sources in galactic nuclei using intensity ratios of molecular lines, such as HCN/HCO$^{+}$, HCN/CO, or HCN/CS (e.g. Jackson et al. 1993; Tacconi et al. 1994; Helfer \& Blitz 1995; Kohno et al. 1996, 2008; Usero et al. 2004; Imanishi et al. 2007; 2016; Krips et al. 2007; 2008; Davies et al. 2012; Krips 2012; Izumi et al. 2013; 2016). However, recent high spatial resolution images of several molecules with the Atacama Large Millimeter/sub-millimeter Array (ALMA) toward NGC 1068 and NGC 1097 suggest that the interpretation of molecular lines in the vicinity of an active nucleus is still complicated (Garc\'{i}a-Burillo et al. 2014; Mart\'{i}n et al. 2015). In any case, it is important to investigate the relationships between the power sources of galaxies such as active galactic nuclei (AGNs) and/or starbursts, and the chemical properties of the surrounding dense interstellar medium.
In order to study the effects on molecular abundances and to probe the power sources, we need to observe molecular lines in regions irradiated by UV photons from starbursts or by X-rays from AGNs, which produce a photon-dominated region (PDR) (e.g. Hollenbach \& Tielens 1999) or an X-ray dominated region (XDR) (e.g. Maloney et al. 1996; Lepp \& Dalgarno 1996; Meijerink \& Spaans 2005), respectively. Different molecular signatures are expected for these two regions. Observations of the molecular gas chemistry of the AGN in NGC 1068, one of the nearest galaxies with an AGN, have been reported by several groups (Usero et al. 2004; P\'{e}rez-Beaupuits et al. 2009; Garc\'{i}a-Burillo et al. 2010; Nakajima et al. 2011, 2015; Aladro et al. 2013, 2015; Takano et al. 2014; and Viti et al. 2014). However, the observed molecular lines in these previous works, except for Aladro et al. (2013; 2015) and Takano et al. (2014), are limited to molecules with strong emission lines, such as CO, HCN, HCO$^{+}$, SiO, and CN. Therefore, further systematic observations of molecular lines with a compact beam and without missing flux, targeting the molecular gas affected by the AGN, are indispensable to study the impact of the AGN on the interstellar medium. Up to now, there have been two complete line survey observations, which have sufficient velocity resolution ($<$20 km s$^{-1}$) and high sensitivity ($\sim$1 mK) in the 3-mm band toward NGC 1068, one with the IRAM 30-m telescope by Aladro et al. (2013; 2015), and the other with the NRO 45-m telescope by Takano et al. (in prep., hereafter the data paper). The prior observations toward NGC 1068 by Snell et al. (2011) and Kamenetzky et al. (2011) are also line surveys at 74--111 GHz and 190--307 GHz, respectively. However, the velocity resolution of their instruments, the Redshift Search Receiver (RSR) and a broadband millimeter-wave grating spectrometer (Z-Spec), is more than 100 km s$^{-1}$.
Therefore, only strong lines with wide line widths were detected. From the initial results of the line survey with the NRO 45-m telescope, we successfully detected basic carbon-containing molecules such as C$_{2}$H and cyclic-C$_{3}$H$_{2}$ (Nakajima et al. 2011). The relative abundances of these molecules with respect to CS were found to show no significant differences between NGC 1068 and NGC 253. Thus, it was concluded that these basic carbon-containing molecules are insensitive to the AGN conditions and/or that these molecules exist in a cold gas away from the AGN. Aladro et al. (2013) compared their results in NGC 1068 with the typical starburst galaxies NGC 253 and M 82, and they suggested that the gas in the nucleus of NGC 1068 has a different chemical composition from those of the starburst galaxies. In particular, they determined that the abundances of CN, SiO, HCO$^{+}$, and HCN are higher in NGC 1068. In contrast, H$_{2}$CO and CH$_{3}$CCH, which are abundant species in the starburst galaxies, are not detected in NGC 1068. The feature of abundant CH$_{3}$CCH in starburst galaxies was also suggested in Aladro et al. (2015), who obtained and compared the line survey spectra in the 3-mm band toward the starburst galaxies NGC 253, M 82, and M 83, the AGN-hosting galaxies M 51, NGC 1068, and NGC 7469, and the ultra-luminous infrared galaxies (ULIRGs) Arp 220 and Mrk 231. In this paper, we report analysis results of molecular lines toward NGC 1068 based on the line survey in the 3-mm band using the NRO 45-m telescope. The beam size of this telescope ($\sim$19$^{\prime\prime}$ at 85 GHz) is smaller than the size of the starburst ring in NGC 1068 ($d$$\sim$30$^{\prime\prime}$). This small beam size is essential to study the impact of the AGN on the surrounding molecules, because it will enable us to separate the contamination of the molecular lines from the starburst region in NGC 1068. 
We also observed the typical starburst galaxies NGC 253 and IC 342 to compare the effects of the AGN and the starburst on molecular abundances. The details of observations and molecular spectra of each galaxy are presented in the data paper. In this paper, we present the results of the fractional abundances of all detected molecules, and discuss the physical and chemical properties, and possible scenarios for the differentiation of the molecular abundances in both types of galaxies. The analysis for the calculation of rotation temperatures and column densities is described in section 2, and these calculated results are presented in section 3. The trends found in comparing the molecular abundances in the three galactic nuclei and the possible explanations for particular molecular abundances are discussed in section 4. \section{Analysis} \subsection{Rotation Diagram} We constructed rotation diagrams of the observed molecules in each galaxy. A list of detected and non-detected molecules is shown in Table 1. For molecules with detections of only one transition in our observations, we refer to the data of other transitions in other bands in previous papers. Although there are a lot of previous observations in the 3-mm band, we used here only our results in that frequency band in the rotation diagrams. Rotational temperatures ($T_{\rm rot}$) were calculated from the slopes of the fitted lines on the rotation diagrams under the assumptions that all lines are optically thin, that a single excitation temperature ($T_{\rm ex}$) characterizes all transitions, and that local thermodynamic equilibrium (LTE) pertains. 
We used the following equation (e.g., Turner 1991), \begin{equation} {\rm log}\left(\frac{N_{\rm mol}}{Z}\right) = {\rm log}\left(\frac{8{\pi}k{\nu}^{2}W}{hc^{3}A_{ul}g_{u}g_{I}g_{K}}\right) + \frac{E_{u}}{k} \frac{{\rm log} \,e}{T_{\rm rot}}, \end{equation} where $N_{\rm mol}$ is the column density, $Z$ the partition function, $\nu$ the frequency, $g_{u}$ the rotational degeneracy of the upper state (2$J_{u}$ + 1), $g_{I}$ and $g_{K}$ the reduced nuclear spin degeneracy and $K$-level degeneracy, respectively, and $E_{u}$ the energy of the upper state of the transition. The integrated intensity $W = \int T_{\rm mb} dv$ is corrected for the beam filling factor ($\eta_{\rm bf}$), which is calculated as $\eta_{\rm bf}$ = $\theta_{\rm s}^{2}$ / ($\theta_{\rm s}^{2}$ + $\theta_{\rm b}^{2}$). It accounts for the dilution effect due to the coupling between the source and the telescope beam in the approximation of a Gaussian source distribution of size $\theta_{\rm s}$ (FWHM) that is observed with a Gaussian beam of size $\theta_{\rm b}$ (HPBW). $\theta_{\rm b}$ of the receiver used on the NRO 45-m telescope was measured by cross scans of a point source, such as a quasar (Nakajima et al. 2008). The adopted $\theta_{\rm b}$ for calculation of the beam filling factor is presented in the table in Appendix 2. The error of $\theta_{\rm b}$ obtained with the NRO 45-m telescope is about 2 \%, and its effect on the estimates of $T_{\rm rot}$ and $N_{\rm mol}$ is negligible. The adopted $\theta_{\rm s}$ values of NGC 1068, NGC 253 and IC 342 are 4$^{\prime\prime}$, 20$^{\prime\prime}$ and 6$^{\prime\prime}$, respectively. They are taken from the distributions of HCN for NGC 1068 (Helfer \& Blitz 1995), CS for NGC 253 (Peng et al. 1996) and HCN for IC 342 (Schinnerer et al. 2008), which are estimated from interferometric maps. Moreover, these $\theta_{\rm s}$ are similar to the values in the previous works for easy comparison (Bayet et al. 2009; Aladro et al.
2013; 2015 for NGC 1068, Mart\'{i}n et al. 2006 for NGC 253, and Bayet et al. 2009 for IC 342). An uncertainty of $\pm$50 \% in the assumed source sizes translates into about a $\pm$20 \% uncertainty in the estimates of $T_{\rm rot}$ and $N_{\rm mol}$. $A_{ul}$ is the Einstein $A$-coefficient given by \begin{equation} A_{ul} = \frac{64{\pi}^{4}{\nu}^{3}}{3hc^{3}}\frac{S{\mu_{ul}}^{2}}{g_{u}}, \end{equation} where $\mu_{ul}$ is the transition dipole moment and $S$ is the line strength. We obtain $\nu$ and $\mu_{ul}$ from the CDMS database. All rotation diagrams of the detected molecules in each galaxy are shown in Appendix 1, and the line parameters of the molecules are listed in Appendix 2. The upper or lower limits of the physical parameters for non-detected lines are calculated with their rms noise level (1 $\sigma$). It is noted that some molecular lines are optically thick, so that the assumption of optically thin emission is not valid. For example, the column densities of $^{12}$CO, HCN, and CN are possibly underestimated. The optical depths of these molecules are described in section 3.1. \subsection{Column Density} $N_{\rm mol}$ is calculated from the intercept at $E_{u}/k = 0$ of the fitted line on the rotation diagram and equation (1). The partition function $Z$ for each molecular species is calculated based on the following equation. \begin{equation} Z = \sum_{\rm all {\it E_{i}}}g_{u}g_{I}g_{K} {\rm exp}\left(\frac{-E_{i}}{kT_{\rm rot}}\right). \end{equation} We describe $Z$ for linear species, symmetric tops, and asymmetric tops in the next subsections based on Turner (1991). For polyatomic molecules except for linear species, we use the integral approximation for $Z$ because of the high density of rotational energy levels. For the calculations of $N_{\rm mol}$ for molecules with only one detected transition, we assumed $T_{\rm rot}$ = 10$\pm$5 K, which is close to the rotational temperature obtained for CS in the three galaxies in this observation (Table 2).
Because the critical density of CS ({\it J}=2--1) is about 10$^{4}$--10$^{5}$ cm$^{-3}$, it traces regions of higher density than those traced by CO, and this assumption of $T_{\rm rot}$ is probably reasonable for such molecules. \subsubsection{Linear Species} $Z$ for linear species is calculated numerically using the following equation, which we apply to $^{12}$CO, $^{13}$CO, C$^{18}$O, C$^{17}$O, CS, C$^{34}$S, HC$_{3}$N, SiO, HCN, H$^{13}$CN, HCO$^{+}$, H$^{13}$CO$^{+}$, HNC, N$_{2}$H$^{+}$, SO, CN, $^{13}$CN, and C$_{2}$H: \begin{equation} Z = \sum_{\rm {\it E_{i}}}g_{u} {\rm exp}\left(\frac{-E_{i}}{kT_{\rm rot}}\right). \end{equation} Note that $g_{I}$ and $g_{K}$ are equal to one for all levels in the case of the above molecules. \subsubsection{Symmetric Tops} $Z$ for prolate symmetric tops is calculated using the following equation: \begin{equation} Z = \frac{1}{3}\left[\frac{\pi(kT_{\rm rot})^{3}}{h^{3} AB^{2}}\right]^{\frac{1}{2}} \end{equation} for molecules with C$_{3v}$ symmetry such as CH$_{3}$CN and CH$_{3}$CCH, where $A$ and $B$ are rotational constants. The line strengths of CH$_{3}$CN and CH$_{3}$CCH are obtained from Boucher et al. (1980) and Bauer et al. (1979), respectively. For these molecules, $g_{K} = 1$ for $K = 0$ and $g_{K} = 2$ for $K \neq 0$, and $g_{I} = 1/2$ for $K = 3n$ and $g_{I} = 1/4$ for $K \neq 3n$ ($n = 0, 1, 2, \cdots$) are used in equation (1). Although the transitions of different $K$-ladders are blended in our observations, we apply the method for separating blended lines based on Mart\'{i}n et al. (2006).
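As a concreteness check, equation (1) and the numerical partition function of equation (4) can be combined into a small rotation-diagram fit. The Python sketch below is our own illustration in CGS units; the rotational constant, frequencies, Einstein coefficients, and level energies are made-up numbers for a fictitious linear molecule, not values from this work. It generates synthetic integrated intensities at $T_{\rm rot} = 10$ K and $N_{\rm mol} = 10^{14}$ cm$^{-2}$ and recovers both quantities from the fit:

```python
import numpy as np

# CGS constants, matching the conventions of equations (1) and (4).
k = 1.380649e-16    # Boltzmann constant [erg/K]
h = 6.62607015e-27  # Planck constant [erg s]
c = 2.99792458e10   # speed of light [cm/s]

def partition_linear(B, T_rot, J_max=200):
    """Equation (4) for a rigid linear rotor, Z = sum_J (2J+1) e^{-hBJ(J+1)/kT},
    with the rotational constant B in Hz (g_I = g_K = 1)."""
    J = np.arange(J_max + 1)
    return np.sum((2 * J + 1) * np.exp(-h * B * J * (J + 1) / (k * T_rot)))

def rotation_diagram(nu, A_ul, g_u, E_u_over_k, W, Z):
    """Linear fit of equation (1): log10(N_u/g_u) versus E_u/k.
    W is the integrated intensity, already corrected by the beam filling
    factor eta_bf = theta_s**2 / (theta_s**2 + theta_b**2); g_I = g_K = 1."""
    y = np.log10(8 * np.pi * k * nu**2 * W / (h * c**3 * A_ul * g_u))
    slope, intercept = np.polyfit(E_u_over_k, y, 1)
    T_rot = -np.log10(np.e) / slope          # slope = -log10(e) / T_rot
    N_mol = Z * 10**intercept                # intercept = log10(N_mol / Z)
    return T_rot, N_mol

# Round-trip test with synthetic data: three transitions of a fictitious
# linear molecule (B = 45 GHz) at T_rot = 10 K and N_mol = 1e14 cm^-2.
nu   = np.array([9.0e10, 1.8e11, 2.7e11])    # rest frequencies [Hz]
A_ul = np.array([1.0e-7, 8.0e-7, 3.0e-6])    # Einstein coefficients [s^-1]
g_u  = np.array([3.0, 5.0, 7.0])             # upper-state degeneracies
E_u  = np.array([4.3, 13.0, 26.0])           # E_u/k [K]
Z = partition_linear(45.0e9, 10.0)
W = h * c**3 * A_ul * g_u * 1.0e14 * np.exp(-E_u / 10.0) / (8 * np.pi * k * nu**2 * Z)
T_rot, N_mol = rotation_diagram(nu, A_ul, g_u, E_u, W, Z)
print(T_rot, N_mol)   # recovers 10 K and 1e14 cm^-2
```

In practice each $y$-point carries the measurement error of $W$, which propagates into the spread of slopes and intercepts used below for the quoted uncertainties.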
\subsubsection{Asymmetric Tops} $Z$ for asymmetric tops is calculated using the following equations: \begin{equation} Z = \left[\frac{\pi(kT_{\rm rot})^{3}}{h^{3} ABC}\right]^{\frac{1}{2}} \end{equation} for molecules with no symmetry such as HNCO, and \begin{equation} Z = \frac{1}{2}\left[\frac{\pi(kT_{\rm rot})^{3}}{h^{3} ABC}\right]^{\frac{1}{2}} \end{equation} for molecules with C$_{2v}$ symmetry such as cyclic-C$_{3}$H$_{2}$. For an internal rotor such as CH$_{3}$OH, we used the following equation for $A$-type and $E$-type species: \begin{equation} Z(A) = Z(E) = \left[\frac{\pi(kT_{\rm rot})^{3}}{h^{3} ABC}\right]^{\frac{1}{2}}, \end{equation} where $A$, $B$ and $C$ are rotational constants. The line strength of cyclic-C$_{3}$H$_{2}$ is obtained from Thaddeus et al. (1985) and Vrtilek et al. (1987), and that of CH$_{3}$OH is obtained from Lees et al. (1973). $g_{I}$ is 3/4 for the ortho levels and 1/4 for the para levels for cyclic-C$_{3}$H$_{2}$. For CH$_{3}$OH, $g_{K} = 1$ and $g_{I} = 2$ for $A$-type, and $g_{K} = 2$ and $g_{I} = 1$ for $E$-type species are used. \section{Results} \subsection{Optical depth} We calculated the optical depths of C$_{2}$H ({\it N} = 1--0), CN ({\it N} = 1--0), $^{13}$CN ({\it J} = 1--0), CS ({\it J} = 2--1), C$^{34}$S ({\it J} = 2--1), HCO$^{+}$ ({\it J} = 1--0), H$^{13}$CO$^{+}$ ({\it J} = 1--0), HCN ({\it J} = 1--0), H$^{13}$CN ({\it J} = 1--0), and the CO isotopic species ({\it J} = 1--0), and listed these values in Table 2. C$_{2}$H, CN, and $^{13}$CN show multiple line structures, which are caused by fine structure components. The intensities of each fine and hyperfine component were calculated based on spectroscopic theory and previously published experiments in the laboratory as discussed below. We can then obtain the optical depths from the intensity ratios between each component, and the total values of all components in the three galaxies are shown in Table 2.
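The inversion from component intensity ratios to a total optical depth can be sketched as follows. This is a minimal illustration of ours, assuming the components share the same excitation temperature and filling factor; the fractional strengths and the observed ratio are hypothetical numbers, not the C$_{2}$H or CN values used in this work:

```python
import math

def total_tau_from_ratio(R_obs, f1, f2, lo=1e-8, hi=1e3):
    """Solve (1 - e^{-f1*tau}) / (1 - e^{-f2*tau}) = R_obs for the total
    optical depth tau, where f1 > f2 are the LTE fractional strengths of
    two (hyper)fine components.  The ratio decreases monotonically from
    the optically thin limit f1/f2 toward 1 as the lines saturate."""
    def ratio(tau):
        return (1 - math.exp(-f1 * tau)) / (1 - math.exp(-f2 * tau))
    for _ in range(200):               # bisection on a monotone function
        mid = 0.5 * (lo + hi)
        if ratio(mid) > R_obs:
            lo = mid                   # still too thin: root lies above
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical components with fractional strengths 0.42 and 0.21: an
# observed ratio of 1.5 (instead of the thin-limit 2.0) implies tau ~ 3.3.
tau = total_tau_from_ratio(1.5, 0.42, 0.21)
print(round(tau, 2))   # 3.3
```

For isotopologue pairs the same equation applies with $f_2 = f_1/R$, where $R$ is the assumed isotopic abundance ratio.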
For C$_{2}$H, six Gaussian functions based on the intensity ratios expected from theory, as described by Tucker et al. (1974), were fitted to the C$_{2}$H spectra, as shown in figure 3. For CN (Skatrud et al. 1983) and $^{13}$CN (Bogey et al. 1984), nine and fourteen Gaussian functions, respectively, were fitted to each spectrum as shown in figures 4 and 5. The excitation temperature employed is the obtained rotational temperature or the assumed 10 K. The assumed cosmic background temperature is 2.7 K. For $^{12}$CO, $^{13}$CO, HCO$^{+}$, H$^{13}$CO$^{+}$, HCN and H$^{13}$CN, a carbon isotopic ratio [$^{12}$C]/[$^{13}$C] of 40 was assumed (Henkel et al. 2014). For C$^{18}$O and C$^{17}$O, the oxygen isotopic ratios [$^{16}$O]/[$^{18}$O] and [$^{16}$O]/[$^{17}$O] were assumed to be 145 and 1290, respectively (Henkel et al. 2014). Finally, for CS and C$^{34}$S, the sulfur isotopic ratio [$^{32}$S]/[$^{34}$S] was assumed to be 23 (Wilson \& Matteucci 1992). Although previous observations of the sulfur isotopic ratio in external galaxies are limited, Frerking et al. (1980) suggested that the [$^{32}$S]/[$^{34}$S] ratio shows little or no variation at least in our Galaxy, such as in the local ISM, the Galactic center, and the solar system. Therefore, we applied the terrestrial sulfur isotopic ratio in this work. The calculated optical depths of the observed lines of C$_{2}$H, $^{13}$CN, $^{13}$CO, C$^{18}$O, C$^{17}$O, C$^{34}$S, H$^{13}$CO$^{+}$, and H$^{13}$CN are smaller than unity, and we consider these lines to be optically thin. Care should be taken for CS, because its value is almost unity. Moreover, the observed lines of $^{12}$CO, HCN, and HCO$^{+}$ are optically thick in the beam size of the 45-m telescope. Therefore, the observed line intensities of these optically thick molecules are saturated depending on the optical depth, and the abundances based on the optically thin assumption are underestimated. The optical depth of CN depends strongly on the galaxy.
Those in NGC 1068 and IC 342 are larger than unity, but that in NGC 253 is 0.39. This value is more or less consistent with the value of $\sim$0.5 in NGC 253 by Henkel et al. (2014). \subsection{Rotation Temperature and Column Density} We calculated $T_{\rm rot}$ and $N_{\rm mol}$ for 23 molecular species in each galaxy. The derived values are listed in Table 3. The errors in $T_{\rm rot}$ and $N_{\rm mol}$ are calculated from the maximum and minimum of the slope and of the intercept of the fitted line at zero energy, respectively, in the rotation diagram, given the errors of the integrated intensities. The errors of the integrated intensity for each line are described in the data paper. Note that for molecules with only one detected transition, such as cyclic-C$_{3}$H$_{2}$, C$_{2}$H, N$_{2}$H$^{+}$, CH$_{3}$OH, $^{13}$CN, and CH$_{3}$CN in NGC 1068, HNC, N$_{2}$H$^{+}$, CH$_{3}$OH, and $^{13}$CN in NGC 253, and cyclic-C$_{3}$H$_{2}$, H$^{13}$CN, H$^{13}$CO$^{+}$, SiO, C$_{2}$H, HNC, N$_{2}$H$^{+}$, and CH$_{3}$OH in IC 342, $T_{\rm rot}$ = 10$\pm$5 K is assumed. For H$^{13}$CN, C$^{34}$S, and SO in NGC 1068, and SO in IC 342, the upper or lower limits of $T_{\rm rot}$ and $N_{\rm mol}$ are calculated. Although these molecules were observed in two transitions, one transition was not detected and only the upper limit of the line intensity could be obtained. In addition, CH$_{3}$CCH in NGC 1068 and $^{13}$CN in IC 342 were not detected. Since C$^{17}$O emission in NGC 1068 is probably too weak to obtain a reasonable signal-to-noise ratio, C$^{17}$O is only marginally detected. Moreover, H$^{13}$CO$^{+}$ is not clearly detected in NGC 1068, because it is difficult to separate the weak H$^{13}$CO$^{+}$ line from the partially blended SiO spectrum. We compare $N_{\rm mol}$ for each molecule among the three galaxies, and the results are shown in figure 1. As a result, although the molecular gas surrounding the CND is irradiated by high energy emission (e.g.
X-rays and UV) from the AGN, the CND in NGC 1068 is found to be chemically very rich. The column densities of CN, HNCO, CS, HCN, HCO$^{+}$, HNC, HC$_{3}$N, $^{13}$CN, H$^{13}$CN, and SiO are clearly high, while CH$_{3}$CCH is, by contrast, clearly low in column density toward NGC 1068 as compared with the starburst galaxies. The detailed features of each molecule are described below. \subsubsection{$^{12}$CO and isotopic species} There are many previous observations of $^{12}$CO, $^{13}$CO, and C$^{18}$O for all three galaxies, and we can plot more than three transitions for each CO isotopomer in the rotation diagrams. The references of the previous observations are listed in the footnote of the tables in Appendix 2. The plots of $N_{u}/g_{u}$ of $^{12}$CO, $^{13}$CO, and C$^{18}$O ({\it J} = 1--0) with the IRAM 30-m in NGC 1068, which were reported by Aladro et al. (2013; 2015), are significantly higher than the plots of our results with the NRO 45-m under the assumption of the same source size. As a result, the calculated $N_{\rm mol}$ of $^{12}$CO, $^{13}$CO and C$^{18}$O with the IRAM 30-m ((3.8$\pm$0.3)$\times$10$^{18}$ cm$^{-2}$, (4.6$\pm$0.1)$\times$10$^{17}$ cm$^{-2}$ and (1.29$\pm$0.04)$\times$10$^{17}$ cm$^{-2}$) are about twice as large as our results ((1.9$\pm$0.1)$\times$10$^{18}$ cm$^{-2}$, (1.2$\pm$0.0)$\times$10$^{17}$ cm$^{-2}$ and (5.3$^{+0.6}_{-0.5}$)$\times$10$^{16}$ cm$^{-2}$). These differences are likely to arise because the IRAM 30-m observations contain emission from the starburst ring in addition to the CND, given that the beam (HPBW$\sim$21$^{\prime\prime}$--29$^{\prime\prime}$) is larger than that of the 45-m telescope (HPBW$\sim$15$^{\prime\prime}$--19$^{\prime\prime}$). 
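The rotation-diagram analysis used for these multi-transition species reduces to a straight-line fit of $\ln(N_u/g_u)$ against the upper-level energy $E_u/k$: the slope gives $-1/T_{\rm rot}$ and the zero-energy intercept gives $\ln(N_{\rm mol}/Q(T_{\rm rot}))$. A minimal sketch of this fit (the level energies and partition function below are illustrative placeholders, not values from the survey):

```python
import numpy as np

def fit_rotation_diagram(E_u, ln_Nu_gu):
    """Straight-line fit to a rotation diagram.

    E_u      : upper-level energies E_u/k in kelvin
    ln_Nu_gu : ln of the column density per statistical weight
    Returns (T_rot, intercept), where slope = -1/T_rot and
    intercept = ln(N_mol / Q(T_rot)).
    """
    slope, intercept = np.polyfit(E_u, ln_Nu_gu, 1)
    return -1.0 / slope, intercept

def total_column(intercept, Q_Trot):
    # N_mol = Q(T_rot) * exp(intercept)
    return Q_Trot * np.exp(intercept)

# Illustrative three-transition diagram drawn from T_rot = 10 K
E_u = np.array([5.3, 15.9, 31.7])        # K (placeholder energies)
ln_pts = np.log(1.0e14) - E_u / 10.0     # N_mol/Q = 1e14 cm^-2
T_rot, b = fit_rotation_diagram(E_u, ln_pts)
```

With measured $N_u/g_u$ values from several transitions, the same two-parameter fit yields the $T_{\rm rot}$ and $N_{\rm mol}$ entries of Table 3.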
Molecular gas in NGC 1068 is distributed not only in the CND but also in the surrounding starburst ring region ($d\sim$20$^{\prime\prime}$--40$^{\prime\prime}$), which has been clearly seen with interferometric observations (e.g., Helfer \& Blitz 1995; Papadopoulos et al. 1996; Tacconi et al. 1997; Shinnerer et al. 2000; Garc\'{i}a-Burillo et al. 2014; Takano et al. 2014; Tosaki et al. 2017). This is a possible explanation for the finding of two components in the rotation diagram for at least $^{13}$CO by Aladro et al. (2013). Note that the $^{12}$CO line is evidently optically thick, and we tend to underestimate the values of $N_{u}$/$g_{u}$ in the rotation diagram especially at the lower energy level. The $T_{\rm rot}$ are determined to be approximately 20 K, 10 K, and 5 K for $^{12}$CO, $^{13}$CO, and C$^{18}$O, respectively, in NGC 1068 and IC 342. For NGC 253, $T_{\rm rot}$ for these molecules are about 10--20 K higher than in the other galaxies, but the $N_{\rm mol}$ are not higher. The C$^{17}$O lines were clearly detected in NGC 253 and IC 342 but marginally detected in NGC 1068. \subsubsection{CS and C$^{34}$S} The CS {\it J} = 3--2, 5--4, and 7--6 transitions in NGC 1068 were reported by Mauersberger et al. (1989), Mart\'{i}n et al. (2009), and Bayet et al. (2009). However, Bayet et al. (2009) mentioned that the {\it J} = 7--6 line was only marginally detected. Therefore, the fit was performed excluding the {\it J} = 7--6 line, as shown in the rotation diagram in Appendix 1, and the obtained {\it T}$_{\rm rot}$ is 7.9$\pm$0.3 K. We fitted the data with only one spatial component in NGC 253 and IC 342, but it is probably better to fit with at least two components, as suggested by Bayet et al. (2009). In our observations toward NGC 1068, C$^{34}$S was not detected, and an upper limit of $N_{\rm mol}$ ($<$1.9$\times$10$^{13}$ cm$^{-2}$) was determined. On the other hand, the previous observations with the IRAM 30-m by Mart\'{i}n et al. 
(2009) and Aladro et al. (2013) detected C$^{34}$S {\it J} = 3--2 and {\it J} = 2--1, respectively. The derived column densities were (7.3$\pm$1.2)$\times$10$^{12}$ cm$^{-2}$ in Mart\'{i}n et al. (2009) and (1.3$\pm$0.2)$\times$10$^{14}$ cm$^{-2}$ in Aladro et al. (2015). This inconsistency with IRAM 30-m data may be caused in part by the fact that the assumed source sizes were not the same in these two studies. Mart\'{i}n et al. (2009) and Aladro et al. (2015) assumed source sizes of 10$^{\prime\prime}$ and 4$^{\prime\prime}$, respectively. If we compare our results with those of Aladro et al. (2015), which assumed the same source size as our study, the remaining source of the difference is that the emission of C$^{34}$S probably comes from not only the CND but also the starburst ring. In fact, we found that CS ({\it J} = 2--1) is distributed in the CND and also in the starburst ring with our ALMA observations (Takano et al. 2014). \subsubsection{cyclic-C$_{3}$H$_{2}$ and C$_{2}$H} Because there are no previous observations of cyclic-C$_{3}$H$_{2}$ and C$_{2}$H except for the 3-mm band in NGC 1068 and IC 342, we assumed $T_{\rm rot}$ = 10$\pm$5 K for the calculations of the $N_{\rm mol}$. The $N_{\rm mol}$ values of cyclic-C$_{3}$H$_{2}$ and C$_{2}$H are (2.0$^{+1.7}_{-1.0}$)$\times$10$^{14}$ cm$^{-2}$ and (2.6$\pm$1.0)$\times$10$^{15}$ cm$^{-2}$, respectively in NGC 1068, and (6.9$^{+7.8}_{-4.3}$)$\times$10$^{13}$ cm$^{-2}$ and (5.7$^{+2.2}_{-2.1}$)$\times$10$^{14}$ cm$^{-2}$, respectively in IC 342. For NGC 253, the $N_{\rm mol}$ of these molecules were reported by Mart\'{i}n et al. (2006). Their obtained values are (3.0$\pm$6.0)$\times$10$^{13}$ cm$^{-2}$ and (1.2$\pm$0.1)$\times$10$^{15}$ cm$^{-2}$ for cyclic-C$_{3}$H$_{2}$ and C$_{2}$H, respectively, while our results are (4.4$\pm$0.0)$\times$10$^{13}$ cm$^{-2}$ and (1.7$\pm$0.0)$\times$10$^{15}$ cm$^{-2}$, and these values are consistent with their values. 
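For species with a single detected transition and an assumed $T_{\rm rot}$ = 10 K, the optically thin LTE column density follows from the standard relations $N_u = \frac{8\pi k \nu^2}{h c^3 A_{ul}}\int T_{\rm mb}\,dv$ and $N_{\rm mol} = (N_u/g_u)\,Q(T_{\rm rot})\,e^{E_u/kT_{\rm rot}}$. A sketch of this calculation (all line parameters below are illustrative placeholders, not the survey's values):

```python
import math

# cgs constants
K_B = 1.380649e-16    # erg/K
H   = 6.62607015e-27  # erg s
C   = 2.99792458e10   # cm/s

def lte_column(W_K_kms, nu_GHz, A_ul, g_u, E_u_K, Q, T_rot):
    """Optically thin LTE column density N_mol [cm^-2] from one line.

    W_K_kms : integrated intensity [K km/s]
    nu_GHz  : rest frequency [GHz]
    A_ul    : Einstein A coefficient [1/s]
    g_u     : upper-state statistical weight
    E_u_K   : upper-level energy E_u/k [K]
    Q       : partition function at T_rot
    """
    nu = nu_GHz * 1.0e9   # Hz
    W = W_K_kms * 1.0e5   # K cm/s
    N_u = 8.0 * math.pi * K_B * nu**2 * W / (H * C**3 * A_ul)
    return N_u * (Q / g_u) * math.exp(E_u_K / T_rot)

# Hypothetical 3-mm line with an assumed T_rot = 10 K
N = lte_column(W_K_kms=5.0, nu_GHz=87.3, A_ul=1.5e-6,
               g_u=5, E_u_K=4.2, Q=50.0, T_rot=10.0)
```

Note that $N_{\rm mol}$ scales linearly with the integrated intensity, which is why intensity errors propagate directly into the column density errors quoted above.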
Moreover, we had already reported the $N_{\rm mol}$ of these molecules in NGC 1068 and NGC 253 in the paper containing the initial results of our line survey (Nakajima et al. 2011). These values based on the new observational data are not significantly changed from the initial work, as expected. However, the $N_{\rm mol}$ of cyclic-C$_{3}$H$_{2}$ in NGC 1068 has changed from the initial result, because we did not take into account the beam filling factor for cyclic-C$_{3}$H$_{2}$ in this galaxy in Nakajima et al. (2011). Thus, the new $N_{\rm mol}$ of cyclic-C$_{3}$H$_{2}$ in NGC 1068 in this paper is likely to be more reliable. \subsubsection{HCN, HCO$^{+}$ and their $^{13}$C isotopic species} HCN is one of the characteristic molecules of AGNs, because the intensity ratio of HCN with respect to HCO$^{+}$ or CO is often enhanced in AGN-dominant galaxies as compared with starburst-dominant galaxies (e.g. Kohno et al. 2008). However, HCN is abundant in galactic nuclei, and so easily becomes optically thick. Moreover, the spectra of HCN are complex because they have hyperfine structure and often show self-absorption effects. Therefore, accurate calculations of the temperature and column density of HCN under the LTE assumption are difficult (e.g. Loughnane et al. 2012). In fact, the calculated optical depth of HCN from our observations is more than unity. We think H$^{13}$CN will be a useful candidate to investigate the HCN enhancement effect of AGNs. Unfortunately, so far the observations of H$^{13}$CN in NGC 1068 have been limited. We had already reported the ${\it J}$ = 1--0 line (Nakajima et al. 2011), and Wang et al. (2014) reported only an upper limit to the ${\it J}$ = 3--2 line with the Atacama Pathfinder Experiment (APEX). 
We obtained values for $N_{\rm mol}$ of HCN ((4.7$^{+0.0}_{-0.3}$)$\times$10$^{14}$ cm$^{-2}$) and H$^{13}$CN ($>$4.7$\times$10$^{13}$ cm$^{-2}$) in NGC 1068, which are approximately 2--4 times and $>$3 times higher than those in the starburst galaxies, respectively. The HCO$^{+}$ (${\it J}$ = 1--0) line is often compared with the HCN (${\it J}$ = 1--0) line, because the rest frequencies of these lines are similar. Furthermore, the intensities of these lines are typically strong, and they are easy to detect. Because HCO$^{+}$ is often optically thick as well, H$^{13}$CO$^{+}$ is also useful. However, the frequency of the H$^{13}$CO$^{+}$ (${\it J}$ = 1--0) line is very close to that of SiO (${\it J}$ = 2--1), and therefore these lines are blended in NGC 1068, as shown in the observed spectra in the data paper. We could not separate these emission lines sufficiently to obtain line parameters. Therefore, only the upper limit ($<$4.5$\times$10$^{12}$ cm$^{-2}$) of $N_{\rm mol}$ of H$^{13}$CO$^{+}$ is estimated in this work. \subsubsection{HNC} HNC is abundant in dark molecular clouds and is deficient in high temperature regions (e.g. Hirota et al. 1998). On the other hand, it has been suggested that one of the causes of a bright HNC emission is the influence of UV radiation in PDRs and/or X-rays in XDRs at densities $n \lesssim$ 10$^{4}$ cm$^{-3}$ and at total column densities $N_{\rm H} >$ 3$\times$10$^{21}$ cm$^{-2}$ (Meijerink \& Spaans 2005). In our results, the $N_{\rm mol}$ of HNC in NGC 1068, (1.7$\pm$0.1)$\times$10$^{14}$ cm$^{-2}$, is approximately two times higher than those in the starburst galaxies, which are (8.7$^{+2.6}_{-1.7}$)$\times$10$^{13}$ cm$^{-2}$ and (7.6$^{+2.2}_{-1.4}$)$\times$10$^{13}$ cm$^{-2}$ in NGC 253 and IC 342, respectively. Probably, this relatively high column density is not due to low temperature but to the large amount of gas and/or the effect of the strong emission in the CND. Aladro et al. 
(2015) reported that $N_{\rm mol}$ of HNC in NGC 1068 is about four times higher than in NGC 253, and this is consistent with our results. \subsubsection{CN and $^{13}$CN} CN is one of the key molecules used to study environments irradiated with extreme UV or X-rays, as discussed in theoretical studies (e.g. Lepp \& Dalgarno 1996; Meijerink \& Spaans 2005; Meijerink et al. 2007). According to these papers, an increased X-ray ionization rate in molecular clouds can enhance the abundance of CN with respect to that in PDRs. In the data paper, we found that the integrated intensity ratio of CN (calculated by summing all fine structure lines) to $^{13}$CO is significantly higher in NGC 1068 ($\sim$5.2) than in NGC 253 ($\sim$2.5) and IC 342 ($\sim$1.3). The obtained $N_{\rm mol}$ of CN in NGC 1068 is 2--4 times higher than those of NGC 253 and IC 342, as might be expected. Note that the optical depth of CN is high, as we discuss in section 3.1, and the $^{13}$C isotopic species of CN will also be important to study the CN abundance. In fact, it is remarkable that the $N_{\rm mol}$ of $^{13}$CN in NGC 1068 is an order of magnitude higher than those of the starburst galaxies. The values are (1.0$^{+0.3}_{-0.2}$)$\times$10$^{14}$ cm$^{-2}$, (1.2$^{+0.5}_{-0.3}$)$\times$10$^{13}$ cm$^{-2}$ and $<$4.4$\times$10$^{12}$ cm$^{-2}$ in NGC 1068, NGC 253 and IC 342, respectively. Although its lines are very weak and the observations are often difficult, $^{13}$CN is potentially useful to trace material irradiated with strong UV and/or X-rays. Henkel et al. (2014) detected CN and $^{13}$CN in NGC 253 with the IRAM 30-m telescope. They reported that the CN excitation temperature and the column density are 3--11 K and 2$\times$10$^{15}$ cm$^{-2}$, respectively. These values in NGC 253 are consistent with our results in this paper. 
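The optical depth of a main species can be estimated from its intensity ratio to the $^{13}$C isotopologue: assuming equal excitation and [$^{12}$C]/[$^{13}$C] = 40, the observed ratio satisfies $R = (1-e^{-\tau})/(1-e^{-\tau/40})$, which can be inverted numerically for $\tau$. A hedged sketch (the bisection solver and its bounds are our own construction, not from the papers cited):

```python
import math

def tau_from_isotopologue_ratio(R_obs, isotope_ratio=40.0):
    """Invert R = (1 - exp(-tau)) / (1 - exp(-tau/r)) for the
    main-species optical depth tau, assuming both lines share the
    same excitation. R_obs must lie between 1 (very optically
    thick) and r (optically thin), since R is monotonically
    decreasing in tau."""
    def R(tau):
        return (1.0 - math.exp(-tau)) / (1.0 - math.exp(-tau / isotope_ratio))
    lo, hi = 1e-8, 1e3
    for _ in range(200):          # bisection on the decreasing R(tau)
        mid = 0.5 * (lo + hi)
        if R(mid) > R_obs:
            lo = mid              # still too thin: increase tau
        else:
            hi = mid
    return 0.5 * (lo + hi)

# e.g. an observed main/isotopologue intensity ratio of ~25.6
# corresponds to a main-species tau close to 1
tau = tau_from_isotopologue_ratio(25.6)
```

A ratio approaching the intrinsic isotopic ratio indicates thin emission, while a strongly suppressed ratio signals saturation of the main line.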
\subsubsection{CH$_{3}$OH, SO, HNCO, \& CH$_{3}$CN} CH$_{3}$OH, SO, and HNCO are typical shock-related species and are well-known probes in Galactic sources such as L1157, to study chemical and physical conditions (e.g. see Bachiller \& P\'{e}rez Guti\'{e}rrez 1997 for CH$_{3}$OH and SO, and Rodr\'{i}guez-Fern\'{a}ndez et al. 2010 for HNCO). CH$_{3}$CN is known to exist in hot cores (e.g., Churchwell et al. 1992; Crockett et al. 2015), and is regarded as a useful tracer to study the dense gas. Codella et al. (2009) confirmed the association of CH$_{3}$CN with gas affected by the passage of a shock wave. For CH$_{3}$CN in NGC 253, Mauersberger et al. (1991) and Mart\'{i}n et al. (2006) observed $J_{K}$ = 5$_{K}$--4$_{K}$, 7$_{K}$--6$_{K}$, 8$_{K}$--7$_{K}$, and 9$_{K}$--8$_{K}$, but we used only the highest transition line for the rotation diagram to better constrain the rotational temperature. We obtained the $N_{\rm mol}$ of CH$_{3}$OH in NGC 1068, NGC 253 and IC 342 to be (1.7$^{+0.3}_{-0.1}$)$\times$10$^{13}$ cm$^{-2}$, (1.7$^{+0.4}_{-0.1}$)$\times$10$^{13}$ cm$^{-2}$, and (1.9$^{+0.5}_{-0.1}$)$\times$10$^{13}$ cm$^{-2}$, respectively, and the $N_{\rm mol}$ of HNCO to be (6.8$^{+0.6}_{-0.1}$)$\times$10$^{14}$ cm$^{-2}$, (1.1$^{+0.1}_{-0.0}$)$\times$10$^{14}$ cm$^{-2}$, and (2.5$^{+0.1}_{-0.0}$)$\times$10$^{14}$ cm$^{-2}$ in each galaxy. As a result, the $N_{\rm mol}$ of HNCO in NGC 1068 is approximately 3--6 times higher than that in the starburst galaxies unlike CH$_{3}$OH. On the other hand, we have already observed these molecules in NGC 1068 with ALMA and reported their column densities (Takano et al. 2014; Nakajima et al. 2015). In Nakajima et al. (2015), the $N_{\rm mol}$ of CH$_{3}$OH and HNCO in the CND were measured to be 1.3 and 5.0 times higher than those in the starburst ring, respectively, under the assumption of the same $T_{\rm rot}$ in both of the regions. 
Therefore, we obtain a similar comparison between the abundances of CH$_{3}$OH and HNCO in the CND and the starburst environments using ALMA and the single dish telescope. Mart\'{i}n et al. (2009) and Aladro et al. (2013) reported the $T_{\rm rot}$ of HNCO in NGC 1068. Our result of $T_{\rm rot}$ = 39.9$^{+9.5}_{-4.9}$ K is closer to that of Aladro et al. (2013), which is 29.7$\pm$0.1 K, than to the result of 5.8$\pm$0.8 K by Mart\'{i}n et al. (2009). \subsubsection{SiO} It is well known that SiO is a good tracer of shocked gas (e.g. Mikami et al. 1992; Mart\'{i}n-Pintado et al. 1997), and we expect that this molecule is a useful tracer of shocks related to strong UV or X-ray irradiated regions. In fact, Garc\'{i}a-Burillo et al. (2010) suggested that the CND of NGC 1068 has a giant XDR based on the abundance ratios of SiO/CO and CN/CO seen in observations at high spatial resolution. Kelly et al. (2017) found a strong SiO peak to the east of the AGN and discussed shock events in the CND. In our observations, $N_{\rm mol}$ of SiO in NGC 1068 is 2.3--2.9 times higher than those of the starburst galaxies. Our result is almost consistent with that of Aladro et al. (2015), who reported the $N_{\rm mol}$ of SiO in NGC 1068 to be about 2.2 times higher than that in NGC 253. \subsubsection{HC$_{3}$N} We observed the HC$_{3}$N ${\it J}$ = 10--9, 11--10, and 12--11 lines in the 3-mm band. However, the ${\it J}$ = 10--9 line toward NGC 1068 was not clearly detected, because this frequency range shows a baseline fluctuation, as can be seen in the corresponding spectrum in the data paper. Thus, we made the rotation diagram only with the ${\it J}$ = 11--10 and 12--11 lines. As a result, the $T_{\rm rot}$ and $N_{\rm mol}$ in NGC 1068 are 13.4$^{+32.2}_{-5.0}$ K and 1.7$^{+2.7}_{-0.9}\times$10$^{14}$ cm$^{-2}$, respectively. $N_{\rm mol}$ of HC$_{3}$N is approximately 5--6 times higher than those in NGC 253 and IC 342. 
However, its error in NGC 1068 is appreciably large, and this overabundance of HC$_{3}$N must be treated with caution. The reason for this large error is that the observed lines come from only two closely spaced energy levels. \subsubsection{N$_{2}$H$^{+}$} N$_{2}$H$^{+}$ is well known as a tracer of cold dense gas (e.g. Bergin et al. 2002). So far there are many observations of N$_{2}$H$^{+}$ in Galactic molecular clouds, but observations toward external galaxies are limited, especially in AGNs. In the previous observations in NGC 1068 with the NRAO 12-m and IRAM 30-m, the N$_{2}$H$^{+}$ (${\it J}$ = 1--0) line was detected (Sage et al. 1995; Aladro et al. 2013). Aladro et al. (2015) reported that $N_{\rm mol}$ of N$_{2}$H$^{+}$ is (1.2$\pm$0.5)$\times$10$^{14}$ cm$^{-2}$, which is about three times higher than that in NGC 253. Our value is (5.8$^{+2.9}_{-1.8}$)$\times$10$^{13}$ cm$^{-2}$ in NGC 1068, and this is only 1.7--1.9 times higher than those of the starburst galaxies. \subsubsection{CH$_{3}$CCH} For CH$_{3}$CCH in NGC 253, Mauersberger et al. (1991) and Mart\'{i}n et al. (2006) observed $J_{K}$ = 8$_{K}$--7$_{K}$, 9$_{K}$--8$_{K}$, 10$_{K}$--9$_{K}$, and 13$_{K}$--12$_{K}$, but we used only the highest transition line for the rotation diagram to better constrain the rotational temperature, as in the case of CH$_{3}$CN. CH$_{3}$CCH was not detected in NGC 1068 in our observations. We obtained an upper limit of $N_{\rm mol}$, $<$9.1$\times$10$^{13}$ cm$^{-2}$, which is about a factor of 4 lower than in the starburst galaxies. The enhancement of CH$_{3}$CCH is a possible diagnostic tool of starbursts as opposed to AGNs. Aladro et al. (2015) reported that CH$_{3}$CCH emission lines are only seen in the starburst galaxies, and our observations support their results. The $T_{\rm rot}$ of CH$_{3}$CCH in NGC 253 and IC 342 are 40.5$\pm$0.2 K and 39.6$^{+0.9}_{-0.8}$ K, respectively, and these are almost the highest among all detected molecules. 
The reason for these high temperatures is that the dipole moment of CH$_{3}$CCH is relatively low (0.78 D; Burrell et al. 1980), so that it is relatively easily thermalized (Aladro et al. 2011a). In addition, Guzm\'{a}n et al. (2014) found that the rotation temperatures of CH$_{3}$CCH in Galactic sources are about 55 K. They are consistent with our results and are the highest temperatures among their observed complex molecules. \subsection{Column density ratios} The integrated intensity ratios of HCN/HCO$^{+}$, HCN/CO or HCN/CS could be useful discriminators between AGN and starburst activity, as explained in Section 1. However, we need to investigate whether the column density ratios of these molecules also show the same trend as the intensity ratios. Several important column density ratios in our observations are listed in Table 4. The ratios of $\frac{N\rm{(HCN)}}{N\rm{(HCO^{+})}}$ are 2.2$^{+0.1}_{-0.2}$, 2.0$^{+0.6}_{-0.8}$, and 1.6$\pm0.1$ in NGC 1068, NGC 253, and IC 342, respectively, while the values of $\frac{N\rm{(HCN)}}{N\rm{(CS)}}$ are 0.89$^{+0.07}_{-0.06}$, 0.87$^{+0.04}_{-0.08}$, and 0.57$^{+0.07}_{-0.09}$ in the same galaxies. We found no clear enhancement of the HCN column density relative to the HCO$^{+}$ or CS column densities in the galaxy hosting an AGN compared with the starburst galaxies on a 1 kpc scale. However, the corresponding ratios of $\frac{N\rm{(H^{13}CN)}}{N\rm{(H^{13}CO^{+})}}$ become $>$10.4, 4.8$^{+1.0}_{-0.6}$, and 3.6$\pm1.5$, while the $\frac{N\rm{(H^{13}CN)}}{N\rm{(CS)}}$ ratios are $>$0.09, 0.05$\pm0.01$, and 0.07$^{+0.01}_{-0.14}$. Although the difference in $\frac{N\rm{(H^{13}CN)}}{N\rm{(CS)}}$ between NGC 1068 and IC 342 is small, the difference from NGC 253, together with the $\frac{N\rm{(H^{13}CN)}}{N\rm{(H^{13}CO^{+})}}$ ratio, clearly shows the enhancement in the galaxy hosting an AGN (figure 2). 
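The asymmetric error bars on the column density ratios in Table 4 can be propagated by taking the extreme combinations of the numerator and denominator error bars. A minimal sketch of this bookkeeping (the input values are illustrative, not the tabulated ones):

```python
def ratio_with_errors(a, a_minus, a_plus, b, b_minus, b_plus):
    """Ratio a/b with asymmetric errors propagated by taking the
    extreme combinations of the two error bars. a_minus/a_plus are
    the magnitudes of the lower/upper errors on a; likewise for b.
    Returns (ratio, lower error, upper error)."""
    r = a / b
    err_plus = (a + a_plus) / (b - b_minus) - r
    err_minus = r - (a - a_minus) / (b + b_plus)
    return r, err_minus, err_plus

# Hypothetical example: N(HCN) = 4.7e14 (+0.0/-0.3e14) cm^-2 and an
# illustrative denominator column of 2.1e14 (+/-0.1e14) cm^-2
r, lo, hi = ratio_with_errors(4.7e14, 0.3e14, 0.0,
                              2.1e14, 0.1e14, 0.1e14)
```

This "extreme combinations" propagation is conservative; a quadrature sum would give somewhat smaller error bars.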
Observations of the lines of H$^{13}$CN and H$^{13}$CO$^{+}$, which are optically thin unlike the lines of HCN and HCO$^{+}$, in the nucleus of galaxies will be more useful for differentiation, though high sensitivity observations are necessary. CN and $^{13}$CN, which are produced in PDRs and particularly in XDRs, as described in section 3.2.6, are expected to be enhanced in the galaxies hosting AGNs compared with starburst galaxies. In fact, the CN line intensities are thought to show enhancement in extremely X-ray irradiated environments (e.g. Aalto et al. 2002), and we confirmed these features as column density ratios. The ratios of $\frac{N\rm{(CN)}}{N\rm{(CS)}}$ are 5.7$^{+0.6}_{-0.4}$, 4.8$^{+0.0}_{-0.4}$, and 3.9$^{+0.4}_{-0.6}$ in NGC 1068, NGC 253, and IC 342, respectively. However, the ratio of $\frac{N\rm{(CN)}}{N\rm{(CS)}}$ in the galaxy hosting an AGN is a factor of only 1.2--1.5 larger than those in the starburst galaxies. On the other hand, the $^{13}$C isotopic species show a clearer enhancement in the galaxy hosting an AGN. The ratios of $\frac{N\rm{(^{13}CN)}}{N\rm{(CS)}}$ are 0.19$^{+0.06}_{-0.04}$, 0.05$^{+0.02}_{-0.01}$, and $<$0.02 in NGC 1068, NGC 253, and IC 342, respectively. This ratio is 4--10 times larger in NGC 1068 than those in the starburst galaxies. These results are probably due to a difference in the optical depth, because the $^{13}$C species reveal the difference in the physical conditions deep inside the molecular clouds, which may not be probed with the optically thick normal species. The CN/HCN column density ratio is theoretically expected to probe the radiation field (UV photons / X-rays) of the ISM, as discussed in section 3.2.6. Meijerink et al. (2007) suggested that the $\frac{N\rm{(CN)}}{N\rm{(HCN)}}$ ratio is expected to be larger than unity in PDRs, but particularly large in XDRs. In our results, $\frac{N\rm{(CN)}}{N\rm{(HCN)}}$ is 6.4$^{+0.4}_{-0.0}$, 5.5$^{+0.3}_{-0.0}$, and 6.8$\pm$0.6 in NGC 1068, NGC 253, and IC 342, respectively. 
Although we should not forget concerns about the CN and HCN opacities, these ratios are almost the same among these three galaxies, and do not allow us to distinguish between the AGN and starbursts. On the other hand, the ratios of the $^{13}$C isotopic species, $\frac{N\rm{(^{13}CN)}}{N\rm{(H^{13}CN)}}$, are $<$2.1, 1.0$^{+0.5}_{-0.3}$ and $<$0.3. The upper limit in NGC 1068 is about twice the actual values in NGC 253 and IC 342. Finally, we show the ratio of $\frac{N\rm{(CH_{3}CCH)}}{N\rm{(C^{18}O)}}$ in Table 4. This ratio has already been reported by Aladro et al. (2015) in the line survey with the IRAM 30-m telescope (see Table 4 in their paper). They reported that CH$_{3}$CCH was not detected toward the galaxies hosting an AGN among the eight observed galaxies, and that the ratio in NGC 1068 is less than about half of that in NGC 253. In our observations, the upper limit of this ratio in NGC 1068 is 7 times lower than those ratios in the starburst galaxies, and these results are consistent with the observations using the IRAM 30-m. Moreover, the observations with the NRO 45-m telescope place a more stringent constraint on the low abundance of CH$_{3}$CCH in NGC 1068. \section{Discussion} \subsection{Trend of Molecular Abundances surrounding the AGN} Comparisons of the molecular abundances among the galaxies are shown in figures 6 and 7. In these figures we calculated the fractional abundances with respect to $^{13}$CO or CS. Although the fundamental fractional abundance is relative to molecular hydrogen ($N_{\rm H_{2}}$), we take the ratio of the column density of each species over the $^{13}$CO or CS column density in these figures. We do this because estimating $N_{\rm H_{2}}$ from CO line measurements with an assumed so-called $X_{\rm CO}$ factor, which is a CO-to-H$_{2}$ conversion factor (e.g. Dickman 1978), is quite difficult. The $X_{\rm CO}$ factor is strongly affected by the physical conditions in the interstellar molecular gas. 
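For reference, the conventional conversion we avoid here is simply $N_{\rm H_2} = X_{\rm CO}\,W_{\rm CO}$. A sketch using a commonly quoted Galactic disk value of $X_{\rm CO}$ (an assumption, and precisely the quantity that varies with the physical conditions of the gas):

```python
def h2_column_from_co(W_co_K_kms, X_co=2.0e20):
    """N(H2) [cm^-2] from the CO J=1-0 integrated intensity
    W_co [K km/s] via a CO-to-H2 conversion factor X_co
    [cm^-2 (K km/s)^-1]. The default is an often-quoted Galactic
    disk value, which need not hold in galaxy centers."""
    return X_co * W_co_K_kms

# Illustrative: 100 K km/s of CO emission
N_H2 = h2_column_from_co(100.0)
```

Normalizing by $^{13}$CO or CS instead sidesteps this uncertain multiplicative factor entirely.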
The obtained column densities of $^{13}$CO in our observations are not significantly different between the AGN and the starburst galaxies, and in addition their errors are the smallest among all detected molecules in our observations. The difference of $N$($^{13}$CO) between the AGN and the starbursts is within a factor of 1.4, and this is the smallest variation among all detected molecules as shown in figure 1 and Table 3. Therefore, we would expect to see clearly the effect of an AGN, in comparison with starbursts, using the fractional abundances relative to $^{13}$CO. However, we found that the $^{13}$CO $J$ = 1--0 line in the CND toward NGC 1068 is much weaker than that in the starburst ring from the observations with higher spatial resolution using ALMA (Takano et al. 2014). Although this signature is not seen with the single-dish telescope observations, a number of determined fractional abundances relative to $^{13}$CO may show excessive enhancement in NGC 1068. Therefore, we also calculate the fractional abundances with respect to CS, which is known to show little variation in abundance among galaxies with different activity such as starbursts, AGNs, ULIRGs, and normal galaxies (Mart\'{i}n et al. 2009). In fact, the difference of $N$(CS) between the AGN and the starbursts in our results is within a factor of 2.5, and this value is one of the smallest variations next to CO and its isotopic species. In addition, the critical density of CS ($J$ = 2--1) is about 10$^{4-5}$ cm$^{-3}$, and therefore it traces regions where the densities are higher than those traced by $^{13}$CO. This makes CS much more suitable for comparisons involving molecules with high critical densities. Figure 6 shows correlation plots of the fractional abundances relative to (a) $^{13}$CO and (b) CS between NGC 253 and IC 342, which are both starburst galaxies. 
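The correlation analysis of figure 6 amounts to a Pearson correlation coefficient between the two sets of fractional abundances. Since the abundances span several orders of magnitude, the sketch below correlates their logarithms; whether the published coefficients were computed in linear or log space is an assumption here, and the abundance values are illustrative, not the survey's:

```python
import numpy as np

def log_abundance_correlation(x, y):
    """Pearson correlation coefficient of two sets of fractional
    abundances, computed on log10 values (assumption: log space is
    the natural choice for quantities spanning orders of magnitude)."""
    return np.corrcoef(np.log10(x), np.log10(y))[0, 1]

# Illustrative fractional abundances (relative to 13CO) of five
# hypothetical species in the two starburst galaxies
ngc253 = np.array([3.0e-1, 2.0e-2, 5.0e-3, 8.0e-4, 1.0e-4])
ic342  = np.array([2.5e-1, 2.4e-2, 4.0e-3, 9.0e-4, 1.2e-4])
r = log_abundance_correlation(ngc253, ic342)
```

A coefficient near unity, as in figure 6, indicates that the two galaxies share closely similar chemical compositions.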
All differences in the fractional abundances are within an order of magnitude, and a good correlation of the abundances can be seen between these two galaxies. The correlation coefficients between these starburst galaxies are 0.996 for both $^{13}$CO and CS. This result suggests that the average environments producing these chemical compositions are similar within the beam size of the 45-m telescope. Figure 7 contains correlation plots for NGC 1068 against both NGC 253 and IC 342 for the fractional abundances of the assorted molecules normalized by $^{13}$CO and CS. The abundances in the galaxy with an AGN correlate reasonably well with those in the starburst galaxies. In addition, we found that some molecules show significantly higher or lower fractional abundances in NGC 1068 relative to both of the starburst galaxies. It is interesting to note that these features are common to the comparisons with both starburst galaxies. Molecules of high fractional abundance in NGC 1068 include CN, $^{13}$CN, HCN, H$^{13}$CN, and HC$_{3}$N, while CH$_{3}$CCH is deficient in this galaxy. Figure 8 shows more clearly the ratios of the fractional abundances of assorted species with respect to $^{13}$CO and CS between NGC 1068 and the starburst galaxies. In these figures, the molecules are arranged in descending order of the abundance ratio from the left side on the horizontal axis. Therefore, the fractional abundance ratios greater than unity are on the left side of the figures, where the more abundant molecules in NGC 1068 are located. The highest ratio occurs for $^{13}$CN, for which it is greater than or equal to 10, while the lowest ratio is for CH$_{3}$CCH. The basic carbon-containing molecules, cyclic-C$_{3}$H$_{2}$ and C$_{2}$H, show similar abundances with respect to CS within factors of a few among the three galaxies. Therefore, the conclusion of our initial results (Nakajima et al. 
2011) that the abundances are insensitive to the effect of the AGN for these molecules and/or that these molecules exist in a cold gas away from the AGN, is supported. So far, there have been many observations of HCN and CN in NGC 1068 with single-dish telescopes (e.g. Nguyen-Q-Rieu et al. 1992; Paglione et al. 1997; Perez-Beaupuits et al. 2007, 2009; Krips et al. 2008; Kamenetzky et al. 2011; Aladro et al. 2013, 2015; and Wang et al. 2014). The line intensities of HCN and CN are higher than those of most other molecules in these previous observations. Aladro et al. (2015) reported the column densities of HCN, CN and CO isotopic species in NGC 1068 as well as those in four starburst galaxies: NGC 253, M 82, M 83, and Arp 220. We calculated the ratios of $\frac{N\rm{(HCN)}}{N\rm{(^{13}CO)}}$ and $\frac{N\rm{(CN)}}{N\rm{(^{13}CO)}}$ in NGC 1068 based on their reported column densities, and these values are 2.3 and 1.9 times higher, respectively, than the averaged ratios among those four starburst galaxies. On the other hand, the ratios of $\frac{N\rm{(HCN)}}{N\rm{(^{13}CO)}}$ and $\frac{N\rm{(CN)}}{N\rm{(^{13}CO)}}$ in NGC 1068 in our results are 3.7 and 3.9 times higher than the averaged ratios in the two starburst galaxies NGC 253 and IC 342. These ratios, which are higher than those of Aladro et al. (2015), imply that the observations with the NRO 45-m telescope, the beam size of which is smaller than that of the IRAM 30-m telescope, tend to obtain molecular abundances in the CND with smaller contributions of emission lines from the starburst ring region. \subsection{Possible Scenario for the Differentiation of Molecular Abundances} We found that CH$_{3}$CCH is deficient, and that CN, $^{13}$CN, HCN, H$^{13}$CN, and HC$_{3}$N are abundant in the galaxy hosting an AGN, NGC 1068, relative to the starburst galaxies on a 1 kpc scale with the 45-m telescope. We discuss a possible scenario for the difference in these molecular abundances. 
\subsubsection{CH$_{3}$CCH and HC$_{3}$N} The low abundance of CH$_{3}$CCH relative to H$_{2}$, C$^{34}$S, and C$^{18}$O in NGC 1068 compared with those in the starburst galaxies NGC 253, M 82 and M 83 obtained with the IRAM 30-m observations was already reported by Aladro et al. (2011a; 2011b; 2013; 2015). They found that the $\frac{N\rm{(CH_{3}CCH)}}{N\rm{(C^{18}O)}}$ ratios are $\gtrsim$2, $\gtrsim$10 and $\gtrsim$2 times higher in NGC 253, M 82 and M 83, respectively, than that in NGC 1068. They claimed that the fractional abundances of CH$_{3}$CCH are likely to be intrinsically related to the different nuclear activities in galaxies, implying that the chemistry of this molecule is different between starbursts and AGNs. The $\frac{N\rm{(CH_{3}CCH)}}{N\rm{(C^{18}O)}}$ ratios in NGC 253 and IC 342 based on our results are $\gtrsim$5 times higher than that in NGC 1068, as can be seen in Table 4. Such a tendency is also seen in the $\frac{N\rm{(CH_{3}CCH)}}{N\rm{(CS)}}$ ratios in our results, as shown in figures 7 and 8. Therefore, our observations using a more compact beam strongly support the previous observational studies. As shown in figure 8, the fractional abundance of HC$_{3}$N in NGC 1068 is higher than that in the starburst galaxies. In particular, the abundance of HC$_{3}$N with respect to $^{13}$CO in NGC 1068 is 6--9 times higher than those in NGC 253 and IC 342. However, the error bar of the column density of HC$_{3}$N in NGC 1068 is quite large due to the low quality of the observational data with the 45-m telescope, and we think this may not be a robust feature, as previously discussed in section 3.2.9. In fact, such a feature of HC$_{3}$N in galaxies hosting an AGN is not seen in some previous studies (e.g. Costagliola et al. 2011; Aladro et al. 2013; 2015). However, we clearly detected the HC$_{3}$N $J$ = 11--10 and 12--11 lines in the CND toward NGC 1068 with ALMA (Takano et al. 
2014), and we found that the abundance of HC$_{3}$N is approximately 30 times higher than that in the starburst ring region in NGC 1068 (Nakajima et al. 2015). Therefore, the tendency towards a relatively high abundance of HC$_{3}$N in the CND in NGC 1068 is already confirmed with ALMA observations. According to model calculations, CH$_{3}$CCH and HC$_{3}$N are both easily dissociated by UV (PDRs) and/or X-ray radiation (XDRs) (Aladro et al. 2013). Model calculations using a gas-phase network including high-temperature reactions (Harada et al. 2010) show that the fractional abundances of HC$_{3}$N and CH$_{3}$CCH both increase with temperature. At the same time, the fractional abundance of CH$_{3}$CCH produced in a high-temperature environment is only $\sim$10$^{-9}$, much lower than that of HC$_{3}$N ($\sim$10$^{-7}$). On the other hand, there are some production routes of CH$_{3}$CCH involving grain reactions. For example, C$_{2}$H$_{4}$, a precursor of CH$_{3}$CCH, can be made efficiently on grains. When C$_{2}$H$_{4}$ desorbs from grains, the following gas-phase reaction forms CH$_{3}$CCH: ${\rm CH + C_{2}H_{4} \longrightarrow CH_{3}CCH + H}$ (Miettinen et al. 2006). Our calculations with a gas-grain code (Hersant et al. 2009) also show that CH$_{3}$CCH on ice can be made in high abundance by hydrogenation of cyclic-C$_{3}$H$_{2}$. As noted previously, regions shielded from strong radiation are needed for these molecules to survive. We claimed that the emission of HC$_{3}$N in the CND in NGC 1068 must be coming from regions shielded from X-rays (Harada et al. 2013; Nakajima et al. 2015), but the reason for the non-detection of CH$_{3}$CCH in NGC 1068 is not clearly understood on a $\sim$kpc scale with the single-dish observations. In contrast, Costagliola et al. (2015) reported the possibility of both HC$_{3}$N and CH$_{3}$CCH enhancement toward NGC 4418, which is a galaxy with a very compact and luminous nucleus, as seen with ALMA. 
We think that the differences in the fractional abundances of these molecules among the galaxies may be caused by a difference in the type and/or frequency of shocks, or by another heating mechanism that affects the synthesis of these molecules in active galactic nuclei. In any case, observations of extragalactic CH$_{3}$CCH are limited so far, and the relationship between active nuclei and its abundance is still uncertain. \subsubsection{CN and HCN} The radical CN and its isotopic species $^{13}$CN are more abundant in NGC 1068 than in the starburst galaxies, as shown in figures 7 and 8. In particular, the ratio $\frac{N\rm{(CN)}}{N\rm{(CS)}}$ in NGC 1068 is approximately 1.2--1.5 times higher than those in the starburst galaxies. Furthermore, $\frac{N\rm{(^{13}CN)}}{N\rm{(CS)}}$ is from 3.8 times to an order of magnitude larger in NGC 1068 than in the starburst galaxies, as already shown in figure 2. This CN enhancement is also found in the high-spatial-resolution observations toward NGC 1068 with ALMA (Nakajima et al. 2015). We reported the comparison of $\frac{N\rm{(CN)}}{N\rm{(CS)}}$ between the central $\sim$100 pc of the CND and the starburst ring region in NGC 1068, and showed that the ratio in the CND is approximately 7.2 times higher than in the starburst ring region. This suggests that the reason for the CN enhancement in NGC 1068 is a high abundance of CN in the CND. Both chemical models and observations suggest that the CN fractional abundance is high when the chemistry is at an early stage of evolution of molecular clouds (Le Petit et al. 2006) or when there is a high flux of UV photons, X-rays, and/or cosmic rays to ionize and dissociate precursor molecules such as HCN (Jansen et al. 1995; Lepp \& Dalgarno 1996; Hirota et al. 1999; Meijerink et al. 2007; Harada et al. 2013). In addition, Harada et al. (2013) suggested that CN is not increased by mechanical heating such as turbulence/shock heating. 
In our observations, HCN and H$^{13}$CN in NGC 1068 are also abundant, as shown in figure 8. The ratios $\frac{N\rm{(HCN)}}{N\rm{(CS)}}$ and $\frac{N\rm{(H^{13}CN)}}{N\rm{(CS)}}$ in NGC 1068 are approximately 1.0--1.6 times and more than 1.8 times larger, respectively, than in the starburst galaxies. Note that the ratios $\frac{N\rm{(CN)}}{N\rm{(CS)}}$ and $\frac{N\rm{(HCN)}}{N\rm{(CS)}}$ are not as high relative to the starburst galaxies as those of $\frac{N\rm{(^{13}CN)}}{N\rm{(CS)}}$ and $\frac{N\rm{(H^{13}CN)}}{N\rm{(CS)}}$, but the reason is likely the high optical depths of CN and HCN, as already discussed in section 3.3. The optically thin lines of $^{13}$CN and H$^{13}$CN are more useful for this discussion. Meijerink et al. (2007) suggested, based on their calculations, that $\frac{N\rm{(CN)}}{N\rm{(HCN)}}$ in an XDR lies between 40 (at $n\sim$10$^{6}$ cm$^{-3}$) and over 1000 (at $n\sim$10$^{4}$ cm$^{-3}$). On the other hand, we obtained $\frac{N\rm{(CN)}}{N\rm{(HCN)}}$ and $\frac{N\rm{(^{13}CN)}}{N\rm{(H^{13}CN)}}$ ratios of 6.4 and 2.1, respectively, in NGC 1068, whereas Aladro et al. (2015) reported values of 2.8 and $<$1.9, respectively. Although the small differences in these ratios among the observations are probably caused by the difference in beam sizes, these ratios are significantly low relative to a pure XDR environment. In fact, P\'{e}rez-Beaupuits et al. (2009) suggested that $\frac{N\rm{(CN)}}{N\rm{(HCN)}}$ in NGC 1068, obtained from observations with the James Clerk Maxwell Telescope (JCMT), is relatively low ($<$ 1) with respect to what would be expected in a pure XDR scenario. Therefore, these abundance ratios cannot be explained by the XDR scenario alone, because both $\frac{N\rm{(CN)}}{N\rm{(HCN)}}$ and $\frac{N\rm{(^{13}CN)}}{N\rm{(H^{13}CN)}}$ are smaller than would be expected in a pure XDR environment. 
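The optical-depth correction invoked for CN and HCN is the standard one for isotopologue pairs: if both lines share the same excitation temperature and beam filling factor, the observed intensity ratio satisfies $T_{\rm main}/T_{\rm iso} = (1-e^{-\tau})/(1-e^{-\tau/R})$, where $R$ is the isotopic abundance ratio. A minimal solver, using an assumed $^{12}$C/$^{13}$C ratio and an illustrative intensity ratio (neither taken from this paper):

```python
import math

def tau_from_isotopologue_ratio(T_main_over_T_iso, R):
    """Solve  T_main/T_iso = (1 - exp(-tau)) / (1 - exp(-tau/R))
    for the main-line optical depth tau, assuming both lines share the
    same excitation temperature and beam filling factor.  R is the
    assumed isotopic abundance ratio; the observed intensity ratio must
    satisfy 1 < T_main/T_iso < R for a solution to exist."""
    def f(tau):
        return (1.0 - math.exp(-tau)) / (1.0 - math.exp(-tau / R)) - T_main_over_T_iso
    lo, hi = 1e-6, 100.0
    for _ in range(200):              # simple bisection; f is monotonic in tau
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Example with assumed numbers: 12C/13C = 50 and an observed
# HCN/H13CN intensity ratio of 15 (illustrative values only)
tau = tau_from_isotopologue_ratio(15.0, 50.0)
print(f"tau(HCN) ~ {tau:.2f}")
```

The smaller the observed main-to-isotopologue ratio relative to the assumed isotopic ratio, the larger the implied optical depth of the main line.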
In hot cores, the high-temperature reaction ${\rm CN + H_{2} \longrightarrow HCN + H}$ efficiently converts CN into HCN, reducing the fractional abundance of CN at elevated temperature and increasing that of HCN (Harada et al. 2010). Therefore, one possible scenario for the high abundance of both CN and HCN in the CND of NGC 1068 is that the abundances of CN and $^{13}$CN are increased by X-ray and/or cosmic-ray irradiation, while HCN and H$^{13}$CN are increased by mechanical heating due to turbulence and shocks such as those driven by an AGN jet. This scenario has already been reported in another AGN, NGC 1097, by Izumi et al. (2013). In fact, Garc\'{i}a-Burillo et al. (2014) identified the signature of an AGN-driven outflow in the CND of NGC 1068 based on the CO velocity field. The beam sizes of the single-dish observations in our study and previous studies correspond to $\sim$1 kpc. If X-rays are not attenuated, they can cause a significant ionization rate even at this scale (e.g., Eq. 4 in Maloney et al. 1996). However, X-rays can be attenuated by a column density of $N_{\rm H}\sim$10$^{24}$ cm$^{-2}$, and it is likely that they are attenuated on 10-pc to 100-pc scales within the CND. In general, single-dish telescopes observe whole molecular cloud structures, containing regions both irradiated by strong radiation and shielded from it. Therefore, it is difficult to compare the observations and the model calculations. To discuss the relationship between such physical phenomena and chemical characteristics in NGC 1068, we need to clearly separate the molecular abundances in the CND and the starburst ring. Finally, metallicity, or elemental abundance, is one of the important factors for molecular abundances. While galaxies generally have metallicity gradients, the metallicities in NGC 253 and IC 342 are known to be close to the solar value, $Z$$\sim$$Z_{\odot}$ (Webster \& Smith 1983; Yamagishi et al. 2011), and nearly solar, 0.8 $Z_{\odot}$ (Crosthwaite et al. 2001), respectively. 
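The X-ray attenuation argument above can be made quantitative with a rough power-law photoelectric absorption cross section per hydrogen nucleus; both the normalization and the slope used below, $\sigma \sim 2\times10^{-22}\,(E/{\rm keV})^{-8/3}$ cm$^{2}$, are order-of-magnitude assumptions, not values from this paper:

```python
import math

def xray_transmission(N_H, E_keV):
    """Fraction of X-rays at energy E transmitted through a hydrogen
    column N_H [cm^-2].  Uses a rough power-law photoelectric cross
    section per H nucleus, sigma ~ 2e-22 * (E/keV)^(-8/3) cm^2; both
    the normalization and the slope are order-of-magnitude assumptions."""
    sigma = 2.0e-22 * E_keV ** (-8.0 / 3.0)   # cm^2 per H nucleus
    return math.exp(-sigma * N_H)

# Soft (~1 keV) photons are completely absorbed by the N_H ~ 1e24 cm^-2
# column quoted in the text, while hard X-rays penetrate much further.
for E in (1.0, 10.0, 100.0):
    print(f"E = {E:6.1f} keV: transmission = {xray_transmission(1e24, E):.3g}")
```

This is the sense in which shielded regions can survive within the CND even close to a luminous X-ray source: only the hard tail of the spectrum reaches them.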
Although the metallicity of AGNs is generally high, for example super-solar values ($\gtrsim$3 $Z_{\odot}$) especially in the narrow-line region (NLR) (e.g. Groves et al. 2006), all elemental abundances in NGC 1068 are consistent with solar values except for nitrogen (N/H) (Kramer et al. 2015; Martins et al. 2010; Brinkman et al. 2002; Marshall et al. 1993). As a result, we judged that the metallicities in our objects, observed with our beam size of $\sim$19$^{\prime\prime}$, are almost consistent with the solar value. Therefore, the difference in metallicity is likely not a main cause of the enhancement of some species such as HCN (at least in our observations with a large beam size). A significant effect of metallicity is more likely closer to the AGN core. In order to investigate the effect of the AGN on the surrounding molecular gas, it is important to obtain the distribution of molecules in the CND with much higher spatial resolution. Nevertheless, the investigation of molecular composition in nearby galaxies with single-dish telescopes remains important. For example, observations of molecular gas in high-redshift galaxies with interferometers are necessary for understanding the scenario of galaxy evolution, but the inner structure of these galaxies is not well resolved even with ALMA. In such cases, the results of line surveys with single-dish telescopes will be the most basic template. \section{Conclusions} We carried out unbiased molecular line survey observations in the 3-mm band toward NGC 1068, NGC 253, and IC 342 with the NRO 45-m telescope. The rotation temperatures and column densities of 23 molecular species in each galaxy were obtained, supplemented with other transition data from previous studies, under the assumption of local thermodynamic equilibrium. 
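The LTE rotation-diagram analysis behind these temperatures and column densities amounts to a straight-line fit of $\ln(N_u/g_u)$ against $E_u/k$, with $T_{\rm rot} = -1/{\rm slope}$. A minimal sketch with invented data points (not observed values from this work):

```python
# Rotation-diagram sketch: under LTE,
#   ln(N_u / g_u) = ln(N / Q(T_rot)) - E_u / (k * T_rot),
# so a straight-line fit of ln(N_u/g_u) vs E_u/k gives T_rot = -1/slope.
# The (E_u/k [K], ln(N_u/g_u)) points below are invented for illustration.
points = [(5.0, 30.2), (15.0, 29.7), (25.0, 29.2), (42.0, 28.35)]

# Ordinary least-squares line fit, stdlib only.
n = len(points)
sx = sum(x for x, _ in points)
sy = sum(y for _, y in points)
sxx = sum(x * x for x, _ in points)
sxy = sum(x * y for x, y in points)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
T_rot = -1.0 / slope
print(f"T_rot ~ {T_rot:.1f} K")
```

The intercept additionally gives $\ln(N/Q)$, from which the total column density follows once the partition function $Q(T_{\rm rot})$ of the molecule is known.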
In order to investigate the effect of an AGN on the surrounding interstellar medium, we determined molecular fractional abundances relative to $^{13}$CO and CS, and compared these abundances between NGC 1068, the galaxy hosting an AGN, and the two starburst galaxies with correlation plots. The main results of this work are summarized as follows:\\ \begin{enumerate} \item We found that the fractional abundances of the assorted molecules normalized by $^{13}$CO and CS are roughly well correlated between NGC 1068 and the starburst galaxies NGC 253 and IC 342. This means that significant effects on the chemistry due to the existence of an AGN or starburst are not prominently seen over a large scale ($\sim$1 kpc) with the NRO 45-m telescope. However, some molecules show higher or lower abundances in NGC 1068 relative to the starburst galaxies. \item The molecules with higher fractional abundances in NGC 1068 include HCN, H$^{13}$CN, CN, $^{13}$CN, and HC$_{3}$N, while only CH$_{3}$CCH shows a serious deficiency in its fractional abundance in NGC 1068. Although these trends are almost consistent with a prior line survey using the IRAM 30-m telescope, our results with the NRO 45-m telescope place more stringent constraints on the trends of the molecular abundances in NGC 1068. \item The radical CN and its isotopic species $^{13}$CN are clearly more abundant in NGC 1068 than in the starburst galaxies. In addition, HCN and H$^{13}$CN in NGC 1068 are also more abundant in our observations. We successfully detected the $^{13}$C isotopic species of CN and HCN at more than the 3-sigma level, and obtained their abundances in NGC 1068. The optically thin lines of $^{13}$CN and H$^{13}$CN are more useful for studies of the CN and HCN abundances. 
\item The obtained $\frac{N\rm{(CN)}}{N\rm{(HCN)}}$ and $\frac{N\rm{(^{13}CN)}}{N\rm{(H^{13}CN)}}$ ratios are 6.4 and 2.1, respectively, in NGC 1068, and these values are relatively low with respect to what would be expected in chemical models of a pure XDR environment. One possible scenario for the high abundances of both CN and HCN in the CND is that the fractional abundances of CN and $^{13}$CN are increased by X-ray and/or cosmic-ray irradiation. On the other hand, HCN and H$^{13}$CN are possibly increased by mechanical heating due to turbulence and shocks such as those driven by an AGN jet. \item The reason for the non-detection of CH$_{3}$CCH in NGC 1068 is likely to be dissociation by high-energy radiation, a lack of formation on grains, or less sublimation of C$_{2}$H$_{4}$, a precursor of CH$_{3}$CCH, from grains. In contrast, the fractional abundance of HC$_{3}$N in NGC 1068 is found to be higher than those in the starburst galaxies. The emission of HC$_{3}$N must come from regions shielded from X-rays in the CND, and these regions are expected to be in a high-temperature environment. This scenario is consistent with the results on the abundances of HCN and CN. \end{enumerate} \bigskip This work is based on observations with the NRO 45-m telescope, as a part of the line surveys of the legacy projects. The authors thank the project members for helpful discussion. E. H. thanks the National Science Foundation (US) for their support of his program in astrochemistry. We also thank all of the staff of the 45-m telescope for their support. We are also grateful to Ryohei Kawabe for his advice and support of the line survey project. 
\clearpage \begin{table} \caption{Detected and non-detected molecules in the 3-mm band.}\label{} \begin{center} \begin{tabular}{cccc} \hline Molecule & NGC 1068 & NGC 253 & IC 342 \\ \hline Linear species &&&\\ $^{12}$CO & Yes & Yes & Yes \\ $^{13}$CO & Yes & Yes & Yes \\ C$^{18}$O & Yes & Yes & Yes \\ C$^{17}$O & No & Yes & Yes \\ CS & Yes & Yes & Yes \\ C$^{34}$S & No & Yes & Yes \\ HC$_{3}$N & Yes & Yes & Yes \\ SiO & Yes & Yes & Yes \\ HCN & Yes & Yes & Yes \\ H$^{13}$CN & Yes & Yes & Yes \\ HCO$^{+}$ & Yes & Yes & Yes \\ H$^{13}$CO$^{+}$ & No & Yes & Yes \\ HNC & Yes & Yes & Yes \\ N$_{2}$H$^{+}$ & Yes & Yes & Yes \\ SO & Yes & Yes & Yes \\ CN & Yes & Yes & Yes \\ $^{13}$CN & Yes & Yes & No \\ C$_{2}$H & Yes & Yes & Yes \\ Symmetric tops &&&\\ CH$_{3}$CN & Yes & Yes & Yes \\ CH$_{3}$CCH & No & Yes & Yes \\ Asymmetric tops &&&\\ HNCO & Yes & Yes & Yes \\ CH$_{3}$OH & Yes & Yes & Yes \\ cyclic-C$_{3}$H$_{2}$ & Yes & Yes & Yes \\ \hline \multicolumn{4}{@{}l@{}}{\hbox to 0pt{\parbox{75mm}{\footnotesize ``Yes'' and ``No'' indicate that the molecule was detected and not detected, respectively, in our line survey observations with the NRO 45-m telescope. The observed parameters of the molecular lines in each galaxy will be reported in the data paper (Takano et al. in prep.). 
}\hss}} \end{tabular} \end{center} \end{table} \begin{table} \tbl{Optical depth.}{% \begin{tabular}{lccc} \hline Molecule & NGC 1068 & NGC 253 & IC 342 \\ \hline C$_{2}$H & 0.09 & 0.10 & 0.10 \\ CN & 1.42 & 0.39 & 4.09 \\ $^{13}$CN & 0.01 & 0.01 & ---\footnotemark[$*$] \\ $^{12}$CO & 2.06 & 2.68 & 4.20 \\ $^{13}$CO & 0.05 & 0.07 & 0.11 \\ C$^{18}$O & 0.02 & 0.02 & 0.03 \\ C$^{17}$O & ---\footnotemark[$*$] & 0.001 & 0.004 \\ CS & ---\footnotemark[$*$] & 0.91 & 1.28 \\ C$^{34}$S & ---\footnotemark[$*$] & 0.04 & 0.06 \\ HCO$^{+}$ & ---\footnotemark[$*$] & 3.81 & 1.24 \\ H$^{13}$CO$^{+}$ & ---\footnotemark[$*$] & 0.08 & 0.02 \\ HCN & 2.49 & 3.09 & 3.46 \\ H$^{13}$CN & 0.05 & 0.06 & 0.07 \\ \hline \multicolumn{4}{@{}l@{}}{\hbox to 0pt{\parbox{60mm}{\footnotesize \footnotemark[$*$] $^{13}$CN in IC 342, and C$^{17}$O, C$^{34}$S, and H$^{13}$CO$^{+}$ in NGC 1068 are not detected. Optical depths of CS and HCO$^{+}$ in NGC 1068 are not calculated using the elemental isotopic ratios because C$^{34}$S and H$^{13}$CO$^{+}$ are not detected. The values of C$_{2}$H, CN, and $^{13}$CN are calculated as the sum of the optical depths of all components. 
}\hss}} \end{tabular}} \end{table} \clearpage \begin{table*} \caption{Rotation temperatures and column densities of the observed molecules.}\label{} \begin{center} \begin{tabular}{lcccccccc} \hline & \multicolumn{2}{c}{NGC 1068} & & \multicolumn{2}{c}{NGC 253} & & \multicolumn{2}{c}{IC 342} \\ \cline{2-3} \cline{5-6} \cline{8-9} Molecule & $T_{\rm rot}$ [K] & $N_{\rm mol}$ [cm$^{-2}$] & & $T_{\rm rot}$ [K] & $N_{\rm mol}$ [cm$^{-2}$] & & $T_{\rm rot}$ [K] & $N_{\rm mol}$ [cm$^{-2}$] \\ \hline cyclic-C$_{3}$H$_{2}$ & 10$\pm$5\footnotemark[$*$] & 2.0$^{+1.7}_{-1.0}$(14) & & 9.8$\pm$0.2 & 4.4$\pm$0.0(13) & & 10$\pm$5\footnotemark[$*$] & 6.9$^{+7.8}_{-4.3}$(13) \\ CH$_{3}$CCH & 10$\pm$5\footnotemark[$*$] & $<$9.1(13) & & 40.5$\pm$0.2 & 3.5$\pm$0.3(14) & & 39.6$^{+0.9}_{-0.8}$ & 4.1$\pm0.1$(14) \\ H$^{13}$CN & $<$7.3 & $>$4.7(13) & & 6.1$^{+1.1}_{-0.9}$ & 1.2$^{+0.2}_{-0.1}$(13) & & 10$\pm$5\footnotemark[$*$] & 1.5$^{+0.6}_{-0.4}$(13) \\ H$^{13}$CO$^{+}$ & 10$\pm$5\footnotemark[$*$] & $<$4.5(12) & & 6.7$^{+1.7}_{-1.1}$ & 2.5$\pm$0.3(12) & & 10$\pm$5\footnotemark[$*$] & 4.2$^{+2.1}_{-1.4}$(12) \\ SiO & 5.9$^{+3.2}_{-1.5}$ & 3.2$^{+1.2}_{-0.7}$(13) & & 7.2$^{+0.7}_{-0.5}$ & 1.1$\pm$0.1(13) & & 10$\pm$5\footnotemark[$*$] & 1.4$^{+0.5}_{-0.2}$(13) \\ C$_{2}$H & 10$\pm$5\footnotemark[$*$] & 2.6$\pm$1.0(15) & & 6.5$\pm$1.0 & 1.7$\pm$0.0(15) & & 10$\pm$5\footnotemark[$*$] & 5.7$^{+2.2}_{-2.1}$(14) \\ HNCO & 39.9$^{+9.5}_{-4.9}$ & 6.8$^{+0.6}_{-0.1}$(14) & & 19.1$\pm$2.0 & 1.1$^{+0.1}_{-0.0}$(14) & & 24.0$^{+1.1}_{-0.9}$ & 2.5$^{+0.1}_{-0.0}$(14) \\ HCN & 10.3$^{+0.2}_{-0.5}$ & 4.7$^{+0.0}_{-0.3}$(14) & & 13.9$^{+0.0}_{-0.2}$ & 2.0$^{+0.1}_{-0.0}$(14) & & 10.8$\pm$0.1 & 1.2$\pm$0.1(14) \\ HCO$^{+}$ & 8.3$\pm$0.3 & 2.1$\pm$0.1(14) & & 13.2$\pm$0.2 & 1.0$^{+0.3}_{-0.4}$(14) & & 6.2$\pm$0.3 & 7.5$^{+0.1}_{-0.0}$(13) \\ HNC & 9.5$^{+0.4}_{-0.5}$ & 1.7$\pm$0.1(14) & & 10$\pm$5\footnotemark[$*$] & 8.7$^{+2.6}_{-1.7}$(13) & & 10$\pm$5\footnotemark[$*$] & 7.6$^{+2.2}_{-1.4}$(13) \\ 
HC$_{3}$N & 13.4$^{+32.2}_{-5.0}$ & 1.7$^{+2.7}_{-0.9}$(14) & & 36.8$^{+2.6}_{-2.8}$ & 3.3$^{+0.3}_{-0.2}$(13) & & 33.8$^{+4.3}_{-3.7}$ & 2.6$\pm$0.3(13) \\ N$_{2}$H$^{+}$ & 10$\pm$5\footnotemark[$*$] & 5.8$^{+2.9}_{-1.8}$(13) & & 10$\pm$5\footnotemark[$*$] & 3.1$^{+1.0}_{-0.6}$(13) & & 10$\pm$5\footnotemark[$*$] & 3.5$^{+1.2}_{-0.8}$(13) \\ C$^{34}$S & $>$13.1 & $<$1.9(13) & & 15.1$^{+10.1}_{-4.3}$ & 2.7$^{+0.7}_{-0.1}$(13) & & 10.4$^{+14.8}_{-3.8}$ & 1.8$^{+0.5}_{-0.0}$(13) \\ CH$_{3}$OH & 10$\pm$5\footnotemark[$*$] & 1.7$^{+0.3}_{-0.1}$(13) & & 10$\pm$5\footnotemark[$*$] & 1.7$^{+0.4}_{-0.1}$(13) & & 10$\pm$5\footnotemark[$*$] & 1.9$^{+0.5}_{-0.1}$(13) \\ CS & 7.9$\pm$0.3 & 5.3$^{+0.4}_{-0.0}$(14) & & 12.1$\pm$0.1 & 2.3$^{+0.0}_{-0.2}$(14) & & 9.8$^{+0.6}_{-0.7}$ & 2.1$^{+0.2}_{-0.3}$(14) \\ SO & $<$13.0 & $>$1.7(14) & & 7.4$^{+0.4}_{-0.3}$ & 2.0$\pm$0.0(14) & & $<$5.1 & $>$8.7(13) \\ $^{13}$CN & 10$\pm$5\footnotemark[$*$] & 1.0$^{+0.3}_{-0.2}$(14) & & 10$\pm$5\footnotemark[$*$] & 1.2$^{+0.5}_{-0.3}$(13) & & 10$\pm$5\footnotemark[$*$] & $<$4.4(12) \\ C$^{18}$O & 4.0$\pm$0.3 & 5.3$^{+0.6}_{-0.5}$(16) & & 19.1$^{+2.7}_{-2.3}$ & 3.5$\pm$0.1(16) & & 7.6$^{+0.9}_{-0.8}$ & 2.9$^{+0.6}_{-0.5}$(16) \\ $^{13}$CO & 11.7$\pm$0.8 & 1.2$\pm$0.0(17) & & 32.2$^{+3.8}_{-3.5}$ & 1.4$\pm$0.1(17) & & 12.5$^{+0.5}_{-0.4}$ & 1.7$\pm$0.0(17) \\ CH$_{3}$CN & 10$\pm$5\footnotemark[$*$] & 3.4$^{+6.1}_{-0.5}$(13) & & 9.5$\pm$0.1 & 1.0$^{+0.1}_{-0.0}$(13) & & 10$\pm$5\footnotemark[$*$] & 9.6$^{+17.1}_{-1.2}$(12) \\ C$^{17}$O & 10$\pm$5\footnotemark[$*$] & $<$4.1(15) & & 26.5\footnotemark[$\dagger$] & 3.7\footnotemark[$\dagger$] & & 7.0$^{+3.5}_{-1.7}$ & 4.2$^{+1.4}_{-0.9}$(15) \\ CN & 4.9$\pm$0.2 & 3.0$\pm$0.2(15) & & 5.4$\pm$0.1 & 1.1$\pm$0.0(15) & & 3.6$\pm$0.1 & 8.1$^{+0.3}_{-0.1}$(14) \\ $^{12}$CO & 19.0$\pm$1.0 & 1.9$\pm$0.1(18) & & 42.8$^{+0.0}_{-10.3}$ & 1.3$^{+0.0}_{-0.2}$(18) & & 24.4$^{+0.2}_{-0.3}$ & 8.2$\pm$0.1(17) \\ \hline \multicolumn{5}{@{}l@{}}{\hbox to 
0pt{\parbox{138mm}{\footnotesize The expression a(b) represents $a\times 10^{b}$. The range of error is represented by the maximum and minimum values, which are calculated from the maximum and minimum slopes of the fitted lines in the rotation diagram. \par\noindent \footnotemark[$*$] We assumed $T_{\rm{rot}}$ = 10$\pm$5 K for molecules with only one transition available or non-detection in our observations. \par\noindent \footnotemark[$\dagger$] The error of C$^{17}$O in NGC 253 is not calculated because the fitted line can have a positive slope due to a large error in our observations (see the rotation diagram in figure 10). }\hss}} \end{tabular} \end{center} \end{table*} \clearpage \begin{table} \tbl{Important column density ratios.}{% \begin{tabular}{lccc} \hline Column density ratio & NGC 1068 & NGC 253 & IC 342 \\ \hline HCN/HCO$^{+}$ & 2.2$^{+0.1}_{-0.2}$ & 2.0$^{+0.1}_{-0.8}$ & 1.6$^{+0.6}_{-0.1}$ \\ CN/HCO$^{+}$ & 14.3$\pm{1.2}$ & 11.0$^{+3.3}_{-4.4}$ & 10.8$^{+0.4}_{-0.1}$ \\ H$^{13}$CN/H$^{13}$CO$^{+}$ & $>$10.4 & 4.8$^{+1.0}_{-0.6}$ & 3.6$^{+2.3}_{-1.5}$ \\ $^{13}$CN/H$^{13}$CO$^{+}$ & $>$22.2 & 4.8$^{+2.1}_{-1.3}$ & $<$1.0 \\ CN/HCN & 6.4$^{+0.4}_{-0.0}$ & 5.5$^{+0.3}_{-0.0}$ & 6.8$\pm{0.6}$ \\ $^{13}$CN/H$^{13}$CN & $<$2.1 & 1.0$^{+0.4}_{-0.3}$ & $<$0.3 \\ CH$_{3}$CCH/C$^{18}$O & $<$1.7$\times$10$^{-3}$ & 10.0$\pm$0.1$\times$10$^{-3}$ & 14.1$\pm$0.3$\times$10$^{-3}$ \\ \hline \end{tabular}}\label{} \end{table} \clearpage \begin{figure*} \begin{center} \FigureFile(160mm,80mm){fig1.eps} \end{center} \caption{Comparison of column densities of each molecule among the three galaxies. The bar graphs of red (left), blue (center), and green (right) are for NGC 1068, NGC 253, and IC 342, respectively. The order of the molecules is arranged in descending order from left to right based on the column density toward NGC 1068. 
Arrows indicate upper or lower limits (see also Table 3).}\label{somelabel} \end{figure*} \clearpage \begin{figure} \begin{center} \FigureFile(80mm,40mm){fig2.eps} \end{center} \caption{The important column density ratios among the observed three galaxies. The numerical values are shown in Table 4. Arrows indicate upper or lower limits.}\label{somelabel} \end{figure} \clearpage \begin{figure} \begin{center} \FigureFile(80mm,80mm){fig3.eps} \end{center} \caption{C$_{2}$H $N$ = 1--0 lines toward (a) NGC 1068, (b) NGC 253, and (c) IC 342 obtained with a velocity resolution of 20 km s$^{-1}$ for NGC 1068 and NGC 253 and 10 km s$^{-1}$ for IC 342. The spectrum consists of 6 hyperfine components ({\it J} = 3/2--1/2; {\it F} = 1--1, {\it F} = 2--1, {\it F} = 1--0, and {\it J} = 1/2--1/2; {\it F} = 1--1, {\it F} = 0--1, {\it F} = 1--0). The gray curves overlaid on the profiles are the results of Gaussian least-squares fits.}\label{somelabel} \end{figure} \begin{figure} \begin{center} \FigureFile(80mm,80mm){fig4.eps} \end{center} \caption{CN $N$ = 1--0 lines toward (a) NGC 1068, (b) NGC 253, and (c) IC 342. The spectrum consists of 9 hyperfine components ({\it J} = 3/2--1/2; {\it F} = 3/2--1/2, {\it F} = 5/2--3/2, {\it F} = 1/2--1/2, {\it F} = 3/2--3/2, {\it F} = 1/2--3/2) and {\it J} = 1/2--1/2; {\it F} = 1/2--1/2, {\it F} = 1/2--3/2, {\it F} = 3/2--1/2 , {\it F} = 3/2--3/2. See also the caption of figure 3.}\label{somelabel} \end{figure} \begin{figure} \begin{center} \FigureFile(80mm,80mm){fig5.eps} \end{center} \caption{$^{13}$CN $N$ = 1--0 lines toward (a) NGC 1068 and (b) NGC 253. The spectrum consists of 14 hyperfine components. The reason for the possible fitting offset in NGC 1068 may have resulted from a low signal to noise ratio. 
See also the caption of figure 3.}\label{somelabel} \end{figure} \clearpage \begin{figure*} \begin{center} \FigureFile(160mm,80mm){fig6.eps} \end{center} \caption{Plots of the fractional molecular abundances relative to $^{13}$CO (a) and CS (b) between the two starburst galaxies, NGC 253 and IC 342. The bold dashed line represents equal abundances between NGC 253 and IC 342, and thin dashed lines represent an order of magnitude higher or lower than the equal abundance. For details of the normalization for the calculation of the fractional abundances, see section 4.1.}\label{somelabel} \end{figure*} \begin{figure*} \begin{center} \FigureFile(160mm,160mm){fig7.eps} \end{center} \caption{Plots of the fractional molecular abundances relative to $^{13}$CO ((a) and (b)) and CS ((c) and (d)) between NGC 253 and NGC 1068, and between IC 342 and NGC 1068. These results represent the abundance correlation between the galaxy with an AGN and the two starburst galaxies. Plots above the bold dashed line represent enhanced molecules and plots below represent deficient ones in the galaxy hosting the AGN. For details of the normalization for the calculation of the fractional abundances, see section 4.1.}\label{somelabel} \end{figure*} \clearpage \begin{figure*} \begin{center} \FigureFile(160mm,160mm){fig8.eps} \end{center} \caption{Fractional abundances relative to $^{13}$CO and CS in NGC 1068 compared with (a) NGC 253 and (b) IC 342. Ratios over unity represent enhancement of molecular abundances in NGC 1068. Arrows indicate upper or lower limits. For details of the normalization for the calculation of the fractional abundances, see section 4.1. The abscissa consists of the 23 molecules detected in at least one of the three galaxies.}\label{somelabel} \end{figure*} \clearpage
\section{Introduction} In \cite{Luo}, we proved the following theorem: \begin{thm}[\cite{Luo}] Let $L:\Sigma\to \mathbb{S}^5$ be a contact stationary Legendrian surface. Then we have \begin{eqnarray*} \int_L\rho^2(3-\frac{3}{2}S+2H^2)d\mu\leq0, \end{eqnarray*} where $\rho^2:=S-2H^2$. In particular, if \begin{eqnarray*} 0\leq S\leq 2, \end{eqnarray*} then either $\rho^2=0$ and $L$ is totally umbilical, or $\rho^2\neq 0$, $S=2, H=0$ and $L$ is a flat minimal Legendrian torus. \end{thm} Compared with the gap theorem of \cite{YKM}, it is very interesting to know whether $L$ is totally geodesic in the above alternative when $\rho^2=0$. Hence in the appendix of \cite{Luo}, we asked whether a totally umbilical contact stationary Legendrian surface in $\mathbb{S}^5$ with $0\leq S\leq 2$ is totally geodesic or not. In this note we give an affirmative answer to this question. Actually we obtain a stronger result. \begin{thm}\label{main thm} Assume that $L$ is a totally umbilical contact stationary Legendrian surface in $\mathbb{S}^5$. Then $L$ is totally geodesic. \end{thm} As a corollary of the above two theorems, we have \begin{cor} Assume that $L$ is a contact stationary Legendrian surface in $\mathbb{S}^5$ with $0\leq S\leq 2$. Then either $S=0$ and $L$ is totally geodesic or $S=2$ and $L$ is a flat minimal Legendrian torus. \end{cor} \section{Proof of Theorem \ref{main thm}} Let $L$ be a Legendrian surface in $\mathbb{S}^5$ with the induced metric $g$. Assume that $\{e_1,e_2\}$ is an orthonormal frame on $L$ such that $\{e_1,e_2,Je_1,Je_2,\textbf{R}\}$ is an orthonormal frame on $\mathbb{S}^5$. Here $\textbf{R}$ is the Reeb field of $\mathbb{S}^5$. In the following we use indices $i,j,k,l,s,t,m$ and $\beta,\gamma$ such that \begin{eqnarray*} 1\leq i,j,k,l,s,t,m&\leq&2, \\1\leq\beta,\gamma&\leq&3, \\ \gamma^\ast=\gamma+2,\quad \beta^\ast&=&\beta+2. 
\end{eqnarray*} Let $B$ be the second fundamental form of $L$ in $\mathbb{S}^5$ and define \begin{eqnarray} h_{ij}^k&=&g_\alpha(B(e_i,e_j),Je_k), \\h^3_{ij}&=&g_\alpha(B(e_i,e_j),\textbf{R}). \end{eqnarray} Then \begin{eqnarray} h_{ij}^k&=&h_{ik}^j=h_{kj}^i, \\h^3_{ij}&=&0. \end{eqnarray} The Gauss equations and Ricci equations are \begin{eqnarray} R_{ijkl}&=&(\delta_{ik}\delta_{jl}-\delta_{il}\delta_{jk})+\sum_s(h^s_{ik}h^s_{jl}-h^s_{il}h^s_{jk}),\label{basic equation 1} \\R_{ik}&=&\delta_{ik}+2\sum_sH^sh^s_{ik}-\sum_{s,j}h^s_{ij}h^s_{jk}, \\2K&=&2+4H^2-S, \\R_{3412}&=&\sum_i(h_{i1}^1h_{i2}^2-h_{i2}^1h_{i1}^2)\nonumber \\&=&\det h^1+\det h^2, \end{eqnarray} where $K$ is the sectional curvature function of $(L,g)$ and $h^1,h^2$ are the second fundamental forms w.r.t. the normal directions $Je_1$, $Je_2$ respectively. In addition we have the following Codazzi equations and Ricci identities \begin{eqnarray} h^\beta_{ijk}&=&h^\beta_{ikj}, \\h^\beta_{ijkl}-h^\beta_{ijlk}&=&\sum_mh^\beta_{mj}R_{mikl}+\sum_mh^\beta_{mi}R_{mjkl}+\sum_\gamma h^\gamma_{ij}R_{\gamma^\ast\beta^\ast kl}.\label{basic equation 2} \end{eqnarray} Using these equations, we can get the following Simons' type inequality: \begin{lem}[\cite{Luo}]\label{main result} Let $L$ be a Legendrian surface in $\mathbb{S}^5$. Then we have \begin{eqnarray}\label{main lemma} \frac{1}{2}\Delta\sum_{i,j,\beta}(h^\beta_{ij})^2&\geq&|\nabla^T h|^2-2|\nabla^T H|^2-2|\nabla^\nu H|^2 +\sum_{i,j,k,\beta}(h^\beta_{ij}h^\beta_{kki})_j \nonumber \\&+&S-2H^2+2(1+H^2)\rho^2-\rho^4-\frac{1}{2}S^2, \end{eqnarray} where $|\nabla^T h|^2=\sum_{i,j,k,s}(h^s_{ijk})^2$ and $|\nabla^T H|^2=\sum_{i,s}(H^s_i)^2$. \end{lem} \proof This lemma was proved in \cite{Luo}. We copy the proof here because we will use several equalities and inequalities in the proof in the following. 
Using equations from (\ref{basic equation 1}) to (\ref{basic equation 2}), we have \begin{eqnarray}\label{simon type} \frac{1}{2}\Delta\sum_{i,j,\beta}(h^\beta_{ij})^2 &=&\sum_{i,j,k,\beta}(h^\beta_{ijk})^2+\sum_{i,j,k,\beta}h^\beta_{ij}h^\beta_{kijk}\nonumber \\&=&|\nabla h|^2-4|\nabla^\nu H|^2+\sum_{i,j,k,\beta}(h^\beta_{ij}h^\beta_{kki})_j+\sum_{i,j,l,k,\beta} h^\beta_{ij}(h^\beta_{lk}R_{lijk}+h^\beta_{il}R_{lj})\nonumber \\&+&\sum_{i,j,k,\beta,\gamma} h^\beta_{ij}h^\gamma_{ki}R_{\gamma^\ast\beta^\ast jk}\nonumber \\&=&|\nabla h|^2-4|\nabla^\nu H|^2+\sum_{i,j,k,s}(h^s_{ij}h^s_{kki})_j+2K\rho^2-2(\det h^1+\det h^2)^2\nonumber \\&\geq&|\nabla h|^2-4|\nabla^\nu H|^2+\sum_{i,j,k,\beta}(h^\beta_{ij}h^\beta_{kki})_j+2(1+H^2)\rho^2-\rho^4-\frac{1}{2}S^2, \end{eqnarray} where $\rho^2:=S-2H^2$ and in the above calculations we used the following identities \begin{eqnarray*} \sum_{i,j,k,l,\beta} h^\beta_{ij}(h^\beta_{lk}R_{lijk}+h^\beta_{il}R_{lj})&=&2K\rho^2, \\\sum_{i,j,k,\beta,\gamma} h^\beta_{ij}h^\gamma_{ki}R_{\gamma^\ast\beta^\ast jk}&=&-2(\det h^1+\det h^2)^2, \end{eqnarray*} where in the first equality we used $R_{lijk}=K(\delta_{lj}\delta_{ik}-\delta_{lk}\delta_{ij})$ and $R_{lj}=K\delta_{lj}$ in a proper orthonormal frame field, because $L$ is a surface. Note that \begin{eqnarray}\label{main idea1} |\nabla h|^2&=&\sum_{i,j,k,\beta}(h^\beta_{ijk})^2\nonumber =|\nabla^T h|^2+\sum_{i,j,k}(h^3_{ijk})^2\nonumber =|\nabla^T h|^2+\sum_{i,j,k}(h^k_{ij})^2\nonumber \\&=&|\nabla^T h|^2+S, \end{eqnarray} where in the third equality we used \begin{eqnarray*} h^3_{ijk}&=&\langle\bar{\nabla}_{e_k}h(e_i,e_j),\textbf{R}\rangle \\&=&-\langle h(e_i,e_j),\bar{\nabla}_{e_k}\textbf{R}\rangle \\&=&\langle h(e_i,e_j),Je_k\rangle \\&=&h^k_{ij}. \end{eqnarray*} Similarly we have \begin{eqnarray}\label{main idea2} |\nabla^\nu H|^2=|\nabla^TH|^2+H^2. \end{eqnarray} Combining (\ref{simon type}), (\ref{main idea1}) with (\ref{main idea2}), we get (\ref{main lemma}). 
\endproof We also have \begin{lem}[\cite{Luo}] Let $L:\Sigma\to \mathbb{S}^5$ be a contact stationary Legendrian surface. Then \begin{eqnarray}\label{integral equality} \int_L|\nabla^\nu H|^2d\mu=-\int_L(K-1)H^2d\mu, \end{eqnarray} where $|\nabla^\nu H|^2=\sum_{\beta,i}(H^\beta_i)^2$. \end{lem} Integrating (\ref{main lemma}) and using $|\nabla^Th|^2\geq 3|\nabla^TH|^2$ (see appendix, Lemma A.1 of \cite{Luo}) we get \begin{eqnarray}\label{ine1} 0&\geq&\int_L[(|\nabla^T h|^2-3|\nabla^T H|^2)-2|\nabla^\nu H|^2+S-2H^2+2(1+H^2)\rho^2-\rho^4-\frac{1}{2}S^2+|\nabla^T H|^2]d\mu \nonumber \\ &\geq& \int_L[-2|\nabla^\nu H|^2+S-2H^2+2(1+H^2)\rho^2-\rho^4-\frac{1}{2}S^2+|\nabla^T H|^2]d\mu \nonumber \\&=&\int_L(2-\rho^2)\rho^2d\mu+\int_L 2H^2\rho^2+2(K-1)H^2-2H^2+S-\frac{1}{2}S^2+|\nabla^T H|^2d\mu \nonumber \\&=&\int_L(2-\rho^2)\rho^2d\mu+\int_L 2H^2\rho^2+(4H^2-S)H^2-2H^2+S-\frac{1}{2}S^2+|\nabla^T H|^2d\mu \nonumber \\&=&\int_L\frac{3}{2}\rho^2(2-S)+2H^2\rho^2+|\nabla^T H|^2d\mu, \end{eqnarray} where in the first equality we used (\ref{integral equality}) and in the second equality we used the Gauss equation $2K=2+4H^2-S$. Therefore we obtain the following integral inequality \begin{eqnarray}\label{ine2} \int_L\rho^2(3-\frac{3}{2}S+2H^2)+|\nabla^T H|^2d\mu\leq0. \end{eqnarray} In particular, if $\rho^2=0$, i.e. $L$ is totally umbilical, then from (\ref{ine2}) we see that $|\nabla^T H|^2=0$. Then from (\ref{main idea2}) we get that $|\nabla^\nu H|^2=H^2$, which implies that $\int_LKH^2d\mu=0$ by (\ref{integral equality}). Now by the Gauss equation $2K=2+4H^2-S=2+2H^2-\rho^2=2+2H^2$ we get $$\int_LH^2(1+H^2)=0.$$ Therefore $H=0$ and hence, combining with the assumption that $0=\rho^2=S-2H^2$, we get $S=0$, i.e. $L$ is totally geodesic. This completes the proof of Theorem \ref{main thm}. \endproof \vspace{1cm} \textbf{Acknowledgement.} The author is supported by the NSF of China (No. 11501421). 
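The passage from the fourth to the fifth line of the chain of inequalities labelled (ine1) rests on a polynomial identity in $S$ and $H^2$ (with $\rho^2=S-2H^2$). A quick numerical spot-check of that identity, not part of the original argument:

```python
import random

# Spot-check of the algebraic identity used in (ine1): with
# rho2 = S - 2*H2,
#   (2 - rho2)*rho2 + 2*H2*rho2 + (4*H2 - S)*H2 - 2*H2 + S - S**2/2
#     == (3/2)*rho2*(2 - S) + 2*H2*rho2
# at randomly sampled values of S and H2.
random.seed(0)
for _ in range(1000):
    S, H2 = random.uniform(-5, 5), random.uniform(-5, 5)
    rho2 = S - 2 * H2
    lhs = (2 - rho2) * rho2 + 2 * H2 * rho2 + (4 * H2 - S) * H2 - 2 * H2 + S - S**2 / 2
    rhs = 1.5 * rho2 * (2 - S) + 2 * H2 * rho2
    assert abs(lhs - rhs) < 1e-9
print("identity holds")
```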
\section{Introduction} The fuel of nuclear fusion reactors needs to be heated to very high temperatures to overcome the Coulomb repulsion between nuclei so that they can fuse \cite{McCracken}. (Mathematically this is manifested in the exponential energy-dependent factor in the cross section of fusion reactions [see $\left( \ref{sigma}\right) $]. For details see Section II.) The effect of surroundings on the nuclear fusion rate in astrophysical condensed matter and dense laboratory plasmas has been extensively studied in the case of usual nuclear reactions. In tenuous plasmas the effect of spectator nuclei and electrons (the environment) on the Gamow rate of reacting nuclei, which are assumed to interact with a bare Coulomb potential, is negligible \cite{Ichimaru}. Moreover, in some (e.g. tokamak-like) devices the presence of impurities during the heat-up and working periods is undesirable because of the high loss power generated by them \cite{Dolan}. In this paper it will be shown, however, that spectator nuclei can significantly enhance nuclear processes and allow new types of reactions. We are going to investigate these new types of processes that can take place due to impurities, and their effect on nuclear fusion devices. We focus our attention on the Coulomb interaction between fuel nuclei and the environment, namely on the consequences of interactions with impurities, which can activate a new type of reaction with a cross section of considerable magnitude. This may change the necessary plasma temperature and, what is more remarkable, the mechanism found does not need a plasma state at all. We investigate the \begin{equation} \text{ }_{z_{1}}^{A_{1}}V+p+\text{ }_{z_{3}}^{A_{3}}X\rightarrow \text{ }_{z_{1}}^{A_{1}}V^{\prime }+\text{ }_{z_{3}+1}^{A_{3}+1}Y+\Delta \label{Reaction 1} \end{equation} process, called impurity assisted proton capture, a process among atoms or atomic ions containing $_{z_{1}}^{A_{1}}V$ nuclei (e.g. $Xe$) and protons or hydrogen atoms and ions or atoms of nuclei $_{z_{3}}^{A_{3}}X$ (e.g. 
deuterons) that are initially supposed to be in a plasma. $\Delta $ is the energy of the reaction. In the normal case proton capture happens in the
\begin{equation}
p+\text{ }_{z_{3}}^{A_{3}}X\rightarrow \text{ }_{z_{3}+1}^{A_{3}+1}Y+\gamma +\Delta  \label{p-capture}
\end{equation}
reaction, where the $\gamma $ emission is required by energy and momentum conservation. Accordingly, $\left( \ref{Reaction 1}\right) $ describes a new type of $p$-capture. In the usual $p$-capture reaction $\left( \ref{p-capture}\right) $ the particles $_{z_{3}+1}^{A_{3}+1}Y$ and $\gamma $ take away the reaction energy, and the reaction is governed by the electromagnetic interaction. In reaction $\left( \ref{Reaction 1}\right) $ the reaction energy is taken away by the particles $_{z_{1}}^{A_{1}}V^{\prime }$ and $_{z_{3}+1}^{A_{3}+1}Y$, while the reaction is governed by the Coulomb as well as the strong interaction. First we pay attention to the impurity assisted $p+d\rightarrow {}^{3}He$ reaction
\begin{equation}
_{z_{1}}^{A_{1}}V+p+d\rightarrow \text{ }_{z_{1}}^{A_{1}}V^{\prime }+\text{ }_{2}^{3}He+5.493\text{ }MeV  \label{Reaction 2}
\end{equation}
in an impurity contaminated plasma, which will be discussed in more detail. The cross section and the rate of process $\left( \ref{Reaction 1}\right) $ can be calculated by the rules of time-independent perturbation theory of quantum mechanics \cite{Landau}. Our results indicate that the cross section of process $\left( \ref{Reaction 1}\right) $, which is an indirect (second order) reaction, may be essentially higher at low energies than the cross section of a usual, direct (first order) reaction, since the huge exponential drop of the cross section $\left( \ref{sigma}\right) $ with decreasing energy disappears from the cross section of process $\left( \ref{Reaction 1}\right) $.
The disappearance of the exponential, energy-dependent factor of $\left( \ref{sigma}\right) $ means that, due to the impurity assisted reactions $\left( \ref{Reaction 1}\right) $, the extra high temperature needed to ignite nuclear fusion in a plasma may be appreciably reduced.
\begin{figure}[tbp]
\resizebox{6.0cm}{!}{\includegraphics*{Fig1.eps}}
\caption{The graphs of impurity assisted nuclear reactions. The single lines represent the (initial (1) and final (1')) impurity particle of the plasma. The double lines represent free, heavy initial (2) particles (such as $p$, $d$), their intermediate state (2'), target nuclei (3) and reaction products (4, 5, 6). The filled dot denotes the Coulomb interaction and the open circle denotes the nuclear (strong) interaction. FIG. 1(a) is a capture process and FIG. 1(b) is a reaction with two fragments.}
\end{figure}
A less precise picture, still abiding by the rules of second order time-independent perturbation theory of quantum mechanics, can help to understand the effect with the aid of graphs (Fig.~1). The physics behind the calculation may be interpreted as follows. The Coulomb interaction between the particles $_{z_{1}}^{A_{1}}V$ and the protons mixes an (intermediate) protonic state of large momentum into the initially slow protonic state with a small amplitude, while the particle $_{z_{1}}^{A_{1}}V$ recoils. Thus the Coulomb interaction pushes the protons (virtually) into an intermediate state. In this state the protons have a large enough (virtual) momentum to get over the Coulomb repulsion of the nuclei $_{z_{3}}^{A_{3}}X$, and so they may be captured by the nuclei $_{z_{3}}^{A_{3}}X$ due to the strong interaction, creating a $_{z_{3}+1}^{A_{3}+1}Y$ nucleus. The particles (impurities) $_{z_{1}}^{A_{1}}V$ (initial) and $_{z_{1}}^{A_{1}}V^{\prime }$ (final) only assist the process. The virtual momentum of the intermediate state can be determined in the following way.
Energy and momentum conservation determine the wave vectors $\mathbf{k}_{1^{\prime }}$ and $\mathbf{k}_{4}$ of particles 1' and 4, respectively, as $\mathbf{k}_{1^{\prime }}=-\mathbf{k}_{4}$ and $\left\vert \mathbf{k}_{1^{\prime }}\right\vert =\left\vert \mathbf{k}_{4}\right\vert =k_{0}$ with $k_{0}=\hbar ^{-1}\sqrt{2m_{0}a_{14}\Delta }$. Here $m_{0}c^{2}=931.494$ $MeV$ is the atomic energy unit and $a_{14}$ is determined by $\left( \ref{ajk}\right) $. (The initial momenta and kinetic energies are assumed to be negligible.) Because of momentum conservation in the Coulomb scattering of plane waves, the wave vector $\mathbf{k}_{2^{\prime }}$ of particle 2' is determined as $\mathbf{k}_{2^{\prime }}=-\mathbf{k}_{1^{\prime }}$, i.e. $\left\vert \mathbf{k}_{2^{\prime }}\right\vert =k_{0}$ too. Consequently $\left\vert \mathbf{k}_{2^{\prime }}\right\vert $ is large enough for particle 2' to effectively overcome the Coulomb repulsion. (For the details of the rigorous calculations see Section III.) A generalization of $\left( \ref{Reaction 1}\right) $ is the reaction
\begin{equation}
_{z_{1}}^{A_{1}}V+\text{ }_{z_{2}}^{A_{2}}w+\text{ }_{z_{3}}^{A_{3}}X\rightarrow \text{ }_{z_{1}}^{A_{1}}V^{\prime }+\text{ }_{z_{3}+z_{2}}^{A_{3}+A_{2}}Y+\Delta  \label{Reaction 3}
\end{equation}
that will be briefly discussed in order to draw conclusions as to the possible modification of appropriate fuels of nuclear fusion reactors by impurity assisted reactions. As a further generalization, the reaction
\begin{equation}
_{z_{1}}^{A_{1}}V+\text{ }_{z_{2}}^{A_{2}}w+\text{ }_{z_{3}}^{A_{3}}X\rightarrow \text{ }_{z_{1}}^{A_{1}}V^{\prime }+\text{ }_{z_{4}}^{A_{4}}Y+\text{ }_{z_{5}}^{A_{5}}W+\Delta  \label{Reaction 4}
\end{equation}
with two final fragments is also considered. The impurity assisted $d(d,n)_{2}^{3}He$, $d(d,p)t$, $d(t,n)_{2}^{4}He$ and $_{2}^{3}He(d,p)_{2}^{4}He$ reactions are numerically investigated and their rate and power densities are also determined. In Section II.
the essential role of the Coulomb factor is discussed. Section III. is devoted to the discussion of the model. The change of the wavefunction in the nuclear range due to the impurity is determined. The transition probability per unit time, the cross section, and the rate and power densities of the impurity assisted $p+d\rightarrow {}^{3}He$ reaction, which is the simplest impurity assisted proton capture reaction in an atomic atom-ionic gas mix, are given. The cross section of impurity assisted reactions with two final fragments is determined, and the effect of plasma-wall interaction on the process is also considered. In Section IV. the rate and power densities in a $p-d-Xe$ atomic atom-ionic gas mix are calculated numerically. Section V. is a partial overview of some other impurity assisted nuclear reactions and gives an account of the estimated power densities of the impurity assisted $d(d,n)_{2}^{3}He$, $d(d,p)t$, $d(t,n)_{2}^{4}He$, $_{2}^{3}He(d,p)_{2}^{4}He$, $_{3}^{6}Li(p,\alpha )_{2}^{3}He$, $_{3}^{7}Li(p,\alpha )_{2}^{4}He$, $_{4}^{9}Be(p,\alpha )_{3}^{6}Li$, $_{4}^{9}Be(p,d)_{4}^{8}Be$, $_{4}^{9}Be(\alpha ,n)_{6}^{12}C$, $_{5}^{10}B(p,\alpha )_{4}^{7}Be$ and $_{5}^{11}B(p,\alpha )_{4}^{8}Be$ reactions. Section VI. is a summary.
\section{Coulomb factor}
The cross section $\left( \sigma \right) $ of usual fusion reactions between particles $2$ and $3$ reads as \cite{Angulo}
\begin{equation}
\sigma \left( E\right) =S\left( E\right) \exp \left[ -2\pi \eta _{23}\left( E\right) \right] /E,  \label{sigma}
\end{equation}
where $E$ is the energy taken in the center of mass $\left( CM\right) $ coordinate system and
\begin{equation}
\eta _{23}=z_{2}z_{3}\alpha _{f}\frac{a_{23}m_{0}c}{\hbar \left\vert \mathbf{k}\right\vert }=z_{2}z_{3}\alpha _{f}\sqrt{a_{23}\frac{m_{0}c^{2}}{2E}}  \label{etajk}
\end{equation}
is the Sommerfeld parameter.
Here $\mathbf{k}$ is the wave number vector of particles $2$ and $3$ in their relative motion, $\hbar $ is the reduced Planck constant, $c$ is the velocity of light in vacuum and
\begin{equation}
a_{jk}=\frac{A_{j}A_{k}}{A_{j}+A_{k}}  \label{ajk}
\end{equation}
is the reduced mass number of particles $j$ and $k$ of mass numbers $A_{j}$ and $A_{k}$ and rest masses $m_{j}=A_{j}m_{0}$, $m_{k}=A_{k}m_{0}$. $m_{0}c^{2}=931.494$ $MeV$ is the atomic energy unit and $\alpha _{f}$ is the fine structure constant. The cross section $\left( \ref{sigma}\right) $ can be derived applying an approximate form
\begin{equation}
\varphi _{Cb,a}(\mathbf{r})=f_{23}(k\left[ E\right] )\exp (i\mathbf{k}\cdot \mathbf{r})/\sqrt{V}  \label{Cbapp}
\end{equation}
of the Coulomb solution $\varphi _{Cb}(\mathbf{r})$ \cite{Alder}, valid in the nuclear volume, that produces the same contact probability density $\left\vert \varphi _{Cb,a}(\mathbf{0})\right\vert ^{2}=f_{23}^{2}/V$ as $\varphi _{Cb}(\mathbf{r})$. Here $\mathbf{r}=\mathbf{r}_{2}-\mathbf{r}_{3}$ is the relative coordinate of particles $2$ and $3$ of coordinates $\mathbf{r}_{2}$ and $\mathbf{r}_{3}$, and $f_{23}(k)$ is the Coulomb factor
\begin{equation}
f_{23}=\left\vert e^{-\pi \eta _{23}/2}\Gamma (1+i\eta _{23})\right\vert =\sqrt{\frac{2\pi \eta _{23}}{\exp \left( 2\pi \eta _{23}\right) -1}}  \label{Fjk}
\end{equation}
corresponding to particles $2$ and $3$. The cross section is proportional to $f_{23}^{2}$, and one can show that the exponential factor in $\left( \ref{sigma}\right) $ comes from $f_{23}^{2}(E)$. Thus the smallness of the rate at low energies is the consequence of $f_{23}^{2}(E)$ becoming very small at low energies. So the magnitude of the Coulomb factor $f_{23}(E)$ is crucial from the point of view of the magnitude of the cross section.
\section{Model of impurity assisted nuclear reactions in an atomic atom-ionic gas mix}
\subsection{Change of the wavefunction in the nuclear range}
We focus on the modification of nuclear reactions in a plasma.
At first, capture processes are dealt with. It is supposed that all components are in atomic, atom-ionic or fully ionized states, while the necessary number of electrons required for electric neutrality is also present. Let us take three screened charged particles of rest masses $m_{j}$ ($j=1,2,3$). The particles are heavy and have nuclear charges $z_{j}e$, with $e$ the elementary charge and $z_{j}$ the charge number. (The coupling strength is $e^{2}=\alpha _{f}\hbar c$.) The total Hamiltonian which describes this 3-body system is
\begin{equation}
H_{tot}=H_{1}+H_{23}+V_{Cb}(1,2)+V_{Cb}(1,3),  \label{Htot}
\end{equation}
where $H_{1}=H_{kin,1}$ is the Hamiltonian of particle $1$, which is considered to be free ($H_{kin,j}$ denotes the kinetic Hamiltonian of particle $j$), and
\begin{equation}
H_{23}=H_{kin,2}+H_{kin,3}+V_{Cb}(2,3)  \label{H23}
\end{equation}
is the Hamiltonian of particles $2$ and $3$. Their nuclear reaction will be discussed later. Here and in $\left( \ref{Htot}\right) $ $V_{Cb}(j,k)$ denotes the screened Coulomb interaction between particles $j$ and $k$ with screening parameter $q_{sc,jk}$, which has the form, in the coordinate representation,
\begin{equation}
V_{Cb}\left( j,k\right) =\frac{z_{j}z_{k}e^{2}}{2\pi ^{2}}\int \frac{\exp (i\mathbf{q}\cdot \mathbf{r}_{jk}\mathbf{)}}{q^{2}+q_{sc,jk}^{2}}d\mathbf{q},  \label{Vcb1}
\end{equation}
where $\mathbf{r}_{jk}=\mathbf{r}_{j}-\mathbf{r}_{k}$ is the relative coordinate between particles $j$ and $k$ of coordinates $\mathbf{r}_{j}$ and $\mathbf{r}_{k}$. $H_{kin,2}$ and $H_{kin,3}$ are the kinetic Hamiltonians of particles $2$ and $3$.
It is supposed that the stationary solutions $\left\vert 1\right\rangle $ and $\left\vert 2,3\right\rangle _{sc}$ of energy eigenvalues $E_{1}$ and $E_{23}$ of the stationary Schr\"{o}dinger equations $H_{1}\left\vert 1\right\rangle =E_{1}\left\vert 1\right\rangle $, with $E_{1}$ the kinetic energy of particle $1$, and $H_{23}\left\vert 2,3\right\rangle _{sc}=E_{23}\left\vert 2,3\right\rangle _{sc}$, with $E_{23}=E_{CM}+E_{rel}$, are known. Here $E_{CM}$ and $E_{rel}$ are the energies attached to the center of mass $\left( CM\right) $ and relative motions of particles $2$ and $3$. Thus $H_{tot}$ can be written as $H_{tot}=H_{0}+H_{Int}$ with $H_{0}=H_{1}+H_{23}$ as the unperturbed Hamiltonian and
\begin{equation}
H_{Int}=V_{Cb}(1,2)+V_{Cb}(1,3)  \label{Hint}
\end{equation}
as the interaction Hamiltonian (perturbation). The stationary solution $\left\vert 1,2,3\right\rangle _{sc}$ of $H_{0}\left\vert 1,2,3\right\rangle _{sc}=E_{0}\left\vert 1,2,3\right\rangle _{sc}$ with $E_{0}=E_{1}+E_{23}$ can be written as $\left\vert 1,2,3\right\rangle _{sc}=\left\vert 1\right\rangle \left\vert 2,3\right\rangle _{sc}$, which is the direct product of the states $\left\vert 1\right\rangle $ and $\left\vert 2,3\right\rangle _{sc}$. The states $\left\vert 1,2,3\right\rangle _{sc}$ form an orthonormal complete system. The approximate solution of $H_{tot}\left\vert 1,2,3\right\rangle _{sc}=E_{0}\left\vert 1,2,3\right\rangle _{sc}$ in the screened case is obtained with the aid of standard time-independent perturbation theory \cite{Landau}, and the first order approximation is expanded in terms of the complete system. The terms which differ from the initial state, and which are considered to be intermediate from the point of view of the next step of the perturbation calculation, which takes into account the strong interaction, will be called intermediate states.
The solution of $H_{23}\left\vert 2,3\right\rangle =E_{23}\left\vert 2,3\right\rangle $ in the unscreened case is known, and the coordinate representation $\langle \mathbf{R,r}\left\vert 2,3\right\rangle =\varphi _{23}\left( \mathbf{R,r}\right) $ of $\left\vert 2,3\right\rangle $ has the form
\begin{equation}
\varphi _{23}\left( \mathbf{R,r}\right) =\frac{e^{i\mathbf{KR}}}{\sqrt{V}}\varphi _{Cb}(\mathbf{r}),  \label{solution23}
\end{equation}
where $\mathbf{K}$, $\mathbf{R}=\left( m_{2}\mathbf{r}_{2}+m_{3}\mathbf{r}_{3}\right) /\left( m_{2}+m_{3}\right) $ and $\mathbf{r=r}_{23}$ are the wave vector of the $CM$ motion and the $CM$ and relative coordinates of particles $2$ and $3$, respectively, $V$ denotes the volume of normalization and
\begin{equation}
\varphi _{Cb}(\mathbf{r})=e^{i\mathbf{k}\cdot \mathbf{r}}f(\mathbf{k,r})/\sqrt{V}  \label{Cb1}
\end{equation}
is the unscreened Coulomb solution \cite{Alder}. Here $\mathbf{k}$ is the wave number vector of particles $2$ and $3$ in their relative motion and $f(\mathbf{k},\mathbf{r})=e^{-\pi \eta _{23}/2}\Gamma (1+i\eta _{23})\,_{1}F_{1}(-i\eta _{23},1;i[kr-\mathbf{k}\cdot \mathbf{r}])$, where $_{1}F_{1}$ is the confluent hypergeometric function, $\Gamma $ is the Gamma function and $\eta _{23}$ is the Sommerfeld parameter; furthermore
\begin{equation}
E_{rel}=\frac{\hbar ^{2}\mathbf{k}^{2}}{2m_{0}a_{23}}  \label{Erel}
\end{equation}
and
\begin{equation}
E_{CM}=\frac{\hbar ^{2}\mathbf{K}^{2}}{2m_{0}\left( A_{2}+A_{3}\right) }.  \label{ECM}
\end{equation}
The two important limits of the eigenstate $\left\vert 2,3\right\rangle _{sc}$ in the case of the screened Coulomb potential are the solution in the nuclear volume and the solution in the screened regime. (In the screened case the coordinate representation $\langle \mathbf{R,r}\left\vert 2,3\right\rangle _{sc}$ is denoted by $\varphi _{23}\left( \mathbf{R,r}\right) _{sc}$.)
In the screened case and in the nuclear volume the approximate form $\left( \ref{Cbapp}\right) $ $\left( \varphi _{Cb,a}(\mathbf{r})=e^{i\mathbf{k}\cdot \mathbf{r}}f_{23}(k)/\sqrt{V}\right) $ of the (unscreened) Coulomb solution $\left( \ref{Cb1}\right) $ may be used. Here $f_{23}(k)$ is the appropriate Coulomb factor corresponding to particles $2$ and $3$. Thus
\begin{equation}
\varphi _{23}\left( \mathbf{R,r,}nucl\right) _{sc}=\frac{e^{i\mathbf{KR}}}{\sqrt{V}}\varphi _{Cb,a}(\mathbf{k},\mathbf{r})  \label{fi-nucl}
\end{equation}
is used in the range of the nucleus and in the calculation of the nuclear matrix element. The other important limit of $\left\vert 2,3\right\rangle _{sc}$ is the screened (outer) range, where $\varphi _{23}\left( \mathbf{R,r}\right) _{sc}$ becomes
\begin{equation}
\varphi _{23}\left( \mathbf{R,r,}out\right) _{sc}=\frac{e^{i\mathbf{KR}}e^{i\mathbf{kr}}}{V}  \label{fi-sc}
\end{equation}
of the same energy eigenvalue $E_{23}=E_{CM}+E_{rel}$. It is used in the calculation of the Coulomb matrix element. From the point of view of the processes investigated, the initial state of negligible wave number and energy $\left( E_{0}=E_{i}=0\right) $ can be written as $\varphi _{i}=V^{-3/2}$ for particles $1$, $2$ and $3$ that are somewhere in the normalization volume. The intermediate states of particles $2$ and $3$ are determined by the wave number vectors $\mathbf{K}$ and $\mathbf{k}$. In the case of the assisting particle $1$ the intermediate state is a plane wave of wave number vector $\mathbf{k}_{1}$.
The matrix elements $V_{Cb,\nu i}$ of the screened Coulomb potential between the initial and intermediate states are
\begin{eqnarray}
V_{Cb}(1,s)_{\nu i} &=&\frac{z_{1}z_{s}}{2\pi ^{2}}e^{2}\frac{\left( 2\pi \right) ^{9}}{V^{3}}\delta \left( \mathbf{k}_{1}+\mathbf{K}\right) \times  \label{VCb1snui} \\
&&\times \frac{\delta \left( \mathbf{k}+a(s)\mathbf{k}_{1}\right) }{\mathbf{k}_{1}^{2}+q_{sc,1s}^{2}},  \notag
\end{eqnarray}
where
\begin{equation*}
a(s)=\frac{-A_{3}\delta _{s,2}+A_{2}\delta _{s,3}}{A_{2}+A_{3}}\text{ and }s=2,3,
\end{equation*}
which, according to standard time-independent perturbation theory of quantum mechanics \cite{Landau}, determine the first order change of the wavefunction in the range $r\lesssim R_{0}$ ($R_{0}$ is the nuclear radius of particle $3$) due to the Coulomb perturbation as
\begin{equation}
\delta \varphi \left( \mathbf{r}\right) =\sum_{s=2,3}\delta \varphi \left( s,\mathbf{r}\right)  \label{dfi}
\end{equation}
with
\begin{eqnarray}
\delta \varphi \left( s,\mathbf{r}\right) &=&\int \int \frac{V_{Cb}(1,s)_{\nu i}}{E_{\nu }-E_{i}}\frac{V}{\left( 2\pi \right) ^{6}}\times  \label{dfis} \\
&&\times e^{i(\mathbf{KR}+\mathbf{k}_{1}\mathbf{r}_{1})}\varphi _{Cb,a}(\mathbf{k},\mathbf{r})d\mathbf{K}d\mathbf{k},  \notag
\end{eqnarray}
where $E_{i}$ and $E_{\nu }$ are the kinetic energies in the initial and intermediate states, respectively.
The initial momenta and kinetic energies of particles $1$, $2$ and $3$ are neglected $\left( E_{i}=0\right) $ and
\begin{equation}
E_{\nu }(\mathbf{K},\mathbf{k})=\frac{\hbar ^{2}\mathbf{K}^{2}}{2m_{0}(A_{2}+A_{3})}+\frac{\hbar ^{2}\mathbf{k}^{2}}{2m_{0}a_{23}}+\frac{\hbar ^{2}\mathbf{k}_{1}^{2}}{2m_{0}A_{1}}.  \label{Em}
\end{equation}
Thus
\begin{eqnarray}
\delta \varphi \left( s,\mathbf{r}\right) &=&z_{1}z_{s}\alpha _{f}\frac{4\pi \hbar c}{V^{5/2}}\frac{e^{i(\mathbf{k}_{1}\mathbf{r}_{1}-\mathbf{k}_{1}\mathbf{R})}}{\mathbf{k}_{1}^{2}+q_{sc,1s}^{2}}\times  \label{dfi2} \\
&&\times \frac{2m_{0}a_{1s}}{\hbar ^{2}\mathbf{k}_{1}^{2}}\left[ f_{23}\left( k\right) e^{i\mathbf{kr}}\right] _{\mathbf{k}=a(s)\mathbf{k}_{1}}.  \notag
\end{eqnarray}
It can be seen that the arguments of $f_{23}\left( k\right) $ are $\frac{A_{3}}{A_{2}+A_{3}}k_{1}$ and $\frac{A_{2}}{A_{2}+A_{3}}k_{1}$, where $k_{1}=\left\vert \mathbf{k}_{1}\right\vert $. Consequently, if particle $1$ obtains a large kinetic energy, as is the case in the nuclear reaction, then the Coulomb factors $f_{23}\left( k\right) $, and with them the rate of the process, will considerably increase. Furthermore, $\delta \varphi \left( \mathbf{r}\right) $, which causes the effect, is temperature independent. Up to this point the calculation and the results are independent of the nuclear reaction and of the nuclear model.
\subsection{Transition probability per unit time and cross section of $p$-capture}
Now we can calculate the rate of the nuclear reaction due to the modification caused by particle $1$. (The intermediate and final states of particle $1$ are identical.)
The Hamiltonian $V_{st}(2,3)$ of the strong interaction, which is responsible for nuclear reactions between particles $2$ and $3$, is
\begin{eqnarray}
V_{st}\left( 2,3\right) &=&-V_{0}\text{ \ if }\left\vert \mathbf{r}_{23}\right\vert =\left\vert \mathbf{r}\right\vert \leq b\text{ and}  \label{Vst1} \\
V_{st}\left( 2,3\right) &=&0\text{ \ if }\left\vert \mathbf{r}_{23}\right\vert =\left\vert \mathbf{r}\right\vert >b.  \notag
\end{eqnarray}
We take $V_{0}=25$ $MeV$ and $b=2\times 10^{-13}$ $cm$ \cite{Blatt} in the case of the $pd$ reaction. The final state of particle $1$ is a plane wave of wave number $\mathbf{k}_{1}$ and of kinetic energy $E_{1f}=\hbar ^{2}\mathbf{k}_{1}^{2}/\left( 2m_{1}\right) $. The final state of the captured proton has the form
\begin{equation}
\varphi _{4}(\mathbf{R},\mathbf{r})=e^{i\mathbf{k}_{4}\cdot \mathbf{R}}\Phi _{f}\left( \mathbf{r}\right) /\sqrt{V},  \label{fi4}
\end{equation}
where $\Phi _{f}\left( \mathbf{r}\right) $ is the final nuclear state of the proton in particle $4$ and $\mathbf{k}_{4}$ is the wave vector of particle $4$, which has kinetic energy $E_{4f}=\hbar ^{2}\mathbf{k}_{4}^{2}/\left( 2m_{4}\right) $. The Weisskopf approximation is used, i.e. $\Phi _{f}\left( \mathbf{r}\right) =\Phi _{fW}\left( \mathbf{r}\right) $ with
\begin{equation}
\Phi _{fW}\left( \mathbf{r}\right) =\sqrt{\frac{3}{4\pi R_{0}^{3}}}  \label{fi4W}
\end{equation}
if $r\leq R_{0}$, where $R_{0}$ is the nuclear radius, and $\Phi _{fW}\left( \mathbf{r}\right) =0$ for $r>R_{0}$. $V_{st,f\nu }$ is the matrix element of the potential of the strong interaction between the intermediate $\left( e^{i\mathbf{KR}}\varphi _{Cb,a}(\mathbf{k},\mathbf{r})/\sqrt{V}\right) $ and final $\left( e^{i\mathbf{k}_{4}\cdot \mathbf{R}}\Phi _{f}\left( \mathbf{r}\right) /\sqrt{V}\right) $ states.
Since $\Phi _{fW}\left( \mathbf{r}\right) $ and $V_{st}\left( \mathbf{r}\right) $ both have spherical symmetry, only the spherical term $\sin \left( kr\right) /\left( kr\right) $ of $e^{i\mathbf{k}\cdot \mathbf{r}}$, which is present in $\varphi _{Cb,a}(\mathbf{k},\mathbf{r})$, remains in the nuclear matrix element. With the aid of the above wave functions and the $b=R_{0}$ assumption,
\begin{equation}
V_{st,f\nu }^{W}=-V_{0}\frac{\sqrt{12\pi R_{0}}}{k}f_{23}(k)H\left( k\right) \frac{\left( 2\pi \right) ^{3}}{V^{3/2}}\delta \left( \mathbf{K}-\mathbf{k}_{4}\right) ,  \label{Vstf2'-2}
\end{equation}
where
\begin{equation}
H\left( k\right) =\int_{0}^{1}\sin (kR_{0}x)xdx.  \label{I10}
\end{equation}
According to standard time-independent perturbation theory of quantum mechanics, the transition probability per unit time $\left( W_{fi}^{(2)}\right) $ of the process can be written as
\begin{equation}
W_{fi}^{(2)}=\frac{2\pi }{\hbar }\int \int \left\vert T_{fi}^{(2)}\right\vert ^{2}\delta (E_{f}-\Delta )\frac{V^{2}}{\left( 2\pi \right) ^{6}}d\mathbf{k}_{1}d\mathbf{k}_{4}  \label{Wfie}
\end{equation}
with
\begin{equation}
T_{fi}^{(2)}=\int \int \sum_{s=2,3}\frac{V_{st,f\nu }V_{Cb}(1,s)_{\nu i}}{E_{\nu }-E_{i}}\frac{V^{2}}{\left( 2\pi \right) ^{6}}d\mathbf{K}d\mathbf{k}.  \label{Tif}
\end{equation}
Collecting everything obtained above, substituting $\left( \ref{Tif}\right) $ into $\left( \ref{Wfie}\right) $ and neglecting $q_{sc,jk}^{2}$, since $q_{sc,jk}^{2}\ll \mathbf{k}_{1}^{2}=k_{0}^{2}$ with
\begin{equation}
k_{0}=\hbar ^{-1}\sqrt{2m_{0}a_{14}\Delta }  \label{k0}
\end{equation}
determined by energy conservation, one can calculate $W_{fi}^{(2)}$.
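The key quantitative point, that the Coulomb factor evaluated at the virtual wave number $\left\vert a(s)\right\vert k_{0}$ is of order unity while at thermal relative energies it is vanishingly small, can be checked numerically. The following minimal sketch (in Python) evaluates $k_{0}$ of $\left( \ref{k0}\right) $ and $f_{23}$ of $\left( \ref{Fjk}\right) $ for the $Xe$-assisted $p+d\rightarrow {}^{3}He$ reaction $\left( \ref{Reaction 2}\right) $; the function names and the constants $\hbar c=197.327$ $MeV\,fm$, $m_{0}c^{2}=931.494$ $MeV$, $\alpha _{f}\approx 1/137.036$ and $A_{1}=131$ for $Xe$ are our choices, not taken from the text.

```python
import math

ALPHA_F = 1.0 / 137.036   # fine structure constant
M0C2 = 931.494            # atomic energy unit, MeV
HBARC = 197.327           # hbar*c, MeV*fm

def f23(z2, z3, a23, k):
    """Coulomb factor of Eq. (Fjk) at relative wave number k (1/fm)."""
    eta = z2 * z3 * ALPHA_F * a23 * M0C2 / (HBARC * k)  # Sommerfeld parameter
    x = 2.0 * math.pi * eta
    return math.sqrt(x / math.expm1(x))

# p (particle 2) + d (particle 3) assisted by Xe (particle 1); 3He is particle 4.
A1, A2, A3, A4 = 131.0, 1.0, 2.0, 3.0
delta = 5.493                                      # reaction energy, MeV
a14 = A1 * A4 / (A1 + A4)                          # Eq. (ajk)
a23 = A2 * A3 / (A2 + A3)
k0 = math.sqrt(2.0 * M0C2 * a14 * delta) / HBARC   # Eq. (k0), in 1/fm

# Coulomb factors at the virtual wave numbers |a(s)| k0, the arguments
# appearing in Eq. (dfi2):
f_s2 = f23(1, 1, a23, (A3 / (A2 + A3)) * k0)       # s = 2
f_s3 = f23(1, 1, a23, (A2 / (A2 + A3)) * k0)       # s = 3

# For comparison: f23 at a thermal relative energy of 1 keV.
k_th = math.sqrt(2.0 * M0C2 * a23 * 1e-3) / HBARC
f_th = f23(1, 1, a23, k_th)
# f_s2 and f_s3 are of order unity, while f_th**2 is of order 1e-10:
# the virtual momentum removes the exponential (Gamow) suppression.
```

The design point is that the same function $f_{23}$ is used in both regimes; only its argument changes from the thermal to the virtual wave number.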
The cross section $\sigma _{23}^{\left( 2\right) }$ of the process is defined as
\begin{equation}
\sigma _{23}^{\left( 2\right) }=\frac{N_{1}W_{fi}^{(2)}}{\frac{v_{23}}{V}},  \label{sigma23}
\end{equation}
where $N_{1}$ is the number of particles $1$ in the normalization volume $V$ and $v_{23}/V$ is the flux of particle $2$ of relative velocity $v_{23}$. Thus
\begin{equation}
v_{23}\sigma _{23}^{\left( 2\right) }=n_{1}S_{pd},  \label{sigma23-2}
\end{equation}
where $n_{1}=N_{1}/V$ is the number density of particles $1$ and
\begin{eqnarray}
S_{pd} &=&24\pi ^{2}\sqrt{2}cR_{0}\frac{z_{1}^{2}\alpha _{f}^{2}V_{0}^{2}\left( \hbar c\right) ^{4}}{\Delta ^{9/2}\left( m_{0}c^{2}\right) ^{3/2}}\times  \label{SRANR} \\
&&\times \frac{\left( A_{2}+A_{3}\right) ^{2}}{a_{14}^{7/2}}\left[ F\left( 2\right) +F\left( 3\right) \right] ^{2}  \notag
\end{eqnarray}
with
\begin{equation}
F(s)=\frac{z_{s}a_{1s}}{A_{3}\delta _{s,2}+A_{2}\delta _{s,3}}f_{23}\left[ a(s)k_{0}\right] H\left[ a(s)k_{0}\right] .  \label{Fs}
\end{equation}
The magnitude of the quantities $f_{23}\left[ a(s)k_{0}\right] $, $s=2,3$, mainly determines the magnitude of the rate and power densities.
\subsection{Rate and power densities}
The rate $dN_{pd}/dt$ in the whole volume $V$ can be written as
\begin{equation}
\frac{dN_{pd}}{dt}=N_{3}\Phi _{23}\sigma _{23}^{\left( 2\right) },  \label{dNfdt}
\end{equation}
where $\Phi _{23}=n_{2}v_{23}$ is the flux of particles $2$, with $n_{2}$ their number density ($n_{2}=N_{2}/V$), and $N_{3}$ is the number of particles $3$ in the normalization volume. The rate density $r_{pd}=dn_{pd}/dt=V^{-1}dN_{pd}/dt$ of the process can be written as
\begin{equation}
r_{pd}=\frac{dn_{pd}}{dt}=n_{3}n_{2}n_{1}S_{pd},  \label{dnfdt}
\end{equation}
where $n_{3}$ is the number density of particles $3$ ($n_{3}=N_{3}/V$). The power density is defined as
\begin{equation}
p_{pd}=\Delta \frac{dn_{pd}}{dt}=n_{1}n_{2}n_{3}P_{pd}  \label{powerdensity}
\end{equation}
with $P_{pd}=S_{pd}\Delta $.
The rate and power densities ($dn_{pd}/dt$ and $p_{pd}$) are temperature independent.
\subsection{Cross section of reactions with two final fragments in the long wavelength approximation}
In the case of reactions with two final fragments (see Fig.~1(b) and $\left( \ref{Reaction 4}\right) $) the nuclear matrix element can be derived from the $S(E)$ quantity of $\left( \ref{sigma}\right) $, i.e. in the long wavelength approximation from $S(0)$, which is the astrophysical factor at $E=0$. It can be done in the following manner. The transition probability per unit time $W_{fi}^{(1)}$ of the usual (first order) process is calculated in the standard manner:
\begin{equation*}
W_{fi}^{(1)}=\int \frac{2\pi }{\hbar }\left\vert V_{st,fi}\right\vert ^{2}\delta \left( E_{f}-\Delta \right) \frac{V}{\left( 2\pi \right) ^{3}}d\mathbf{k}_{f},
\end{equation*}
where $\mathbf{k}_{f}$ is the relative wave number of the two fragments of rest masses $m_{4}=m_{0}A_{4}$, $m_{5}=m_{0}A_{5}$ and mass numbers $A_{4}$, $A_{5}$, $E_{f}=\hbar ^{2}\mathbf{k}_{f}^{2}/(2m_{0}a_{45})$ is the sum of their kinetic energies, and the nuclear matrix element is $V_{st,fi}$, having the form $\left\vert V_{st,fi}\right\vert =f_{23}\left( k_{i}\right) h_{fi}/V$. Here $f_{23}\left( k_{i}\right) $ is the Coulomb factor of the initial particles $2$ and $3$, with $k_{i}$ the magnitude of their relative wave number vector $\mathbf{k}_{i}$. (The Coulomb factor of the final particles $4$ and $5$, with $k_{f}$ the magnitude of their relative wave number vector $\mathbf{k}_{f}$, is $f_{45}\left( k_{f}\right) \approx 1$.) It is supposed that $h_{fi}$ does not depend on $\mathbf{k}_{i}$ and $\mathbf{k}_{f}$, namely the long wavelength approximation is used.
In this case the product of the relative velocity $v_{23}$ of the initial particles $2$, $3$ and the cross section $\sigma _{23}^{\left( 1\right) }$ is
\begin{equation}
v_{23}\sigma _{23}^{\left( 1\right) }=\frac{\left\vert h_{fi}\right\vert ^{2}}{\pi \hbar }f_{23}^{2}\left( k_{i}\right) \frac{\left( m_{0}a_{45}\right) ^{3/2}}{\hbar ^{3}}\sqrt{2\Delta }.  \label{v23sigma23}
\end{equation}
On the other hand, $v_{23}\sigma _{23}^{\left( 1\right) }$ can be expressed with the aid of $\left( \ref{sigma}\right) $ and $v_{23}=\sqrt{2E/\left( m_{0}a_{23}\right) }$. From the equality of the two forms of $v_{23}\sigma _{23}^{\left( 1\right) }$ one gets
\begin{equation}
\left\vert h_{fi}\right\vert ^{2}=\frac{\left( \hbar c\right) ^{4}S(0)}{z_{2}z_{3}\alpha _{f}\left( m_{0}c^{2}\right) ^{5/2}\sqrt{2\Delta }a_{45}^{3/2}a_{23}}.  \label{hfi2}
\end{equation}
In the case of the impurity assisted, second order process $\left\vert V_{st,f\nu }\right\vert =f_{23}\left( k\right) \left\vert h_{fi}\right\vert \left( 2\pi \right) ^{3}\delta \left( \mathbf{K}-\mathbf{K}_{f}\right) /V^{2}$, where $\mathbf{K}_{f}$ and $\mathbf{k}_{f}$ are the final wave number vectors attached to the $CM$ and relative motions of the two final fragments, particles $4$ and $5$. $\mathbf{k}_{f}$ appears in $E_{f}$ in the energy Dirac delta. Repeating the calculation of the transition probability per unit time of the impurity assisted, second order process, applying the above expression of $\left\vert V_{st,f\nu }\right\vert $, one gets
\begin{equation}
v_{23}\sigma _{23}^{\left( 2\right) }=n_{1}S_{^{\prime }reaction^{\prime }},  \label{v23sigma23-2}
\end{equation}
where
\begin{equation}
S_{^{\prime }reaction^{\prime }}=\frac{8\alpha _{f}^{2}z_{1}^{2}}{a_{23}a_{123}^{3}}\frac{S(0)c}{m_{0}c^{2}}\left( \frac{\hbar c}{\Delta }\right) ^{3}I  \label{result2}
\end{equation}
with
\begin{equation}
I=\int_{0}^{1}\left( \sum_{s=2,3}\frac{z_{s}a_{1s}\sqrt{A_{s}}}{\sqrt{e^{b_{23}A_{s}\frac{1}{x}}-1}}\right) ^{2}\frac{\sqrt{1-x^{2}}}{x^{7}}dx.
\label{Iintcharged}
\end{equation}
Here $b_{23}=2\pi z_{2}z_{3}\alpha _{f}\sqrt{m_{0}c^{2}/\left( 2a_{123}\Delta \right) }$ with $a_{123}=A_{1}\left( A_{2}+A_{3}\right) /\left( A_{1}+A_{2}+A_{3}\right) $. The index $^{\prime }reaction^{\prime }$ marks the reaction producing the two fragments.
\subsection{Cross section of reactions with two final fragments beyond the long wavelength approximation}
If $\left\vert h_{fi}\right\vert $ has a $\mathbf{k}_{1}$ dependence, then it is manifested through the relative energy $\left( E\right) $ dependence of the astrophysical factor $\left[ S(E)\right] $, and it can be expressed as
\begin{equation}
\left\vert h_{fi}\left( k_{1}\right) \right\vert =\sqrt{\frac{\left( \hbar c\right) ^{4}S\left[ E(k_{1})\right] }{z_{2}z_{3}\alpha _{f}\left( m_{0}c^{2}\right) ^{5/2}\sqrt{2\Delta }a_{45}^{3/2}a_{23}}},  \label{hfiE}
\end{equation}
where
\begin{equation}
E(k_{1})=\left[ \frac{\hbar ^{2}\mathbf{k}^{2}}{2m_{0}a_{23}}\right] _{\mathbf{k}=a(s)\mathbf{k}_{1}}=\frac{\hbar ^{2}a^{2}(s)k_{1}^{2}}{2m_{0}a_{23}}.  \label{E-S(E)-ben}
\end{equation}
Consequently
\begin{equation}
S_{^{\prime }reaction^{\prime }}=\frac{8\alpha _{f}^{2}z_{1}^{2}}{a_{23}a_{123}^{3}}\frac{S(0)c}{m_{0}c^{2}}\left( \frac{\hbar c}{\Delta }\right) ^{3}J  \label{result 3}
\end{equation}
with
\begin{eqnarray}
J &=&\int_{0}^{1}\left( \sum_{s=2,3}\frac{z_{s}a_{1s}\sqrt{A_{s}}\sqrt{\frac{S\left[ E(s,x)\right] }{S(0)}}}{\sqrt{e^{b_{23}A_{s}\frac{1}{x}}-1}}\right) ^{2}  \label{Jint} \\
&&\times \frac{\sqrt{1-x^{2}}}{x^{7}}dx.  \notag
\end{eqnarray}
The argument of $S(E)$ in the integrand is
\begin{equation}
E(s,x)=\Delta \frac{a_{123}}{a_{23}}a^{2}(s)x^{2}.  \label{Esx}
\end{equation}
\subsection{Atomic atom-ionic gas mix and wall interaction}
It is plausible to extend the investigation to the possible consequences of plasma-wall interaction.
The role of particle $1$ is played by the wall, which is supposed to be a solid (metal) composed of atoms with nuclei of charge and mass numbers $z_{1}$ and $A_{1}$. For the initial state a Bloch function of the form
\begin{equation}
\varphi _{\mathbf{k}_{1,i}}(\mathbf{r}_{1})=\frac{1}{\sqrt{N_{1}}}\sum_{\mathbf{L}}e^{i\mathbf{k}_{1,i}\cdot \mathbf{L}}a(\mathbf{r}_{1}-\mathbf{L}),  \label{Bloch}
\end{equation}
is taken, which is localized around all of the lattice points \cite{Ziman}. Here $\mathbf{r}_{1}$ is the coordinate, $\mathbf{k}_{1,i}$ is a wave number vector of the first Brillouin zone ($BZ$) of the reciprocal lattice, and $a(\mathbf{r}_{1}-\mathbf{L})$ is the Wannier function, which is independent of $\mathbf{k}_{1,i}$ within the $BZ$ and is well localized around the lattice site $\mathbf{L}$. $N_{1}$ is the number of lattice points of the lattice of particles $1$. Repeating the transition probability per unit time and cross section calculations applying $\left( \ref{Bloch}\right) $ (after a lengthy calculation which is omitted here), it is obtained that the cross section results (formulae $\left( \ref{sigma23-2}\right) $, $\left( \ref{SRANR}\right) $ and $\left( \ref{Fs}\right) $ in the case of proton capture, and $\left( \ref{result2}\right) $, $\left( \ref{Iintcharged}\right) $ and $\left( \ref{result 3}\right) $, $\left( \ref{Jint}\right) $ in the case of reactions with two final fragments) remain unchanged, with $n_{1}=N_{1c}/v_{c}$, where $v_{c}$ is the volume of the elementary cell of the solid and $N_{1c}$ is the number of particles $1$ in the elementary cell.
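For a concrete sense of the scale of $n_{1}=N_{1c}/v_{c}$ in the wall case, the following sketch evaluates it for a hypothetical bcc metal; the material values (tungsten-like, $2$ atoms per conventional cell, lattice constant $\approx 3.165\times 10^{-8}$ $cm$) are our illustrative assumptions, not taken from the text.

```python
# Number density n1 = N1c / vc of wall nuclei seen by the reaction.
# Illustrative material values (bcc, tungsten-like); assumed, not from the text.
a_lattice = 3.165e-8       # lattice constant of the conventional cell, cm
N1c = 2                    # atoms per conventional bcc cell
v_c = a_lattice ** 3       # volume of the elementary cell, cm^3
n1 = N1c / v_c             # ~6e22 cm^-3, a typical metallic number density
```

Such a solid-state density is roughly two orders of magnitude above the gas-mix densities used in the numerical example of Section IV.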
\section{Rate and power densities in a $p-d-Xe$ atomic atom-ionic gas mix}
The reaction
\begin{equation}
p+d\rightarrow \text{ }_{2}^{3}He+\gamma +5.493\text{ }MeV  \label{normal-pd}
\end{equation}
is not suitable for energy production, since its cross section (the $S(0)$ value, see \cite{Angulo}) is rather small compared with other candidate reactions, only a minor part of the reaction energy $\Delta =5.493$ $MeV=8.800\times 10^{-13}$ $J$ is taken away by $^{3}He$ ($E_{^{3}He}=\Delta ^{2}/\left( 6m_{0}c^{2}\right) =5.4$ $keV$), and the main part, $E_{\gamma }=5.488$ $MeV$, is taken away by $\gamma $ radiation, which is difficult to convert to heat. However, in reaction $\left( \ref{Reaction 2}\right) $ the reaction energy is taken away by the particles $_{2}^{3}He$ and $_{z_{1}}^{A_{1}}V^{\prime }$ as their kinetic energy, which they can lose within a very short range to their environment, converting the reaction energy efficiently into heat. Therefore the direct observation of $_{2}^{3}He$ and $_{z_{1}}^{A_{1}}V^{\prime }$ is hard. The rate $\left( dn_{pd}/dt\right) $ and power $\left( \Delta dn_{pd}/dt\right) $ densities of the impurity assisted $p+d\rightarrow {}_{2}^{3}He$ reaction are determined by $\left( \ref{dnfdt}\right) $ and $\left( \ref{powerdensity}\right) $ with
\begin{equation}
S_{pd}=1.89\times 10^{-53}z_{1}^{2}\text{ }cm^{6}s^{-1},  \label{S}
\end{equation}
where $z_{1}$ is the charge number of the assisting nucleus, and with
\begin{equation}
P_{pd}=1.66\times 10^{-65}z_{1}^{2}\text{ }cm^{6}W,  \label{P}
\end{equation}
respectively. Taking $z_{1}=54$ ($Xe$) and $n_{1}=n_{2}=n_{3}=2.65\times 10^{20}$ $cm^{-3}$ ($n_{1}$, $n_{2}$ and $n_{3}$ are the number densities of $Xe$, $p$ and $d$, i.e. particles 1, 2 and 3) one gets considerable values for the rate and power densities:
\begin{equation}
r_{pd}=dn_{pd}/dt=1.02\times 10^{12}\text{ }cm^{-3}s^{-1}  \label{rate density}
\end{equation}
and
\begin{equation}
p_{pd}=\Delta dn_{pd}/dt=0.901\text{ }Wcm^{-3}.  \label{power density}
\end{equation}
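The quoted rate and power densities follow directly from $\left( \ref{dnfdt}\right) $ and $\left( \ref{powerdensity}\right) $ with the $S_{pd}$ and $P_{pd}$ values above; a minimal numerical check (the variable names are ours):

```python
# Rate and power densities for Xe-assisted p + d -> 3He, Eqs. (dnfdt)
# and (powerdensity), using the S_pd and P_pd values of Eqs. (S) and (P).
z1 = 54                       # charge number of Xe
n1 = n2 = n3 = 2.65e20        # number densities of Xe, p and d, cm^-3
S_pd = 1.89e-53 * z1 ** 2     # cm^6 s^-1
P_pd = 1.66e-65 * z1 ** 2     # cm^6 W

r_pd = n1 * n2 * n3 * S_pd    # rate density, cm^-3 s^-1, ~1.0e12
p_pd = n1 * n2 * n3 * P_pd    # power density, W cm^-3, ~0.9
```

Both quantities scale as $z_{1}^{2}$ and as the product of the three number densities, which is why the impurity charge and the densities dominate the estimates.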
If the impurity is $Hg$ or $U$ then the above numbers must be multiplied by $2.2$ or $2.9$, respectively. One must emphasize that both the rate and power densities ($dn_{pd}/dt$ and $p_{pd}$) are temperature independent. It must be mentioned too that the effect is not affected by Coulomb screening and the only condition is that the participants must be in an atomic or atom-ionic state. This requirement and the temperature independence of $dn_{pd}/dt$ and $p_{pd}$ greatly weaken the necessary conditions. \section{Other impurity assisted nuclear reactions} Now let us consider the impurity assisted proton captures $\left( \ref{Reaction 1}\right) $ in general. The reaction energy $\Delta $ is the difference between the sums of the initial and final mass excesses, i.e. $\Delta =\Delta _{p}+\Delta _{A_{3},z_{3}}-\Delta _{A_{3}+1,z_{3}+1}$. Since particle $1$ assists the nuclear reaction, its rest mass does not change. $\Delta _{p}$, $\Delta _{A_{3},z_{3}}$ and $\Delta _{A_{3}+1,z_{3}+1}$ are the mass excesses of the proton, and of the $_{z_{3}}^{A_{3}}X$ and $_{z_{3}+1}^{A_{3}+1}Y$ nuclei, respectively \cite{Shir}. Moreover, the capture reaction may be extended to the impurity assisted capture of particles $_{z_{2}}^{A_{2}}w$ (see reaction $\left( \ref{Reaction 3}\right) $), e.g. the capture of the deuteron $\left( d\right) $, triton $\left( t\right) $, $^{3}He$, $^{4}He$, etc. In this case $\Delta =\Delta _{A_{2},z_{2}}+\Delta _{A_{3},z_{3}}-\Delta _{A_{3}+A_{2},z_{3}+z_{2}}$, where $\Delta _{A_{2},z_{2}}$, $\Delta _{A_{3},z_{3}}$ and $\Delta _{A_{3}+A_{2},z_{3}+z_{2}}$ are the corresponding mass excesses. The mechanism discovered also makes possible reaction $\left( \ref{Reaction 4}\right) $ with the conditions $A_{2}+A_{3}=A_{4}+A_{5}$ and $z_{2}+z_{3}=z_{4}+z_{5}$. In this case $\Delta =\Delta _{A_{2},z_{2}}+\Delta _{A_{3},z_{3}}-\Delta _{A_{4},z_{4}}-\Delta _{A_{5},z_{5}}$, where the $\Delta _{A_{j},z_{j}}$ are the corresponding mass excesses. 
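This mass-excess bookkeeping can be illustrated with a short sketch (the tabulated mass excesses, in MeV, are standard values quoted here only for the example; the helper name `reaction_energy` is ours):

```python
# Sketch of the reaction-energy bookkeeping described above: Delta is the
# difference between the sums of initial and final mass excesses. The assisting
# nucleus does not change, so it drops out of the balance.
MASS_EXCESS = {"p": 7.2890, "d": 13.1357, "3He": 14.9312}  # MeV (tabulated values)

def reaction_energy(initial, final, table=MASS_EXCESS):
    """Delta = (sum of initial mass excesses) - (sum of final mass excesses), in MeV."""
    return sum(table[x] for x in initial) - sum(table[x] for x in final)

# Example: p + d -> 3He recovers the 5.493 MeV quoted for (normal-pd).
delta = reaction_energy(["p", "d"], ["3He"])
print(f"Delta = {delta:.3f} MeV")
```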
Investigating the mass excess data \cite{Shir} one can recognize that in the case of processes $\left( \ref{Reaction 1}\right) $, $\left( \ref{Reaction 3}\right) $ and $\left( \ref{Reaction 4}\right) $ the number of energetically allowed reactions is large; their usefulness from the point of view of energy production is mainly determined by the magnitudes of the numerical values of the quantities $f_{23}$ belonging to the particular reactions. Impurity $\left( _{z_{1}}^{A_{1}}V\right) $ assisted $d-Li$ reactions may take place with the $_{3}^{6}Li$ and $_{3}^{7}Li$ isotopes: \begin{equation} _{z_{1}}^{A_{1}}V+d+\text{ }_{3}^{6}Li\rightarrow \text{ }_{z_{1}}^{A_{1}}V^{\prime }+2_{2}^{4}He+22.372\text{ }MeV, \label{Li5} \end{equation} \begin{equation} _{z_{1}}^{A_{1}}V+d+\text{ }_{3}^{7}Li\rightarrow \text{ }_{z_{1}}^{A_{1}}V^{\prime }+2_{2}^{4}He+n+15.122\text{ }MeV \label{Li6} \end{equation} and \begin{equation} _{z_{1}}^{A_{1}}V+d+\text{ }_{3}^{7}Li\rightarrow \text{ }_{z_{1}}^{A_{1}}V^{\prime }+_{4}^{9}Be+16.696\text{ }MeV. \label{Li7} \end{equation} If there are deuterons present then the \begin{equation} _{z_{1}}^{A_{1}}V+d+\text{ }_{z_{3}}^{A_{3}}X\rightarrow \text{ }_{z_{1}}^{A_{1}}V^{\prime }+\text{ }_{z_{3}+1}^{A_{3}+2}Y+\Delta \label{d-capture} \end{equation} impurity assisted $d$ capture processes (see e.g. 
$\left( \ref{Li7}\right) $ and the $_{z_{1}}^{A_{1}}V+d+d\rightarrow $ $_{z_{1}}^{A_{1}}V^{\prime }+$ $_{2}^{4}He+23.847$ $MeV$ reaction); furthermore the $_{z_{1}}^{A_{1}}V+d+d\rightarrow $ $_{z_{1}}^{A_{1}}V^{\prime }+n+$ $_{2}^{3}He+3.269$ $MeV$ and $_{z_{1}}^{A_{1}}V+d+d\rightarrow $ $_{z_{1}}^{A_{1}}V^{\prime }+p+t+4.033$ $MeV$ impurity assisted $dd$ reactions may also take place. The energy of the reaction is carried by particles $_{z_{1}}^{A_{1}}V^{\prime }$, $_{z_{3}+1}^{A_{3}+2}Y$ and by $_{z_{1}}^{A_{1}}V^{\prime }$, $_{2}^{4}He$, which have momenta of equal magnitude but opposite direction, by particles $_{z_{1}}^{A_{1}}V^{\prime }$, $n$ and $_{2}^{3}He$, and by particles $_{z_{1}}^{A_{1}}V^{\prime }$, $p$ and $t$, respectively. The results of the $S_{^{\prime }reaction^{\prime }}$ and power density calculations of some $Xe$ assisted reactions in the long wavelength approximation and with $n_{1}=n_{2}=n_{3}=2.65\times 10^{20}$ $cm^{-3}$ can be found in Table I. From the point of view of the rate and power densities the screening of the Coulomb potential is not essential ($k_{0}\gg q_{sc}$); consequently the above reactions bring up the possibility of a quite new type of apparatus, since the processes need only the atomic state of the participant materials, i.e. a much lower temperature compared to the working temperature of the fusion power stations planned to date. 
\begin{table}[tbp] \tabskip=8pt \centerline {\vbox{\halign{\strut $#$\hfil&\hfil$#$\hfil&\hfil$#$\hfil&\hfil$#$\hfil&\hfil$#$\cr \noalign{\hrule\vskip2pt\hrule\vskip2pt} Reaction& S(0)&S_{'Reaction'}&\Delta&p_{'Reaction'}\cr \noalign{\vskip2pt\hrule\vskip2pt} d(d,n)_{2}^{3}He & 0.055 & 1.01\times10^{-48} & 3.269 &9.82 \cr d(d,p)t& 0.0571 & 1.10\times 10^{-48}&4.033& 13.2\cr d(t,n)_{2}^{4}He & 11.7 & 1.06\times 10^{-46} &17.59& 5.57\times 10^{3} \cr _{2}^{3}He(d,p)_{2}^{4}He & 5.9 & 1.51\times 10^{-48}&18.25& 82.6\cr _{3}^{6}Li(p,\alpha)_{2}^{3}He & 2.97 & 1.99\times10^{-49} & 4.019 & 2.38 \cr _{3}^{7}Li(p,\alpha )_{2}^{4}He & 0.0594 & 3.85\times10^{-51} & 17.347 & 0.199 \cr _{4}^{9}Be(p,\alpha)_{3}^{6}Li & 17. & 1.79\times10^{-49} & 2.126 & 1.13 \cr _{4}^{9}Be(p,d)_{4}^{8}Be & 17. & 1.66\times10^{-49} & 0.56 &0.277 \cr _{4}^{9}Be(\alpha ,n)_{6}^{12}C & 2.5\times10^{3} & 6.22\times10^{-51} & 5.701 & 0.106 \cr & 6.\times10^{5} & 1.49\times10^{-48} & 5.701 & 25.4 \cr _{5}^{10}B(p,\alpha )_{4}^{7}Be & 4 & 1.04 \times10^{-50} & 1.145 & 0.0356 \cr & 2.\times10^{3} & 5.21\times10^{-48} & 1.145 & 17.8 \cr _{5}^{11}B(p,\alpha )_{4}^{8}Be & 187 & 5.16\times10^{-49} & 8.59 & 13.2 \cr \noalign{\vskip2pt\hrule\vskip2pt\hrule}}}} \caption{$S(0)$ is the astrophysical factor at $E=0$ in $MeVb$ \protect\cite{Angulo}, \protect\cite{Descou}. $S_{^{\prime }Reaction^{\prime }}$ (in $cm^{6}s^{-1}$) is calculated using $\left( \protect\ref{result2}\right) $ with $\left( \protect\ref{Iintcharged}\right) $ taking $z_{1}=54$ $\left( Xe\right) $, $\Delta $ is the energy of the reaction in $MeV$ and $p_{^{\prime }Reaction^{\prime }}=\Delta n_{1}n_{2}n_{3}S_{^{\prime }Reaction^{\prime }}$ is the power density in $Wcm^{-3}$, calculated with $n_{1}=n_{2}=n_{3}=2.65\times 10^{20}$ $cm^{-3}$. 
In the case of the $_{4}^{9}Be(\protect\alpha ,n)_{6}^{12}C$ and $_{5}^{10}B(p,\protect\alpha )_{4}^{7}Be$ reactions the astrophysical factor $[S(E)]$ has a strong energy dependence, therefore the calculation was carried out with two characteristic values of $S(E)$. } \label{Table1} \end{table} If there is $Li$ present then the \begin{equation} _{z_{1}}^{A_{1}}V+\text{ }_{3}^{A_{2}}Li+\text{ }_{z_{3}}^{A_{3}}X\rightarrow \text{ }_{z_{1}}^{A_{1}}V^{\prime }+\text{ }_{z_{3}+3}^{A_{2}+A_{3}}Y+\Delta \label{Li-capture} \end{equation} impurity assisted $Li$ capture reactions may happen too. Let us examine the impurity assisted \begin{equation} _{z_{1}}^{A_{1}}V+\text{ }_{3}^{A_{2}}Li+\text{ }_{3}^{A_{3}}Li\rightarrow \text{ }_{z_{1}}^{A_{1}}V^{\prime }+\text{ }_{6}^{A_{2}+A_{3}}C+\Delta \label{PdLi} \end{equation} $Li$ capture reactions and as an example let us take $z_{2}=z_{3}=3$, $A_{2}=6$, $A_{3}=7$, $A_{2}+A_{3}=A_{4}=13$, which corresponds to the \begin{equation} _{z_{1}}^{A_{1}}V+\text{ }_{3}^{6}Li+\text{ }_{3}^{7}Li\rightarrow \text{ }_{z_{1}}^{A_{1}}V^{\prime }+\text{ }_{6}^{13}C+25.869\text{ }MeV \label{PdLi2} \end{equation} reaction. Taking $A_{1}=130$ one obtains $\eta _{23}(\frac{7}{13}k_{0})=0.487$, $\eta _{23}(\frac{6}{13}k_{0})=0.568$ and $f_{23}(\frac{7}{13}k_{0})=0.388$, $f_{23}(\frac{6}{13}k_{0})=0.322$ (see $\left( \ref{etajk}\right) $, $\left( \ref{Fjk}\right) $, $\left( \ref{k0}\right) $ and $\left( \ref{SRANR}\right) $). These numbers are very promising. The reactions $_{z_{1}}^{A_{1}}V+$ $_{3}^{6}Li+$ $_{3}^{6}Li\rightarrow $ $_{z_{1}}^{A_{1}}V^{\prime }+$ $_{6}^{12}C+28.174$ $MeV$, $_{z_{1}}^{A_{1}}V+$ $_{3}^{6}Li+$ $_{3}^{6}Li\rightarrow $ $_{z_{1}}^{A_{1}}V^{\prime }+$ $3_{2}^{4}He+20.898$ $MeV$ and $_{z_{1}}^{A_{1}}V+$ $_{3}^{7}Li+$ $_{3}^{7}Li\rightarrow $ $_{z_{1}}^{A_{1}}V^{\prime }+$ $_{6}^{14}C+26.795$ $MeV$ may have importance too. (The list is incomplete.) \section{Summary} The consequences of impurities in nuclear fusion fuels in the plasma state are discussed. 
According to the calculations, in certain cases second order processes may produce a much higher fusion rate than the rate due to direct (first order) processes. In the examined problem it is found that Coulomb scattering of the fusionable nuclei on the screened Coulomb potential of the impurity can diminish the hindering Coulomb factor between them. Since the second order process does not require the matter to be in an ionized state, the assistance of impurities allows one to decrease significantly the plasma temperature, which is then determined only by the requirement that all components must be in an atomic or atom-ionic state. On the other hand, the results suggest that the density of the components has to be considerably increased. The effective influence of the wall-gas mix interaction brings up the possible importance of gas mix-metal surface processes too. Promising new fuel mixes are also put forward. Based on these results it may be expected that the search for a new approach to energy production by nuclear fusion can be started.
\section{Introduction} In recent years, breast cancer has been one of the major causes of death in women \cite{Nishikawa}\cite{Hu}. Clinical evidence indicates that early detection and treatment can significantly reduce the mortality of breast cancer. Various medical imaging modalities play a crucial role in the detection, diagnosis and treatment planning of breast cancer. Among these modalities, magnetic resonance imaging (MRI) is used primarily for women at high risk of developing cancer and to identify lesions missed in the mammogram; breast MRI-based CAD systems typically require accurate lesion segmentation as an initial step \cite{McClymont}\cite{prior}. However, accurate segmentation of lesions is a challenging problem for several reasons: lesions vary considerably in shape, they overlap with normal tissues that are difficult to differentiate, and the distribution of intensities within lesions is highly variable. Many researchers have explored various semi-automated and automated approaches to address these challenges. Breast tumor segmentation methods generally fall into three main categories: model-based, threshold-based and region-growing methods. Chen et al. \cite{Chen} proposed a fuzzy c-means (FCM) based method. Szabó et al. 
\cite{szab} performed a pixel-by-pixel lesion segmentation through an artificial neural network based on the time intensity curve. Shanon et al. \cite{Shanon} proposed a spectral embedding based active contour to improve image representation for both boundary- and region-based segmentation. However, most of these methods require post-processing through morphological operations, such as dilation and erosion, to obtain a smooth and continuous lesion boundary. Xu et al. \cite{Xu} proposed a watershed transformation method for mammogram mass segmentation using markers. Their method first identifies a rough region of the lesion by a combined template matching and thresholding approach, then determines internal markers by computing the distance transform of the rough segmentation, and external markers through morphological dilation.\\ In this paper, we propose a novel marker-controlled watershed transformation (MCWT) for the task of 2D breast lesion segmentation in MRI slices. This technique is more robust and simpler in the marker selection process than conventional methods, which used complex features such as template matching and Gaussian mixture models \cite{Xu}\cite{Cui}. Section \ref{sec:methods} describes the data acquisition, the proposed method and the evaluation results; our work is concluded in Section \ref{sec:conclude}. \section{Methods} \label{sec:methods} \subsection{Data Acquisition} MR images for this study were acquired on 1.5 T Magnetom Avanto and 3.0 T Magnetom Verio scanners (Siemens Healthineers, Erlangen, Germany), with dedicated breast array coils and the patient in a prone position. The contrast medium was injected into the cubital vein after the first of six dynamic acquisitions, with a flow of 1.0 mL/sec chased by a 20 mL saline flush. 
One hundred and six lesions were identified from a representative set of DCE-MRI exams from 80 female patients by two expert radiologists with 7 years of experience in the evaluation of clinical findings. The mean patient age was 50 $\pm$ 13 years; based on histopathology, 42 of the lesions were diagnosed as benign and the remaining 64 as malignant. \subsection{Watershed Transformation} The watershed transformation is a widely used method for image segmentation based on mathematical morphology \cite{Beucher}. In this method, the image is considered as a topographic relief or landscape where the gray level of each pixel corresponds to a physical elevation. The landscape is immersed in a lake, with holes pierced in local minima, and the catchment basins are filled up with water/labels starting at these local minima. At the points where water from different basins would meet, a dam/border is built. The process terminates when the water level reaches a maximum. Consequently, the landscape is divided into several regions separated by dams, which are called watershed lines or simply watersheds \cite{water}. The main drawback of the conventional watershed transformation is over-segmentation, caused by the presence of many local minima in the gradient image. To decrease the effect of severe over-segmentation, we determine internal and external markers to guide the watershed algorithm. Each marker is considered to be part of a specific watershed region, and after segmentation the boundaries of the regions are arranged to separate each object of interest from the background. The proposed method starts with the selection, by an expert radiologist, of a particular slice of the breast MRI volume containing at least one lesion. The slice image is normally the subtraction of the pre-contrast and post-contrast images. The ground truth segmentation is provided by expert radiologists, and an ROI is drawn around the lesion manually. 
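The marker selection used in the next step and the Dice/Jaccard metrics used in the evaluation can be sketched on a toy ROI (a pure-Python illustration, not our implementation; in practice the flooding step itself would be performed by a library routine such as `skimage.segmentation.watershed` applied to the gradient image):

```python
# Toy sketch: (i) pick the n brightest ROI pixels as internal markers,
# (ii) evaluate a predicted mask against a ground-truth mask with DSC / JI.
# The ROI is a small list of rows; masks are sets of (row, col) positions.

def top_n_markers(roi, n):
    """Return the (row, col) positions of the n brightest pixels in the ROI."""
    pixels = [(roi[r][c], (r, c)) for r in range(len(roi)) for c in range(len(roi[0]))]
    pixels.sort(reverse=True)           # descending intensity
    return [pos for _, pos in pixels[:n]]

def dice(a, b):
    """Dice similarity coefficient between two pixel sets."""
    return 2 * len(a & b) / (len(a) + len(b))

def jaccard(a, b):
    """Jaccard index: intersection over union."""
    return len(a & b) / len(a | b)

roi = [[1, 2, 9],
       [3, 8, 9],
       [1, 1, 2]]
markers = top_n_markers(roi, 3)         # the three brightest pixels
pred = set(markers)                     # stand-in for the watershed output mask
gt = {(0, 2), (1, 1), (1, 2), (2, 2)}   # toy "ground truth" lesion mask
print(markers)                          # [(1, 2), (0, 2), (1, 1)]
print(dice(pred, gt), jaccard(pred, gt))
```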
As a pre-processing step, we applied contrast limited adaptive histogram equalization (CLAHE) \cite{Clahe} globally on that particular slice to improve lesion contrast. Then we computed the morphological gradient of the image, which is the point-wise difference between a unitary dilation and erosion. In MR images, tumor regions are normally brighter and have a more uniform intensity than the neighbouring healthy tissue. Based on this fact, we determined the internal and external markers by sorting the pixel values in the ROI in descending order and chose the $n$ pixels with maximum intensity values as markers. After selecting the markers, the watershed transformation is applied to the ROI image, as shown in Fig.~\ref{imgss}. Finally, a binary mask is generated based on the watershed output regions. We identified the optimal number of markers based on segmentation accuracy evaluated using the Dice and Jaccard metrics. \begin{figure}[t] \subfloat[\label{fig:test1}] {\includegraphics[width=2.5cm,height=2.5cm]{subslice}}\hfill \subfloat[\label{fig:test2}] {\includegraphics[width=2.5cm,height=2.5cm]{slice2}}\hfill \subfloat[\label{fig:test3}] {\includegraphics[width=2.5cm,height=2.5cm]{slice3}}\hfill \subfloat[\label{fig:test4}] {\includegraphics[width=2.5cm,height=2.5cm]{slice4}}\hfill \subfloat[\label{fig:test5}] {\includegraphics[width=2.5cm,height=2.5cm]{slice5}}\hfill \subfloat[\label{fig:test6}] {\includegraphics[width=2.5cm,height=2.5cm]{slice6}} \caption{The segmentation pipeline. (a) Subtraction slice with a benign lesion. (b) Contrast enhancement using CLAHE and the ROI drawn. (c) Image gradient. (d) The highest pixel intensities selected as markers. (e) Watershed transformation applied. (f) Segmentation mask generated from the watershed regions.} \label{imgss} \centering \end{figure} \subsection{Evaluation and Results} We tested the algorithm by varying the number of markers between $1$ and $150$. 
Fig~\ref{graph} describes the segmentation results obtained using different numbers of markers. This plot indicates that $45$ markers were optimal for this segmentation approach, yielding satisfactory results. \begin{figure}[h!] \includegraphics[width=8.8cm]{G4} \caption{The mean over all lesions of the Jaccard and Dice coefficients for different numbers of markers. } \label{graph} \centering \end{figure} To evaluate the performance of the proposed method and compare the segmentation results with their corresponding ground truth labels, two well-known similarity metrics are used: (1) the Dice similarity coefficient (DSC), which measures the overlap between two segmentation masks and is sensitive to the lesion size, and (2) the Jaccard index (JI), which measures the ratio of the intersection to the union of the two segmentations \cite{prior}. The metrics are defined as: \begin{equation} DSC (A,B) = \frac{2|A \cap B|}{|A|+|B|} \end{equation} \begin{equation} JI (A,B) = \frac{|A \cap B|}{|A \cup B|} \end{equation} where $A$ refers to the ROIs segmented by our algorithm and $B$ is the tumor area as determined by manual segmentation. Table \ref{res} summarizes the segmentation accuracy achieved using the proposed method for all 106 cases. The average Dice coefficient was found to be 0.78$\pm$0.17 and the average Jaccard index was 0.67$\pm$0.21. Fig~\ref{imgs} demonstrates four sample segmentation outputs, which are overlaid on the manual segmentations provided by two radiologists. It can be seen that the proposed method accurately segments the lesions, with some marginal errors, for medium to large tumors. However, for cases comprising disjoint lesions, the method failed to segment all small lesions and in some cases incorrectly labeled healthy tissue as lesions. 
This is because in some cases there is a high degree of overlap between the intensity distributions of healthy breast tissue and lesions, and because the ROI drawn by the radiologist is very large in the case of disjoint lesions, in order to cover the entire area over which multiple lesions are distributed. \renewcommand{\arraystretch}{1.5} \begin{table}[h!] \centering \caption{DSC and JI results $(mean\pm Std)$ for 106 lesions.} \label{res} \begin{tabular}{c|c|c} \hline Method& DSC & JI \\ \hline MCWT& 0.7808$\pm$0.1729 & 0.6704$\pm$0.2167 \\ \hline \end{tabular} \end{table} \begin{figure} {\includegraphics[width=4.2cm,height=4.2cm]{4s}}\hfill {\includegraphics[width=4.2cm,height=4.2cm]{2s}}\hfill \\ {\includegraphics[width=4.2cm,height=4.2cm]{3s}}\hfill {\includegraphics[width=4.2cm,height=4.2cm]{1s}}\hfill \caption{Segmentation results for 4 breast lesion cases. The top two images show malignant lesions and the bottom two show benign lesions. The yellow rectangle represents the ROI, the ground truth segmentation is shown in green, and the result of the proposed method in red.} \label{imgs} \end{figure} \section{Discussion and Conclusion} \label{sec:conclude} Segmentation of breast lesions in MR images has been tackled previously in various studies; however, very few have employed the marker-controlled watershed transformation approach for this purpose. In this paper, we proposed a novel marker-controlled watershed transformation approach that selects the brightest pixels in the ROI as markers. In terms of complexity, this method is simpler and more robust than conventional marker-based watershed methods, which used complex features to determine external and internal markers. However, the diversity of lesion shapes and the presence of multiple disjoint lesions distributed across the breast proved challenging, resulting in low DSC and JI scores in some cases. 
These preliminary results are encouraging for the application of the proposed approach as a preprocessing step for the subsequent classification of MRI lesions. Manually-created ground truth images are intrinsically subjective, and creating such reference images for large data sets is a time-consuming process. In subsequent studies, we will look to extend our proposed 2D watershed algorithm to 3D and combine it with a lesion detection and classification technique, to establish a complete computer-aided-diagnosis system with minimum manual intervention. \section*{Acknowledgment} The authors gratefully acknowledge the support of the Emerging Fields Initiative (EFI) project. We also thank our colleagues from Universit\"atsklinikum Erlangen who provided the dataset and insight that greatly assisted this research. \bibliographystyle{IEEEtran} 
\section{INTRODUCTION} Dark matter (DM) has been discovered through its gravitational interaction with usual matter. DM particles have been, and still are, searched for in various processes at the high energy colliders. For a recent review with many references see \cite{rev1}.\\ Many (hundreds of) models have been proposed for the structure of dark matter and its possible (necessarily weak) interaction with usual matter through various types of extensions of the standard model (SM). No signal has been found yet. This may be due to the very small strength of the involved interactions or to the very heavy mass of the DM constituents.\\ Probably because of its invisibility, DM should not have gauge (strong, weak, em) interactions with SM particles, but only some new type of interaction.\\ In the worst case DM may consist of a new set of particles with their own self-interactions, and only gravitational interactions will appear with the SM sector.\\ But the role played by the mass may suggest other possibilities.\\ One of them consists in assuming that, both in the SM sector and in the dark sector, mass is generated by a Higgs mechanism. If this mechanism is common to the two sectors then it may generate a connection between them, for example simply through Higgs boson exchange or even through a richer set of Higgs bosons; see \cite{Portal} for the "portal" concept.\\ Another possibility is that the mass of the SM particles is generated through special interactions with the dark sector. A naive picture consists in assuming that SM particles get their mass from an environment of DM. This may be the origin of the Higgs potential generating the SM masses, but also of additional interactions between SM and DM particles.\\ We propose tests of these two possibilities with processes involving the heaviest SM particles ($Z$, $W$, top quark,...) 
which should be the best places for revealing a connection with DM, either a direct SM-DM connection or a connection through Higgs boson (and possibly other bosons) exchanges between SM and DM.\\ We are not working with a precise model for the connection to DM; we use a kind of effective coupling between heavy SM particles and invisible DM ones. For example, in the case of the $Z$ boson no gauge coupling "Z-DM-DM" would exist, but only some type of mass generating coupling "Z-Z-DM" would appear. A similar assumption will be made for the $W$ and for the top quark.\\ In this paper we consider the $e^+e^-$ collision processes $e^+e^-\to Z+X,~W^+W^-+X, ~t\bar t+X$, where $X$ represents the invisible dark matter.\\ Using the mentioned effective couplings we compute the inclusive mass distributions ${d\sigma\over dM_X}$ and discuss how their shape reflects the type of connection between the heavy SM particle and the invisible DM ones.\\ We conclude by mentioning that other processes (more difficult to analyze) may be considered, for example in hadronic or in photon-photon collisions.\\ \section{Analysis of $e^+e^-\to Z+X$} As explained in the introduction, one possibility of connection to invisible matter would be through the mass generation of the $Z$ boson.\\ Without a specific model, the description can be made with an effective $Z-Z-DM$ coupling.\\ With a kind of substructure picture for the dynamical generation of the mass (like in the hadronic case with quark binding) we will use a $ZZss'$ coupling with a pair of scalar DM "partons" $s,s'$, which then (like hadronic partons) automatically create the multiparticle DM final (invisible) state. The $Z$ mass may be generated according to the picture of the upper diagram in Fig.1. 
The corresponding DM emission will appear from the left lower level diagram.\\ One may then also consider (as a part of the above contribution or as an additional one) the possibility of Higgs boson production like in the SM case with the process $e^+e^-\to Z \to Z+H$ then completed by the $H \to DM$ vertex; see the right lower level diagram of Fig.1. This would correspond to the idea that the Higgs boson, and possibly heavier (excited?) $H'$, are portals to the dark sector \cite{Portal}.\\ Several previous studies of this process exist, for example \cite{ZH} for the search of possible signals of Higgs boson compositeness \cite{Hcomp2,Hcomp3,Hcomp4}. The possibilities at high energy $e^+e^-$ colliders have been reviewed in \cite{Moortgat, Craig}.\\ We have computed the inclusive (invisible) $M_X$ mass distributions ${d\sigma\over dM_X}$ associated with these different possibilities corresponding to the diagrams of Fig.1, where $M^2_X=(p_{e^+}+p_{e^-}-p_Z)^2$. \\ In Fig.2 we have plotted them (as well as their total) using arbitrary couplings and masses so that the shapes appear clearly, with $m_{H'}=0.7$ TeV, $\Gamma_{H'}=0.1$ TeV and (0.01, 0.1, 0.5 TeV) for the $s,s'$ masses.\\ In the upper panel we show the individual $H$, $H'$ peaks and the continuum due to the effective $ZZss'$ coupling for a parton mass of $0.01$ TeV.
In the lower panel we show the total for the three choices of parton mass (0.01, 0.1, 0.5 TeV) with clear threshold effects.\\ The quantitative values have no predictive meaning; our aim is just to see what effects these various dynamical assumptions can produce.\\ Precise analyses should also consider backgrounds with invisible production like $e^+e^-\to ZZ$ followed by one $Z\to\nu\bar\nu$ decay producing a peak at $M_X=M_Z$.\\ \section{Analysis of $e^+e^- \to W^+W^-+X$} A similar analysis can be done for the process $e^+e^- \to W^+W^-+X$.\\ We had previously considered this process among others for the search of signals of $W$ compositeness, see \cite{CSMrev}. Now we look more specifically at the hypothesis that DM is related to mass generation of the $W$ boson, with the same procedure as above for the $Z$ boson, and that this can be described with an effective coupling to an $ss'$ pair of subconstituents, $W^+W^- ss'$.\\ We then compute the corresponding inclusive (invisible) $M_X$ mass distributions ${d\sigma\over dM_X}$ according to the diagrams of Fig.3.\\ Note that, at the same order, a contribution still appears through the $Z-Z-DM$ coupling.\\ As above we add the possibility of intermediate exchange of Higgs bosons $H$ and $H'$ connected to DM.\\ We make the illustrations with the same parameters as in the previous $Z$ process.\\ Fig.4 shows that effects similar to those observable in $e^+e^-\to Z+X$ could confirm the basic hypothesis. Note also the presence of the background $e^+e^- \to W^+W^-Z$ with $Z\to\nu\bar\nu$.\\ \section{Analysis of $e^+e^- \to t\bar t+X$} Finally we consider the process $e^+e^- \to t\bar t+X$, which has also been part of studies of top quark compositeness \cite{partialcomp}, for example \cite{Tait, trcomp, ttincl}.
Substructure models have been proposed long ago \cite{comp}.\\ We again assume that DM is related to mass generation of the top quark and that this can be described with an effective coupling to an $ss'$ pair of subconstituents, $t\bar t ss'$.\\ We then compute the corresponding inclusive (invisible) $M_X$ mass distributions ${d\sigma\over dM_X}$ according to the diagrams of Fig.5.\\ As in the $e^+e^- \to W^+W^-+X$ case a contribution involving the $Z-Z-DM$ coupling also appears.\\ Diagrams with intermediate exchange of Higgs bosons $H$ and $H'$ are also added.\\ The background from $e^+e^- \to t\bar tZ$ with $Z\to\nu\bar\nu$ is also present.\\ In Fig.6, with the same parameters as in the previous processes, we can see that it should be possible to check whether the hypothesis of a DM connection related to mass generation of the $Z$, $W$ gauge bosons also applies to the top quark.\\ \section{Conclusion and further developments} In this paper we have assumed that invisible dark matter, in addition to gravitational interaction, may have other types of interactions with standard particles related to their mass.\\ This assumption suggests studies of $Z$, $W$ and top quark production for revealing this property.\\ Using arbitrary effective couplings describing this special interaction we have computed the invisible inclusive distribution ${d\sigma\over dM_X}$ for the three processes $e^+e^-\to Z+X,~W^+W^-+X, ~t\bar t+X$.
The illustrations show how the shapes of these distributions reflect the various possibilities and the parameters controlling the effective couplings to DM.\\ Other processes involving heavy SM particles and DM may be considered.\\ In $e^+e^-$ collisions, see \cite{Moortgat} for a general review, one example is $e^+e^-\to e^+e^-+ZZ$ with $ZZ$ fusion into DM (directly or through Higgs-like bosons).\\ Obviously many other processes involving $Z$, $W$ and the top quark occur in hadronic collisions but require detailed and difficult phenomenological and experimental analyses, see \cite{rev1} and also \cite{Contino,Richard}.\\ Studies in photon-photon collisions may also be considered \cite{gammagamma}.\\ Other less massive SM particles could also be concerned. It is not yet experimentally established that the Higgs couplings are proportional to the fermion masses. So non-negligible direct couplings to dark matter may also appear there. This may concern the $b$ quark, the muon and even the electron. If a direct $e^+e^-$ coupling is too small (see however \cite{light}), a muon collider, already known as a possible Higgs boson factory, could be an interesting place. One would look at the process $\mu^+\mu^- \to \gamma +DM$, involving $\mu^+\mu^- \to DM$ direct production or $\mu^+\mu^- \to H,H'\to DM$ and a photon emission from the $\mu^{\pm}$ line. Obviously this has to stand out (at least in some $M_X$ range) above the background involving $Z\to\nu\bar\nu$.\\ Finally we should obviously add that quantitative predictions require the use of a precise theoretical description of DM and of its possible relation to the mass generation.\\
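The invisible-mass variable used throughout, $M^2_X=(p_{e^+}+p_{e^-}-p_Z)^2$, reduces in the center-of-mass frame to $M_X^2 = s - 2\sqrt{s}\,E_Z + m_Z^2$. The following is a minimal numerical sketch of this kinematic relation (illustrative numbers only, independent of the couplings and model parameters used in the figures):

```python
import math

def recoil_mass(sqrt_s, E_Z, p_Z):
    """Invisible recoil mass M_X in e+e- -> Z + X.

    In the center-of-mass frame the initial four-momentum is
    (sqrt_s, 0, 0, 0), so M_X^2 = s - 2*sqrt_s*E_Z + m_Z^2,
    with m_Z^2 = E_Z^2 - |p_Z|^2 reconstructed from the Z energy
    and momentum.
    """
    m2_Z = E_Z**2 - p_Z**2
    m2_X = sqrt_s**2 - 2.0 * sqrt_s * E_Z + m2_Z
    return math.sqrt(max(m2_X, 0.0))

# Example: 1 TeV collider, a Z with energy 0.3 TeV and momentum 0.28 TeV
mX = recoil_mass(1.0, 0.3, 0.28)
```

For two-body kinematics, $E_Z=(s+m_Z^2)/(2\sqrt{s})$ gives $M_X=0$, which the function reproduces; smaller $Z$ energies map to larger invisible masses, which is what shapes the ${d\sigma\over dM_X}$ distributions.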
\section{Characteristic kernels on spaces with additional structure}\label{sec:structure} In this section, we apply the developed theory to translation-invariant or isotropic kernels on compact Abelian groups or spheres, respectively. \input{compact-groups} \subsection{Spheres}\label{sec:Sd} In this subsection we consider isotropic kernels $k$ on $\mathbb{S}^d$ for $d \ge 1$, that is, kernels of the form $k(x,y) = \psi(\theta(x,y))$, where $\theta(\cdot,\cdot)=\arccos \langle \cdot,\cdot\rangle$ is the geodesic distance on $\mathbb{S}^d$ and $\langle \cdot,\cdot\rangle$ denotes the standard scalar product on $\mathbb{R}^{d+1}$. Following \citet{Gneiting2013}, let $\Psi_d$ be the class of all continuous functions $\psi$ on $[0,\pi]$ such that $k(x,y) = \psi(\theta(x,y))$ is a kernel on $\mathbb{S}^d$, and define $\Psi_{\infty} := \cap_d \Psi_d$. We write $\Psi_d^+\subset \Psi_d$ for the class of functions that induce a strictly positive definite kernel on $\mathbb{S}^d$, and set $\Psi_{\infty}^+:= \cap_d \Psi_d^+$. It holds that $\Psi_{d+1} \subset \Psi_d$ and $\Psi_{d+1}^+ \subset \Psi_d^+$, see \citet[Corollary 1]{Gneiting2013}. The following two theorems are our main results on characteristic kernels on $\mathbb{S}^d$. \begin{theorem}\label{thm:32} Let $k$ be an isotropic kernel on $\mathbb{S}^d$ induced by some $\psi \in \Psi_{d+2}$ or by some $\psi\in\Psi_{d+1}^+$. Then the following statements are equivalent: \begin{enumerate} \item $k$ is characteristic. \item $k$ is universal. \item $k$ is strictly positive definite on $\mathbb{S}^d$, that is, $\psi \in \Psi_d^+$. \end{enumerate} \end{theorem} Theorem \ref{thm:32} shows in particular that any $\psi \in \Psi_{d+1}^+$ induces a characteristic kernel on $\mathbb{S}^d$.
For the practically most relevant case of $\mathbb{S}^2$, all the parametric families of isotropic positive definite functions in \citet[Table 1]{Gneiting2013} are in $\Psi_3^+$ and thus all of them are characteristic and yield strictly proper kernel scores on $\mathbb{S}^2$ by Theorem \ref{thm:one} and Theorem \ref{thm:32}. \begin{theorem}\label{thm:33} Let $k$ be an isotropic kernel on $\mathbb{S}^d$ induced by $\psi \in \Psi_{\infty}$. Then the following statements are equivalent: \begin{enumerate} \item $k$ is characteristic. \item $k$ is universal. \item $k$ is strictly positive definite, that is, $\psi \in \Psi_{\infty}^+$. \end{enumerate} \end{theorem} Theorem \ref{thm:33} is analogous to the result of \citet[Proposition 5]{SriperumbFukumizuETAL2011} for radial kernels on $\mathbb{R}^d$. For the proofs of Theorems \ref{thm:32} and \ref{thm:33}, we need to introduce some preliminaries. By \citet{Schoenberg1942} the functions in $\Psi_{\infty}$ have a representation of the form \begin{equation}\label{eq:Schoenberg} \psi(\theta) = \sum_{n=0}^{\infty}b_n (\cos(\theta))^n\, , \qquad \theta \in [0,\pi], \end{equation} where $(b_n)_n$ is a summable sequence of non-negative coefficients termed the \emph{$\infty$-Schoenberg sequence} of $\psi$. \citet{Schoenberg1942} also showed that the functions in $\Psi_d$ have a representation as \begin{equation*} \psi(\theta) = \sum_{n=0}^{\infty}b_{n,d} \frac{C_n^{(d-1)/2}(\cos\theta)}{C_n^{(d-1)/2}(1)}\, , \qquad \theta \in [0,\pi], \end{equation*} where $(b_{n,d})_n$ is a summable sequence of non-negative coefficients termed the \emph{$d$-Schoenberg sequence} of $\psi$, and $C_n^\lambda$, $\lambda > 0$, $n \in \mathbb{N}_0$ are the Gegenbauer polynomials; see \citet[18.3.1]{dlmf}. For $\lambda =0$, we set $C_n^0(\cos \theta) := \cos (n\theta)$.
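As a numerical illustration of the representation \eqref{eq:Schoenberg}: any truncation of the series with non-negative coefficients $b_n$ produces a positive semi-definite Gram matrix on a finite point set of the sphere, since each term $(\cos\theta(x,y))^n = \langle x,y\rangle^n$ is itself a kernel. A sketch using only the formulas above (the coefficients are arbitrary illustrative values; strict positive definiteness requires infinitely many positive coefficients and is not shown by a truncation):

```python
import numpy as np

def psi(theta, b):
    """Truncated Schoenberg series psi(theta) = sum_n b_n cos(theta)^n."""
    c = np.cos(theta)
    return sum(bn * c**n for n, bn in enumerate(b))

rng = np.random.default_rng(0)
# random points on S^2, normalized rows of a Gaussian matrix
X = rng.normal(size=(20, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)
# geodesic distances theta(x, y) = arccos <x, y>
Theta = np.arccos(np.clip(X @ X.T, -1.0, 1.0))
b = [1.0, 0.5, 0.25, 0.125]   # non-negative, summable coefficients
K = psi(Theta, b)             # Gram matrix of the isotropic kernel
eigs = np.linalg.eigvalsh(K)
# positive semi-definiteness up to numerical tolerance
assert eigs.min() > -1e-10
```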
\begin{definition} A sequence of non-negative real numbers $(b_n)_{n\in \mathbb{N}_0}$ fulfills \emph{condition $b$}, if $b_n > 0$ for infinitely many even and infinitely many odd integers. \end{definition} \begin{remark}\label{rem:spd2cb} For $\psi \in \Psi_\infty$ or $\psi \in \Psi_d$, $d\ge 2$, the induced isotropic kernel is strictly positive definite if and only if its Schoenberg sequence fulfills condition $b$, see \citet{Menegatto1992,Menegatto1994} and \citet{ChenMenegattoETAL2003}. For $d = 1$, condition $b$ remains a necessary condition for $\psi \in \Psi_1^+$ as shown by \citet{Menegatto1995}. However, it is not sufficient any more. A simple sufficient condition for $\psi \in \Psi_1^+$ that is useful for our purposes but which is not necessary is that there is $n_0$ such that $b_{n,1} > 0$ for all $n \ge n_0$. See \citet{MenegattoOliveiraETAL2006} for a necessary and sufficient condition in the case $d = 1$. \end{remark} \begin{lemma}\label{prop:1} If $\psi \in \Psi_{d+2}$, then it is strictly positive definite on $\mathbb{S}^d$ if and only if $b_{n,d}>0$ for all $n \in \mathbb{N}_0$. \end{lemma} The set $\Psi_d^+ \backslash \Psi_{d+2} \subset \Psi_d$ is not empty and also contains elements with $b_{n,d} > 0$ for all $n \ge 0$. To construct an explicit example, take any summable sequence $(b_n)_{n \in \mathbb{N}_0}$ of positive real numbers such that $b_{2} > (d(d+3)/2)b_0$. Let $\psi$ be the function with $d$-Schoenberg sequence $(b_n)_{n \in \mathbb{N}_0}$. Then $\psi \in \Psi_d^+$, fulfills $b_{n,d} > 0$ for all $n \ge 0$, and, by \citet[Corollary 4]{Gneiting2013}, it is not a member of $\Psi_{d+2}$. \begin{lemma}\label{prop:34} If $\psi \in \Psi_{d+1}^+$, then $b_{n,d} > 0$ for all $n \ge 0$. \end{lemma} After these preliminary considerations concerning Schoenberg sequences, we will now show Theorem \ref{thm:32} by applying Corollary \ref{uni-cor}.
To this end, let $(e_{n,j})_{n \in \mathbb{N}_0,j=1,\dots,N(d,n)}$ denote an orthonormal basis of spherical harmonics on $\mathbb{S}^d$. The polynomial $e_{n,j}$ has order $n$ and $N(d,n) = \binom{n+d}{n} - \binom{n+d-2}{n-2}$; see for example \citet[Theorem 3.1.4]{Groemer1996}, where we note that he works on $\mathbb{S}^{d-1}$ while we work on $\mathbb{S}^d$. In particular, $e_{0,0} = 1$. By \citet[Theorem 3.3.3]{Groemer1996}, \begin{equation}\label{eq:legendre} \frac{C_n^{(d-1)/2}(\langle x,y\rangle)}{C_n^{(d-1)/2}(1)} = \frac{1}{N(d,n)}\sum_{j=1}^{N(d,n)} e_{n,j}(x)e_{n,j}(y), \end{equation} hence any isotropic kernel on $\mathbb{S}^d$ induced by $\psi \in \Psi_d$ has a Mercer representation of the form \eqref{kernel_sum_cont} with $\lambda_{n,j} = b_{n,d}/N(d,n)$. Moreover, \citet[Corollary 3.2.7]{Groemer1996} shows that the space $\spann\{e_{n,j}: n \in \mathbb{N}_0,j=1,\dots,N(d,n)\}$ is dense in $C(\mathbb{S}^d)$. Similar to \citet[Theorem 9]{SriperumbGrettonETAL2010} for translation invariant kernels on $\mathbb{R}^d$, Corollary \ref{uni-cor} yields the following theorem. \begin{theorem}\label{thm:Schoenberg} The kernel induced by $\psi \in \Psi_d$ is \begin{enumerate} \item universal if and only if $b_{n,d} > 0$ for all $n \ge 0$. \item characteristic if and only if $b_{n,d} > 0$ for all $n \ge 1$. \end{enumerate} \end{theorem} For the proof of the converse of Theorem \ref{thm:33}, we need the following proposition. \begin{proposition}\label{prop:421} Let $k$ be a kernel on $\mathbb{S}^d$ induced by $\psi \in \Psi_\infty$. If $k$ is characteristic, then $\psi$ is strictly positive definite, that is $\psi \in \Psi_\infty^+$. \end{proposition} \subsection{Compact Abelian Groups}\label{sec:groups} In this subsection we apply the theory developed so far to translation-invariant kernels on compact Abelian groups.
Here the main difficulty lies in the fact that one traditionally considers kernels on groups that are $\mathbb{C}$-valued, while we are only interested in $\mathbb{R}$-valued kernels. Although at first glance one may not expect any problem arising from this discrepancy, it turns out that it actually does make a difference when constructing an ONB of $\Lx 2 \nu$ with the help of characters as soon as we have more than one self-inverse character. Our first goal is to make the introductory remarks precise. To this end, let $(G, +)$ be a compact Abelian group, and $\nu$ be its normalized Haar measure. As usual, we write $\Lx 2 G := \Lx 2 \nu$, and for later use recall that $\nu$ is strictly positive and regular, see e.g.~\citet[p.~193/4]{HeRo63}. Moreover, let $(\hat G, \cdot)$ be the dual group of $G$, which consists of characters $e: G\to \mathbb{T}$, where $\mathbb{T}$ denotes, as usual, the unit circle in $\mathbb{C}$, see e.g.~\citet[Chapter Six]{HeRo63} and \citet[Chapter 4.1]{Folland95}. For notational purposes, we assume that we have another group $(I,+)$ that is isomorphic to $(\hat G, \cdot)$ by some mapping $i\mapsto e_i$. This gives $e_{i+j} = e_i e_j$, $e_0 = \boldsymbol{1}_G$, and since we further have $e \bar e= \boldsymbol{1}_G$ for all $e\in \hat G$, our notation also yields $e_{-i} = \bar e_i$. In particular, we have $\re e_{-i} = \re e_i$ and $\im e_{-i} = -\im e_{i}$ for all $i\in I$, and for all $i\in I$ with $i=-i$ the latter equality immediately yields $\im e_i = 0$. Finally, for $i\in I$ and $x,y\in G$ we have \begin{displaymath} e_i(-y+x) = \frac {e_i(x)}{e_i(y)} = \overline{e_i(y)} e_i(x) = e_{-i}(y) e_i(x)\, , \end{displaymath} and from this it is easy to derive both $\re e_i(-x) = \re e_i(x)$ and $\im e_i(-x) = -\im e_i(x)$, as well as \begin{align}\label{add-thm} \re e_i(-y+x) = \re e_i(x) \re e_i(y) + \im e_i(x)\im e_i(y)\, . \end{align} Note that for $i\in I$ with $i=-i$, the latter formula can be simplified using $\im e_i = 0$.
Now, it is well known that $([e_i]_\sim)_{i\in I}$ is an ONB of $\Lx 2 {G,\mathbb{C}}$, see e.g.~\citet[Corollary 4.26]{Folland95}, and using this fact a quick application of the Stone-Weierstrass theorem shows that $(e_i)_{i\in I}$ is also dense in $C(G, \mathbb{C})$. Let us construct a corresponding ONB in $\Lx 2 G$. To this end, we write $I_0 := \{i\in I: i=-i\}$ for the set of all \emph{self-inverse} elements of $I$. Moreover, we fix a partition $I_+\cup I_- = I\setminus I_0$ such that $i\in I_+$ implies $-i\in I_-$ for all $i \in I\setminus I_0$. Obviously, the sets $I_0, I_+, I_-$ form a partition of $I$. Let us now define the family $(e_i^*)_{i\in I}$ by \begin{equation}\label{real-onb-def} e_i^* := \begin{cases} \re e_i & \mbox{ if }\, i\in I_0 \\ \sqrt{2} \re e_i & \mbox{ if }\, i\in I_+ \\ \sqrt{2} \im e_i & \mbox{ if }\, i\in I_- \, . \end{cases} \end{equation} The next result shows that $(e_i^*)_{i\in I}$ is the desired family. \begin{lemma}\label{real-onb} Let $(G, +)$ be a compact Abelian metric group. Then each family $(e_i^*)_{i\in I}$ given by \eqref{real-onb-def} is an ONB of $\Lx 2 G$ and $\spann\{e_i^*: i\in I\}$ is dense in $C(G)$. Finally, we have $\inorm{e_i^*}\leq \sqrt 2$ for all $i\in I$. \end{lemma} In the following, we call a kernel $k$ on an Abelian group $(G,+)$ translation invariant, if there exists a function $\k:G\to \mathbb{R}$ such that $k(x,x') = \k(-x +x')$ for all $x,x'\in G$. Clearly, $k$ is continuous, if and only if $\k$ is. The following lemma provides a representation of translation invariant kernels. \begin{lemma}\label{real-bochner} Let $(G, +)$ be a compact Abelian group, $(e_i^*)_{i\in I}$ be a family of the form \eqref{real-onb-def}, and $k:G\times G\to \mathbb{R}$ be a bounded, measurable function. Then the following statements are equivalent: \begin{enumerate} \item $k$ is a bounded, measurable, and translation invariant kernel on $G$.
\item There exists a summable family $(\lambda_i)_{i\in I}\subset [0,\infty)$ such that \begin{equation}\label{Mercer-on-G} k(x,x') = \sum_{\lambda_i > 0} \lambda_i e_i^*(x) e_i^*(x') = \sum_{\lambda_i> 0} \lambda_i \re e_i(x-x')\, , \end{equation} where the series converge absolutely for all $x,x'\in G$ as well as uniformly in $(x,x')$. \end{enumerate} If one, and thus both, statements are true, then $k$ is continuous, $(\lambda_i)_{i\in I}$ are all, possibly vanishing, eigenvalues of $T_{k,\nu}$, and $(e_i^*)_{i\in I}$ is an ONB of the corresponding eigenfunctions. \end{lemma} For an interpretation of the representation \eqref{Mercer-on-G} recall that $\k(-x+x') = k(x,x')$ for all $x,x'\in G$ and hence \eqref{Mercer-on-G} gives \begin{displaymath} \k(x) = \sum_{\lambda_i> 0} \lambda_i \re e_i(-x) = \sum_{\lambda_i> 0} \lambda_i \re e_i(x) \end{displaymath} for all $x\in G$. Consequently, the second equality in \eqref{Mercer-on-G} is Bochner's theorem, see e.g.~\citet[Theorem 30.3]{HeRo70}, in the case of $\mathbb{R}$-valued kernels on compact Abelian groups. Unlike this classical theorem, however, the second equality in \eqref{Mercer-on-G} also describes how the representing measure of $\k$ on $\hat G$ is given by the eigenvalues of $T_k$ or $T_k^\mathbb{C}$. In the following, we do not need this information; in fact, we only mention the second equality to provide a link to the existing theory. Instead, the first equality in \eqref{Mercer-on-G}, which replaces the characters of $G$ by the eigenfunctions of $T_k$, is more important for us, since this equality actually is the Mercer representation of $k$ in the sense of \eqref{kernel_sum_cont} and therefore the theory developed in the previous sections becomes applicable. The next result is an extension of Theorem \ref{exists-univer-char-kern} to translation-invariant kernels on compact Abelian groups. \begin{theorem}\label{exist-univ-on-G} Let $(G,+)$ be a compact Abelian group.
Then the following statements are equivalent: \begin{enumerate} \item $G$ is metrizable. \item $\hat G$ is at most countable. \item There exists a translation-invariant universal kernel on $G$. \item There exists a universal kernel on $G$. \item There exists a translation-invariant characteristic kernel on $G$. \item There exists a continuous characteristic kernel on $G$. \end{enumerate} \end{theorem} Note that the equivalence between \emph{i)} and \emph{ii)} can also be shown without using translation-invariant kernels, see e.g.~\citet[Proposition 3]{Morris79a} or \citet[Theorem 24.15]{HeRo63}. Our proof, in contrast, is solely RKHS-based. Our next result characterizes universal and characteristic translation-invariant kernels on compact Abelian groups. In view of Theorem \ref{exist-univ-on-G}, it suffices to consider the metrizable case. \begin{corollary}\label{char-on-G-char} Let $(G,+)$ be a compact metrizable Abelian group and $k$ be a translation-invariant kernel on $G$ with representation \eqref{Mercer-on-G}. Then we have: \begin{enumerate} \item $k$ is universal if and only if $\lambda_i>0$ for all $i\in I$. \item $k$ is characteristic if and only if $\lambda_i>0$ for all $i\neq 0$. \end{enumerate} \end{corollary} Corollary \ref{char-on-G-char} generalizes \citet[Theorem 14 and Corollary 15]{SriperumbGrettonETAL2010} from $\mathbb{T}^d$ to arbitrary compact metrizable Abelian groups. Moreover, recall that these authors also provide a couple of translation-invariant characteristic kernels on $\mathbb{T}$ that enjoy a closed form. As mentioned in the beginning of this section, the major difficulty in deriving a Mercer representation \eqref{Mercer-on-G} for translation-invariant kernels is the handling of self-inverse characters other than the neutral element. The simplest example of a group $G$ whose dual $\hat G$ contains more than one self-inverse element is the quotient group $(\mathbb{Z}_2, +)$ of $(\mathbb{Z},+)$ with its subgroup $2\mathbb{Z}$.
Indeed, besides the neutral element $e_0$, $\hat \mathbb{Z}_2$ only contains the character $e_1$ given by $e_1(0) := 1$ and $e_1(1) := -1$. Clearly, this gives $e_1^2 = e_0$ and thus $e_1$ is self-inverse. Now note that a function $k:\mathbb{Z}_2\times \mathbb{Z}_2\to \mathbb{R}$ can be uniquely described by a 2-by-2 matrix $K = (k(x,x'))_{x,x'\in \mathbb{Z}_2}$ and a simple calculation shows that $k$ is a kernel if and only if $k(0,1) = k(1,0)$, $k(0,0)\geq 0$, $k(1,1) \geq 0$, and $k(0,0)k(1,1) \geq k^2(0,1)$. Moreover, $k$ is translation-invariant as soon as it is constant on its diagonal, and in this case the previous conditions reduce to \begin{equation}\label{discrete-trala} k(0,1) = k(1,0) \qquad \qquad \mbox{ and } \qquad \qquad k(0,0) = k(1,1) \geq |k(0,1)|\, . \end{equation} Now, let $k$ be a translation-invariant kernel on $\mathbb{Z}_2$ and $\lambda_0, \lambda_1\geq 0$ be the coefficients in \eqref{Mercer-on-G}. Then a simple calculation shows that the describing matrix $K$ is given by \begin{displaymath} K= \begin{pmatrix} \lambda_0+\lambda_1 & \lambda_0-\lambda_1 \\ \lambda_0-\lambda_1 & \lambda_0+\lambda_1 \end{pmatrix}\, , \end{displaymath} and therefore it is not hard to see by Corollary \ref{char-on-G-char} that $k$ is characteristic, if and only if $k(0,0) \neq k(0,1)$. Similarly, $k$ is universal, if and only if $k(0,0) \neq \pm k(0,1)$. While this example seems to be rather trivial, it already has some important applications. For example, assume that our input space $X$ is a product space for which some components belong to a compact metrizable Abelian group, while the remaining components are only allowed to attain the values $0$ and $1$. In other words, $X$ is of the form $X = G \times \mathbb{Z}_2^d$, where $G$ is a compact metrizable Abelian group and $d\geq 1$. 
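The $2\times 2$ computation above is easily verified in code; a small sketch (the coefficients $\lambda_0,\lambda_1$ are arbitrary illustrative values):

```python
import numpy as np

def z2_kernel(lam0, lam1):
    """Translation-invariant kernel on Z_2 with Mercer coefficients lam0, lam1.

    Here e_0^* = (1, 1) and e_1^* = (1, -1), so
    K[x, x'] = lam0 + lam1 * (-1)^(x + x'),
    i.e. the matrix [[lam0+lam1, lam0-lam1], [lam0-lam1, lam0+lam1]].
    """
    e0 = np.array([1.0, 1.0])
    e1 = np.array([1.0, -1.0])
    return lam0 * np.outer(e0, e0) + lam1 * np.outer(e1, e1)

K = z2_kernel(0.8, 0.3)
assert np.allclose(K, [[1.1, 0.5], [0.5, 1.1]])   # lam0 +/- lam1
# characteristic  <=>  lam1 > 0  <=>  k(0,0) != k(0,1)
# universal       <=>  lam0, lam1 > 0  <=>  k(0,0) != +-k(0,1)
assert K[0, 0] != K[0, 1]
```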
Now, an intuitive way to construct a (translation-invariant) characteristic kernel $k$ on $X$ is to take a product $k:= k_C\cdot k_D$, where $k_C$ and $k_D$ are kernels on $G$ and $\mathbb{Z}_2^d$, respectively. By Corollary \ref{product-char} we then know that $k$ is characteristic (or universal) if and only if both $k_C$ and $k_D$ are universal. Clearly, if $k_D$ is itself a product of kernels $k_1,\dots,k_d$ then $k_D$ is almost automatically translation-invariant and universal. Indeed, if all $k_i$ satisfy \eqref{discrete-trala} with $k_i(0,0) \neq \pm k_i(0,1)$, then each $k_i$ is translation-invariant and universal, and thus so is $k_D$. It seems fair to say that most ``natural'' choices of $k_i$ will satisfy these assumptions. On the other hand, translation-invariant universal kernels $k_C$ on $G$ are completely characterized by Corollary \ref{char-on-G-char}, and thus it is straightforward to characterize all translation-invariant characteristic kernels $k$ on $G\times \mathbb{Z}_2^d$ of product type $k:= k_C\cdot k_D$. However, their representation \eqref{Mercer-on-G} is a bit more cumbersome. Indeed, any element $(e, \omega)\in \hat G \times \hat \mathbb{Z}_2^d = \widehat{G \times \mathbb{Z}_2^d}$ with self-inverse $e\in \hat G$ and arbitrary $\omega \in \mathbb{Z}_2^d$ is self-inverse, where the equality in the sense of group isomorphisms can e.g.~be found in \citet[Proposition 4.6]{Folland95}. Consequently, the set $I_0$, which intuitively may be viewed as a small set of unusual characters, may actually be rather large. Note that the set $\mathbb{Z}_2$ appears in data analysis settings whenever we have categorical variables with two possible values, which quite frequently is indeed the case. Now we have seen around \eqref{discrete-trala} that most natural choices of kernels on $\mathbb{Z}_2$ are actually translation-invariant, and for these kernels the results of this subsection apply.
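The construction \eqref{real-onb-def} and the two sums in \eqref{Mercer-on-G} can also be checked numerically on a cyclic group. The following sketch works on $\mathbb{Z}_6$, whose self-inverse elements are $0$ and $3$; the coefficients $\lambda_i$ are arbitrary illustrative values, chosen symmetric ($\lambda_i = \lambda_{-i}$), as they are for a real translation-invariant kernel:

```python
import numpy as np

m = 6
x = np.arange(m)

def char(j):
    """Character e_j(x) = exp(2*pi*i*j*x/m) of the cyclic group Z_m."""
    return np.exp(2 * np.pi * 1j * j * x / m)

# I_0 = {0, 3} (self-inverse), I_+ = {1, 2}, I_- = {4, 5} = {-2, -1}
e_star = np.empty((m, m))
for j in (0, 3):
    e_star[j] = char(j).real
for j in (1, 2):
    e_star[j] = np.sqrt(2) * char(j).real
for j in (4, 5):
    e_star[j] = np.sqrt(2) * char(j).imag

# orthonormality in L^2(G) with normalized Haar (= counting/m) measure
gram = e_star @ e_star.T / m
assert np.allclose(gram, np.eye(m))

# the two sums in the Mercer representation agree for lam[j] == lam[-j]:
# sum_j lam_j e*_j(x) e*_j(x')  ==  sum_j lam_j Re e_j(x - x')
lam = np.array([1.0, 0.5, 0.25, 0.7, 0.25, 0.5])
K1 = e_star.T @ (lam[:, None] * e_star)
D = x[:, None] - x[None, :]
K2 = sum(lam[j] * np.cos(2 * np.pi * j * D / m) for j in range(m))
assert np.allclose(K1, K2)
```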
Similarly, if we have a categorical variable with an even number $m$ of possible values that have a cyclic nature, for example hours of a day or months of a year, then $\mathbb{Z}_m$ can be used to describe these values, and kernels that respect the cyclic nature are translation-invariant. Clearly $m/2$ is self-inverse in $\mathbb{Z}_m$, and therefore our theory again applies. For more on structural properties of compact Abelian groups as well as for further examples we refer to \citet[\S 25]{HeRo63} and \citet[Chapter 8]{HoMo13}. \section{Introduction}\label{sec:intro} Probabilistic forecasts of uncertain future events are issued in a wealth of applications, see \citet{GneitingKatzfuss2014} and the references therein. To assess the quality and to compare such forecasts, proper scoring rules are a well-established tool, see \citet{GneitingRaftery2007}, and in applications, it is usually even desirable to work with \emph{strictly} proper scoring rules. A broad class of proper scoring rules are so-called kernel scores, which are constructed using a positive definite kernel. Unfortunately, however, no general conditions are available to decide whether a given kernel induces a strictly proper kernel score. As detailed below in Theorem \ref{thm:one}, strict propriety of a kernel score is intimately connected to the kernel being characteristic, a notion that has been studied in the machine learning literature for a decade, see e.g.~\cite{GrBoRaScSm07a, FuSrGrSc09a, SriperumbGrettonETAL2010, SriperumbFukumizuETAL2011} as well as the recent survey of \cite{MuFuSrSc17a} and the references therein. In this paper, we study characteristic kernels on compact spaces extending results of \citet{MiXuZh06a} and \citet{SriperumbGrettonETAL2010,SriperumbFukumizuETAL2011}. As a consequence, we can characterize strictly proper kernel scores on compact Abelian groups and the practically highly relevant example of spheres. 
To describe our results in more detail, let us formally introduce some of the notions mentioned above. To this end, let $(X,\mathcal{A})$ be a measurable space and let $\mathcal{M}_1(X)$ denote the class of all probability measures on $X$. For $\mathcal{P}\subseteq \mathcal{M}_1(X)$, a \emph{scoring rule} is a function $S:\mathcal{P}\times X \to [-\infty,\infty]$ such that the integral $\int S(P,x) \,\mathrm{d} Q(x)$ exists for all $P, Q \in \mathcal{P}$. The scoring rule is called \emph{proper} with respect to $\mathcal{P}$ if \begin{equation}\label{eq:proper} \int S(P,x) \,\mathrm{d} P(x) \le \int S(Q,x) \,\mathrm{d} P(x), \quad \text{for all $P,Q \in \mathcal{P}$}, \end{equation} and it is called \emph{strictly proper} if equality in \eqref{eq:proper} implies $P=Q$. Recall that if the class $\mathcal{P}$ consists of absolutely continuous probability measures with respect to some $\sigma$-finite measure $\mu$ on $X$ then the logarithmic score $S(P,x) := -\log p(x)$, where $p$ is the density of $P$, is a widely used example of a strictly proper scoring rule for density forecasts. Another well-known example is the Brier score for distributions on $X = \{1,\dots,m\}$ that is defined as $S(P,i) := \sum_{j=1}^m p_j^2 + 1 - 2p_i$, where $p_i = P(\{i\})$, $i=1,\dots,m$. Finally, for $X = \mathbb{R}$, the continuous ranked probability score (CRPS) is given by \[ S(P,x) := \int_\mathbb{R} |y - x|\,\mathrm{d} P(y) - \frac{1}{2} \int_\mathbb{R} \int_\mathbb{R} |y - y'|\,\mathrm{d} P(y)\,\mathrm{d} P(y'). \] It is strictly proper with respect to the class of all probability measures with finite first moment, see e.g.~\citet[Section 4.2]{GneitingRaftery2007}, and consequently it can be used to evaluate predictions of density forecasts as well as probabilistic forecasts of categorical variables. Various other examples can be found in \citet{GneitingRaftery2007}. One general class of proper scoring rules are kernel scores. 
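For an empirical forecast $P=\frac1n\sum_{i=1}^n \delta_{y_i}$, the two integrals in the CRPS above become finite sums; a minimal sketch:

```python
import numpy as np

def crps_empirical(y, x):
    """CRPS of the empirical distribution of samples y at observation x:
    S(P, x) = mean_i |y_i - x| - 0.5 * mean_{i,j} |y_i - y_j|."""
    y = np.asarray(y, dtype=float)
    term1 = np.mean(np.abs(y - x))
    term2 = 0.5 * np.mean(np.abs(y[:, None] - y[None, :]))
    return term1 - term2

# A point forecast (all mass at 0) scores |x| at observation x
assert crps_empirical([0.0], 1.5) == 1.5
```

For a point forecast the second term vanishes and the CRPS reduces to the absolute error, which is why it is often read as a probabilistic generalization of that loss.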
To this end, let $k:X \times X \to \mathbb{R}$ be a symmetric function. We call $k$ a \emph{kernel}, if it is positive definite, that is, if \begin{equation}\label{posdef} \sum_{i,j=1}^n a_i a_j k(x_i,x_j) \ge 0 \end{equation} for all natural numbers $n$, all $a_1,\dots,a_n \in \mathbb{R}$, and all $x_1,\dots,x_n \in X$. It is strictly positive definite if equality in \eqref{posdef} implies $a_1=\dots=a_n=0$ whenever the points $x_1,\dots,x_n$ are mutually distinct. Let us assume that $k$ is measurable and define \begin{displaymath} \mathcal{M}_1^k(X):=\Bigl\{P \in \mathcal{M}_1(X)\;\bigl|\; \int_{X}\sqrt{k(x,x)} \,\mathrm{d} P(x) < \infty\Bigr\}. \end{displaymath} For a bounded kernel $k$, we have that $\mathcal{M}_1^k(X) = \mathcal{M}_1(X)$, and the Cauchy-Schwarz inequality $k(x,y) \le \sqrt{k(x,x)}\sqrt{k(y,y)}$ for kernels shows that, for all $P,Q \in \mathcal{M}_1^k(X)$, the kernel $k$ is integrable with respect to the product measure $P\otimes Q$. \begin{defn} The \emph{kernel score} $S_k$ associated with a measurable kernel $k$ on $X$ is the scoring rule $S_k:\mathcal{M}_1^k(X)\times X \to \mathbb{R}$ defined by \begin{equation*} S_k(P,x) := - \int k(\omega,x) \,\mathrm{d} P(\omega) + \frac{1}{2}\int \int k(\omega,\omega')\,\mathrm{d} P(\omega)\,\mathrm{d} P(\omega'). \end{equation*} \end{defn} Kernel scores are a broad generalization of the CRPS, and in fact, also the Brier score can be rewritten as a kernel score, see \citet[Section 5.1]{GneitingRaftery2007}. However, the logarithmic score does not belong to this class. If $X$ is a Hausdorff space and $k$ is continuous, then \citet[Theorem 4]{GneitingRaftery2007} show that $S_k$ is proper with respect to all Radon probability measures on $X$. Their result is based on \citet[Theorem 2.1, p.~235]{BergChristensETAL1984}, where it is fundamental that the kernel is continuous.
In this respect we remark that the definition of a kernel score of \citet{GneitingRaftery2007} is more general than ours as it allows for kernels being only conditionally positive definite. While this level of generality is fruitful for example in the case $X = \mathbb{R}^d$, we believe that it is sufficient to consider only positive definite kernels for compact spaces. Indeed, note that if $X$ is compact and separable and there is a strictly positive probability measure $\nu$ on its Borel sets, then, if we only consider kernels $k$ such that $\int_X k(x,y)\,\mathrm{d} \nu(x)$ does not depend on $y$, we know by \citet[Theorem 2]{Bochner1941} that conditionally positive definite kernels and positive definite kernels are the same up to a constant. In our framework, it is possible to show propriety of the kernel score without requiring continuity of the kernel using the theory of reproducing kernel Hilbert spaces (RKHS); see Theorem \ref{thm:one} below. In addition, we obtain a condition for when the kernel score is actually strictly proper. \begin{thm}\label{thm:one} Let $k$ be a measurable kernel with RKHS $H$ with norm $\|\cdot\|_H$ and $\Phi: \mathcal{M}_1^k(X) \to H$ be the kernel embedding defined by \begin{equation}\label{eq:inj} \Phi(P) := \int k(\cdot,\omega)\,\mathrm{d} P(\omega)\, . \end{equation} Then the kernel score satisfies \begin{equation}\label{main-obs} \mnorm{\P(P)-\P(Q)}_H^2 = 2 \left(\int S_k(Q,x)\,\mathrm{d} P(x) - \int S_k(P,x)\,\mathrm{d} P(x)\right) \end{equation} for all $P,Q\in \mathcal{M}_1^k(X)$. In particular, $S_k$ is a proper scoring rule with respect to $\mathcal{M}_1^k(X)$, and it is strictly proper if and only if $\P$ is injective. \end{thm} In the machine learning literature a bounded measurable kernel is called \emph{characteristic} if the kernel (mean) embedding $\Phi: \mathcal{M}_1(X) \to H$ defined by \eqref{eq:inj} is injective. 
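For measures supported on finitely many points, the identity \eqref{main-obs} can be verified by direct summation: writing $p,q$ for the weight vectors and $K$ for the Gram matrix of the support, the left-hand side is $(p-q)^\top K(p-q)$. A sketch with a Gaussian kernel (the kernel choice is only illustrative; any bounded measurable kernel works the same way):

```python
import numpy as np

rng = np.random.default_rng(1)
xs = rng.normal(size=6)            # common finite support of P and Q
p = rng.dirichlet(np.ones(6))      # weights of P
q = rng.dirichlet(np.ones(6))      # weights of Q

def k(x, y):
    return np.exp(-0.5 * (x - y) ** 2)

K = k(xs[:, None], xs[None, :])    # Gram matrix on the support

def score(w, x_idx):
    """Kernel score S_k(P, x) for a discrete P with weight vector w."""
    return -K[:, x_idx] @ w + 0.5 * w @ K @ w

# left-hand side: ||Phi(P) - Phi(Q)||_H^2 = (p-q)^T K (p-q)
lhs = (p - q) @ K @ (p - q)
# right-hand side: 2 * (E_P S_k(Q, .) - E_P S_k(P, .))
rhs = 2 * sum(p[i] * (score(q, i) - score(p, i)) for i in range(6))
assert abs(lhs - rhs) < 1e-10
```

Expanding the sums gives $2\bigl(\mathbb{E}_P S_k(Q,\cdot)-\mathbb{E}_P S_k(P,\cdot)\bigr) = p^\top Kp - 2p^\top Kq + q^\top Kq$, which is exactly $(p-q)^\top K(p-q)$, so the numerical check mirrors the algebra behind the identity.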
Consequently, Theorem \ref{thm:one} shows that, for bounded measurable kernels $k$, strictly proper kernel scores $S_k$ are exactly those for which $k$ is characteristic. In particular, the wealth of examples and conditions for characteristic kernels can be directly used to find new strictly proper scoring rules, and vice versa. While Theorem \ref{thm:one} is an interesting observation for both machine learning applications and probabilistic forecasting, its proof is actually rather trivial. In the rest of the paper, we therefore focus on more involved aspects of characteristic kernels and, equivalently, strictly proper kernel scores. We introduce the necessary mathematical machinery on kernels and their interaction with (signed) finite measures in Section \ref{sec:prelim}. In particular, we recall that a bounded measurable kernel $k$ with RKHS $H$ induces a semi-norm $\|\cdot\|_H$ on $\mathcal{M}(X)$, the space of all finite signed measures on $X$, via the kernel mean embedding, that is, via the left-hand side of \eqref{main-obs}. In Section \ref{sec:general}, we study this semi-norm for general $X$. In particular, Theorem \ref{norm-equivalence} shows that for injective kernel embeddings, $\|\cdot\|_H$ fails to be equivalent to the total variation norm on $\mathcal{M}(X)$ if and only if $\dim \mathcal{M}(X)=\infty$, and Corollary \ref{different-distances-cor} gives an even sharper result on $\mathcal{M}_1(X)$. In view of \eqref{main-obs}, these results show that the value on the right-hand side of \eqref{main-obs} is not proportional to the (squared) total variation norm. Besides some structural results on characteristic kernels, see Lemmas \ref{char-add} and \ref{product-form}, we further present a simple computation of the left-hand side of \eqref{main-obs} in terms of eigenvalues and eigenfunctions of a suitable integral operator.
In Section \ref{sec:lcs}, we then exploit our general theory to obtain new results for bounded continuous kernels on locally compact Hausdorff spaces. The main question of interest is when such kernels are universal or characteristic. In Theorem \ref{thm-new-char-3} and Corollary \ref{uni-cor} we give characterization results in terms of eigenfunctions of certain integral operators. We also provide insight concerning the difference between considering Borel or Radon measures on locally compact Hausdorff spaces in the study of kernel embeddings, see Theorems \ref{exist-sipd} and \ref{thm-new-char-2}. As a result, it turns out in Theorem \ref{exists-univer-char-kern} that continuous characteristic kernels on compact Hausdorff spaces only exist if the spaces are metrizable. In Section \ref{sec:structure}, we apply the characterization results of Section \ref{sec:lcs} to translation-invariant kernels on compact Abelian groups and to isotropic kernels on spheres. All proofs can be found in Section \ref{sec:proofs}. \section{New Characterizations} In this section we first compare the norms $\hnorm\cdot$ and $\tvnorm\cdot$ and show that in the infinite-dimensional case they are never equivalent. By establishing some structural results for characteristic kernels, we then demonstrate that characteristic kernels cannot reliably distinguish between distributions that are far apart with respect to $\tvnorm\cdot$. We further relate the eigenfunctions of the kernel to the metric $\gamma_k$, and with the help of this relation we investigate continuous kernels on (locally) compact spaces. Finally, we characterize on which compact spaces $X$ there do exist characteristic kernels. \subsection{General results}\label{sec:general} In this subsection we investigate the semi-norm $\hnorm \cdot$ on $\sosbm X$ for bounded kernels and general $X$. We begin with a result that compares $\hnorm \cdot$ with $\tvnorm\cdot$.
\begin{theorem}\label{norm-equivalence} Let $(X,\ca A)$ be a measurable space and $H$ be the RKHS of a bounded and measurable kernel $k$ on $X$ such that the kernel embedding $\P:\sosbm X\to H$ is injective. Then, the following statements are equivalent: \begin{enumerate} \item The space $\sosbm X$ is finite dimensional. \item The norms $\hnorm \cdot$ and $\tvnorm\cdot$ on $\sosbm X$ are equivalent. \item The norm $\hnorm \cdot$ on $\sosbm X$ is complete. \item The kernel embedding $\P:\sosbm X\to H$ is surjective. \end{enumerate} \end{theorem} Theorem \ref{norm-equivalence} shows that in most cases of interest $(\sosbm X,\hnorm\cdot)$ is \emph{not} a Hilbert space. To illustrate the fourth statement of Theorem \ref{norm-equivalence}, recall that the space \begin{displaymath} H_{\mathrm{pre}} := \biggl\{ \P(\mu): \mu \in \spann\{\d_x: x\in X \} \biggr\} \end{displaymath} is dense in $H$, see e.g.~\citet[Theorem 4.21]{StCh08}. Moreover, the space $\spann\{\d_x: x\in X \}$ is, in a weak sense, dense in $\ca M(X)$, and therefore it is natural to ask whether every $f\in H$ is of the form $f=\P(\mu)$ for some $\mu \in \sosbm X$. Theorem \ref{norm-equivalence} tells us that the answer is no, unless $\sosbm X$ is finite dimensional. In this respect recall that it has recently been mentioned by \citet{SGScXXa} that the kernel embedding $\P$ is, in general, not surjective. However, the authors do not provide any example of, or conditions for, non-surjective $\P$. Two \emph{examples} of non-surjective kernel embeddings $\P$ are provided by \citet[Section 3]{PiWuLiMuWo07a}, while our Theorem \ref{norm-equivalence} shows that actually \emph{all} injective $\P$ fail to be surjective whenever we have $\dim \sosbm X = \infty$. Our next goal is to show that for characteristic kernels in the case $\dim \sosbm X = \infty$ there always exist probability measures that have maximal $\tvnorm\cdot$-distance but arbitrarily small $\hnorm\cdot$-distance.
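Before turning to the preparatory results, let us illustrate this goal numerically; the sketch below is our own illustration with a Gaussian kernel and is not used in the sequel. The distributions $P_n$ and $Q_n$ are uniform on interleaved grids of $[0,1]$, so their supports are disjoint and $\tvnorm{P_n-Q_n}=2$ for every $n$, whereas $\hnorm{P_n-Q_n}$ shrinks with the grid spacing.

```python
import numpy as np

# Our own numerical illustration (Gaussian kernel; not part of the text):
# P_n and Q_n are uniform on interleaved grids of [0, 1].  Their supports
# are disjoint, so ||P_n - Q_n||_TV = 2 for every n, while the kernel
# distance gamma_k(P_n, Q_n) vanishes as the grids refine.
def mmd(a, b, gamma=1.0):
    k = lambda u, v: np.exp(-gamma * np.subtract.outer(u, v) ** 2)
    return float(np.sqrt(max(k(a, a).mean() - 2 * k(a, b).mean()
                             + k(b, b).mean(), 0.0)))

def grids(n):
    p = np.arange(n) / n            # atoms of P_n, uniform weights 1/n
    q = p + 0.5 / n                 # atoms of Q_n, shifted off the grid
    return p, q
```

Refining the grid from $n=10$ to $n=1000$ leaves the total variation distance at its maximal value while the kernel distance drops by roughly two orders of magnitude.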
To this end, we need a couple of preparatory results. We begin with the following lemma, which investigates the consequences of having $\boldsymbol{1}_X\in H$. \begin{lemma}\label{m0-props} Let $(X,\ca A)$ be a measurable space and $H$ be the RKHS of a bounded and measurable kernel $k$ on $X$. If $\boldsymbol{1}_X\in H$, then $\sosbmh 0 {} X$ is $\hnorm\cdot$-closed in $\sosbm X$, and if $k$ is, in addition, characteristic, then the kernel embedding $\P:\sosbm X\to H$ is injective. \end{lemma} The next simple lemma computes the $\hnorm\cdot$-norm of measures when the kernel is the sum of two kernels. \begin{lemma}\label{char-add} Let $(X,\ca A)$ be a measurable space, and $k_1$, $k_2$ be bounded measurable kernels on $X$ with RKHSs $H_1$ and $H_2$. Let $H$ be the RKHS of the kernel $k=k_1+k_2$. Then for all $\mu\in \sosbm X$ we have \begin{displaymath} \hnorm \mu^2 = \snorm \mu_{H_1}^2 + \snorm \mu_{H_2}^2\, . \end{displaymath} In particular, if $k_1$ is characteristic or has an injective kernel embedding, then the same is true for $k$. \end{lemma} In \citet[Corollary 11]{SriperumbGrettonETAL2010} it has already been shown that the sum of two bounded, continuous translation-invariant kernels on $\mathbb{R}^d$ is characteristic if at least one summand is characteristic. Lemma \ref{char-add} shows that this kind of inheritance holds in the general case. Our next lemma considers products of kernels. In particular, it shows that such products can only be characteristic if the involved factors are strictly integrally positive definite. \begin{lemma}\label{product-form} Let $(X_1,\ca A_1)$ and $(X_2,\ca A_2)$ be measurable spaces and $k_1$, $k_2$ be bounded, measurable kernels on $X_1$ and $X_2$, respectively. We denote the RKHSs of $k_1$ and $k_2$ by $H_1$ and $H_2$. Moreover, let $H$ be the RKHS of the kernel $k:= k_1 \cdot k_2$ on $X_1\times X_2$.
Then, for all $\mu_1\in \sosbm {X_1}$ and $\mu_2\in \sosbm{X_2}$ we have \begin{displaymath} \hnorm{\mu_1\otimes \mu_2} = \snorm{\mu_1}_{H_1} \cdot \snorm{\mu_2}_{H_2}\, . \end{displaymath} In particular, if $\dim \sosbm{X_1} \geq 2$ and $\dim \sosbm{X_2}\geq 2$, and $k$ is characteristic, then $k_1$ and $k_2$ are strictly integrally positive definite with respect to $\sosbm {X_1}$ and $\sosbm {X_2}$, respectively. \end{lemma} At first glance it seems that Lemma \ref{product-form} contradicts \citet[Corollary 11]{SriperumbGrettonETAL2010}, which shows that the product of two bounded, continuous, translation-invariant kernels on $\mathbb{R}^d$ is characteristic on $\mathbb{R}^d$ as soon as at least one factor is characteristic. However, a closer look reveals that their result considers the restriction of the product to the diagonal, whereas we treat the unrestricted kernel. Later, in Corollary \ref{product-char}, we will see that, on compact spaces, the product of two strictly integrally positive definite kernels gives a strictly integrally positive definite kernel on the product space. The following lemma compares strictly integrally positive definite kernels with respect to $\sosbm X$ and $\sosbmh 0 {}X$. It has already been used implicitly in the literature, and a similar statement can be found in \citet[Theorem 32]{SGScXXa}. \begin{lemma}\label{plusone-sipd} Let $(X,\ca A)$ be a measurable space, and $k$ be a bounded measurable kernel on $X$. Moreover, let $\ca M\subset \sosbm X$ be a subspace with $\ca M\cap \sosbmh 1 {}X\neq \emptyset$ and $\ca M_0 := \ca M \cap \sosbmh 0 {} X$. Then the following statements are equivalent: \begin{enumerate} \item $k$ is strictly integrally positive definite with respect to $\ca M_0$. \item $k+1$ is strictly integrally positive definite with respect to $\ca M$. \item $k+1$ is strictly integrally positive definite with respect to $\ca M_0$.
\end{enumerate} \end{lemma} We have already seen in Theorem \ref{norm-equivalence} that in the infinite-dimensional case the norms $\hnorm\cdot$ and $\tvnorm\cdot$ are not equivalent on $\sosbm X$. Intuitively, this carries over to the subspace $\sosbmh 0{}X$. The following result confirms this intuition as long as $\sosbmh 0{}X$ is a $\hnorm\cdot$-closed subspace of $\sosbm X$. \begin{theorem}\label{non-equivalence} Let $(X,\ca A)$ be a measurable space such that $\dim \sosbm X = \infty$ and $H$ be the RKHS of a bounded and measurable kernel $k$ on $X$ such that the kernel embedding $\P:\sosbm X\to H$ is injective. If $\sosbmh 0{}X$ is a $\hnorm\cdot$-closed subspace of $\sosbm X$, then $\hnorm\cdot$ and $\tvnorm\cdot$ are not equivalent on $\sosbmh 0 {} X$. \end{theorem} The non-equivalence of $\hnorm \cdot$ and $\tvnorm\cdot$ on $\sosbmh 0{} X$ has already been observed in some particular situations. For example, \citet[Theorem 23]{SriperumbGrettonETAL2010} show that for universal kernels on compact metric spaces, $\gamma_k$ metrizes the weak topology (in probabilist's terminology) on $\sobpm X$, and since for $\dim \sosbm X = \infty$ this weak topology is strictly coarser than the $\tvnorm\cdot$-topology, we see that $\hnorm \cdot$ and $\tvnorm\cdot$ cannot be equivalent for such kernels. In addition, the non-equivalence can also be obtained from \citet[Theorems 21 and 24]{SriperumbGrettonETAL2010} for other continuous kernels on certain metric spaces. Finally recall that $\sosbmh 0{}X$ is a $\hnorm\cdot$-closed subspace of $\sosbm X$ if $\boldsymbol{1}_X\in H$ by Lemma \ref{m0-props}. With the help of Theorem \ref{non-equivalence} the next result shows that characteristic kernels cannot reliably distinguish between distributions that are far away in total variation norm. \begin{theorem}\label{different-distances} Let $(X,\ca A)$ be a measurable space such that $\dim \sosbm X = \infty$ and $H$ be the RKHS of a characteristic kernel $k$ on $X$. 
Then for all $\varepsilon>0$ there exist distributions $Q_1,Q_2\in \sosbmh 1 {} X$ such that $\tvnorm{Q_1-Q_2} = 2$ and $\hnorm{Q_1-Q_2}\leq \varepsilon$. \end{theorem} Theorem \ref{different-distances} only shows that there are some distributions that cannot be reliably distinguished. The following corollary shows that such distributions actually occur everywhere. \begin{corollary}\label{different-distances-cor} Let $(X,\ca A)$ be a measurable space such that $\dim \sosbm X = \infty$ and $H$ be the RKHS of a characteristic kernel $k$ on $X$. Then for all $P\in \sosbmh 1 {} X$, $\d\in (0,2]$, and $\varepsilon\in (0,\d)$ there exist $Q_1,Q_2\in \sosbmh 1 {} X$ such that $\tvnorm{P-Q_i}\leq \d$ for $i=1,2$, $\tvnorm{Q_1-Q_2} = \d$, and $\hnorm{Q_1-Q_2}\leq \varepsilon$. \end{corollary} The next goal of this subsection is to investigate the $\hnorm\cdot$-norm with the help of the eigenvalues and eigenfunctions of the integral operator $T_{k,\nu}$. We begin with the following lemma, which computes the inner product \eqref{hinner} in terms of these eigenvalues and eigenfunctions. \begin{lemma}\label{square-integral-formula} Let $(X,\ca A, \nu)$ be a $\sigma$-finite measure space and $k$ be a bounded, measurable kernel with RKHS $H$ for which $\Ikx \nu:H\to \Lx 2 \nu$ is compact and injective. Then, for all $\mu_1, \mu_2 \in \sosbm X$, we have \begin{displaymath} \int_X\int_X k(x,x') \, \,\mathrm{d}\mu_1(x)\,\mathrm{d}\mu_2(x') = \sum_{i\in I} \lambda_i \cdot \Bigl(\int_X e_i \, \,\mathrm{d}\mu_1 \Bigr) \cdot \Bigl(\int_X e_i \, \,\mathrm{d} \mu_2 \Bigr)\, , \end{displaymath} where $(\lambda_i)_{i \in I}\subset (0,\infty)$ and $(e_i)_{i \in I} \subset H$ are as at \eqref{kernel_sum_cont}. \end{lemma} For an interpretation of this lemma, we write, for bounded measurable $f:X\to \mathbb{R}$ and $\mu\in \sosbm X$, \begin{displaymath} \langle f, \mu\rangle := \int_X f\, \,\mathrm{d}\mu \, .
\end{displaymath} Combining Lemma \ref{square-integral-formula} with \eqref{hinner}, we then have \begin{displaymath} \bigl\langle \P(\mu_1) , \P(\mu_2) \bigr\rangle_H = \sum_{i\in I} \lambda_i \langle e_i, \mu_1\rangle \langle e_i, \mu_2\rangle \, . \end{displaymath} In other words, all calculations regarding inner products and norms of the kernel embedding $\mu\mapsto \P(\mu)$ can be carried over to a weighted $\ell_2$-space. To formulate the following theorem, we denote, for a $\sigma$-finite measure $\nu$ on $(X,\ca A)$, the set of all $\nu$-probability densities contained in $\Lx 2 \nu$ by $\Delta(\nu)$, that is, \begin{displaymath} \Delta(\nu):= \Bigl\{ \aec h\in \Lx 2 \nu \cap \Lx 1 \nu: \aec h\geq 0 \mbox{ and } \int_X h\,\mathrm{d}\nu = 1 \Bigr\}\, . \end{displaymath} Moreover, we write $\ca P_2(\nu) := \{ h \,\mathrm{d}\nu: \aec h\in \Delta(\nu)\}$ for the corresponding set of probability measures. With the help of these preparations we can now formulate the following theorem, which characterizes non-characteristic kernels on $\ca P_2(\nu)$ and also establishes a result similar to Theorem \ref{different-distances} for non-characteristic kernels. \begin{theorem}\label{kernel-metric-thm} Let $(X,\ca A, \nu)$ be a $\sigma$-finite measure space and $k$ be a bounded, measurable kernel with RKHS $H$ for which $\Ikx \nu:H\to \Lx 2 \nu$ is compact and injective. Then for all $\aec h,\aec g \in \Delta(\nu)$ and $P:= h\,\mathrm{d}\nu$, $Q:=g\,\mathrm{d}\nu$ the kernel mean distance can be computed as \begin{equation}\label{kernel-metric-thm-hxx} \gamma_k^2(P,Q) = \sum_{i\in I} \lambda_i \langle \aec{h-g}, [e_i]_\sim\rangle_{\Lx 2 \nu}^2\, . \end{equation} Moreover, the following statements are equivalent: \begin{enumerate} \item There exist $Q_1,Q_2\in \ca P_2(\nu)$ with $Q_1\neq Q_2$ and $\gamma_k^2(Q_1,Q_2)=0$. \item There exists an $\aec f\in \Lx 1 \nu \cap [H]_\sim^\perp$ with $\aec f \neq 0$ and $\int_X f \,\mathrm{d}\nu = 0$.
\item There exist $\aec{h_1},\aec{h_2}\in \Delta(\nu)$ with $\aec {h_1}\neq \aec{h_2}$ such that for all $i\in I$ we have \begin{displaymath} \langle \aec{h_1}, [e_i]_\sim\rangle_{\Lx 2 \nu} = \langle \aec{h_2}, [e_i]_\sim\rangle_{\Lx 2 \nu}\, . \end{displaymath} \end{enumerate} Moreover, if one, and thus all, statements are true, we actually find for all $P\in \ca P_2(\nu)$ and $\varepsilon\in (0,2)$ some $Q_1,Q_2\in \ca P_2(\nu)$ with $\tvnorm {P-Q_i}\leq \varepsilon$, $\tvnorm{Q_1-Q_2}=\varepsilon$, and $\gamma_k^2(Q_1,Q_2)=0$. \end{theorem} Equation \eqref{kernel-metric-thm-hxx} can also be used to show that under certain circumstances $\hnorm\cdot$ cannot reliably identify, for example, the uniform distribution. The following result, which is particularly interesting in view of Section \ref{sec:structure}, illustrates this. \begin{corollary}\label{no-uniform} Let $(X,\ca A, \nu)$ be a probability space and $k$ be a bounded, measurable kernel with RKHS $H$ for which $\Ikx \nu:H\to \Lx 2 \nu$ is compact and injective. Assume that there is one eigenfunction $e_{i_0}$ with $e_{i_0} = \boldsymbol{1}_X$. In addition, assume that there are constants $c_1>0$ and $c_\infty<\infty$ with $\snorm{e_i}_{\Lx 1 \nu}\geq c_1$ and $\inorm{e_i}\leq c_\infty$ for all $i\in I$. For $\a := c_\infty^{-1}$ and $j\neq i_0$ consider the signed measure $Q_{j} := (\boldsymbol{1}_X + \a e_j) \,\mathrm{d}\nu$. Then $Q_{j}$ is actually a probability measure, and for $P:= \nu$ we have \begin{align*} \tvnorm{P-Q_j} &\geq c_1 c_\infty^{-1} \\ \hnorm{P-Q_j}^2 & = \lambda_j c_\infty^{-2}\, . \end{align*} \end{corollary} The last result of this subsection provides some necessary conditions for characteristic kernels. \begin{corollary}\label{cor:codim} Let $(X,\ca A, \nu)$ be a finite measure space and $k$ be a bounded, measurable kernel with RKHS $H$ for which $\Ikx \nu:H\to \Lx 2 \nu$ is compact and injective.
Then the following statements are true: \begin{enumerate} \item If $\codim [H]_\sim \geq 2$ in $\Lx 2 \nu$, then $k$ is not characteristic. \item If $\codim [H]_\sim \geq 1$ in $\Lx 2 \nu$ and $\boldsymbol{1}_X\in H$, then $k$ is not characteristic. \end{enumerate} \end{corollary} \subsection{Continuous Kernels on Locally Compact Spaces}\label{sec:lcs} In this subsection, we apply the general theory developed so far to bounded continuous kernels on locally compact Hausdorff spaces $(X,\t)$. Let us begin with some preparatory remarks. To this end, let $k$ be a bounded and continuous kernel on $X$ whose RKHS $H$ satisfies $H\subset C_0(X)$. In the following we call such a $k$ a $C_0(X)$-kernel. Our goal in this section is to investigate when $C_0(X)$-kernels are universal or characteristic. We begin with the following result, which provides a necessary condition for the existence of strictly integrally positive definite kernels. \begin{theorem}\label{exist-sipd} Let $(X,\t)$ be a locally compact Hausdorff space with $\sosrm X \neq \sosbm X$. Then no $C_0(X)$-kernel is strictly integrally positive definite with respect to $\sosbm X$. \end{theorem} Note that \citet{SriperumbFukumizuETAL2011} restrict their considerations to characteristic kernels on locally compact \emph{Polish} spaces, for which we automatically have $\sosrm X = \sosbm X$ by Ulam's theorem, see e.g.~\citet[Lemma 26.2]{Bauer01}. Some other papers, however, do not carefully distinguish between Borel and Radon measures, which ultimately means that their results only hold if one additionally assumes $\sosrm X = \sosbm X$. Theorem \ref{exist-sipd} shows that this restriction is natural, and actually no restriction at all. Furthermore, note that for compact spaces $X$ one can use Theorem \ref{exist-sipd} to show that $\sosrm X = \sosbm X$ is necessary for the existence of characteristic kernels.
We skip such a result since later, in Theorem \ref{exists-univer-char-kern}, we will be able to show an even stronger result. Before we formulate our next result we need a bit more preparation. To this end, let $k$ be a $C_0(X)$-kernel on a locally compact space $(X,\t)$. Then we have $H\subset C_0(X)$, and a quick closed-graph argument shows that the corresponding inclusion operator $I:H\to C_0(X)$ is bounded. By the identification $C_0(X)'= \sosrm X$ in \eqref{riesz-repres} and the simple calculation \begin{displaymath} \langle Ih, \mu\rangle_{C_0(X),\sosrm X} = \int_X h\,\mathrm{d}\mu = \int_X \langle h, k(x,\cdot)\rangle_H\,\mathrm{d}\mu(x) = \langle h, \P(\mu)\rangle_H\, , \end{displaymath} which holds for all $h\in H$ and $\mu\in \sosrm X$, we further find that the adjoint $I'$ of $I$ is given by $I'= \P$. This simple observation leads to the following characterization, which has already been shown for compact spaces $X$ by \citet[Proposition 1]{MiXuZh06a} and for locally compact Polish spaces by \citet[Proposition 4]{SriperumbFukumizuETAL2011}. Although the proof of the latter paper also works on general locally compact Hausdorff spaces, we decided to add the few lines for the sake of completeness. \begin{theorem}\label{thm-new-char-2} Let $(X,\t)$ be a locally compact Hausdorff space and $k$ be a $C_0(X)$-kernel. Then the following two statements are equivalent: \begin{enumerate} \item $k$ is strictly integrally positive definite with respect to $\sosrm X$. \item $k$ is universal. \end{enumerate} \end{theorem} With the help of Theorem \ref{thm-new-char-2} we can now show that for characteristic kernels on compact spaces $X$ it suffices to consider metrizable $X$. A similar result for universal kernels, which is included in the following theorem, has already been derived by \citet{StHuSc06a}.
\begin{theorem}\label{exists-univer-char-kern} For a compact topological Hausdorff space $(X,\t)$ the following statements are equivalent: \begin{enumerate} \item There exists a universal kernel $k$ on $X$. \item There exists a continuous characteristic kernel $k$ on $X$. \item $X$ is metrizable, i.e.~there exists a metric generating the topology $\t$. \end{enumerate} If one and thus all statements are true, $(X,\t)$ is a compact Polish space and $\sosrm X = \sosbm X$. \end{theorem} Theorem \ref{exists-univer-char-kern} shows that on compact spaces we may only expect universal or characteristic kernels if the topology is metrizable. Since in this case we have $\sosrm X = \sosbm X$, Theorem \ref{thm-new-char-2} and Proposition \ref{M0-char} show the well-known result that every universal kernel is characteristic. In general, the converse implication is not true, but under some additional structural requirements, both notions may coincide. The following corollary illustrates this by showing that for product kernels the notions of universal and characteristic kernels coincide. \begin{corollary}\label{product-char} Let $(X_1,\t_1)$ and $(X_2,\t_2)$ be non-trivial compact metrizable spaces and $k_1$, $k_2$ be continuous kernels on $X_1$ and $X_2$, respectively. For the kernel $k:= k_1 \cdot k_2$ on $X_1\times X_2$ the following statements are then equivalent: \begin{enumerate} \item $k$ is universal. \item $k$ is characteristic. \item $k_1$ and $k_2$ are universal. \end{enumerate} \end{corollary} Our next theorem, which provides a characterization of universal kernels with the help of the eigenfunctions of the integral operator $T_{k,\nu}$, is an extension of \citet[Corollary 5]{MiXuZh06a} from compact to arbitrary locally compact Hausdorff spaces. Before we present it, let us first make some preparatory remarks. To this end, let $\nu$ be a strictly positive and $\sigma$-finite Borel measure on $X$.
For the sake of concreteness, note that if $X$ contains a dense, countable subset $(x_i)_{i\geq 1}$, then $\nu:=\sum_{i\geq 1} \d_{x_{i}}$ satisfies these assumptions, and therefore we always find such measures on e.g.~compact metric spaces. Now, let $k$ be a bounded and continuous kernel on $X$ satisfying \eqref{int-diag}. Then $H$ consists of continuous functions and \citet[Corollary 3.5]{StSc12a} show that \eqref{kernel_sum_cont} holds for all $x,x'\in X$, and consequently, the assumptions of Lemma \ref{square-integral-formula}, Theorem \ref{kernel-metric-thm}, and Corollary \ref{cor:codim} are satisfied. With these preparations we can now formulate the following characterization of universal kernels, where we note that the equivalence between \emph{i)} and \emph{ii)} has essentially been shown in \citet[Proposition 12]{SrFuLa10a}. \begin{theorem}\label{thm-new-char-3} Let $(X,\t)$ be a locally compact Hausdorff space, $\nu$ be a strictly positive, $\sigma$-finite Borel measure on $X$, and $k$ be a $C_0(X)$-kernel satisfying \eqref{int-diag}. In addition, let $(e_i)_{i\in I}$ be the eigenfunctions of $T_{k,\nu}$ in \eqref{kernel_sum_cont}. Then the following statements are equivalent: \begin{enumerate} \item $k$ is universal. \item For all $\mu\in \sosrm X$ satisfying $\int_X e_i \,\mathrm{d}\mu =0$ for all $i\in I$ we have $\mu=0$. \item The space $\spann\{ e_i: i\in I\}$ is dense in $C_0(X)$. \end{enumerate} If one, and thus all, statements are true and $\nu\in \sosbmh {}*X$, then $([e_i]_\sim)_{i\in I}$ is an ONB of $\Lx 2 \nu$. \end{theorem} Our next result characterizes universal and characteristic kernels on compact spaces with the help of the eigenfunctions and eigenvalues of a suitable $T_{k,\nu}$. In view of Theorem \ref{exists-univer-char-kern} it suffices to consider compact spaces that are Polish. \begin{corollary}\label{uni-cor} Let $(X,\t)$ be a compact metrizable space and $k$ be a continuous kernel with RKHS $H$.
Moreover, let $(\lambda_i)_{i\in I}\subset [0,\infty)$ be a family converging to $0$ and $(e_i)_{i\in I} \subset C(X)$ be a family such that $\spann\{e_i:i\in I\} $ is dense in $C(X)$ and \begin{displaymath} k(x,x') = \sum_{i\in I} \lambda_i e_i(x)e_i(x') \end{displaymath} holds for all $x,x'\in X$. If there is a strictly positive, finite and regular Borel measure $\nu$ on $X$ such that $([e_i]_\sim)_{i\in I}$ is an ONS in $\Lx 2 \nu$, then: \begin{enumerate} \item $k$ is universal if and only if $\lambda_i>0$ for all $i\in I$. \item If $e_{i_0}=\boldsymbol{1}_X$ for some $i_0\in I$, then $k$ is characteristic if and only if $\lambda_i >0$ for all $i\neq i_0$. \item If $\boldsymbol{1}_X \in H$ and $e_{i}\not=\boldsymbol{1}_X$ for all $i\in I$, then $k$ is characteristic if and only if $\lambda_i >0$ for all $i\in I$. \end{enumerate} \end{corollary} \section{Preliminaries}\label{sec:prelim} In this section we recall some facts about reproducing kernels and their interaction with measures. To this end, let $(X,\ca A)$ be a measurable space. We denote the space of finite signed measures on $X$ by $\sosbm X$ and write $\sobm X$ and $\sobpm X$ for the subsets of all (non-negative) finite measures and all probability measures, respectively. Moreover, we write \begin{displaymath} \sosbmh 0 {} X := \bigl\{ \mu \in \sosbm X: \mu(X) = 0\bigr\}\, . \end{displaymath} As usual, we equip $\sosbm X$ and its subsets above with the total variation norm $\tvnorm\cdot$. Recall that $\tvnorm\cdot$ is complete, and hence $\sosbm X$ is a Banach space. Moreover, $\sosbmh 0 {} X$ is a closed subspace of co-dimension 1, which contains, e.g.~all differences of probability measures. In addition, for every $P\in \sosbmh 1 {} X$ we have \begin{equation}\label{m0+p} \sosbm X = \mathbb{R} P \oplus \sosbmh 0 {} X\, . \end{equation} Given a measurable function $f:X\to \mathbb{R}$ and a measure $\nu$ on $X$, we write $[f]_\sim$ for the $\nu$-equivalence class of $f$.
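Let us briefly make the direct sum \eqref{m0+p} explicit: every $\mu\in \sosbm X$ can be written as \begin{displaymath} \mu = \mu(X)\, P + \bigl(\mu - \mu(X)\, P\bigr)\, , \end{displaymath} where the second summand belongs to $\sosbmh 0 {} X$ because of $\bigl(\mu - \mu(X)\, P\bigr)(X) = \mu(X) - \mu(X)\, P(X) = 0$, and the sum is direct since $cP\in \sosbmh 0 {} X$ implies $c = c\, P(X) = 0$.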
Similarly, we denote the space of $p$-times $\nu$-integrable functions by $\sLx p \nu$ and the corresponding space of $\nu$-equivalence classes by $\Lx p \nu$. Note that this rather pedantic notation becomes very useful when dealing with RKHSs, since these spaces consist of functions that can be evaluated pointwise in a continuous fashion, whereas such an evaluation does, in general, not make sense for elements in $\Lx p \nu$. To formally introduce kernel mean embeddings, we need to recall the notion of Pettis integrals. To this end, let $H$ be a Hilbert space and $f:X\to H$ be a function. Then $f$ is weakly measurable if $\langle w, f\rangle :X\to \mathbb{R}$ is measurable for all $w\in H$. Similarly, $f$ is weakly integrable with respect to a measure $\nu$ on $(X,\ca A)$ if $\langle w, f\rangle\in \sLx 1 \nu$ for all $w\in H$. In this case, there exists a unique $i_\nu(f)\in H$, called the Pettis integral of $f$ with respect to $\nu$, such that for all $w\in H$ we have \begin{displaymath} \langle w, i_\nu(f) \rangle = \int_X \langle w, f\rangle\, \,\mathrm{d}\nu\, , \end{displaymath} see e.g.~\citet[Chapter II.3]{DiUh77} together with the reflexivity of $H$ and the identification $H=H'$ between $H$ and its dual $H'$. Using the Hahn-Jordan decomposition, it is not hard to see that $i_\mu(f)$ can analogously be defined for finite signed measures $\mu$. In the following, we adopt the more intuitive notation $\int_X f \,\mathrm{d}\mu := i_\mu(f)$, so that the defining equation above becomes \begin{align}\label{pettis-formula} \bigl\langle w, \int_X f \,\mathrm{d}\mu\bigr\rangle = \int_X \langle w, f\rangle\, \,\mathrm{d}\mu\, . \end{align} Furthermore, in the case of probability measures $\mu$, we sometimes also write $\mathbb{E}_\mu f := i_\mu(f)$. Let us now use Pettis integrals to define kernel mean embeddings. To this end, let $H$ be an RKHS over $X$ with kernel $k$ and canonical feature map $\P:X\to H$, that is, $\P(x) := k(\cdot,x)$ for all $x\in X$.
Then $\P$ is weakly measurable if and only if $\langle h , \P\rangle = h$ is measurable for all $h\in H$, and therefore we conclude that $\P$ is weakly integrable with respect to some measure $\nu$ on $X$ if and only if $h\in \sLx 1 \nu$ for all $h\in H$. By a simple application of the closed graph theorem, the latter is equivalent to the continuity of the map $[\, \cdot\, ]_\sim :H\to \Lx 1 \nu$. In this respect recall that $H$ consists of measurable functions if and only if $k$ is separately measurable, that is, $k(\cdot, x):X\to \mathbb{R}$ is measurable for all $x\in X$, see \citet[Lemma 4.24]{StCh08}, and that $[\, \cdot\, ]_\sim :H\to \Lx 1 \nu$ is continuous if, e.g., \begin{align}\label{square-finite} \int \sqrt{k(x,x)} \,\mathrm{d}\nu(x)<\infty\, , \end{align} see e.g.~\citet[Theorem 4.26]{StCh08}. Obviously, the latter condition is still sufficient for finite signed measures if one replaces $\nu$ by $|\nu|$. For a separately measurable kernel $k$ with RKHS $H$ we now write \begin{displaymath} \sosbmh {} k X := \Bigl\{ \mu\in \sosbm X: H\subset \sLx 1\mu \Bigr\} \, , \end{displaymath} and analogously we define $\sosbmh {+} k X $ and $\sosbmh {1} k X $. Obviously, $\sosbmh {} k X$ is a vector space containing all Dirac measures, and using \eqref{square-finite} it is not hard to see that $\sosbmh {} k X = \sosbmh {} {} X$ if $k$ is bounded. In fact, the boundedness of $k$ is also necessary for $\sosbmh {} k X = \sosbmh {} {} X$, as a combination of \citet[Proposition 2]{SriperumbGrettonETAL2010} with \citet[Lemma 4.23]{StCh08} shows. Moreover, our considerations above show that $\sosbmh {} k X $ is the largest set on which we can define the kernel embedding \begin{displaymath} \P(\mu) := \int_X \P \,\mathrm{d}\mu = \int_X k(\cdot, x)\,\mathrm{d} \mu(x)\, . \end{displaymath} Note that the map $\P: \sosbmh {} k X \to H$ is linear, and for Dirac measures $\d_x$ we have $\P(\d_x) = \P(x)$.
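For finitely supported measures the embedding can be written down explicitly. The following sketch, with an ad-hoc Gaussian kernel, merely illustrates the linearity of $\P$ and the identity $\P(\d_x)=\P(x)$ just mentioned; it is our own illustration and not part of the formal development.

```python
import numpy as np

# Our illustration: for mu = sum_i w_i * delta_{x_i}, the embedding is the
# function Phi(mu) = sum_i w_i * k(., x_i) in H (Gaussian kernel chosen ad hoc).
def gauss(s, t):
    return np.exp(-(s - t) ** 2)

def embed(atoms, weights, kernel=gauss):
    return lambda t: sum(w * kernel(t, x) for x, w in zip(atoms, weights))

f = embed([2.0], [1.0])              # Phi(delta_2) = k(., 2)
g = embed([0.0, 2.0], [0.5, -0.5])   # Phi((delta_0 - delta_2)/2), a signed measure
```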
Consequently, \begin{displaymath} \hnorm \mu := \hnorm { \P(\mu) } \end{displaymath} defines a new semi-norm on $\sosbmh {} k X$, and by a double application of \eqref{pettis-formula} we further have \begin{align}\label{hinner} \langle \P(\mu_1) , \P(\mu_2)\rangle_H = \int_X \int_X k(x,x') \,\mathrm{d}\mu_1(x)\,\mathrm{d} \mu_2(x') \, . \end{align} Note that this semi-norm is a norm if and only if the kernel embedding $\P: \sosbmh {} k X \to H$ is injective, and by \eqref{hinner} the latter is equivalent to \begin{align}\label{sipd} \snorm\mu_H^2 = \int \int k(x,x') \,\mathrm{d} \mu(x)\,\mathrm{d} \mu(x') >0 \end{align} for all $\mu\in \sosbmh {} k X\setminus \{0\}$. This leads to the following definition. \begin{definition} Let $k$ be a measurable kernel on $X$ and $\ca M\subset \sosbmh {} k X$. Then $k$ is called strictly integrally positive definite with respect to $\ca M$, if \eqref{sipd} holds for all $\mu\in \ca M$ with $\mu\neq 0$. \end{definition} It is well known that, using \eqref{pettis-formula}, the semi-norm introduced above can also be computed as \begin{align}\nonumber \hnorm \mu = \Hnorm { \int_X \P \,\mathrm{d}\mu } = \sup_{f\in B_H} \Bigl|\bigl\langle f , \int_X \P \,\mathrm{d}\mu\bigr\rangle \Bigr| &= \sup_{f\in B_H} \Bigl| \int_X \langle f , \P \rangle \,\mathrm{d}\mu \Bigr| \\ \label{alternative-comp} &= \sup_{f\in B_H} \Bigl| \int_X f \,\mathrm{d}\mu \Bigr| \, , \end{align} where $B_H$ denotes the closed unit ball of $H$. Consequently, we have $\hnorm \mu \leq \snorm{[\, \cdot\, ]_\sim :H\to \Lx 1 \mu}$, and if $k$ is bounded, then $\P: \sosbmh {} {} X \to H$ is continuous with $\snorm {\P: \sosbmh {} {} X \to H} \leq \inorm{k}^{1/2}$. In particular, if $k$ is bounded and $\P: \sosbmh {} {} X \to H$ is injective, then $\hnorm\cdot$ defines a new norm on $\sosbm X$ that is dominated by $\tvnorm \cdot$ and that describes a Euclidean geometry with inner product \eqref{hinner}.
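When $\mu = \sum_{i=1}^n c_i \d_{x_i}$ is finitely supported, \eqref{sipd} reduces to $c^\top K c>0$ for the Gram matrix $K=(k(x_i,x_j))_{i,j}$, i.e.~to ordinary strict positive definiteness of $K$. The following sketch, using the Gaussian kernel on $\mathbb{R}$ (which is strictly positive definite) and a handful of distinct points, is our own illustration:

```python
import numpy as np

# For mu = sum_i c_i * delta_{x_i}, condition (sipd) reads c^T K c > 0 with
# the Gram matrix K = (k(x_i, x_j)).  Gaussian kernel, distinct points:
x = np.array([-1.3, 0.0, 0.4, 2.1, 5.0])
K = np.exp(-np.subtract.outer(x, x) ** 2)

lam_min = np.linalg.eigvalsh(K).min()
# lam_min > 0 yields c^T K c >= lam_min * ||c||^2 > 0 for all c != 0, so the
# kernel embedding is injective on measures supported on these points.
```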
Unless $\dim \sosbm X< \infty$, however, the two norms are not equivalent, as we will see in Theorem \ref{norm-equivalence}. With the help of the new semi-norm $\hnorm \cdot$ on $\sosbmh {} k X$ we can now define a semi-metric on $\sosbmh 1 k X$ by setting \begin{displaymath} \gamma_k(P,Q) := \hnorm{P-Q} = \sup_{f\in B_H} \left|\int f\,\mathrm{d} P - \int f\,\mathrm{d} Q\right| \end{displaymath} for $P,Q\in \sosbmh 1 k X$. Here, we note that the second equality, which follows from our considerations above, has already been shown by \citet[Theorem 1]{SriperumbGrettonETAL2010}. Similarly, the following definition is taken from \citet{FuGrSuSc08a, SriperumbGrettonETAL2010}. \begin{definition}\label{def-char-kern} A bounded measurable kernel $k$ on $X$ is called characteristic if the kernel mean embedding $\P_{|\sosbmh 1 {} X}:\sosbmh 1 {} X\to H$ is injective. \end{definition} Clearly, $k$ is characteristic if and only if $\gamma_k$ is a metric, and a literal repetition of \citet[Lemma 8]{SriperumbGrettonETAL2010} shows: \begin{proposition}\label{M0-char} Let $(X,\ca A)$ be a measurable space and $k$ be a bounded measurable kernel on $X$. Then the following statements are equivalent: \begin{enumerate} \item $k$ is strictly integrally positive definite with respect to $\sosbmh 0 {} X$. \item $k$ is characteristic. \end{enumerate} \end{proposition} Now, let $\nu$ be a measure on $X$, $k$ be a measurable kernel on $X$, and $H$ be its RKHS. We further assume that the map $\Ikx \nu:H\to \Lx 2 \nu$ given by $f\mapsto [f]_\sim$ is well-defined and compact. For an example of such a situation recall \citet[Lemma 2.3]{StSc12a}, which shows that $\Ikx \nu$ is Hilbert-Schmidt if \begin{align}\label{int-diag} \int_X k(x,x) \, \,\mathrm{d}\nu(x) < \infty\, . \end{align} Obviously, the latter holds if, e.g., $k$ is bounded and $\nu$ is a finite measure. Now assume that $\Ikx \nu$ is well-defined and compact.
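As an aside, for empirical distributions $P$ and $Q$ the semi-metric $\gamma_k$ defined above is the well-known (biased) maximum mean discrepancy statistic: expanding $\hnorm{P-Q}^2$ via \eqref{hinner} gives three Gram-matrix means. A minimal sketch, again assuming a Gaussian kernel on $\mathbb{R}$ (function names hypothetical):

```python
import numpy as np

def gram(x, y, gamma=1.0):
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    return np.exp(-gamma * (x[:, None] - y[None, :]) ** 2)

def gamma_k(sample_p, sample_q, gamma=1.0):
    """gamma_k(P, Q) = ||Phi(P) - Phi(Q)||_H for the empirical measures
    of the two samples; by (hinner) this equals
    sqrt(mean K(P,P) - 2 * mean K(P,Q) + mean K(Q,Q))."""
    kpp = gram(sample_p, sample_p, gamma).mean()
    kpq = gram(sample_p, sample_q, gamma).mean()
    kqq = gram(sample_q, sample_q, gamma).mean()
    return float(np.sqrt(max(kpp - 2.0 * kpq + kqq, 0.0)))
```

By construction `gamma_k` is symmetric and vanishes on identical samples; whether it separates all pairs of distributions is exactly the question of $k$ being characteristic.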
Then, the associated integral operator $\Tkx \nu:\Lx 2 \nu \to \Lx 2 \nu$, defined by \begin{equation*} \Tkx \nu f = \Bigl[\int_X k(x, \cdot) f(x)\, \,\mathrm{d}\nu(x)\Bigr]_\sim\, , \qquad f\in \Lx 2 \nu\, , \end{equation*} satisfies $\Tkx \nu = \Ikx \nu \circ \Ikxs \nu$, see e.g.~\citet[Lemma 2.2]{StSc12a}, where $\Ikxs \nu$ denotes the adjoint of $\Ikx \nu$. In particular, $\Tkx \nu$ is compact, positive, and self-adjoint, and if \eqref{int-diag} is satisfied, $\Tkx \nu$ is even nuclear. Moreover, the spectral theorem in the form of \citet[Lemma 2.12]{StSc12a} gives us an at most countable, ordered family $(\lambda_i)_{i\in I}\subset (0,\infty)$ converging to $0$ and a family $(e_i)_{i\in I}\subset H$ such that: \begin{itemize} \item $(\lambda_i)_{i\in I}$ are the non-zero eigenvalues of $\Tkx \nu$ including multiplicities, \item $([e_i]_\sim)_{i\in I}$ is an $\Lx 2 \nu$-ONS of the corresponding eigenfunctions with \begin{align}\label{span_ei} \overline{\spann\{[e_i]_\sim : i\in I \}}^{\Lx 2 \nu} = \overline{ [H_\sim]}^{\Lx 2 \nu}\, , \end{align} \item $(\sqrt{\lambda_i} e_i)_{i\in I}$ is an ONS in $H$. \end{itemize} Here, we say that an at most countable family $(\a_i)_{i\in I} \subset [0,\infty)$ converges to $0$ if either $I=\{1,\dots,n\}$ or $I=\mathbb{N}$ and $\lim_{i\to \infty}\a_i = 0$. Note that for nuclear $\Tkx \nu$, we additionally have $\sum_{i\in I}\lambda_i < \infty$, and if $k$ is bounded, $\inorm {e_i}<\infty$ holds for all $i\in I$. Finally, \citet[Theorem 3.1]{StSc12a} show that the injectivity of $\Ikx \nu:H\to \Lx 2 \nu$ is equivalent to either of the following statements: \begin{enumerate} \item $(\sqrt{\lambda_i} e_i)_{i\in I}$ is an ONB of $H$. \item For all $x,x'\in X$ we have \begin{align}\label{kernel_sum_cont} k(x,x') = \sum_{i\in I} \lambda_i e_i(x) e_i(x')\, .
\end{align} \end{enumerate} Obviously, if one of these conditions is true, then $H$ is separable, and \citet[Corollary 2.10]{StSc12a} show that the convergence in \eqref{kernel_sum_cont} is absolute and we even have $k(\cdot, x) = \sum_{i\in I}\lambda_i e_i(x) e_i$ with unconditional convergence in $H$. Let us now recall some notions related to measures on topological spaces, see e.g.~\citet[Chapter IV]{Bauer01} for details. To this end, let $(X,\t)$ be a Hausdorff (topological) space and $\nu$ be a measure on its Borel $\sigma$-algebra $\ca B(X)$. Then $\nu$ is a Borel measure if $\nu(K) < \infty$ for all compact $K\subset X$, and $\nu$ is called strictly positive if $\nu(O) > 0$ for all non-empty $O\in \t$. Moreover, a finite measure $\nu$ on $\ca B(X)$ is a (finite) Radon measure if it is regular, i.e.~if for all $B\in \ca B(X)$ we have \begin{align*} \nu(B) & = \sup\{ \nu(K): K\mbox{ compact and } K\subset B\} = \inf\{ \nu(O): O\mbox{ open and } B\subset O\}\, . \end{align*} A finite signed Radon measure is simply the difference of two finite Radon measures. In the following, we denote the space of all finite signed Radon measures by $\sosrm X$ and the cone of (non-negative) finite Radon measures by $\sorm X$. As usual, $\sosrm X$ is equipped with the norm of total variation. Obviously, every finite Radon measure is a finite Borel measure, and by Ulam's theorem, see e.g.~\citet[Lemma 26.2]{Bauer01}, the converse implication is true if $X$ is a Polish space. In this respect recall that compact, metrizable spaces are Polish. Now let $X$ be a locally compact Hausdorff space and $C_0(X)$ be the space of continuous functions vanishing at infinity. As usual, we equip $C_0(X)$ with the supremum norm.
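As an aside, the spectral decomposition of $\Tkx \nu$ recalled above is easy to approximate numerically: discretizing the integral over $\nu$ on a grid turns $\Tkx \nu$ into a symmetric matrix whose eigendecomposition mimics \eqref{kernel_sum_cont}. A Nystr\"om-type sketch, assuming a Gaussian kernel and $\nu$ the uniform distribution on $[0,1]$ (purely illustrative, not part of the proofs):

```python
import numpy as np

# Discretize T_k f = int k(x, .) f(x) dnu(x), nu = uniform on [0, 1],
# on an n-point grid: (T_k f)(x_j) ~ (1/n) sum_i k(x_i, x_j) f(x_i).
n = 200
grid = (np.arange(n) + 0.5) / n
K = np.exp(-(grid[:, None] - grid[None, :]) ** 2)  # Gaussian Gram matrix
evals, evecs = np.linalg.eigh(K / n)               # symmetric eigenproblem

order = np.argsort(evals)[::-1]                    # decreasing eigenvalues
lam = evals[order]
E = evecs[:, order] * np.sqrt(n)                   # e_i on the grid; the columns
                                                   # satisfy (1/n) * E.T @ E = I,
                                                   # a discrete L2(nu)-ONS

# Discrete analogue of the expansion k(x, x') = sum_i lam_i e_i(x) e_i(x'):
K_rec = (E * lam) @ E.T
recon_err = float(np.max(np.abs(K_rec - K)))
```

On the grid, the columns of `E` play the role of the eigenfunctions $e_i$, the eigenvalue sequence `lam` decays rapidly, and the reconstruction error is at machine precision.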
Then, Riesz's representation theorem for locally compact spaces, see e.g.~\citet[Theorem 20.48 together with Definition 20.41, Theorem 12.40, Definition 12.39, and a simple translation into the real-valued case using Theorem 12.36]{HeSt65}, shows that \begin{align}\nonumber \sosrm X &\to C_0(X)'\\ \label{riesz-repres} \mu &\mapsto \Bigl(f\mapsto \langle f,\mu\rangle := \int_Xf\,\mathrm{d}\mu\Bigr) \end{align} is an isometric isomorphism. In the compact case, in which $C_0(X)$ coincides with the space of continuous functions $C(X)$, this can also be found in e.g.~\citet[p.~265, Theorem IV.6.3]{DuSc58}. Given a locally compact Hausdorff space $X$, a continuous kernel $k$ on $X$ with RKHS $H$ is called universal if $H\subset C_0(X)$ and $H$ is dense in $C_0(X)$ with respect to $\inorm\cdot$. Note that for compact $X$ the inclusion $H\subset C_0(X)$ is automatically satisfied. Examples of universal kernels as well as various necessary and sufficient conditions for universality can be found in e.g.~\cite{Steinwart01a,MiXuZh06a,SriperumbFukumizuETAL2011,ChWaZh16a} and the references mentioned therein. \subsection{Proofs related to Section \ref{sec:general}} \begin{proofof}{Theorem \ref{norm-equivalence}} \atob {iv} {iii} By assumption, $\P:\sosbm X\to H$ is bijective, and since $H$ is complete, so is $(\sosbm X, \hnorm\cdot)$. \atob {iii} {ii} Consider the identity map $\id: (\sosbm X, \tvnorm\cdot) \to (\sosbm X, \hnorm\cdot)$. Since we have already seen in Section \ref{sec:prelim} that \begin{displaymath} \hnorm \mu \leq \mnorm{[\, \cdot\, ]_\sim :H\to \Lx 1 \mu} \leq \inorm {k}^{1/2} \cdot \tvnorm \mu \end{displaymath} holds for all $\mu\in \sosbm X$, the identity map is continuous. In addition, it is, of course, bijective, and since both $(\sosbm X, \tvnorm\cdot)$ and $(\sosbm X, \hnorm\cdot)$ are complete, the open mapping theorem, see e.g.~\citet[Corollary 1.6.8]{Megginson98}, shows that both norms are equivalent.
\atob {ii} i By the arguments of \citet[page 63f]{DiJaTo95} the space $(\sosbm X, \tvnorm\cdot)$ is a so-called ${\cal L}_{1,\lambda}$-space for all $\lambda>1$, see \citet[page 60]{DiJaTo95} for a definition, while the Euclidean structure of $\hnorm\cdot$ on $\sosbm X$, which is inherited from $(H,\hnorm\cdot)$, shows that $(\sosbm X, \hnorm\cdot)$ is an ${\cal L}_{2,\lambda}$-space for all $\lambda>1$. Let us now assume that $\dim \sosbm X = \infty$. Then, \citet[Corollary 11.7]{DiJaTo95} shows that $(\sosbm X, \tvnorm\cdot)$ has only the trivial type $1$, while $(\sosbm X, \hnorm\cdot)$ has the optimal type $2$. However, the definition of the type of a space, see \citet[page 217]{DiJaTo95}, immediately shows that equivalent norms always share their type, and hence $\hnorm \cdot$ and $\tvnorm\cdot$ are not equivalent on $\sosbm X$. \atob i {iv} If $\sosbm X$ is finite dimensional, the $\sigma$-algebra $\ca A$ is finite. Since $k$ is assumed to be measurable, we then see that $k$ can only attain finitely many different values, and consequently, the canonical feature map $\P:X\to H$ also attains only finitely many different values, say $f_1,\dots, f_m\in H$. Using \citet[Theorem 4.21]{StCh08}, we conclude that $H = \spann\{f_1,\dots,f_m\}$. We now define $A_i := \P^{-1}(\{f_i\})$ for $i=1,\dots,m$. By construction, $A_1,\dots,A_m$ form a partition of $X$ with $A_i\in \ca A$ and $A_i\neq \emptyset$ for all $i=1,\dots,m$. Let us fix some $x_i\in A_i$. For the corresponding Dirac measures we then have $\P(\d_{x_i}) = \P(x_i) = f_i$, and since $\P: \sosbm X\to H$ is linear, we find $\spann\{f_1,\dots,f_m\} \subset \P(\sosbm X)$. This shows the surjectivity of the kernel embedding. \end{proofof} \begin{proofof}{Lemma \ref{m0-props}} Let $(\mu_n) \subset \sosbmh 0 {} X$ be a sequence that converges to some $\mu\in \sosbm X$ in $\hnorm\cdot$. Then we have $\mu_n(X) = 0$ for all $n\geq 1$, and we need to show that $\mu(X)=0$.
The latter, however, follows from \eqref{alternative-comp}, namely \begin{displaymath} |\mu(X)| = \biggl| \int \boldsymbol{1}_X \,\mathrm{d} (\mu_n-\mu) \biggr| \leq \hnorm{\boldsymbol{1}_X} \hnorm{\mu_n-\mu} \to 0\, . \end{displaymath} Let us now assume that $k$ is characteristic. Then Proposition \ref{M0-char} shows that $k$ is strictly integrally positive definite with respect to $\sosbmh 0 {} X$, and hence $\hnorm{\mu} > 0$ for all $\mu\in \sosbmh 0 {} X\setminus\{0\}$. Now let $\mu\in \sosbm X\setminus \sosbmh 0 {} X$ and $P$ be some probability measure on $X$. By \eqref{m0+p} we then find an $\a\in \mathbb{R}$ and some $\mu_0\in \sosbmh 0 {} X$ such that $\a P + \mu_0 = \mu$, and since $\mu\not\in \sosbmh 0 {} X$ we actually have $\a\neq 0$. Using this decomposition and \eqref{alternative-comp}, we now find \begin{displaymath} \hnorm{\mu} = \hnorm{\a P + \mu_0} \geq \frac 1 {\hnorm{\boldsymbol{1}_X}} \biggl|\int \boldsymbol{1}_X \,\mathrm{d} (\a P + \mu_0)\biggr| = \frac {|\a|} {\hnorm{\boldsymbol{1}_X}} > 0 \end{displaymath} and hence $\P:\sosbm X\to H$ is injective, see \eqref{sipd}. \end{proofof} \begin{proofof}{Lemma \ref{char-add}} By the definition of the $\hnorm\cdot$-norm and \eqref{hinner} we have \begin{displaymath} \hnorm \mu^2 = \int_X \int_X \bigl( k_1(x,x') + k_2(x,x') \bigr) \,\mathrm{d}\mu(x)\,\mathrm{d} \mu(x') = \snorm \mu_{H_1}^2 + \snorm \mu_{H_2}^2\, . \end{displaymath} Moreover, if $k_1$ is characteristic, then $\snorm \mu_{H_1}^2 > 0$ for all $\mu\in \sosbmh 0 {} X\setminus \{0\}$ by Proposition \ref{M0-char}, and the just established formula then yields $\snorm \mu_{H}^2 > 0$ for these $\mu$. Consequently, $k$ is characteristic. Repeating the argument for $\sosbm X$ yields the last assertion. \end{proofof} \begin{proofof}{Lemma \ref{product-form}} By the definition of the $\hnorm\cdot$-norm and \eqref{hinner} we have \begin{align*} \hnorm {\mu_1\otimes \mu_2}^2 &= \int_{X_1\times X_2} \int_{X_1\times X_2} k_1(x_1,x_1') \cdot k_2(x_2,x_2') \,\mathrm{d}\mu_1\!\otimes\!
\mu_2(x_1,x_2)\,\mathrm{d} \mu_1\!\otimes\! \mu_2(x_1',x_2') \\ &= \int_{X_1\times X_1} \int_{X_2\times X_2} k_1(x_1,x_1') \cdot k_2(x_2,x_2') \,\mathrm{d}\mu_1(x_1)\,\mathrm{d} \mu_1(x_1')\,\mathrm{d} \mu_2(x_2)\,\mathrm{d} \mu_2(x_2')\\ &= \snorm{\mu_1}_{H_1}^2 \cdot \snorm{\mu_2}_{H_2}^2\, , \end{align*} where we note that the application of Fubini's theorem is possible, since the kernels are bounded and the measures are finite. Now assume that $k$ is characteristic, but, say, $k_1$ is not strictly integrally positive definite with respect to $\sosbm {X_1}$. Then there exists a $\mu_1\in \sosbm{X_1}$ with $\mu_1\neq 0$ but $\snorm{\mu_1}_{H_1}=0$. Moreover, since $\dim \sosbm{X_2}\geq 2$, the decomposition \eqref{m0+p} gives a $\mu_2\in \sosbmh 0{} {X_2}$ with $\mu_2\neq 0$. Let us define $\mu:= \mu_1\otimes \mu_2$. Our construction then yields $\mu\neq 0$ and $\mu(X_1\times X_2) = \mu_1(X_1) \cdot \mu_2(X_2) = 0$, that is $\mu\in \sosbmh 0{}{X_1\times X_2}$, while the already established product formula shows $\hnorm{\mu} = \snorm{\mu_1}_{H_1} \cdot \snorm{\mu_2}_{H_2} = 0$. By Proposition \ref{M0-char} we conclude that $k$ is not characteristic. \end{proofof} \begin{proofof}{Lemma \ref{plusone-sipd}} Let $\P_1$ denote the canonical feature map of the kernel $k_1 = \boldsymbol{1}_{X\times X}$ and $H_1$ its RKHS. Moreover, we write $H+1$ for the RKHS of $k+1$. For later use, we note that \eqref{hinner} implies $\langle\P_1(\mu), \P_1(\mu_0)\rangle_{H_1} = 0$ for all $\mu\in \sosbm X$ and $\mu_0\in \sosbmh 0 {}X$. \atob {iii} {ii} Let us fix a $P\in \ca M\cap \sosbmh 1 {}X$. Similar to \eqref{m0+p} we then have $\ca M = \mathbb{R} P \oplus \ca M_0$. Indeed, ``$\supset$'' and $\mathbb{R} P \cap \ca M_0=\{0\}$ are trivial and for $\mu \in \ca M$ it is easy to see that $\mu - \mu(X) P \in \ca M_0$. Now, we need to prove $\snorm\mu_{H+1}>0$ for all $\mu = \a P + \mu_0\in \mathbb{R} P \oplus \ca M_0$ with $\mu\neq 0$. 
By \emph{iii)} we already know this in the case $\a=0$, and thus we further assume $\a\neq 0$. Then Lemma \ref{char-add} and \eqref{hinner} together with our initial remark yield \begin{align*} \snorm\mu_{H+1}^2 &= \snorm\mu_{H}^2 + \langle \P_1(\mu) , \P_1(\mu)\rangle_{H_1}\\ &= \snorm\mu_{H}^2 + \a^2\snorm P_{H_1}^2 + 2\a\langle \P_1(P) , \P_1(\mu_0)\rangle_{H_1} + \snorm{\mu_0}_{H_1}^2 \\ & = \snorm\mu_{H}^2 + \a^2 \end{align*} and since $\a\neq 0$, we conclude $ \snorm\mu_{H+1}^2 >0$. \atob {ii} {iii} This is trivial. \aeqb {iii} i For $\mu_0 \in \ca M_0\subset \sosbmh 0 {}X$, Lemma \ref{char-add} together with our initial remark shows $\snorm{\mu_0}_{H+1}^2 = \hnorm{\mu_0}^2 + \snorm{\mu_0}_{H_1}^2 = \hnorm{\mu_0}^2$. From this equality the equivalence immediately follows. \end{proofof} \begin{proofof}{Theorem \ref{non-equivalence}} For some fixed $P\in \sosbmh 1 {} X$ we consider the map \begin{align*} \pi : \mathbb{R} P \oplus \sosbmh 0 {} X &\to \sosbm X\\ \a P + \mu_0 &\mapsto \mu_0\, . \end{align*} By \eqref{m0+p}, $\pi$ is a linear map $\pi:\sosbm X\to \sosbm X$ with $\pi^2 = \pi$ and $\ran \pi = \sosbmh 0 {} X$, and hence a projection onto $\sosbmh 0 {} X$. Moreover, $\sosbmh 0 {} X$ is $\hnorm\cdot$-closed by assumption, and therefore $\pi$ is $\hnorm\cdot$-continuous, see \citet[Theorem 3.2.14]{Megginson98}. Let us now assume that $\hnorm\cdot$ and $\tvnorm\cdot$ are equivalent on $\sosbmh 0 {} X$. Our goal is to show that this assumption implies the equivalence of $\hnorm\cdot$ and $\tvnorm\cdot$ on $\sosbmh {}{}X$. To this end we fix some sequence $(\mu_n) \subset \sosbm X$ and some $\mu \in \sosbm X$ with $\hnorm{\mu_n-\mu} \to 0$. By \eqref{m0+p} we then find $\a_n, \a\in \mathbb{R}$ with $\mu_n = \a_n P + \pi(\mu_n)$ and $\mu = \a P + \pi(\mu)$.
Using the $\hnorm\cdot$-continuity of $\pi$, we then find $\hnorm{\pi(\mu_n) - \pi(\mu)} \to 0$, and since we assumed that $\hnorm\cdot$ and $\tvnorm\cdot$ are equivalent on $\sosbmh 0 {} X$, we conclude that $\tvnorm{\pi(\mu_n) - \pi(\mu)} \to 0$. In addition, we have $\hnorm P>0$ by the injectivity of the kernel embedding, and therefore \begin{displaymath} |\a_n-\a| \cdot\hnorm P \leq \hnorm{\mu_n-\mu} + \hnorm{\pi(\mu_n) - \pi(\mu)} \to 0 \end{displaymath} shows that $\a_n\to \a$. Combining these considerations, we find \begin{displaymath} \tvnorm{\mu_n-\mu} \leq |\a_n-\a| \cdot\tvnorm P + \tvnorm{\pi(\mu_n) - \pi(\mu)} \to 0\, . \end{displaymath} Summing up, we have seen that $\hnorm{\mu_n-\mu} \to 0$ implies $\tvnorm{\mu_n-\mu} \to 0$ for every sequence $(\mu_n) \subset \sosbm X$, and consequently, the identity map $\id: (\sosbm X, \hnorm\cdot) \to (\sosbm X, \tvnorm\cdot)$ is continuous. This yields $\tvnorm \cdot \leq \snorm\id\cdot\hnorm\cdot$ on $\sosbm X$, and since we also have $\hnorm \cdot \leq \inorm k^{1/2}\cdot\tvnorm\cdot$ we see that $\hnorm\cdot$ and $\tvnorm\cdot$ are indeed equivalent on $\sosbm X$. This, however, contradicts Theorem \ref{norm-equivalence}, and hence our assumption that $\hnorm\cdot$ and $\tvnorm\cdot$ are equivalent on $\sosbmh 0 {} X$ is false. \end{proofof} \begin{proofof}{Theorem \ref{different-distances}} Let us first assume that $\boldsymbol{1}_X\in H$. By Lemma \ref{m0-props} we then see that the kernel embedding $\P:\sosbm X\to H$ is injective, and Theorem \ref{non-equivalence} thus shows that $\hnorm\cdot$ and $\tvnorm\cdot$ are not equivalent on $\sosbmh 0 {} X$. Since $\hnorm\cdot$ is dominated by $\tvnorm\cdot$, we consequently find a sequence $(\mu_n)\subset \sosbmh 0 {} X$ and some $\d>0$ such that $\hnorm {\mu_n} \to 0$ and $\inf_{n\geq 1}\tvnorm {\mu_n} \geq \d$. Let us consider the Hahn-Jordan decomposition $\mu_n = \mu_n^+ - \mu_n^-$ with $\mu_n^+,\mu_n^-\in \sobm X$.
Since $\mu_n\in \sosbmh 0 {}X$, we have $\mu_n^+(X) = \mu_n^-(X)$. We define \begin{displaymath} Q_n^{(1)} := \frac {\mu_n^+}{\mu_n^+(X)} \qquad \qquad \mbox{ and } \qquad \qquad Q_n^{(2)} := \frac {\mu_n^-}{\mu_n^+(X)}\, . \end{displaymath} Clearly, this yields $Q_n^{(1)},Q_n^{(2)}\in \sobpm X$ and \begin{displaymath} \tvnorm{Q_n^{(1)} -Q_n^{(2)}} = \frac{\mu_n^+(X) + \mu_n^-(X)}{\mu_n^+(X)} = 2\, . \end{displaymath} Moreover, we have \begin{displaymath} \hnorm{Q_n^{(1)} -Q_n^{(2)}} = \frac{\hnorm{\mu_n}}{\mu_n^+(X)} = \frac{\hnorm{\mu_n}}{\frac{1}{2}\tvnorm{\mu_n}} \leq 2\d^{-1}\hnorm{\mu_n}\, , \end{displaymath} and by choosing $n$ sufficiently large we can therefore guarantee $\hnorm{Q_n^{(1)} -Q_n^{(2)}} \leq \varepsilon$. Let us now consider the case $\boldsymbol{1}_X \not\in H$. By Lemma \ref{char-add} we then see that the kernel $\tilde k := k+1$ is characteristic. Let us write $H_1 := \mathbb{R} \boldsymbol{1}_X$ for the RKHS of the constant kernel $k_1:=\boldsymbol{1}_{X\times X}$. Then $x\mapsto (k(x,\cdot), k_1(x,\cdot)) \in H\times H_1$ is a feature map of the kernel $k+1$ in the sense of \cite[Definition 4.1]{StCh08} if $H\times H_1$ is equipped with the usual Hilbert space norm $\snorm{(w,w_1)} :=\sqrt{ \hnorm w^2 + \snorm{w_1}_{H_1}^2}$ and consequently an application of \cite[Theorem 4.21]{StCh08} shows that the RKHS $\tilde H$ of $k+1$ is given by $\tilde H = H + H_1$. The latter yields $\boldsymbol{1}_X \in \tilde H$. The already considered case thus shows that for all $\varepsilon>0$ there exist distributions $Q_1,Q_2\in \sosbmh 1 {} X$ such that $\tvnorm{Q_1-Q_2} = 2$ and $\snorm{Q_1-Q_2}_{\tilde H}\leq \varepsilon$. 
Moreover, for $\mu:= Q_1-Q_2 \in \sosbmh 0 {}X$ we see by Lemma \ref{char-add} and \eqref{hinner} that \begin{displaymath} \varepsilon^2 \geq \snorm{Q_1-Q_2}_{\tilde H}^2 = \snorm\mu_{\tilde H}^2 = \hnorm \mu^2 + \int\int \boldsymbol{1}_{X\times X} \,\mathrm{d}\mu \,\mathrm{d}\mu = \hnorm \mu^2 + \mu(X) \cdot\mu(X) = \hnorm \mu^2 = \hnorm{Q_1-Q_2}^2\, , \end{displaymath} and hence we have shown the assertion in the second case, too. \end{proofof} \begin {proofof}{Corollary \ref{different-distances-cor}} Let us fix some $P\in \sosbmh 1 {} X$, $\d\in (0,1]$ and $\varepsilon\in (0,2\d)$. By Theorem \ref{different-distances} there then exist distributions $\tilde Q_1,\tilde Q_2\in \sosbmh 1 {} X$ with $\tvnorm{\tilde Q_1-\tilde Q_2} = 2$ and $\hnorm{\tilde Q_1-\tilde Q_2}\leq \varepsilon$. We define $Q_i := (1-\d) P + \d \tilde Q_i\in \sosbmh 1 {}X$ for $i=1,2$. A simple calculation then shows that \begin{align*} \tvnorm{Q_1-Q_2} = \d \tvnorm{\tilde Q_1 - \tilde Q_2} = 2\d\, , \end{align*} and analogously, we find $\hnorm{Q_1-Q_2} \leq \d \varepsilon \leq \varepsilon$. Finally, we have $\tvnorm{P- Q_i} = \d \tvnorm{P-\tilde Q_i} \leq 2\d$, and hence we obtain the assertion for $\tilde \d:= 2\d$. \end {proofof} \begin{proofof}{Lemma \ref{square-integral-formula}} Let us first assume that $\mu_1, \mu_2 \in \sobm X$. For $x,x'\in X$ we define \begin{displaymath} \tilde k(x,x') := \sum_{i\in I} \lambda_i |e_i(x)| \cdot |e_i(x')|\, . \end{displaymath} Then the Cauchy-Schwarz inequality together with \eqref{kernel_sum_cont} gives \begin{displaymath} \bigl| \tilde k(x,x') \bigr| = \sum_{i\in I} \lambda_i |e_i(x)| \cdot |e_i(x')| \leq \Bigl(\sum_{i\in I} \lambda_i e_i^2(x)\Bigr)^{1/2} \cdot \Bigl(\sum_{i\in I} \lambda_i e_i^2(x')\Bigr)^{1/2} = \sqrt{k(x,x) \cdot k(x',x')} \end{displaymath} for all $x,x'\in X$, and since $k$ is bounded, we conclude that $\tilde k$ is also bounded, and hence $\tilde k\in \Lx 2 {\mu_1\otimes \mu_2}$.
Moreover, for finite $J\subset I$ and $x,x'\in X$ we have \begin{displaymath} \Bigl| \sum_{i\in J} \lambda_i e_i(x)e_i(x') \Bigr| \leq \tilde k(x,x')\, . \end{displaymath} Using Fubini's theorem, \eqref{kernel_sum_cont}, Lebesgue's dominated convergence theorem, and Fubini's theorem once more, we thus find \begin{align*} \int_X\int_X k(x,x') \, \,\mathrm{d}\mu_1(x)\,\mathrm{d}\mu_2(x') &= \int_{X\times X} \sum_{i\in I} \lambda_i e_i(x) e_i(x')\, \,\mathrm{d}(\mu_1\!\otimes\! \mu_2)(x,x') \\ &= \sum_{i\in I} \int_{X\times X} \lambda_i e_i(x) e_i(x')\, \,\mathrm{d}(\mu_1\!\otimes\! \mu_2)(x,x') \\ & = \sum_{i\in I} \int_X\int_X\lambda_i e_i(x) e_i(x') \, \,\mathrm{d}\mu_1(x)\,\mathrm{d}\mu_2(x') \, . \end{align*} From the latter the assertion immediately follows. Finally, let us assume that $\mu_1,\mu_2 \in \sosbm X$. Using the Hahn-Jordan decompositions $\mu_1 = \mu_1^+-\mu_1^-$ and $\mu_2=\mu_2^+-\mu_2^-$ as well as the fact that the expressions on the left and right of the desired equation are linear in the involved measures, we then obtain the assertion by the already established case. \end{proofof} \begin{proofof}{Theorem \ref{kernel-metric-thm}} Clearly, the signed measure $\mu:= P-Q$ has the $\nu$-density $h-g$ and we have $\aec{h-g}\in \Lx 2 \nu \cap \Lx 1 \nu$. Now we find \eqref{kernel-metric-thm-hxx} by \eqref{hinner} and Lemma \ref{square-integral-formula}, namely \begin{align*} \gamma_k^2(P,Q) = \snorm{\mu}_H^2 = \int_X\int_X k(x,x') \, \,\mathrm{d}\mu(x)\,\mathrm{d}\mu(x') &= \sum_{i\in I} \lambda_i \cdot \Bigl(\int_X e_i \, \,\mathrm{d}\mu \Bigr)^2 \\ &= \sum_{i\in I} \lambda_i \cdot \Bigl(\int_X e_i \cdot (h-g) \,\mathrm{d}\nu \Bigr)^2\, . \end{align*} \atob {ii} i We split $f$ into $f = f^+ - f^-$ with $f^+\geq 0$ and $f^-\geq 0$.
By our assumption we then know that \begin{displaymath} c:= \int_X f^+ \,\mathrm{d}\nu = \int_X f^- \,\mathrm{d}\nu >0\, . \end{displaymath} For $\d>0$, $h\in \Delta(\nu)$ and $P:= h\,\mathrm{d} \nu\in \ca P_2(\nu)$ we define $h_1 := (1+\d c)^{-1}(h+\d f^+)$ and $h_2 := (1+\d c)^{-1}(h+\d f^-)$ and consider the corresponding measures $Q_1 := h_1\,\mathrm{d}\nu$ and $Q_2 := h_2\,\mathrm{d}\nu$. The construction immediately ensures $Q_1,Q_2\in \ca P_2(\nu)$. Moreover, we have \begin{align*} \tvnorm{P-Q_1} = \int_X|h-h_1| \,\mathrm{d}\nu &= \int_X \Bigl| \frac {h + \d c h} {1+\d c}- \frac{h+\d f^+}{1+\d c} \Bigr| \,\mathrm{d}\nu \\ & = \int_X \Bigl| \frac { \d c h - \d f^+} {1+\d c} \Bigr| \,\mathrm{d}\nu \\ & \leq \frac \d {1+\d c} \int_X | c h| + |f^+| \,\mathrm{d}\nu \\ & = \frac {2\d c} {1+\d c} \, , \end{align*} and analogously we find $\tvnorm{P-Q_2}\leq 2\d c/(1+\d c)$. In addition, we have \begin{align*} \tvnorm{Q_1-Q_2} = \frac 1 {1+\d c} \int_X |\d f^+ - \d f^-| \,\mathrm{d}\nu = \frac \d {1+\d c} \int_X |f| \,\mathrm{d}\nu = \frac {2\d c} {1+\d c} \end{align*} and by using that $\{2\d c/(1+\d c): \d>0\} = (0,2)$ we obtain the norm conditions for $Q_1$ and $Q_2$. Finally, \eqref{kernel-metric-thm-hxx} yields \begin{displaymath} \gamma_k^2(Q_1,Q_2) = \sum_{i\in I} \lambda_i \langle \aec{h_1-h_2}, [e_i]_\sim\rangle_{\Lx 2 \nu}^2 = \Bigl(\frac \d {1+\d c}\Bigr)^2\sum_{i\in I} \lambda_i \langle \aec f, [e_i]_\sim\rangle_{\Lx 2 \nu}^2 = 0\, , \end{displaymath} which shows \emph{i)} as well as the final assertion. \atob i {iii} Let $Q_1$ and $Q_2$ be according to \emph{i)} and $\aec{h_1}, \aec{h_2}\in \Delta(\nu)$ be their $\nu$-densities. Then $Q_1\neq Q_2$ implies $\aec{h_1}\neq \aec{h_2}$, and \eqref{kernel-metric-thm-hxx} yields \begin{displaymath} 0 = \gamma_k^2(Q_1,Q_2) = \sum_{i\in I} \lambda_i \bigl\langle \aec{h_1-h_2}, [e_i]_\sim \bigr\rangle_{\Lx 2 \nu}^2 \, .
\end{displaymath} Since $\lambda_i> 0$ for all $i\in I$, we then conclude that $\langle \aec{h_1-h_2}, [e_i]_\sim\rangle_{\Lx 2 \nu}^2=0$ for all $i\in I$, which in turn implies \emph{iii)}. \atob {iii} {ii} We define $f:= h_1-h_2$. Clearly, we have $\aec f\in \Lx 2 \nu \cap \Lx 1 \nu$ and $f\neq 0$. Moreover, $h_1,h_2\in \Delta(\nu)$ gives \begin{displaymath} \int_X f \,\mathrm{d}\nu = \int_X h_1\,\mathrm{d}\nu - \int_X h_2\,\mathrm{d}\nu = 1-1=0\, . \end{displaymath} Finally, the equality $\langle \aec{h_1}, [e_i]_\sim\rangle_{\Lx 2 \nu} = \langle \aec{h_2}, [e_i]_\sim\rangle_{\Lx 2 \nu}$, which holds for all $i\in I$, implies $\langle \aec{f}, [e_i]_\sim\rangle_{\Lx 2 \nu} = 0 $ for all $i\in I$, and hence, by \eqref{span_ei}, $\aec f\in [H]_\sim^\perp$. \end{proofof} \begin{proofof}{Corollary \ref{no-uniform}} Let $h_{j} := (\boldsymbol{1}_X + \a e_j)$ be the $\nu$-density of $Q_{j}$. Using $\inorm{e_j}\leq c_\infty$ and $\a =c_\infty^{-1}$, we then find $h_{j} \geq 0$, and since $\aec{e_{i_0}} \perp \aec{e_j}$ we further find \begin{displaymath} \int_X h_{j} \,\mathrm{d}\nu = 1 + \a \int_X e_j e_{i_0} \,\mathrm{d}\nu = 1\, . \end{displaymath} Consequently, $Q_{j}$ is a probability measure. Moreover, we find \begin{displaymath} \tvnorm{P-Q_j} = \int_X\bigl|\boldsymbol{1}_X - h_{j}\bigr|\,\mathrm{d}\nu = \int_X |\a e_j| \,\mathrm{d}\nu \geq c_1 c_\infty^{-1} \end{displaymath} and \eqref{kernel-metric-thm-hxx} implies \begin{displaymath} \snorm{P-Q_j}_H^2 = \sum_{i\in I} \lambda_i \langle \aec{\boldsymbol{1}_X - h_j}, [e_i]_\sim\rangle_{\Lx 2 \nu}^2= \sum_{i\in I} \lambda_i \langle \aec{- \a e_j}, [e_i]_\sim\rangle_{\Lx 2 \nu}^2 = \lambda_j c_\infty^{-2}\, . \end{displaymath} This shows the assertion. \end{proofof} \begin{proofof}{Corollary \ref{cor:codim}} In both cases it suffices to show \emph{ii)} of Theorem \ref{kernel-metric-thm}. \ada i Since $\codim [H]_\sim \geq 2$, there exist linearly independent $\aec {f_1}, \aec {f_2} \in [H]_\sim^\perp$.
If $\int_X f_2 \,\mathrm{d}\nu= 0$ then there is nothing left to prove, and if $\int_X f_2 \,\mathrm{d}\nu\neq 0$ then a quick calculation shows that \begin{displaymath} f := f_1 - \frac{\int_X f_1 \,\mathrm{d}\nu}{\int_X f_2 \,\mathrm{d}\nu} \cdot f_2 \end{displaymath} is the desired function. \ada {ii} Since $\codim [H]_\sim \geq 1$, there exists an $\aec f \in [H]_\sim^\perp\setminus\{0\}$, and from $\boldsymbol{1}_X\in H$ we conclude that $\int_X f \,\mathrm{d}\nu = \langle \aec f, \aec {\boldsymbol{1}_X}\rangle_{\Lx 2 \nu} = 0$. \end{proofof} \subsection{Proofs related to Section \ref{sec:groups}} \begin{proofof}{Lemma \ref{real-onb}} To check that $(e_i^*)_{i\in I}$ is an ONS, we first observe that the equivalences $i=j \Leftrightarrow -i=-j$ and $i=-j \Leftrightarrow -i=j$ imply for $a_i,a_j\in \{-1,1\}$ that \begin{align*} \langle &[ e_i + a_i \bar e_i ]_\sim , [e_j + a_j \bar e_j ]_\sim\rangle_{\Lx 2 {G, \mathbb{C}}} \\ & = \langle [ e_i ]_\sim + a_i [e_{-i}]_\sim , [e_j ]_\sim + a_j [e_{-j}]_\sim \rangle_{\Lx 2 {G, \mathbb{C}}} \\ & = \langle [ e_i ]_\sim , [e_{j}]_\sim \rangle +a_j \langle [ e_i ]_\sim , [e_{-j}]_\sim \rangle + a_i \langle [ e_{-i} ]_\sim , [e_{j}]_\sim \rangle + a_ia_j \langle [ e_{-i} ]_\sim , [e_{-j}]_\sim \rangle \\ & = (1+a_ia_j) \d_{i,j} + (a_i+a_j) \d_{i,-j} \, . \end{align*} Let us first consider $i,j\in I_0$. Then we have $i=j$ if and only if $i=-j$, that is, $\d_{i,j} = \d_{i,-j}$, and thus we find \begin{displaymath} \langle [e_i^*]_\sim , [e_j^*]_\sim \rangle_{\Lx 2 G} = \frac 1 4 \langle [ e_i + \bar e_i ]_\sim , [e_j + \bar e_j ]_\sim\rangle_{\Lx 2 {G, \mathbb{C}}} = \d_{i,j}\, .
\end{displaymath} Similarly, for $i,j\in I_+$ we cannot have $\d_{i,-j}=1$, since this would imply $i\in I_-$, and hence we obtain \begin{displaymath} \langle [e_i^*]_\sim , [e_j^*]_\sim \rangle_{\Lx 2 G} = \frac 1 2 \langle [ e_i + \bar e_i ]_\sim , [e_j + \bar e_j ]_\sim\rangle_{\Lx 2 {G, \mathbb{C}}} = \d_{i,j}\, , \end{displaymath} and for $i,j\in I_-$ we find by an analogous reasoning that \begin{displaymath} \langle [e_i^*]_\sim , [e_j^*]_\sim \rangle_{\Lx 2 G} = \frac 1 2 \langle [ e_i - \bar e_i ]_\sim , [e_j - \bar e_j ]_\sim\rangle_{\Lx 2 {G, \mathbb{C}}} = \d_{i,j}\, . \end{displaymath} For the mixed cases, in which $i$ and $j$ belong to different partition elements $I_0$, $I_+$, or $I_-$ we clearly have $i\neq j$, and hence the above calculation reduces to \begin{displaymath} \langle [ e_i + a_i \bar e_i ]_\sim , [e_j + a_j \bar e_j ]_\sim\rangle_{\Lx 2 {G, \mathbb{C}}} = (a_i+a_j) \d_{i,-j} \, . \end{displaymath} Now, if $i\in I_0$ and $j\not \in I_0$, then we cannot have $i=-j$ and hence we obtain $ \langle [e_i^*]_\sim , [e_j^*]_\sim \rangle_{\Lx 2 G} = 0$. For the same reason we find for $i\in I_+$ and $j\in I_-$ with $i\neq -j$ that $ \langle [e_i^*]_\sim , [e_j^*]_\sim \rangle_{\Lx 2 G} = 0$. Finally, for $i\in I_+$ and $j\in I_-$ with $i=-j$ we have \begin{displaymath} \langle [e_i^*]_\sim , [e_j^*]_\sim \rangle_{\Lx 2 G} = \frac 1 2 \langle [ e_i + \bar e_i ]_\sim , \mathrm i\cdot [e_j - \bar e_j ]_\sim\rangle_{\Lx 2 {G, \mathbb{C}}} = -\mathrm i \cdot(1-1)\cdot \d_{i,-j} = 0\, , \end{displaymath} and therefore we conclude that $([e_i^*]_\sim)_{i\in I}$ is an ONS in $\Lx 2 G$. Our next goal is to show that $\spann\{ e_i^*: i\in I\}$ is dense in $C(G)$. To this end, we fix an $f\in C(G)$ and an $\varepsilon>0$. 
Since $f\in C(G,\mathbb{C})$ and the span of $(e_i)_{i\in I}$ is dense in $C(G, \mathbb{C})$, there then exists a finite set $J\subset I$ and $(a_j)_{j\in J}\subset \mathbb{C}$ such that \begin{displaymath} \sup_{x\in G} \Bigl| \sum_{j\in J} a_j e_{j}(x) - f(x)\Bigr| < \varepsilon\, , \end{displaymath} and from this we conclude that \begin{align*} \sup_{x\in G} \Bigl| \sum_{j\in J} \re a_j \re e_{j}(x) - \sum_{j\in J} \im a_j \im e_{j}(x) - f(x)\Bigr| &= \sup_{x\in G} \Bigl| \re \Bigl( \sum_{j\in J} a_j e_{j}(x) \Bigr)- \re f(x)\Bigr| \\ & \leq \sup_{x\in G} \Bigl| \sum_{j\in J} a_j e_{j}(x) - f(x)\Bigr| \\ &< \varepsilon\, . \end{align*} In other words, the span of $(\re e_i)_{i\in I} \cup (\im e_i)_{i\in I}$ is dense in $C(G)$. However, our initial considerations showed that $\re e_i = \re e_{-i}$ and $\im e_i = -\im e_{-i}$ for all $i\in I$ as well as $\im e_i =0$ for all $i\in I_0$, and therefore $\spann\{ e_i^*: i\in I\}$ is dense in $C(G)$, too. Finally, $\nu$ is regular, and therefore $C(G)$ is dense in $\Lx 2 G$, see e.g.~\citet[Theorem 29.14]{Bauer01}. Consequently, $\spann\{ e_i^*: i\in I\}$ is dense in $\Lx 2 G$, and therefore $(e_i^*)_{i\in I}$ is an ONB of $\Lx 2 G$. The estimate $\inorm{e_i^*}\leq \sqrt 2$ follows from $|e_i(x)|= 1$ for all $x\in G$. \end{proofof} \begin{proofof}{Lemma \ref{real-bochner}} Let $\nu$ be the Haar measure of $G$. \atob {i}{ii} For a character $e_i\in \hat G$ and $x\in G$ a simple calculation shows \begin{align*} \int_G k(x,x') e_i(x')\,\mathrm{d}\nu( {x'}) = \int_G \k(-x +x') e_i(x')\,\mathrm{d}\nu( {x'} ) &= \int_G \k(x') e_i(x+x')\,\mathrm{d}\nu ({x'}) \\ &= \lambda_i e_i(x) \, , \end{align*} where $\lambda_i := \int_G \k(x') e_i(x')\,\mathrm{d}\nu ({x'})$.
Now, since $k$ is $\mathbb{R}$-valued, the integral operator $T_k^\mathbb{C}:\Lx 2 {G,\mathbb{C}} \to \Lx 2 {G,\mathbb{C}}$ is self-adjoint and the previous calculation shows that each character $e_i$ gives an eigenvector $[e_i]_\sim\in \Lx 2 {G,\mathbb{C}}$ of $T_k^\mathbb{C}$ with eigenvalue $\lambda_i$. Using that eigenvalues of self-adjoint operators are real numbers, we then find \begin{align*} T_k [\re e_i]_\sim = T_k^\mathbb{C} [\re e_i]_\sim &= \lambda_i [\re e_i]_\sim \\ T_k [\im e_i]_\sim= T_k^\mathbb{C} [\im e_i]_\sim &= \lambda_i [\im e_i]_\sim \end{align*} for all $i\in I$. Now recall that for $i\in I_0$ we have $\im e_i = 0$ and therefore these functions $\im e_i$ are \emph{not} eigenvectors of $T_k:\Lx 2 G\to \Lx 2 G$. By Lemma \ref{real-onb} we thus conclude that $([e_i^*]_\sim)_{i\in I}$ is an ONB of eigenvectors of $T_k$ with corresponding eigenvalues $(\lambda_i)_{i\in I}$, and in particular, there are no eigenvalues other than these. Moreover, $(\lambda_i)_{i\in I}$ is summable since \eqref{int-diag} holds. In addition, for $i\in I$ we have $\lambda_i = \lambda_{-i}$, where we note that for $i\not \in I_0$ the corresponding eigenvalues thus have a geometric multiplicity of at least two. Since $\nu$ is finite and strictly positive, we thus see that $k$ enjoys a Mercer representation \eqref{kernel_sum_cont} for the index set $I^*:= \{i\in I: \lambda_i>0\}$ and the sub-family $(e_i^*)_{i\in I^*}$.
Moreover, for $x,x'\in G$ this representation yields \begin{align*} \nonumber k(x,x') &= \sum_{\lambda_i > 0} \lambda_i e_i^*(x) e_i^*(x') \\ \nonumber &= \sum_{i\in I_0: \lambda_i>0} \lambda_i \re e_i(x) \re e_i(x') \\ \nonumber &\qquad + 2\sum_{i\in I_+: \lambda_i>0} \lambda_i \bigl( \re e_i(x) \re e_i(x') + \im e_i(x) \im e_i(x') \bigr) \\ \nonumber & = \sum_{i\in I_0: \lambda_i>0} \lambda_i \re e_i(-x'+x) + 2\sum_{i\in I_+: \lambda_i>0} \lambda_i \re e_i(-x' +x) \\ \nonumber & = \sum_{\lambda_i> 0} \lambda_i \re e_i(-x'+x)\, , \end{align*} where in the second to last step we used \eqref{add-thm} and the last step rests on $\re e_i = \re e_{-i}$. In addition, $\sup_{i\in I}\inorm{e_i^*}\leq \sqrt 2$ together with the summability of $(\lambda_i)_{i\in I}$ quickly shows that the series converge both absolutely and uniformly. Finally, the continuity of $k$ follows from the uniform convergence in \eqref{Mercer-on-G} and $e_i^*\in C(G)$ for all $i\in I$. \atob {ii}{i} We first observe that Lemma \ref{real-onb} together with \citet[Lemma 2.6]{StSc12a} shows that \eqref{Mercer-on-G} does indeed define a kernel $k$, and its translation invariance is built into the construction. Clearly, $k$ is measurable and $\sup_{i\in I}\inorm{e_i^*}\leq \sqrt 2$ together with the summability of $(\lambda_i)_{i\in I}$ shows that $k$ is bounded. \end{proofof}

\begin{proofof}{Theorem \ref{exist-univ-on-G}} The equivalences \emph{i)} $\Leftrightarrow$ \emph{iv)} $\Leftrightarrow$ \emph{vi)} have already been shown in Theorem \ref{exists-univer-char-kern}, and \emph{iii)} $\Rightarrow$ \emph{iv)} is trivial. In addition, if one of the conditions \emph{iii)} - \emph{vi)} is satisfied, Theorem \ref{exists-univer-char-kern} shows that $\sosrm G = \sosbm G$, where in the case of \emph{v)} we additionally need Lemma \ref{real-bochner}.
Therefore Lemma \ref{plusone-sipd} together with Proposition \ref{M0-char} and Theorem \ref{thm-new-char-2} shows that a continuous kernel $k$ is characteristic if and only if $k+1$ is universal. This yields the equivalences \emph{iv)} $\Leftrightarrow$ \emph{vi)}, and by the last statement in Lemma \ref{real-bochner}, also \emph{iii)} $\Leftrightarrow$ \emph{v)}. It thus suffices to show that \emph{ii)} $\Rightarrow$ \emph{iii)} and \emph{i)} $\Rightarrow$ \emph{ii)}. \atob {ii} {iii} If $\hat G$ is at most countable, so is $I$, and hence there exists a family $(\lambda_i)_{i\in I}$ with $\lambda_i>0$ for all $i\in I$ and $\sum_{i\in I}\lambda_i<\infty$. The kernel $k$, which is constructed by \eqref{Mercer-on-G}, is then translation-invariant and continuous, and by Corollary \ref{uni-cor} it is also universal. \atob {i} {ii} By \citet[Theorem 3.2.11 and Corollary 3.3.2]{Schurle79} we know that $G$ is completely regular and hence \citet[Theorem V.6.6.]{Conway90} shows that $G$ is metrizable if and only if $C(G)$ is separable. We therefore see that $C(G)$ is separable. In addition, since $\nu$ is regular, \citet[Theorem 29.14]{Bauer01} shows that $C(G)$ is dense in $\Lx 2 G$. Consequently, $\Lx 2 G$ is separable, and since $(e_i^*)_{i\in I}$ is an ONB of $\Lx 2 G$, we conclude that $I$, and thus $\hat G$, is at most countable. \end{proofof} \begin{proofof}{Corollary \ref{char-on-G-char}} Since the Haar measure $\nu$ on $G$ is a finite, regular, and strictly positive Borel measure, we see by Lemma \ref{real-onb} that all assumptions of Corollary \ref{uni-cor} are satisfied. Moreover, we have $e^*_0 = \boldsymbol{1}_G$. Now the assertions follow from Corollary \ref{uni-cor}. \end{proofof} \subsection{Proofs related to Section \ref{sec:intro}} For the proof of Theorem \ref{thm:one} we assume that the general results on kernel mean embeddings recalled in Section \ref{sec:prelim} up to Definition \ref{def-char-kern} of characteristic kernels are available. 
Note that these results do not involve kernel scores at all, so that there is no danger of circular reasoning. \vspace*{1ex} \begin{proofof}{Theorem \ref{thm:one}} As explained around \eqref{square-finite}, the condition $P \in \mathcal{M}_1^k(X)$ ensures that $\Phi(P)$ defined by \eqref{eq:inj} is indeed an element of $H$. Now \eqref{main-obs} follows from \eqref{hinner}, namely \begin{align*} \snorm{\P(P)-\P(Q)}_H^2 &= \int\!\!\!\!\int\! k(x,y)\,\mathrm{d} P(x)\,\mathrm{d} P(y) + \int\!\!\!\!\int\! k(x,y)\,\mathrm{d} Q(x)\,\mathrm{d} Q(y) - 2\int\!\!\!\!\int\! k(x,y)\,\mathrm{d} P(x)\,\mathrm{d} Q(y)\\ &= 2 \left(\int S_k(P,x)\,\mathrm{d} Q(x) - \int S_k(Q,x)\,\mathrm{d} Q(x)\right). \end{align*} The remaining assertions directly follow from \eqref{main-obs}. \end{proofof} \subsection{Proofs related to Section \ref{sec:lcs}} \begin{proofof}{Theorem \ref{exist-sipd}} We fix a $C_0(X)$-kernel $k$ and a finite signed measure $\mu\in \sosbm X\setminus \sosrm X$. Then $f\mapsto \int_X f\,\mathrm{d}\mu$ defines a bounded linear operator $C_0(X)\to \mathbb{R}$, and by \eqref{riesz-repres} there thus exists a $\mu^*\in \sosrm X$ such that \begin{displaymath} \int_X f \,\mathrm{d}\mu = \int_X f\,\mathrm{d}\mu^* \end{displaymath} for all $f\in C_0(X)$. By $H\subset C_0(X)$ and \eqref{alternative-comp} we conclude that $\hnorm{\mu-\mu^*}=0$ while our construction ensures $\mu\neq \mu^*$. \end{proofof} \begin{proofof}{Theorem \ref{thm-new-char-2}} Using the already observed identity $I'=\P$ and \citet[Theorem 3.1.17]{Megginson98} we see that $I$ has a dense image if and only if $\P: \sosrm X\to H$ is injective. By \eqref{sipd} we then conclude that $k$ is universal if and only if $k$ is strictly integrally positive definite with respect to $\sosrm X$. \end{proofof} \begin{proofof}{Theorem \ref{exists-univer-char-kern}} \aeqb {i} {iii} This has been shown in \citet[Theorem 2]{StHuSc06a}. Moreover, it is well-known that compact metrizable Hausdorff spaces are Polish. 
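The identity \eqref{main-obs} verified in the proof of Theorem \ref{thm:one} above can also be checked numerically for discrete distributions. The sketch below is an illustration only; it assumes the kernel score has the form $S_k(P,x) = \tfrac12\,\mathbb{E}_{X,X'\sim P}\,k(X,X') - \mathbb{E}_{X\sim P}\,k(X,x)$, which is the form consistent with the displayed computation:

```python
import numpy as np

def k(x, y):
    return np.exp(-0.5 * (x - y) ** 2)       # Gaussian kernel on the real line

rng = np.random.default_rng(1)
xs = rng.normal(size=5)                      # common support points of P and Q
p = rng.dirichlet(np.ones(5))                # weights of P
q = rng.dirichlet(np.ones(5))                # weights of Q
K = k(xs[:, None], xs[None, :])              # Gram matrix on the support

# left-hand side: ||Phi(P) - Phi(Q)||_H^2 expanded via the inner product
mmd2 = p @ K @ p + q @ K @ q - 2 * p @ K @ q

def S(w, x):                                 # kernel score S_k(P, x) for P = (xs, w)
    return 0.5 * (w @ K @ w) - w @ k(xs, x)

# right-hand side: 2 * ( int S_k(P, x) dQ(x) - int S_k(Q, x) dQ(x) )
rhs = 2 * sum(q[i] * (S(p, xs[i]) - S(q, xs[i])) for i in range(5))
assert np.isclose(mmd2, rhs)
```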
The equality $\sosrm X = \sosbm X$ then follows from Ulam's theorem, see e.g.~\citet[Lemma 26.2]{Bauer01}. \atob i {ii} If there exists a universal kernel $k$, then we have already shown that $\sosrm X = \sosbm X$. Consequently, $k$ is strictly integrally positive definite with respect to $\sosbm X$ by Theorem \ref{thm-new-char-2}, and thus characteristic by Proposition \ref{M0-char}. \atob {ii} {i} Assume that there exists a characteristic kernel $k$. By Proposition \ref{M0-char} we know that $k$ is strictly integrally positive definite with respect to $\sosbmh 0{} X$. Then $k+1$ is a bounded and continuous kernel, which is strictly integrally positive definite with respect to $\sosbm X$ by Lemma \ref{plusone-sipd}. Using $\sosrm X \subset \sosbm X$ and Theorem \ref{thm-new-char-2} we conclude that $k$ is universal. \end{proofof} \begin{proofof}{Corollary \ref{product-char}} \atob i {ii} Since $(X_1\times X_2, \t_1\otimes \t_2)$ is a compact metrizable space, we have $\sosrm {X_1\times X_2} = \sosbm {X_1\times X_2}$, and hence the implication follows from Proposition \ref{M0-char} and Theorem \ref{thm-new-char-2}. \atob {ii} {iii} Since $(X_1,\t_1)$ and $(X_2,\t_2)$ are assumed to be non-trivial, we find $\dim \sosbm {X_1} \geq 2$ and $\dim \sosbm {X_2} \geq 2$. Now \emph{iii)} follows from Lemma \ref{product-form} and Theorem \ref{thm-new-char-2}. \atob {iii} i This can be shown by the theorem of Stone-Weierstra\ss, see e.g.~\citet[Lemma A.5]{StThScXXa} for details. \end{proofof} \begin{proofof}{Theorem \ref{thm-new-char-3}} Before we begin, we write $E:= \spann\{ e_i: i\in I\}$ and denote the RKHS of $k$ by $H$. \aeqb {ii} {iii} Via the isomorphism \eqref{riesz-repres} between $C_0(X)'$ and $\sosrm X$ we easily see that \emph{ii)} is equivalent to the statement $\varphi'_{|E} =0 \implies \varphi'=0$ for all $\varphi'\in C_0(X)'$ and by Hahn-Banach's theorem, see e.g.~\citet[p.~64, Corollary II.3.13]{DuSc58}, the latter is equivalent to \emph{iii)}. 
\atob {iii} {i} The chain of inclusions $E\subset H\subset C_0(X)$ immediately gives the desired implication. \atob {i} {iii} Clearly, we have $E = \spann\{ \sqrt{\lambda_i}e_i: i\in I\}$ and since $(\sqrt{\lambda_i} e_i)_{i\in I}$ is an ONB of $H$, we conclude that $\overline E^{H} = H$. Let us now fix an $\varepsilon>0$ and an $f\in C_0(X)$. Since $k$ is universal, there then exists an $h\in H$ with $\inorm {f-h}\leq \varepsilon$ and for this $h$ our initial consideration yields an $e\in E$ with $\hnorm{e-h}\leq \varepsilon$. Combining both estimates we find \begin{displaymath} \inorm{f-e} \leq \inorm {f-h} + \inorm{h-e} \leq \varepsilon + \inorm k\hnorm{h-e} \leq (1+\inorm k)\, \varepsilon\, , \end{displaymath} and hence $E$ is dense in $C_0(X)$. To check the last statement, let us assume that $E$ is dense in $C_0(X)$. Using that $\nu$ is finite we then see that $E$ is also dense in $C_0(X)$ with respect to $\snorm\cdot_\Lx 2 \nu$, and since $C_0(X)$ is dense in $\sLx 2 \nu$ by the regularity of $\nu$, see e.g.~\citet[Theorem 29.14]{Bauer01}, we conclude that $E$ is dense in $\sLx 2 \nu$. This shows that $[E]_\sim = \spann\{ [e_i]_\sim: i\in I\}$ is dense in $\Lx 2 \nu$, and therefore $([e_i]_\sim)_{i\in I}$ is indeed an ONB of $\Lx 2 \nu$. \end{proofof} \begin{proofof}{Corollary \ref{uni-cor}} \ada i Let us first assume that $\lambda_i>0$ for all $i\in I$. By \citet[Lemma 2.6 and Theorem 2.11]{StSc12a} we then see that $k$ is of the form considered in Theorem \ref{thm-new-char-3}, and since $\spann\{e_i:i\in I\} $ is dense in $C_0(X)$ by our assumption, we conclude that $k$ is universal. To show the converse implication we set $I^* := \{i\in I: \lambda_i>0\}$ and assume that $k$ is universal but $I^* \neq I$. 
By the definition of $I^*$ we have \begin{displaymath} k(x,x') = \sum_{i\in I^*} \lambda_i e_i(x)e_i(x') \end{displaymath} for all $x,x'\in X$, and by \citet[Lemma 2.6 and Theorem 2.11]{StSc12a} we thus conclude that $(e_i)_{i\in I^*}$ is the family considered in Theorem \ref{thm-new-char-3}. Consequently, this sub-family $([e_i]_\sim)_{i\in I^*}$ is already an ONB in $\Lx 2 \nu$, which, however, is impossible for $I^* \neq I$. \ada{ii} Let us first assume that $k$ is characteristic and that there exists a $j\neq i_0$ with $\lambda_j=0$. We have $[e_j]_\sim \in \Lx 1 \nu$ because $\nu(X) < \infty$. Moreover, $[e_j]_\sim \perp [e_i]_\sim$ for all $i\neq j$ implies both $\int_X {e_j} \,\mathrm{d}\nu =0$ and $[e_j]_\sim \in [H]_\sim^\perp$, where in the last step we used \eqref{span_ei}. Consequently, $k$ cannot be characteristic by Theorem \ref{kernel-metric-thm}. Conversely, assume that $\lambda_i >0$ for all $i\neq i_0$. If $\lambda_{i_0}>0$ then $k$ is actually universal by the already established part \emph{i)}, and thus characteristic. If $\lambda_{i_0}=0$, then the kernel \begin{displaymath} k(x,x') + 1 = \sum_{i\neq i_0} \lambda_i e_i(x)e_i(x') + e_{i_0}(x)e_{i_0}(x') \end{displaymath} is universal by part \emph{i)} and thus $k$ is characteristic by Theorem \ref{thm-new-char-2}, Lemma \ref{plusone-sipd}, Proposition \ref{M0-char}, and $\sosrm X = \sosbm X$. \ada {iii} If $\lambda_i >0$ for all $i\in I$, then \emph{i)} shows that $k$ is universal, and by Theorem \ref{thm-new-char-2}, Proposition \ref{M0-char}, and $\sosrm X = \sosbm X$ we conclude that $k$ is characteristic. To show the converse implication, we assume that $k$ is characteristic and there is an $i_{0}\in I$ with $\lambda_{i_0}=0$. Since $([e_i]_\sim)_{i\in I}$ is an ONS in $\Lx 2 \nu$ and $[e_{i_0}]_\sim \not\in \overline{[H]_\sim} = \overline{\spann\{ [e_i]_\sim: \lambda_i>0\}}$, see \eqref{span_ei}, we conclude that $[e_{i_0}]_\sim \in [H]_\sim^\perp$.
On the other hand, $\boldsymbol{1}_X \in H$ gives $[\boldsymbol{1}_X]_\sim \in [H]_\sim$, and thus we find \begin{displaymath} 0 = \langle [e_{i_0}]_\sim, [\boldsymbol{1}_X]_\sim\rangle_{\Lx 2 \nu} = \int_X e_{i_0} \,\mathrm{d}\nu\, . \end{displaymath} Finally, $[e_{i_0}]_\sim \neq 0$ is obvious and $[e_{i_0}]_\sim \in \Lx 1 \nu$ follows from $\nu(X)<\infty$, so that Theorem \ref{kernel-metric-thm} shows that $k$ is not characteristic. \end{proofof}

\subsection{Proofs related to Section \ref{sec:Sd}}

\begin{proofof}{Theorem \ref{thm:32}} We first consider the case $\psi \in \Psi_{d+2}$. If the kernel $k$ on $\mathbb{S}^d$ induced by $\psi$ is strictly positive definite then Lemma \ref{prop:1} implies that $b_{n,d}>0$ for all $n \ge 0$. By Theorem \ref{thm:Schoenberg}, $k$ is then universal and characteristic. Conversely, if $k$ is universal or characteristic then $b_{n,d} > 0$ for all $n \ge 1$ by Theorem \ref{thm:Schoenberg}, thus it is strictly positive definite by Remark \ref{rem:spd2cb}. In the case $\psi \in \Psi_{d+1}^+$, we have $\psi \in \Psi_{d}^+$, and hence it suffices to show that $k$ is universal and characteristic. This, however, follows from Lemma \ref{prop:34} and Theorem \ref{thm:Schoenberg}. \end{proofof}

\begin{proofof}{Theorem \ref{thm:33}} Suppose that the kernel $k$ is induced by $\psi\in \Psi_{\infty}^+$. Then $b_{n,d} >0$ for all $n \in \mathbb{N}_0$ by Lemma \ref{prop:34} and hence the kernel is universal and characteristic by Theorem \ref{thm:Schoenberg}. Suppose now that $k$ is characteristic. By Proposition \ref{prop:421} we obtain that $\psi \in \Psi_{\infty}^+$. \end{proofof}

\begin{proofof}{Lemma \ref{prop:1}} It is clear from the results of \citet{ChenMenegattoETAL2003} that $b_{n,d} > 0$ for all $n \in \mathbb{N}_0$ is a sufficient condition for $\psi$ being strictly positive definite. Suppose now that $\psi \in \Psi_{d+2} \cap \Psi_{d}^+$.
\citet[Corollary 4]{Gneiting2013} implies that if $b_{2k+2,d} > 0$ ($b_{2k+1,d}> 0$) for some $k$, then $b_{2k' + 2,d} > 0$ ($b_{2k'+1,d}> 0$) for all $k' \le k$. This yields the claim by Remark \ref{rem:spd2cb}. \end{proofof}

\begin{proofof}{Lemma \ref{prop:34}} This is shown in \citet[Proof of Corollary 1(b)]{Gneiting2013}. \end{proofof}

\begin{proofof}{Proposition \ref{prop:421}} Assume that $\psi$ is not strictly positive definite, or, by Remark \ref{rem:spd2cb}, does not satisfy condition $b$. We will show that it cannot be characteristic. First, we construct a special class of probability densities $p$ on $\mathbb{S}^d$ such that we explicitly know the integrals \begin{equation}\label{eq:ckjp} c_{k,j}(p) := \int_{\mathbb{S}^d} e_{k,j}(y) p(y)\,\mathrm{d} \sigma(y) \end{equation} with respect to the basis of spherical harmonics. Here, $\sigma$ is the surface area measure on $\mathbb{S}^d$ normalized such that $\int_{\mathbb{S}^d}\,\mathrm{d} \sigma = 1$. Fix $v_0 \in \mathbb{S}^d$ and $a \in [-1,1]\backslash \{0\}$. We have \begin{align*} |C_n^{(d-1)/2}(x)|\le C_n^{(d-1)/2}(1)\, , \qquad \qquad x \in [-1,1], \end{align*} see \citetalias[18.14.4]{dlmf} (for $d \ge 2$), and therefore, for all $n \ge 1$, $x \in \mathbb{S}^d$, we have \begin{equation}\label{eq:pna} p_{n,a}(x) := 1 + a\frac{C_n^{(d-1)/2}(\langle v_0,x\rangle)}{C_n^{(d-1)/2}(1)} \ge 0, \end{equation} and $\int_{\mathbb{S}^d} p_{n,a}(x)\,\mathrm{d} \sigma(x) = 1$, where for the last equality we used that $C_{n}^{(d-1)/2}(\langle v_0,\cdot\rangle)$ is a spherical harmonic of degree $n$ and thus orthogonal to $e_{0,0} = \boldsymbol{1}_{\mathbb{S}^d}$. Consequently, $p_{n,a}$ is a probability density function on $\mathbb{S}^d$ with respect to the surface area measure $\sigma$. Note that $p_{n,a}$ and $p_{n',a}$ induce different probability measures on $\mathbb{S}^d$ for $n \not=n'$.
We obtain \begin{equation}\label{eq:ckjpna} c_{k,j}(p_{n,a}) = \delta_{k,0} + \delta_{k,n}\frac{a}{N(d,k)}e_{k,j}(v_0) \end{equation} using \eqref{eq:legendre}, and the Funk-Hecke Theorem \citep[Theorem 3.4.1]{Groemer1996} yields that \begin{equation*} \int_{\mathbb{S}^d} \langle x,y \rangle^n e_{k,j}(y) \,\mathrm{d} \sigma(y) = \lambda_k^n e_{k,j}(x), \quad x \in \mathbb{S}^d, j = 1,\dots,N(d,k), \end{equation*} where \[ \lambda_k^n = \frac{\Gamma((d+1)/2)}{\sqrt{\pi}\Gamma(d/2)}C_k^{(d-1)/2}(1)^{-1}\int_{-1}^1 t^n C_k^{(d-1)/2}(t)(1-t^2)^{(d-2)/2}\,\mathrm{d}t. \] Since the family $(e_{n,j})_{n\in \mathbb{N}_0, j=1,\dots,N(d,n)}$ is an ONB of $\Lx 2 {\mathbb{S}^d}$, we obtain the following Mercer representation of the bounded and continuous kernel $(x,y)\mapsto \langle x,y\rangle^n$ on $\mathbb{S}^d$: \begin{equation}\label{eq:xyton} \langle x,y\rangle^n = \sum_{k=0}^{\infty} \sum_{j=1}^{N(d,k)} \lambda_k^n e_{k,j}(x)e_{k,j}(y)\, , \end{equation} where for each $x$ the convergence is uniform in $y$ by \citet[Corollary 3.5]{StSc12a}. Using that $C_k^{(d-1)/2}$ is even for $k$ even and odd for $k$ odd, one obtains that $\lambda_k^n=0$ if $k-n$ is odd. The $C_k^{(d-1)/2}$ are orthogonal with respect to the weight function $(1-t^2)^{(d-2)/2}$ \citetalias[18.3.1]{dlmf}, therefore $\lambda_k^n = 0$ for $k > n$. Finally, the formula \citetalias[18.17.37]{dlmf} for the Mellin transform yields that \[ \lambda_k^n = \frac{\pi 2^{d-n-1}\Gamma(k+d-1)\Gamma(n+1)}{k! \Gamma(\frac{d-1}{2})\Gamma(\frac{k+d+n}{2})\Gamma(\frac{n-k + 2}{2})} > 0 \quad k \le n, \; k-n \; \text{even}.
\] For a probability density $p$ on $\mathbb{S}^d$, we have by \eqref{eq:Schoenberg} and \eqref{eq:xyton} for $x \in \mathbb{S}^d$, \begin{align*} \int_{\mathbb{S}^d} k(x,y) p(y) \,\mathrm{d}\sigma(y) &= \sum_{n=0}^{\infty} \sum_{k=0}^n \sum_{j=1}^{N(d,k)} b_n\lambda_k^n e_{k,j}(x) c_{k,j}(p)\\ &= \sum_{k=0}^{\infty} z_k \sum_{j=1}^{N(d,k)} c_{k,j}(p) e_{k,j}(x), \end{align*} where $z_k = \sum_{n=k}^{\infty} b_n \lambda_k^n$ and $c_{k,j}(p)$ is defined at \eqref{eq:ckjp}. If $b_n = 0$ for all even $n \ge n_0$ then $z_k = 0$ for all even $k \ge n_0$. If $b_n =0$ for all odd $n \ge n_0$ then $z_k = 0$ for all odd $k \ge n_0$. Let us start with the case that $b_n = 0$ for all even $n \ge n_0$, i.e.~$z_k = 0$ for all even $k\ge n_0$. For all $m \in \mathbb{N}_0$, we have $c_{k,j}(p_{2m,a}) = 0$ for $k$ odd by \eqref{eq:ckjpna}, where $p_{n,a}$ is defined at \eqref{eq:pna}. Hence, for $2m \ge n_0$ and $x\in \mathbb{S}^d$, we obtain \begin{align*} \int_{\mathbb{S}^d} &k(x,y) p_{2m,a}(y) \,\mathrm{d}\sigma(y) \\&= \sum_{k=0,k\text{ even}}^{n_0} z_k \sum_{j=1}^{N(d,k)}\left(\delta_{k,0} + \delta_{k,2m}\frac{a}{N(d,k)}e_{k,j}(v_0)\right) e_{k,j}(x) = z_0, \end{align*} which shows that the kernel mean embedding maps all these densities to the constant function with value $z_0$. Consequently, $k$ is not characteristic. Suppose now that $b_n = 0$ for all odd $n \ge n_0$, i.e.~$z_k = 0$ for all odd $k \ge n_0$. For all $m \in \mathbb{N}_0$, we have $c_{k,j}(p_{2m+1,a}) = \delta_{k,0} $ for $k$ even by \eqref{eq:ckjpna}. Hence, for $2m + 1 \ge n_0$ and $x\in \mathbb{S}^d$, we obtain \begin{align*} \int_{\mathbb{S}^d} &k(x,y) p_{2m+1,a}(y) \,\mathrm{d}\sigma(y) \\&= z_0 + \sum_{k=1,k\text{ odd}}^{n_0} z_k \sum_{j=1}^{N(d,k)} \delta_{k,2m+1}\frac{a}{N(d,k)}e_{k,j}(v_0) e_{k,j}(x)= z_0, \end{align*} which again shows that the kernel mean embedding maps all these densities to the constant function with value $z_0$. Consequently, $k$ is not characteristic.
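The vanishing and positivity claims for $\lambda_k^n$ used in this proof can be spot-checked numerically in the special case $d = 2$, where $C_k^{(d-1)/2}$ reduces to the Legendre polynomial $P_k$ and the weight $(1-t^2)^{(d-2)/2}$ is constant. The sketch below is an illustration only:

```python
import numpy as np
from numpy.polynomial import legendre

# For d = 2, lambda_k^n is a positive multiple of
#   I(k, n) = int_{-1}^{1} t^n P_k(t) dt.
# The proof claims: I(k, n) = 0 if k > n or k - n is odd, else I(k, n) > 0.

def I(k, n):
    c = legendre.poly2leg([0] * n + [1])     # expand t^n in the Legendre basis
    # orthogonality: int_{-1}^{1} P_j P_k dt = 2/(2k+1) * delta_{jk}
    return c[k] * 2 / (2 * k + 1) if k < len(c) else 0.0

for n in range(6):
    for k in range(8):
        val = I(k, n)
        if k > n or (n - k) % 2 == 1:
            assert abs(val) < 1e-12          # lambda_k^n vanishes
        else:
            assert val > 0                   # lambda_k^n is strictly positive
```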
\end{proofof}
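A finite analogue of the argument just completed can be sketched on the cyclic group $\mathbb{Z}/n$: zeroing a symmetric pair of non-constant eigenvalues yields a bounded, translation-invariant kernel whose mean embedding identifies two distinct distributions, so the kernel is not characteristic. The sketch is an illustration only (all parameters are made up):

```python
import numpy as np

n, j0 = 6, 2
lam = np.ones(n)
lam[j0] = lam[n - j0] = 0.0                  # zero a symmetric pair of eigenvalues

x = np.arange(n)
F = np.exp(2j * np.pi * np.outer(x, x) / n)  # F[x, j] = e_j(x), the characters
K = (F * lam) @ F.conj().T                   # k(x,x') = sum_j lam_j e_j(x) conj(e_j(x'))
assert np.allclose(K.imag, 0)
K = K.real                                   # real, translation-invariant kernel matrix

p = np.full(n, 1 / n)                        # uniform distribution on Z/n
q = p + 0.1 * np.cos(2 * np.pi * j0 * x / n) / n   # perturb along the killed mode
assert np.all(q >= 0) and np.isclose(q.sum(), 1.0)

# the embeddings Phi(P) = K p and Phi(Q) = K q coincide although P != Q
assert np.allclose(K @ p, K @ q) and not np.allclose(p, q)
```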
\section{Introduction} Precise segmentation of infant brain MRI into white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF) is essential to study early brain development. Throughout this period, the postnatal human brain shows its most dynamic phase of development, with a rapid growth of tissues and the formation of key cognitive and motor functions \cite{paus2001maturation}. Infant brain segmentation is also important to detect brain abnormalities occurring shortly after birth, such as hypoxic ischemic encephalopathy, hydrocephalus or congenital malformations, enabling the prediction of neuro-developmental outcomes. Magnetic resonance imaging (MRI) is commonly used for infant brains because it provides a safe and non-invasive way of examining cross-sectional views of the brain in multiple contrasts. Brain MRI in the first two years can be divided into three distinct phases: infantile ($<$ 6 months), isointense (6-12 months) and early adult-like phase ($>$12 months). Images in the isointense phase show patterns of isointense contrast between white and gray matter (e.g., see Fig. \ref{fig:isointense}), which may vary across brain regions due to nonlinear brain development \cite{paus2001maturation}. These patterns, along with factors such as limited acquisition time, increased noise, motion artifacts, severe partial volume effects due to smaller brain sizes, and ongoing white matter myelination, make automatic segmentation of isointense infant brain MRI a challenging task. \begin{figure}[ht!]
\begin{center} \mbox{ \includegraphics[height=0.30\linewidth]{Pat1-T1.png} \hspace{-2.5 mm} \includegraphics[height=0.30\linewidth]{Pat1-T2.png} \hspace{-2.5 mm} \includegraphics[height=0.30\linewidth]{Pat1-Labels.png} } \caption{Example of data from a training subject. 6-month infant brain images from a mid-axial T1w slice (\textit{left}), the corresponding T2w slice (\textit{middle}), and the ground-truth labels (\textit{right}).} \label{fig:isointense} \end{center} \end{figure} \subsection{Related work} \label{ssec:relatedWork} A popular approach for automatic segmentation uses atlases to model the anatomical variability of target structures \cite{prastawa2005automatic,weisenfeld2006segmentation,song2007clinical,weisenfeld2009automatic,shi2010neonatal,shi2010construction,shi2011infant,melbourne2012neobrains12,cardoso2013adapt,wang2014segmentation}. In such an approach, an atlas (or multiple atlases) is first registered to a target image and then used to propagate manual labels to this image. When several atlases are considered, labels from individual atlases can be combined into a final segmentation via a label fusion strategy \cite{weisenfeld2006segmentation,weisenfeld2009automatic,wang2012atlas} such as the STAPLE (Simultaneous Truth and Performance Level Estimation) algorithm \cite{warfield2004simultaneous}. Atlas-based methods have been widely used in a broad range of segmentation problems, e.g., parcellation of brain MRI into subcortical structures \cite{dolz2015segmentation}. Although these methods provide state-of-the-art performance in many applications, they are usually sensitive to the registration process, and may fail if the image has a low contrast or the target structure has a large variability. This is particularly problematic in the case of infant brain segmentation, due to isointense contrasts and the high spatial variability of the infant population.
To overcome the limitations of atlas-based methods, parametric \cite{prastawa2005automatic,xue2007automatic,ledig2012neonatal,wu2012automatic,cardoso2013adapt} or deformable \cite{wang2011automatic} models can be used in a refinement step. Parametric models typically formulate segmentation as the optimization of an energy function, which integrates pixel/voxel probabilities from the atlas with priors restricting the shape or pairwise interactions of brain tissues \cite{xue2007automatic,ledig2012neonatal,melbourne2012neobrains12,cardoso2013adapt}. Such models often need a large number of annotated images, which are rarely available in practice. Deformable models refine atlas-produced contours in an iterative manner so as to align better with image edges. However, these models are typically structure-specific and difficult to extend to multi-tissue segmentation. Recently, deep learning methods based on convolutional neural networks (CNNs) have demonstrated outstanding performances in a wide range of computer vision and image analysis applications. In particular, CNNs have achieved state-of-the-art results for various problems \cite{long2015fully,kamnitsas2017efficient,DolzNeuro2017,Fechter_Esophagus}, including the segmentation of infant brain MRI \cite{moeskops2016automatic,zhang2015deep,nie2016fully}. For instance, Moeskops et al. \cite{moeskops2016automatic} presented a multi-scale 2D CNN architecture, which yielded accurate segmentations and spatial consistency using a single image modality (i.e., T2w MRI). To acquire multi-scale information, they considered patches and convolution kernels of multiple sizes. Independent paths were used for patches of different sizes, and the features of these paths were combined at the end of the network. Several recent studies investigated architectures based on multiple modalities as input, in order to overcome the extremely low contrast between WM and GM tissues. For example, Zhang et al.
\cite{zhang2015deep} proposed a deep CNN combining MR-T1, T2 and fractional anisotropy (FA) images. Similarly, a fully convolutional neural network (FCNN) was proposed in \cite{nie2016fully} for segmenting isointense phase brain MR images. As further explained in Section \ref{sec:methods}, an FCNN is a special type of CNN that generates dense pixel predictions. Instead of simply stacking the three modalities at the network input, the network in \cite{nie2016fully} processes each modality within an independent path. The final segmentation is obtained by fusing the ensuing paths. These approaches have some important drawbacks. First, the architectures in \cite{moeskops2016automatic} and \cite{zhang2015deep} use sliding windows, each defining a region that is processed independently of the other windows. Such a strategy is not efficient because of the many redundant convolution and pooling operations. Furthermore, processing these regions independently yields non-structured predictions, which affects segmentation accuracy. Second, these networks use 2D patches as input. This does not account for the anatomic context in directions orthogonal to the 2D plane. As first shown in \cite{kamnitsas2017efficient}, and later in \cite{DolzNeuro2017} in a different context of brain structure segmentation, considering 3D data directly, instead of slice-by-slice, can improve segmentation accuracy. Table \ref{table:table_RV_seg} provides a brief summary of the existing methods for infant brain tissue segmentation. For a detailed review of the methods proposed to address this task, we refer the reader to the recent work of Makropoulos et al. \cite{makropoulos2017review}. \begin{table*}[h!]
\begin{center} \begin{scriptsize} \centering \renewcommand{\arraystretch}{1.25} \caption{A brief summary of existing methods for infant brain tissue segmentation.} \label{table:table_RV_seg} \begin{tabular}{lC{38mm}C{15mm}C{15mm}C{15mm}} \toprule \textbf{Method} & \textbf{Technique} & \textbf{Modality} & \textbf{Stage}** & \textbf{Time} \\ \midrule\midrule Prastawa et al., 2005 \cite{prastawa2005automatic} & Parametric (Graph clustering) & T1, T2 & IF & -- \\ Weisenfeld et al., 2006 \cite{weisenfeld2006segmentation} & Atlas + Bayesian Classifier & T1, T2 & IF & --\\ Nishida et al., 2006 \cite{nishida2006detailed} & -- & T1 & IF & --\\ Xue et al., 2007 \cite{xue2007automatic} & Parametric (EM-MRF) & T2 & IS & -- \\ Song et al., 2007 \cite{song2007clinical} & -- & T2 & IF & --\\ Anbeek et al., 2008 \cite{anbeek2008probabilistic} & K-Nearest Neighbours & T2, IR & IF & --\\ Weisenfeld and Warfield, 2009 \cite{weisenfeld2009automatic} & Multi-atlas & T1, T2 & IF & --\\ Shi et al., 2010 \cite{shi2010construction} & Multi-atlas & T2 & IF & -- \\ Wang et al., 2011 \cite{wang2011automatic} & Level-sets & T2 & IF & --\\ Gui et al., 2012 \cite{gui2012morphology} & -- & T1, T2 & IS & 60-75 min\\ Ledig et al., 2012 \cite{ledig2012neonatal} & Parametric (MRF) & T2 & IS & --\\ Makropoulos et al., 2012 \cite{makropoulos2012automatic} & Multi-atlas & T2 & IS & 95 min\\ Melbourne et al., 2012 \cite{melbourne2012neobrains12} & Parametric (EM-MRF) & T2 & IS & 15 min\\ Wang et al., 2012 \cite{wang2012atlas} & Multi-atlas & T1, T2 & IS & 7 min\\ Wu and Avants, 2012 \cite{wu2012automatic} & Parametric (MAP-EM) & T1, T2 & IS & 80-100 min\\ Cardoso et al., 2013 \cite{cardoso2013adapt} & Parametric (EM-MRF) & T1 & IS & --\\ He and Parikh, 2013 \cite{he2013automated} & -- & T2, PD & IF & -- \\ Wang et al., 2014 \cite{wang2014integration} & Multi-atlas & T1, T2, FA & IF/IS/EA & --\\ Wang et al., 2014 \cite{wang2014segmentation} & Multi-atlas & T2 & IF & -- \\ Li et al., 2015 \cite{wang2015links} &
Random Forests & T1, T2, FA & IS & 5 min\\ Moeskops et al., 2015 \cite{moeskops2015automatic} & SVM & T2 & IS & -- \\ Zhang et al., 2015 \cite{zhang2015deep} & 2D CNN & T1, T2, FA & IS & 1 min/slice\\ Moeskops et al., 2016 \cite{moeskops2016automatic} & 2D CNN & T2 & IS & -- \\ Nie et al., 2016 \cite{nie2016fully} & 2D CNN & T1, T2, FA & IS & -- \\ \bottomrule \multicolumn{4}{l}{**Infantile (IF) / Isointense (IS) / Early-adult like (EA)} \end{tabular} \end{scriptsize} \end{center} \end{table*} While fully-automatic, learning-based medical image segmentation methods have improved substantially over the last years, the performances in many applications are still insufficient for practical use, more so when the amount of training data is very limited, as is typically the case in medical applications. For instance, manual annotation of brain MRI (i.e., assigning a label to each voxel) is a highly complex and time-consuming process, which requires extensive expertise. This is particularly the case for infant brain MRI. Therefore, both active and semi/weakly supervised learning frameworks are currently attracting substantial interest in medical image analysis \cite{yang2017suggestive,rajchl2017deepcut,bai2017semi}. For instance, in the recent study in \cite{yang2017suggestive}, Yang et al. proposed an active learning framework, and showed its potential in the context of segmenting glands in histology images and lymph nodes in ultrasound. The purpose of \cite{yang2017suggestive} was to design algorithms that suggest a small set of images to annotate, leading to the highest possible performance improvement when adding these new annotations to the training set. The framework in \cite{yang2017suggestive} uses an ensemble of deep CNNs to compute an agreement score for candidate instances, and suggests representative instances with the highest uncertainty. However, since suggestions are made at the image level, manual annotations of full images are still required.
Using an ensemble of CNNs, each trained with a different set of images, can further improve robustness by reducing test error due to variance \cite{kamnitsas2017ensembles}. \subsection{Contributions and outline} \label{ssec:contributions} The contributions of this study can be itemized as follows: \begin{itemize} \item This work presents the first ensemble of 3D CNNs for suggesting annotations within images. An important benefit of an ensemble of predictors is the ability to measure their level of agreement. This is particularly useful for evaluating the reliability of the segmentations at a voxel level and suggesting local corrections in regions where the ensemble is not confident about the prediction. An important finding in our experiments is that {\em prediction uncertainty, measured as the inverse of predictor agreement within the ensemble, is highly correlated with segmentation errors}. \item Inspired by the recent success of {\em dense} networks \cite{huang2016densely}, we propose a novel architecture called \emph{SemiDenseNet}, which connects all convolutional layers directly to the end of the network. This semi-dense architecture allows the efficient propagation of gradients during training, while limiting the number of trainable parameters. Our network requires one order of magnitude fewer parameters than popular medical image segmentation networks such as 3D U-Net \cite{cciccek20163d}. Furthermore, by combining the feature maps of intermediate convolutional layers into the first fully-connected layer, our architecture injects multiscale information into the final segmentation. \item As in recent approaches for infant brain MRI segmentation \cite{zhang2015deep,nie2016fully}, the proposed network addresses the problem of low contrast by using multiple image modalities as inputs. So far, to the best of our knowledge, there is no clear guideline in the literature as to how to combine multi-modal images in the network.
While stacking available modalities into a single-input network works well in some cases \cite{zhang2015deep}, other works have shown the advantage of having independent paths for modalities and combining these paths further in the network \cite{nie2016fully}. Another contribution of our work is an investigation of the impact that early or late fusion of several modalities might have on performance. \item We report evaluations of our method on the publicly available data of the MICCAI iSEG-2017 Grand Challenge on 6-month infant brain MRI Segmentation\footnote{http://iseg2017.web.unc.edu/rules/results/}. We obtained very competitive results among 21 teams, ranking first and second in most metrics. \end{itemize} The remainder of this paper is organized as follows. In Section \ref{sec:methods}, we present the proposed semi-dense architecture, and detail how an ensemble of networks can be used to suggest annotations. We also describe the evaluation protocol used in our study. Section \ref{sec:results} demonstrates the performance of our method on data from the MICCAI iSEG-2017 Challenge. Finally, in Section \ref{sec:discussion}, we discuss the main contributions and results of this work, and propose some potential extensions. \section{Methods}\label{sec:methods} Convolutional neural networks (CNNs) are a special type of artificial neural networks that learn a hierarchy of increasingly complex features by successive convolution, pooling and non-linear activation operations \cite{lecun1998gradient,krizhevsky2012imagenet}. Originally designed for image recognition and classification, CNNs are now commonly used in semantic image segmentation. A naive approach follows a sliding-window strategy, where regions defined by the window are processed independently. As explained before, this technique has two main drawbacks: reduced segmentation accuracy and low efficiency.
An alternative approach, known as fully CNNs (FCNNs) \cite{long2015fully}, mitigates these limitations by considering the network as a single non-linear convolution, which is trained in an end-to-end fashion. An important advantage of FCNNs, compared to standard CNNs, is that they can be applied to images of arbitrary size. Moreover, because the spatial map of class scores is obtained in a single dense inference step, FCNNs can avoid redundant convolution and pooling operations, making them computationally more efficient. The proposed architectures (Fig. \ref{fig:CNN_archit_Early} and \ref{fig:CNN_archit_Late}) are built on top of \textit{DeepMedic} \cite{kamnitsas2017efficient} and extend our network presented in \cite{DolzNeuro2017}, which showed state-of-the-art performance on the task of segmenting subcortical brain structures in MRI. Unlike this network, the proposed architectures use multiple image modalities as input. Moreover, while the previous network has skip-forward connections for only a few convolutional layers, these architectures follow a denser connection strategy, where feature maps from all convolutional layers are aggregated before the first fully-connected layer. Another notable difference is the proposed ensemble learning strategy, where multiple 3D CNNs are combined to improve robustness. \subsection{Semi-dense 3D fully CNN} \label{ssec:semi-dense} Recent hardware developments, in particular those related to graphic processing units (GPUs), have increased the amount of memory available during inference. This has led to a rise in CNN architectures based on 3D convolutions \cite{kamnitsas2017efficient,dou20163d,milletari2016v,lu2017automatic}, which have a much larger number of parameters than their 2D counterpart. To fit volumetric data into memory, 3D CNNs typically perform pooling operations that down-sample feature maps across the network. However, this down-sampling strategy can lead to a loss of resolution in the segmentation. 
In \cite{long2015fully}, this issue is addressed by connecting feature maps of corresponding levels in the down-sampling and up-sampling streams. The resolution of the input image is recovered by adding deconvolution (or \emph{transpose} convolution) layers at the end of the network. Unfortunately, this technique may still give coarse-looking segmentations. For instance, thin structures can disappear after pooling, and may not be recovered in the up-sampling path. To avoid this effect, the proposed FCNN architecture preserves resolution by avoiding down-sampling operations entirely. The proposed method extends our recent 3D FCNN architecture \cite{DolzNeuro2017}, which is composed of many convolutional layers, each containing several 3D convolution filters (or \emph{kernels}). Filters in a layer are applied to the output of the previous layer, or to the input volume in the case of the first layer. The result of this operation is known as a feature map. Let $m_l$ denote the number of convolution kernels in layer $l$ of the network, and $\vec x^n_{l-1}$ the 3D array corresponding to the $n$-th input of layer $l$. The $k$-th output feature map of layer $l$ is then given by \begin{equation} \vec y^k_l \ = \ f\Big(\sum^{m_{l-1}}_{n=1} \vec w^{k,n}_{l} \otimes \vec x^n_{l-1} + \vec b^{k}_{l}\Big), \end{equation} where $\vec w^{k,n}_{l}$ is the filter convolved with the $n$-th input of layer $l$, $\vec b^{k}_{l}$ is the bias, $f$ is a non-linear activation function and $\otimes$ denotes the convolution operator. Note that feature maps produced by convolutions are slightly smaller than their input volumes: the size difference along each dimension is equal to the filter size in this dimension minus one voxel. Hence, applying a \vold{3} convolution filter will reduce the input volume by 2 voxels along each dimension. A stride may also be defined for each convolutional layer, representing the displacement of the filter along the three dimensions after each application.
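As a sanity check on this size arithmetic, the following sketch (plain Python; the helper name is ours, not from the released code) verifies that an unpadded (``valid'') \vold{3} convolution shrinks each dimension by 2 voxels, and that nine such stacked layers, as in the proposed network, turn a $27^3$ input segment into a $9^3$ output:

```python
def valid_conv_output_shape(in_shape, kernel=3, stride=1):
    """Spatial output shape of an unpadded ('valid') 3D convolution."""
    return tuple((d - kernel) // stride + 1 for d in in_shape)

# A single 3x3x3 filter shrinks each dimension by kernel - 1 = 2 voxels.
assert valid_conv_output_shape((27, 27, 27)) == (25, 25, 25)

# Nine stacked 3x3x3 convolutional layers: 27 - 9 * 2 = 9 per dimension.
shape = (27, 27, 27)
for _ in range(9):
    shape = valid_conv_output_shape(shape)
assert shape == (9, 9, 9)
```

This is the same bookkeeping that later relates the $27^3$ training segments to the $9^3$ score maps at the classification stage.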
For the activation function, we used the Parametric Rectified Linear Unit (PReLU) \cite{he2015delving} instead of the popular Rectified Linear Unit (ReLU). This function can be formulated as \begin{equation} f(\vec x_i) \ = \ \max(0, \vec x_i) \, + \, a_i \! \cdot \! \min(0,\vec x_i), \end{equation} where $\vec x_i$ defines the input signal and $f(\vec x_i)$ represents the output. Here, $a_i$ is a scaling coefficient that stops the local gradient from becoming zero when $\vec x_i$ is negative. In other words, PReLUs prevent saturation as we approach negative values of $\vec x_i$. While ReLU employs predefined values for $a_i$ (typically equal to 0), PReLU requires learning this coefficient. Thus, this activation function can adapt the rectifiers to their inputs, improving the network's accuracy at a negligible extra computational cost. As in standard CNNs, fully-connected layers are added at the end of the network to encode semantic information. However, to ensure that the network contains only convolutional layers, we use the strategy described in \cite{long2015fully} and \cite{kamnitsas2017efficient}, in which fully-connected layers are converted to a large set of \vold{1} convolutions. Doing this allows the network to retain spatial information and learn the parameters of these layers as in other convolutional layers. Lastly, neurons in the last layer (i.e., the classification layer) are grouped into $m=C$ feature maps, where $C$ denotes the number of classes. The output of the classification layer $L$ is then converted into normalized probability values via a softmax function. The probability score of class $c \in \{1, \ldots, C\}$ is computed as follows: \begin{equation} p_c \ = \ \frac{\exp\big(y^c_L\big)}{\sum^{C}_{c'=1} \exp\big(y^{c'}_L\big)} \label{eq:SoftMax} \end{equation} In addition, we model both local and global context by embedding intermediate-layer outputs in the final prediction. 
Specifically, we concatenate the output of all convolutional layers into a dense feature map that is fed to the first fully-connected layer. This semi-dense connectivity (Fig. \ref{fig:CNN_archit_Early} and \ref{fig:CNN_archit_Late}) encourages consistency between features extracted at different scales and facilitates the propagation of gradients during training. \begin{figure}[ht!] \centering \begin{center} \includegraphics[width=1.02\textwidth]{CNNTest2.png} \caption{Proposed semi-dense architecture using \textit{early fusion} strategy. T1w and T2w MRI inputs are combined before the first convolutional layer and feature maps from every convolutional layer are connected to the first fully-connected layer.} \label{fig:CNN_archit_Early} \end{center} \end{figure} \begin{figure}[ht!] \centering \begin{center} \includegraphics[width=1.02\textwidth]{SemiDenseNET.png} \caption{Proposed semi-dense architecture using \textit{late fusion} strategy. Features are extracted from T1w and T2w MRI images through independent paths, and are fused before the first fully-connected layer.} \label{fig:CNN_archit_Late} \end{center} \end{figure} \subsection{Multi-modal input: early versus late fusion} \label{ssec:multi-source} The studies in \cite{zhang2015deep,nie2016fully} suggested that multiple input sources provide complementary information, which can help deal with the low contrast in infant brain MRI. They showed that combining several image sequences in the CNN, in particular T1w, T2w and FA, yielded more accurate segmentations than using these modalities individually. Following this observation, we extend our architecture in \cite{DolzNeuro2017} to accommodate multi-sequence images as input to the network. A common strategy for this task is to merge available modalities at the input of the CNN. This early fusion strategy processes input modalities alongside one another, encouraging their use in all the features of a given layer \cite{zhang2015deep}.
Another approach, referred to as late fusion, employs independent streams for each modality, with features updated separately during training and merged at the end of the network. Considering these strategies, we propose two architectures for our network based on \textit{early} (Fig. \ref{fig:CNN_archit_Early}) or \textit{late} fusion (Fig. \ref{fig:CNN_archit_Late}). \subsection{Ensemble learning to suggest local corrections} Ensemble learning uses multiple classifiers/regressors based on different models or trained with different sets of instances, so as to reduce errors due to variance. The outputs of individual models in the ensemble are combined into a single prediction, for instance, by averaging or majority voting. The justification for this technique lies in the fact that, when multiple and independent decisions are combined, random errors are reduced. Thus, ensemble learning promotes better generalization and provides higher prediction accuracy than individual models. Recent studies have demonstrated that averaging the predictions of similar CNNs can lead to an increase in performance. For example, Krizhevsky et al. \cite{krizhevsky2012imagenet} found that, on the ImageNet 2012 classification benchmark, an ensemble of 5 CNNs achieved a top-5 validation error rate of 16.4$\%$, compared to 18.2$\%$ for a single CNN model. By adding another CNN to this ensemble, and making small changes to the network architecture, Zeiler and Fergus \cite{zeiler2014visualizing} were able to decrease this error further to 14.7$\%$. In our approach, multiple CNNs are generated and combined to obtain an aggregated prediction. 
Given a training set $\Omega = \{ (\mathcal{X}_t, \vec y_t)\}$, $t = 1,\ldots,N$, where $\mathcal{X}_t = (\vec x^\mr{T1}_t, \vec x^\mr{T2}_t)$ are the T1w and T2w images of training subject $t$ and $\vec y_t$ is the corresponding annotated ground truth, we build a set of predictors $\varphi(\vec x; \boldsymbol{\theta}_k)$, $k = 1, \ldots, K$, each trained with a randomly selected subset of $\Omega$. This enforces diversity in the predictors, thereby increasing the ensemble's ability to generalize. In this work, we used an ensemble of $K=10$ CNNs and combined their predictions with majority voting. An important benefit of an ensemble of predictors is the ability to measure their level of agreement. In \cite{yang2017suggestive}, this measure is used to identify unlabeled images for which the predicted segmentation is uncertain, and to recommend these images to an expert for annotation. In this work, we evaluate the reliability of the segmentation at a voxel level (not at the image level as in \cite{yang2017suggestive}), and suggest local corrections in regions where the ensemble is not confident about the prediction. An important finding in our experiments is that the prediction uncertainty (i.e., the inverse of predictor agreement) is highly correlated with segmentation error. \subsection{Network parameters and implementation details} Our CNN is composed of 13 layers in total: 9 convolutional layers in each path, 3 fully-connected layers, and the classification layer. The number of kernels in each convolutional layer, from shallow to deeper, is as follows: 25, 25, 25, 50, 50, 50, 75, 75 and 75. To achieve this depth in a 3D CNN, we employ small kernels of size \vold{3}. Moreover, a unit stride is used for all convolutions to preserve spatial resolution. As in \cite{he2016identity}, batch normalization and the PReLU activation function are employed as `pre-activation' steps.
Thus, each convolutional block is composed of a batch normalization step, followed by a Parametric Rectified Linear Unit (PReLU) activation function and, lastly, the convolutional filters. The three fully-connected layers are composed of 400, 200 and 150 hidden units, respectively. Feature maps from each convolutional layer are fed into the first fully-connected layer, thereby incorporating multi-scale information in the segmentation. Since the size of feature maps differs from one layer to the next, they are cropped to fit the size of feature maps in the last convolutional layer. Thus, the input to the first fully-connected layer has a size of $\tx{\emph{num. feature maps}} \times 9 \times 9 \times 9$, where the number of feature maps is set to 450 in the early fusion architecture and to 900 in the late fusion architecture (Figs. \ref{fig:CNN_archit_Early} and \ref{fig:CNN_archit_Late}, respectively). Instead of using the whole 3D image as input, we sample $S$ image segments (i.e., sub-volumes) from the image, $\vec x_s$, $s=1, \ldots, S$, and feed these segments to the network \cite{kamnitsas2017efficient}. This strategy offers two considerable benefits. First, it reduces the memory requirements of our network, thereby removing the need for spatial pooling. More importantly, it substantially increases the number of training examples without having to perform data augmentation. Network parameters are optimized via the RMSprop optimizer, using cross-entropy as cost function. Let $\boldsymbol{\theta}$ denote the network parameters (i.e., convolution weights, biases and $a_i$ from the rectifier units), and $y^v_s$ the label of voxel $v$ in the $s$-th image segment.
Following the training scheme proposed in \cite{kamnitsas2017efficient}, we define the following cost function: \begin{equation} J(\boldsymbol{\theta}) \ = \ -\frac{1}{S\!\cdot\!V} \sum^{S}_{s=1} \sum^{V}_{v=1} \sum^{C}_{c=1} \delta(y^v_s = c) \cdot \log \, p^v_c(\vec x_s), \end{equation} where $p^v_c(\vec x_s)$ is the output of the network for voxel $v$ and class $c$, when the input segment is $\vec x_s$. In \cite{kamnitsas2017efficient}, Kamnitsas et al. found that increasing the size of input segments in training leads to a higher performance, but this performance increase stops beyond segment sizes of \vold{25}. In their network, using this segment size for training, score maps at the classification stage were of size \vold{9}. Since our architecture is one layer deeper, and to keep the same score map sizes, we set the segment size in our network to \vold{27}. This method was used in \cite{DolzNeuro2017} with very satisfactory results. The initialization of weights in deep CNNs is usually performed by assigning random normal-distributed values to kernel and bias weights. However, using fixed standard deviations to initialize weights might lead to poor convergence \cite{simonyan2014very}. To overcome this problem, we adopted the strategy proposed in \cite{he2015delving}, and used in \cite{DolzNeuro2017,kamnitsas2017efficient} for segmentation, which allows very deep architectures to converge rapidly. We use a zero-mean Gaussian distribution of standard deviation $\sqrt{2/n_l}$ to initialize the weights in layer $l$, where $n_l$ denotes the number of connections to units within layer $l$. We set momentum to 0.6 and the initial learning rate to 0.001, with the latter reduced by a factor of 2 after every 5 epochs (starting from epoch 10). Instead of employing an adaptive strategy for the learning rate, we used step decay and monitored the evolution of the cost function during training. 
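The main ingredients of this training setup can be sketched in NumPy as follows. The function names are ours, the softmax and cost follow the formulas given above, the initializer std is $\sqrt{2/n_l}$, and the schedule is our reading of "reduced by a factor of 2 after every 5 epochs, starting from epoch 10"; this is an illustrative sketch, not the Theano implementation:

```python
import numpy as np

def softmax(scores):
    """Class probabilities from classification-layer scores (last axis)."""
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stability shift
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, labels):
    """Mean cross-entropy over voxels: -mean(log p of the true class)."""
    return -np.log(probs[np.arange(len(labels)), labels]).mean()

def he_std(n_l):
    """Std of the zero-mean Gaussian initializer for layer l: sqrt(2 / n_l)."""
    return np.sqrt(2.0 / n_l)

def step_decay_lr(epoch, base_lr=1e-3, start=10, every=5, factor=2.0):
    """Step decay: base rate divided by `factor` every `every` epochs from `start`."""
    if epoch < start:
        return base_lr
    return base_lr / factor ** ((epoch - start) // every + 1)

# Two voxels, three classes: the cost only reads the true-class probability.
probs = softmax(np.array([[5.0, 0.0, 0.0], [0.0, 0.0, 5.0]]))
assert np.isclose(cross_entropy(probs, np.array([0, 2])),
                  cross_entropy(probs[:1], np.array([0])))
assert he_std(2) == 1.0
assert step_decay_lr(9) == 1e-3 and np.isclose(step_decay_lr(10), 5e-4)
```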
We observed that the cost function followed an exponentially decreasing curve with small increasing/decreasing slopes and, therefore, kept this simple yet effective strategy. Our 3D FCNNs were trained for 30 epochs, each consisting of 20 subepochs. At each subepoch, a total of 1000 samples were randomly selected from the training images, and processed in batches of size 20. The code of the proposed 3D FCNN architecture is implemented in Theano \cite{bergstra2010theano} and is publicly available\footnote{https://www.github.com/josedolz/SemiDenseNet}. Training and testing were performed on a server equipped with an NVIDIA Tesla P100 GPU with 16 GB of RAM memory. Training a single network takes around 30 min per epoch, and around 17 hours in total. The segmentation of a 3D MR scan requires 10 seconds per CNN model, on average. On a single GPU, segmenting a new subject with the ensemble of CNNs takes around 100 seconds. \subsection{Dataset} The images were acquired at UNC-Chapel Hill on a Siemens head-only 3T scanner with a circular polarized head coil, and were randomly chosen from the pilot study of the Baby Connectome Project (BCP)\footnote{http://babyconnectomeproject.org}. During scanning, infants were asleep, unsedated and fitted with ear protection, with the head secured in a vacuum-fixation device. \paragraph{Acquisition parameters} T1-weighted images were acquired with 144 sagittal slices using the following parameters: TR/TE = 1900/4.38 ms, flip angle = 7$^\circ$, resolution = 1$\times$1$\times$1 mm$^3$. Likewise, T2-weighted images were obtained with 64 axial slices by employing: TR/TE = 7380/119 ms, flip angle = 150$^\circ$, resolution = 1.25$\times$1.25$\times$1.95 mm$^3$. \paragraph{Pre-Processing} The preprocessing was performed by the iSEG-2017 organizers. Specifically, T2w images were linearly aligned onto their corresponding T1w images. All images were resampled into an isotropic 1$\times$1$\times$1 mm$^3$ resolution.
Standard image pre-processing steps were then applied using in-house tools, including skull stripping, intensity inhomogeneity correction, and removal of the cerebellum and brain stem. This pre-processing was performed to eliminate the effects that different image registration and bias correction algorithms might have on infant brain segmentation. \paragraph{Ground truth generation} The manual labels were prepared by the iSEG-2017 organizers. Instead of starting from scratch, an initial automatic segmentation for 6-month subjects \cite{wang2013longitudinally,wang20124d} was generated with the guidance of follow-up 24-month scans with high tissue contrast, using the publicly-available iBEAT tool\footnote{http://www.nitrc.org/projects/ibeat/}. Based on this initial automatic segmentation, manual editing was then performed by an experienced neuro-radiologist, to correct segmentation errors in both T1- and T2-weighted MR images. Geometric defects were also removed with the help of surface rendering, using ITK-SNAP. For example, if a hole/handle was found on the surface, the neuro-radiologist first localized the related slices and then checked the segmentation maps of both T1w and T2w images, in order to determine whether to fill the hole or cut the handle. Using this approach, correcting the segmentation of a single subject took about one week. \subsubsection{Evaluation} \label{sssec:evaluation} The MICCAI iSEG-2017 organizers used three metrics to evaluate the accuracy of the competing segmentation methods: Dice Similarity Coefficient (DSC) \cite{dice1945measures}, Modified Hausdorff Distance (MHD), in which the 95th percentile of all Euclidean distances is employed, and Average Surface Distance (ASD). The first measures the degree of overlap between the segmentation region and ground truth, whereas the other two evaluate boundary distances.
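The three metrics, formally defined in the following paragraphs, can be sketched in NumPy as below. Boundary-point extraction from the label volumes is omitted, the function names are ours, and MHD is implemented with the 95th-percentile variant mentioned above:

```python
import numpy as np

def dsc(v_ref, v_auto):
    """Dice similarity coefficient between two boolean segmentation volumes."""
    inter = np.logical_and(v_ref, v_auto).sum()
    return 2.0 * inter / (v_ref.sum() + v_auto.sum())

def _pairwise(p, q):
    # Euclidean distances between boundary point sets of shape (N, 3), (M, 3).
    return np.sqrt(((p[:, None, :] - q[None, :, :]) ** 2).sum(-1))

def asd(p_ref, p_auto):
    """Average distance from each reference point to the automatic boundary."""
    return _pairwise(p_ref, p_auto).min(axis=1).mean()

def mhd(p_ref, p_auto, pct=95):
    """Symmetric Hausdorff distance taken at the pct-th distance percentile."""
    d = _pairwise(p_ref, p_auto)
    return max(np.percentile(d.min(axis=1), pct),
               np.percentile(d.min(axis=0), pct))

ref = np.zeros((4, 4, 4), dtype=bool); ref[1:3, 1:3, 1:3] = True   # 8 voxels
auto = ref.copy(); auto[1, 1, 1] = False                           # 7 voxels
assert np.isclose(dsc(ref, auto), 2 * 7 / (8 + 7))
```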
\paragraph{Dice similarity coefficient (DSC)} Let $V_\mr{ref}$ and $V_\mr{auto}$ be, respectively, the reference and automatic segmentations of a given tissue class and for a given subject. The DSC can be defined as: \begin{equation} \mr{DSC}\big(V_\mr{ref}, V_\mr{auto} \big) \ = \ \frac{2 \mid V_\mr{ref} \cap V_\mr{auto}\mid} {\mid V_\mr{ref}\mid +\mid V_\mr{auto}\mid} \end{equation} DSC values are within a $[0,1]$ range, 1 indicating perfect overlap and 0 corresponding to a total mismatch. \paragraph{Modified Hausdorff distance (MHD)} Let $P_\mr{ref}$ and $P_\mr{auto}$ denote the sets of voxels within the reference and automatic segmentation boundary, respectively. MHD is given by: \begin{equation} \mr{MHD}\big(P_\mr{ref}, P_\mr{auto} \big) \ = \ \max \Big\{ \max_{q \in P_\mr{ref}}d(q,P_\mr{auto}), \max_{q \in P_\mr{auto}}d(q,P_\mr{ref}) \Big\}, \end{equation} where $d(q,P)$ is the point-to-set distance defined by: $d(q,P)=\min_{p \in P} \| q-p\|$, with $\|.\|$ denoting the Euclidean distance. \paragraph{Average surface distance (ASD)} Using the same notation as the Hausdorff distance above, the ASD corresponds to: \begin{equation} \mr{ASD}\big(P_\mr{ref}, P_\mr{auto} \big) \ = \ \frac{1}{|P_\mr{ref}|} \sum_{p \, \in \, P_\mr{ref}} d(p, P_\mr{auto}), \end{equation} where $|.|$ denotes the cardinality of a set. In distance-based metrics, smaller values indicate higher proximity between two point sets and, thus, a better segmentation. \section{Results}\label{sec:results} Three different methods are evaluated in our experiments. The first, referred to as \textit{EarlyFusion$\_$Single}, is a semi-dense network with early fusion of multi-modal images (Fig. \ref{fig:CNN_archit_Early}). The second, denoted \textit{EarlyFusion$\_$Ensemble}, consists of an ensemble of $10$ early-fusion CNNs, trained with different subjects. 
Finally, the third method, referred to as \textit{LateFusion$\_$Ensemble}, is an ensemble of 10 semi-dense CNNs, each performing a late fusion of modalities in different paths (Fig. \ref{fig:CNN_archit_Late}) and trained with different subjects. For the \textit{EarlyFusion$\_$Single} approach, we used 9 subjects for training and one for validation. In the ensemble methods, for each CNN, 8 different subjects were used for training and 2 subjects for validation. \begin{comment} \begin{table}[ht!] \centering \small \caption{Segmentation results from the iSEG-2017 Segmentation challenge for the top-5 ranked methods (21 teams participated in the challenge). The first set of results correspond to the three proposed approaches, and the second set to approaches presented by other competing teams. Bold fonts are employed to highlight the best performances for each metric and structure. For additional details, we refer the reader to the challenge's website\footnote{http://iseg2017.web.unc.edu/rules/results/}.} \label{table:results} \begin{tabular}{lccccccccc} \toprule \multirow{2}[3]{*}{\textbf{Method}} & \multicolumn{3}{c}{{CSF}} & \multicolumn{3}{c}{{GM}} & \multicolumn{3}{c}{{WM}} \cmidrule(lr){2-4}\cmidrule(lr){5-7}\cmidrule(lr){8-10}\\ & {DSC} & {MHD} & {ASD} & \multicolumn{1}{l}{{DSC}} & \multicolumn{1}{l}{{MHD}} & \multicolumn{1}{l}{{ASD}} & \multicolumn{1}{l}{{DSC}} & \multicolumn{1}{l}{{MHD}} & \multicolumn{1}{l}{{ASD}} \\ \midrule \textbf{EarlyFusion$\_$Single} & 0.953 & 9.296 & 0.128 & 0.916 & 7.131 & 0.346 & 0.895 & 6.903 & 0.406 \\ \textbf{EarlyFusion$\_$Ensemble} & 0.957 & \textbf{9.029} & 0.138 & \textbf{0.919} & 6.415 & 0.338 & 0.897 & 6.975 & \textbf{0.376}\\ \textbf{LateFusion$\_$Ensemble} & 0.957 & 9.127 & 0.119 & 0.918 & 6.060 & 0.344 & 0.895 & 7.451 & 0.409\\ \midrule Bern\_IPMI & 0.954 & 9.616 & 0.127 & 0.916 & 6.455 & 0.341 & 0.896 & 6.782 & 0.398 \\ MSL\_SKKU & \textbf{0.958} & 9.072 & \textbf{0.116} & \textbf{0.919} & \textbf{5.980} & \textbf{0.330} 
& \textbf{0.901} & \textbf{6.444} & 0.391 \\ nic\_vicorob & 0.951 & 9.178 & 0.137 & 0.910 & 7.647 & 0.367 & 0.885 & 7.154 & 0.430 \\ TU/e IMAG/e & 0.947 & 9.426 & 0.150 & 0.904 & 6.856 & 0.375 & 0.890 & 6.908 & 0.433 \\ \bottomrule \end{tabular} \end{table} \end{comment} Table \ref{table:results} reports the results obtained by the three proposed methods and the top 5 among the 21 teams that participated in the iSEG MICCAI Grand Challenge\footnote{http://iseg2017.web.unc.edu/rules/results/}. In this table, mean DSC, MHD and ASD values are given separately for CSF, WM and GM tissues. We first observe that adopting an ensemble learning strategy (i.e., \textit{EarlyFusion\_Ensemble}) results in a small yet noticeable improvement in global performance, with respect to our baseline using a single CNN (i.e., \textit{EarlyFusion$\_$Single}). The results also indicate that fusing features from image modalities in a late stage does not help the segmentation compared to an early fusion strategy. In fact, improvements in the \textit{LateFusion\_Ensemble} approach only occurred for 2 out of 9 combinations of metric and tissue (ASD for CSF and MHD for GM). Comparing against other competing approaches, the proposed \textit{EarlyFusion\_Ensemble} method obtained the best score in 5 out of 9 cases. In particular, our network yielded the best DSC values for the three brain tissues, and the best MHD and ASD values for CSF and white matter, respectively. Furthermore, all tested methods obtained the highest accuracy for CSF, and slightly better results for GM than WM. As can be seen in Fig. \ref{fig:isointense}, the edges between GM and WM tissues are weak, resulting in a harder classification for voxels lying on these edges. \begin{table}[ht!] \centering \caption{Segmentation results from the iSEG-2017 Segmentation challenge for the top-5 ranked methods. 
The first set of results corresponds to the three proposed approaches, and the second set to approaches presented by the other competing teams in the top-5 (in alphabetical order). Bold fonts indicate the best performances for each metric and structure. For additional details, we refer the reader to the challenge's website.} \label{table:results} \begin{small} \begin{tabular}{lccccccccc} \toprule \multirow{2}[3]{*}{\textbf{Method}} & \multicolumn{3}{c}{{CSF}} & \multicolumn{3}{c}{{GM}} & \multicolumn{3}{c}{{WM}} \\ \cmidrule(lr){2-4}\cmidrule(lr){5-7}\cmidrule(lr){8-10} & {DSC} & {MHD} & {ASD} & \multicolumn{1}{l}{{DSC}} & \multicolumn{1}{l}{{MHD}} & \multicolumn{1}{l}{{ASD}} & \multicolumn{1}{l}{{DSC}} & \multicolumn{1}{l}{{MHD}} & \multicolumn{1}{l}{{ASD}} \\ \midrule\midrule \textbf{EarlyFusion$\_$Single} & 0.95 & 9.30 & 0.13 & \textbf{0.92} & 7.13 & 0.35 & \textbf{0.90} & 6.90 & 0.41 \\ \textbf{EarlyFusion$\_$Ensemble} & \textbf{0.96} & \textbf{9.03} & 0.14 & \textbf{0.92} & 6.42 & 0.34 & \textbf{0.90} & 6.98 & \textbf{0.38}\\ \textbf{LateFusion$\_$Ensemble} & \textbf{0.96} & 9.13 & \textbf{0.12} & \textbf{0.92} & 6.06 & 0.34 & \textbf{0.90} & 7.45 & 0.41\\ \midrule Bern\_IPMI & \textbf{0.96} & 9.62 & 0.13 & \textbf{0.92} & 6.46 & 0.34 & \textbf{0.90} & 6.78 & 0.40 \\ MSL\_SKKU & \textbf{0.96} & 9.07 & \textbf{0.12} & \textbf{0.92} & \textbf{5.98} & \textbf{0.33} & \textbf{0.90} & \textbf{6.44} & 0.39 \\ nic\_vicorob & 0.95 & 9.18 & 0.14 & 0.91 & 7.65 & 0.37 & 0.89 & 7.15 & 0.43 \\ TU/e IMAG/e & 0.95 & 9.43 & 0.15 & 0.90 & 6.86 & 0.38 & 0.89 & 6.91 & 0.43 \\ \bottomrule \end{tabular} \end{small} \end{table} \begin{table}[ht!]
\centering \caption{Segmentation results obtained by the proposed \textit{EarlyFusion$\_$Ensemble} approach on the 13 test subjects of the iSEG Segmentation challenge.} \label{table:resultsSingle} \begin{small} \begin{tabular}{lccccccccc} \toprule \multirow{2}[3]{*}{\textbf{Subject ID}} & \multicolumn{3}{c}{{CSF}} & \multicolumn{3}{c}{{GM}} & \multicolumn{3}{c}{{WM}} \\ \cmidrule(lr){2-4}\cmidrule(lr){5-7}\cmidrule(lr){8-10} & {DSC} & {MHD} & {ASD} & \multicolumn{1}{l}{{DSC}} & \multicolumn{1}{l}{{MHD}} & \multicolumn{1}{l}{{ASD}} & \multicolumn{1}{l}{{DSC}} & \multicolumn{1}{l}{{MHD}} & \multicolumn{1}{l}{{ASD}} \\ \midrule\midrule \textbf{\#11} & 0.9637 & 7.5498 & 0.1024 & 0.9300 & 4.3589 & 0.2838 & 0.9065 & 8.4853 & 0.3423 \\ \textbf{\#12} & 0.9526 & 6.7823 & 0.1287 & 0.9037 & 6.4807 & 0.3780 & 0.8702 & 6.6333 & 0.4558 \\ \textbf{\#13} & 0.9619 & 10.1980 & 0.1158 & 0.9276 & 6.4031 & 0.3172 & 0.9104 & 9.4868 & 0.3685 \\ \textbf{\#14} & 0.9423 & 8.9443 & 0.1565 & 0.9091 & 6.3246 & 0.3767 & 0.8912 & 5.7446 & 0.4203 \\ \textbf{\#15} & 0.9607 & 9.2736 & 0.1064 & 0.9281 & 4.5826 & 0.3159 & 0.9054 & 7.0000 & 0.3728 \\ \textbf{\#16} & 0.9582 & 10.8167 & 0.1179 & 0.9208 & 8.1240 & 0.3237 & 0.9111 & 6.5574 & 0.3733 \\ \textbf{\#17} & 0.9609 & 8.1240 & 0.0405 & 0.9270 & 7.8740 & 0.2828 & 0.9119 & 5.9161 & 0.3277 \\ \textbf{\#18} & 0.9646 & 9.4340 & 0.1040 & 0.9201 & 6.4031 & 0.3151 & 0.9033 & 7.1414 & 0.3735 \\ \textbf{\#19} & 0.9598 & 9.0000 & 0.1111 & 0.9202 & 5.6569 & 0.3198 & 0.9088 & 8.1854 & 0.3830 \\ \textbf{\#20} & 0.9524 & 10.0499 & 0.1352 & 0.9039 & 6.7082 & 0.4207 & 0.8690 & 5.9161 & 0.5073 \\ \textbf{\#21} & 0.9587 & 9.4340 & 0.1132 & 0.9156 & 5.8310 & 0.3343 & 0.8908 & 5.8310 & 0.4275 \\ \textbf{\#22} & 0.9454 & 8.7750 & 0.4491 & 0.9158 & 8.4853 & 0.3951 & 0.8896 & 7.0711 & 0.1248 \\ \textbf{\#23} & 0.9598 & 9.0000 & 0.1087 & 0.9196 & 6.1644 & 0.3339 & 0.8934 & 6.7082 & 0.4080 \\ \bottomrule \end{tabular} \end{small} \end{table} Considering results for 
individual test subjects (Table \ref{table:resultsSingle}), the proposed \textit{EarlyFusion$\_$Ensemble} approach yields an accurate segmentation in most cases, thus showing its robustness. A lower performance was, however, obtained for a few cases, for example, segmenting the CSF of \textit{subject\_022}, which yielded an ASD of 0.449 mm. Ignoring this result brings the mean ASD for CSF down to 0.11, which is lower than the best ASD value of 0.12 for this tissue. In a paired t-test, our approach performs at the same level as the best competing method (i.e., MSL\_SKKU), with no significant difference (p $>$ 0.01) observed in any of the test cases. Note that this competing method is also based on deep CNNs. To illustrate the impact on reliability of using an ensemble of CNNs, Fig. \ref{fig:probMaps} compares the segmentation confidence of the \textit{EarlyFusion$\_$Single} and \textit{EarlyFusion$\_$Ensemble} methods, for a given slice of two different subjects. Specifically, the first and third columns depict the probability maps predicted by the single CNN, while the second and fourth columns show the agreement score of the ensemble (i.e., the percentage of CNNs in the ensemble that predicted a given label). We see that the ensemble agreement values are sharper (i.e., closer to 0, shown in black, or to 1, shown in blue) than the predictions of the single CNN. With respect to tissue classes, the differences in confidence are smallest for the CSF, and more significant for WM and GM. This is in line with previous results in Table \ref{table:results}, indicating CSF to be an easier tissue to segment. These differences between the methods are illustrated in Fig. \ref{fig:probMapsGM}, showing the ensemble's ability to improve the predictions of a single CNN. \begin{figure}[ht!]
\mbox{ \includegraphics[width=0.24\linewidth]{CSF_Subj11_OneCNN.png} \hspace{-2.25 mm} \includegraphics[width=0.24\linewidth]{CSF_Subj11_Ensemble.png} \hspace{-.25 mm} \includegraphics[width=0.24\linewidth]{CSF_Subj14_OneCNN.png} \hspace{-2.25 mm} \includegraphics[width=0.24\linewidth]{CSF_Subj14_Ensemble.png} } \vspace{1mm} \mbox{ \includegraphics[width=0.24\linewidth]{GM_Subj11_OneCNN.png} \hspace{-2.25 mm} \includegraphics[width=0.24\linewidth]{GM_Subj11_Ensemble.png} \hspace{-.25 mm} \includegraphics[width=0.24\linewidth]{WM_Subj14_OneCNN.png} \hspace{-2.25 mm} \includegraphics[width=0.24\linewidth]{GM_Subj14_Ensemble.png} } \vspace{1mm} \mbox{ \includegraphics[width=0.24\linewidth]{WM_Subj11_OneCNN.png} \hspace{-2.25 mm} \includegraphics[width=0.24\linewidth]{WM_Subj11_Ensemble.png} \hspace{-.25 mm} \includegraphics[width=0.24\linewidth]{GM_Subj14_OneCNN.png} \hspace{-2.25 mm} \includegraphics[width=0.24\linewidth]{WM_Subj14_Ensemble.png} } \caption{Probability maps obtained for each tissue of two test subjects. The left column of each example depicts the probability maps obtained from the baseline CNN (i.e. single CNN method). The right column shows voxel-wise segmentation confidence of the ensemble of CNNs. Dark blue indicates highest confidence and dark red lowest confidence.} \label{fig:probMaps} \end{figure} \begin{figure}[ht!] \begin{center} \mbox{ \includegraphics[width=0.45\linewidth]{WM_Detail_Single.png} \includegraphics[width=0.45\linewidth]{WM_Detail_Ensemble.png} } \caption{Probability maps of gray matter on a 2D slice obtained by a single CNN (\textit{left}) and the ensemble (\textit{right}). Important differences between both methods are highlighted in green.} \label{fig:probMapsGM} \end{center} \end{figure} \begin{comment} \begin{figure}[ht!] 
\begin{center} \mbox{ \includegraphics[width=0.30\linewidth]{ProbMapInfant1.png} \hspace{-2.5 mm} \includegraphics[width=0.30\linewidth]{ProbMapInfant2.png} \hspace{-2.5 mm} \includegraphics[width=0.30\linewidth]{ProbMapInfant3.png} } \caption{Probability maps obtained for each of the three tissues of a given subject.} \label{fig:probMaps} \end{center} \end{figure} \end{comment} \subsection{Suggestion of local corrections} \begin{comment} An important feature of employing an ensemble of classifiers is that we are able to measure the level of agreement among them. This knowledge about the degree of agreement may serve as a confidence value to evaluate how sure the ensemble is of a segmentation in a given region. This local information can be used to suggest local corrections where the ensemble is not confident about the prediction. \end{comment} We validated the usefulness of the ensemble for suggesting local corrections by comparing confidence values at individual voxels with segmentation errors. Since the ground truth is not available for test instances of the iSEG segmentation challenge, we used a cross-validation strategy for this task. Given the 10 subjects with reference segmentations, we selected a single subject for testing and used the remaining 9 to train the $k=10$ CNNs of the ensemble. For each CNN, 7 subjects were randomly selected for training and the other 2 kept for validation. This process was repeated 5 times, each involving a different test subject. To evaluate the relationship between ensemble confidence and error, we separated confidence values into two groups using an agreement threshold of 60\%: \emph{low confidence}, where agreement among the ensemble's CNNs is equal to or less than 60$\%$ (no more than 6 CNNs agree), and \emph{high confidence}, represented by agreement values greater than 60$\%$ (at least 7 CNNs agree).
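The evaluation protocol above reduces to two small computations: a voxel-wise agreement map over the $k=10$ label maps, and a correlation computed separately below and above the 60\% agreement threshold. A minimal NumPy sketch with synthetic data (the function names, array shapes and the simulated error model are ours for illustration, not taken from the paper's implementation):

```python
import numpy as np

def agreement_map(predictions):
    """Voxel-wise majority label and ensemble agreement.

    predictions: (k, ...) array of integer label maps from k CNNs.
    Returns (majority, agreement), where agreement is the fraction of
    the k models that voted for the majority label at each voxel.
    """
    k = predictions.shape[0]
    labels = np.unique(predictions)
    # Count, for every voxel, how many models voted for each label.
    votes = np.stack([(predictions == l).sum(axis=0) for l in labels])
    majority = labels[votes.argmax(axis=0)]
    agreement = votes.max(axis=0) / k
    return majority, agreement

def correlation_by_confidence(pred, truth, agreement, thr=0.6):
    """Pearson correlation of prediction vs. reference labels,
    split at the agreement threshold (low: <= thr, high: > thr)."""
    def corr(mask):
        return np.corrcoef(pred[mask], truth[mask])[0, 1]
    return corr(agreement > thr), corr(agreement <= thr)

# Synthetic example: a 10-member ensemble on a flat array of voxels,
# where each "CNN" flips the true binary label with probability 0.25.
rng = np.random.default_rng(0)
truth = rng.integers(0, 2, size=5000)
preds = np.where(rng.uniform(size=(10, 5000)) < 0.25, 1 - truth, truth)

majority, agreement = agreement_map(preds)
r_high, r_low = correlation_by_confidence(majority, truth, agreement)
```

In the sketch, voxels where at most 6 of the 10 simulated CNNs agree fall into the low-confidence group, mirroring the threshold used in the text, and the high-confidence group correlates much more strongly with the reference labels.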
Table \ref{tab:correlation} gives the correlation between the \textit{Early\_Ensemble} method's predictions and the ground-truth values, for low confidence and high confidence voxels. We observe a very high correlation ($>$ 90$\%$) between highly confident predictions and the ground truth, for all three tissues. When confidence is low, correlation drops to values around 50$\%$ for gray matter, and around 10$\%$ for the other two tissues. Furthermore, Fig. \ref{fig:PDF_Condifence} shows the distributions of correctly and incorrectly classified voxels according to their confidence. For all three tissues, most correctly classified voxels have a 100$\%$ agreement, whereas confidence values of incorrectly classified voxels are more evenly distributed. We note that this distribution differs across tissue types. For CSF, voxels classified incorrectly mostly have a low confidence, while more balanced distributions are observed for GM and WM. Again, this indicates the higher difficulty of segmenting these two tissues compared to CSF. Overall, these results validate our hypothesis that the spatial map of ensemble CNN agreement values can be used to suggest local corrections. \begin{table}[ht!] \footnotesize \centering \caption{Correlation between the prediction of the proposed method and the ground truth in regions of high and low confidence (values in parentheses correspond to the percentage of voxels belonging to each of these regions).} \label{tab:correlation} \begin{tabular}{lcccccc} \toprule & \multicolumn{2}{c}{\textbf{CSF}} & \multicolumn{2}{c}{\textbf{GM}} & \multicolumn{2}{c}{\textbf{WM}} \\ \cmidrule(lr){2-3}\cmidrule(lr){4-5}\cmidrule(lr){6-7} & High Conf. & Low Conf. & High Conf. & Low Conf. & High Conf. & Low Conf.
\\ \midrule\midrule Subject\_01 & 0.92 (99.20$\%$) & 0.11 (0.80$\%$) & 0.93 (96.81$\%$) & 0.48 (3.19$\%$) & 0.92 (97.32$\%$) & 0.10 (2.68$\%$) \\ Subject\_02 & 0.93 (99.02$\%$) & 0.13 (0.98$\%$) & 0.94 (96.60$\%$) & 0.51 (3.40$\%$) & 0.94 (97.44$\%$) & 0.10 (2.57$\%$) \\ Subject\_03 & 0.94 (99.12$\%$) & 0.10 (0.88$\%$) & 0.94 (96.68$\%$) & 0.49 (3.32$\%$) & 0.94 (97.43$\%$) & 0.09 (2.57$\%$) \\ Subject\_04 & 0.91 (99.18$\%$) & 0.07 (0.82$\%$) & 0.92 (96.79$\%$) & 0.49 (3.21$\%$)& 0.92 (97.52$\%$) & 0.09 (2.48$\%$) \\ Subject\_05 & 0.93 (99.05$\%$) & 0.08 (0.95$\%$) & 0.94 (96.76$\%$) & 0.48 (3.24$\%$) & 0.94 (97.59$\%$) & 0.11 (2.41$\%$) \\ \bottomrule \end{tabular} \end{table} \begin{figure}[ht!] \begin{center} \mbox{ \includegraphics[width=0.95\linewidth]{CSF_PDF_Bar2.png} } \mbox{ \includegraphics[width=0.95\linewidth]{GM_PDF_Bar2.png} } \mbox{ \includegraphics[width=0.95\linewidth]{WM_PDF_Bar2.png} } \caption{Distribution of confidence values for correctly (\textit{blue bars}) and incorrectly (\textit{yellow bars}) classified voxels, for the three tissues of a given subject.} \label{fig:PDF_Condifence} \end{center} \end{figure} \begin{figure}[ht!] \begin{center} \mbox{ \includegraphics[width=0.24\linewidth]{Ex1_WM.png} \hspace{-2.5 mm} \includegraphics[width=0.24\linewidth]{Ex1_GM.png} \hspace{-2.5 mm} \includegraphics[width=0.24\linewidth]{Ex1_Prediction.png} \hspace{-2.5 mm} \includegraphics[width=0.24\linewidth]{Ex1_GT.png} } \mbox{ \includegraphics[width=0.24\linewidth]{Ex2_WM.png} \hspace{-2.5 mm} \includegraphics[width=0.24\linewidth]{Ex2_GM.png} \hspace{-2.5 mm} \includegraphics[width=0.24\linewidth]{Ex2_Pred.png} \hspace{-2.5 mm} \includegraphics[width=0.24\linewidth]{Ex2_GT.png} } \caption{Visual inspection of segmentation confidence values. From left to right: white matter, gray matter, predicted segmentation and reference labels. 
Green boxes highlight regions with large differences between the predicted contours and the reference standard, whereas pink boxes are used to indicate small differences. } \label{fig:confidence} \end{center} \end{figure} As qualitative validation, Fig. \ref{fig:confidence} shows examples of confidence values obtained for WM and GM (i.e., percentage of CNNs that predicted these tissues), along with the predicted segmentation and ground-truth labels. Dark blue corresponds to total agreement (i.e., 100$\%$ of CNNs voted for the tissue), while dark red indicates the lowest possible agreement (i.e., a single CNN voted for the tissue). Voxels with low confidence (light blue and yellow colors) thus indicate regions with potential segmentation errors, which should be verified by the expert. Visual inspection of these confidence regions, highlighted by the green and pink squares in the figure, corroborates this hypothesis. \begin{comment} \begin{figure}[ht!] \begin{center} \mbox{ \includegraphics[height=0.145\linewidth]{Subject11-T1.png} \hspace{-2.5 mm} \includegraphics[height=0.145\linewidth]{Subject11-T2.png} \hspace{-2.5 mm} \includegraphics[height=0.145\linewidth]{Subject11-CSF.png} \hspace{-2.5 mm} \includegraphics[height=0.145\linewidth]{Subject11-GM.png} \hspace{-2.5 mm} \includegraphics[height=0.145\linewidth]{Subject11-WM.png} \hspace{-2.5 mm} \includegraphics[height=0.145\linewidth]{Subject11-Labels.png} } \mbox{ \includegraphics[height=0.15\linewidth]{Subj17_T1.png} \hspace{-2.5 mm} \includegraphics[height=0.15\linewidth]{Subj17_T2.png} \hspace{-2.5 mm} \includegraphics[height=0.15\linewidth]{Subj17_CSF.png} \hspace{-2.5 mm} \includegraphics[height=0.15\linewidth]{Subj17_GM.png} \hspace{-2.5 mm} \includegraphics[height=0.15\linewidth]{Subj17_WM.png} \hspace{-2.5 mm} \includegraphics[height=0.15\linewidth]{Subj17_Labels.png} } \caption{\textcolor{red}{Elaborate this.}Agreement of the CNNs composing the ensemble (first row patient 11.).} \label{fig:Agreement} \end{center}
\end{figure} \end{comment} \section{Discussion}\label{sec:discussion} We presented an ensemble-learning approach, which combines multiple deep CNNs to segment isointense infant brain MRI and suggest local corrections in regions of low confidence. In the proposed CNN architecture, multi-modal MR images are employed to deal with the problem of low contrast, using an early or late fusion strategy to combine these different modalities. Furthermore, global and local features are considered in the classification by connecting feature maps from all convolutional layers to the first fully-connected layer. This semi-dense architecture facilitates the propagation of gradients during training, while limiting the number of network parameters. To improve the generalization of our segmentation approach, we adopted an ensemble-learning strategy that combines the output of 10 CNN models by majority voting. Having several independent predictions also allowed us to build a spatial map of ensemble confidence, which measures segmentation reliability at a voxel level. We showed that confidence values can identify potential errors in the automated segmentation, which might require corrections. The proposed approach was evaluated on the iSEG MICCAI Grand Challenge, and compared to 21 other competing teams. Our approach achieved state-of-the-art performance, obtaining the highest score in most cases. The proposed method extends our previous work in \cite{DolzNeuro2017} by considering multi-modal images in the network. An important design choice is the strategy for merging multiple sources of information. Different modalities can be combined in a tensor and fed to the network. Another option is to create independent paths for the modalities, and fuse the features of these paths at the end of the network, as in Fig. \ref{fig:CNN_archit_Early}.
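The difference between the two strategies is essentially where the channel concatenation happens. A shape-level sketch (plain NumPy arrays standing in for the convolutional paths; names and sizes are illustrative assumptions, not the paper's actual architecture):

```python
import numpy as np

def conv_path(x, n_filters=8):
    """Stand-in for a convolutional path: a 1x1x1 'convolution' that maps
    a (channels, D, H, W) patch to (n_filters, D, H, W) feature maps."""
    w = np.ones((n_filters, x.shape[0])) / x.shape[0]  # dummy averaging weights
    return np.tensordot(w, x, axes=([1], [0]))

t1w = np.random.rand(1, 4, 4, 4)  # T1w patch (one channel)
t2w = np.random.rand(1, 4, 4, 4)  # T2w patch (one channel)

# Early fusion: modalities stacked as input channels of a single path.
early = conv_path(np.concatenate([t1w, t2w], axis=0))            # (8, 4, 4, 4)

# Late fusion: one independent path per modality, features merged at the end.
late = np.concatenate([conv_path(t1w), conv_path(t2w)], axis=0)  # (16, 4, 4, 4)
```

With early fusion, a single path sees a two-channel input; with late fusion, each modality gets its own path and only the resulting feature maps are concatenated.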
Since satisfactory results have been reported in the literature for both strategies, we investigated in this work the effect of an \textit{early} or \textit{late} fusion. From Table \ref{table:results}, we can see that fusing images in an early stage typically improved the segmentation for most metrics and for all three brain tissues. These findings are not consistent with the results in \cite{nie2016fully}, where combining extracted features in a late stage improved segmentation performance. However, unlike the present study, those results were obtained using three image modalities (i.e., T1w, T2w, and FA) instead of two, and a set of subjects different from the one available for this challenge. As reported in Table \ref{table:results}, combining predictions from several models in an ensemble yields improvements over a single CNN. This confirms the results obtained in computer vision studies, in the context of color image classification \cite{krizhevsky2012imagenet,zeiler2014visualizing}. These studies showed that ensemble learning can boost the performance of deep neural networks. In the context of medical image segmentation, a very valuable benefit of ensemble learning is the ability to measure segmentation confidence at a voxel level, which can be used to identify regions that need further inspection by medical experts. We validated this assumption by computing the correlation between automatic predictions and manual annotations obtained for 5 subjects, in regions presenting low or high confidence. While high-confidence regions were highly correlated ($>$ 90$\%$) to ground-truth values across all three brain structures, low-confidence regions were poorly correlated, in particular for the CSF and WM ($\approx$ 10$\%$). A wide range of techniques were proposed to segment infant brain tissues in MRI (see Table \ref{table:table_RV_seg}), many of them based on atlases.
Although atlas-based approaches were successful in segmenting adult brain images, their application to infant brains is more prone to errors due to the poor tissue contrast and the high spatial variability of the infant population. Such approaches also depend on the image registration step, which is time-consuming and a source of errors. In recent studies, deep CNNs were shown to outperform atlas-based methods for this task. For example, Nie et al. \cite{nie2016fully} obtained mean DSC values of 0.852, 0.873 and 0.887 for the CSF, GM and WM, respectively, over 10 subjects. Yet, a limitation of this work was the use of 2D convolutions, which omits important 3D context. Via 3D convolutions, our approach captures spatial information in volumetric data, which is confirmed by a performance improvement over 2D CNN models. Comparing the proposed \textit{Early\_Ensemble} approach to the top-5 ranked methods of the iSEG Grand Challenge, our approach achieved very competitive results, ranking first or second in most cases. A paired sample t-test showed that the difference between our approach and the other best ranked method in the challenge (i.e., MSL\_SKKU) is not statistically significant. Although various CNN-based networks have been used successfully for medical image segmentation (e.g., 3D U-Net \cite{cciccek20163d} or DenseVoxNet \cite{yu2017automatic}), an important benefit of our architecture is the reduced number of trainable parameters. While U-Net and DenseVoxNet require nearly 19M and 4M parameters, respectively, our semi-dense network has fewer than 1M parameters (nearly 900,000), which results in a 75$\%$-90$\%$ lighter model. This parameter efficiency translates into lower training and segmentation times. For example, 3D U-Net requires 3 days for training, whereas our network can be trained in approximately 17 hours using the same hardware.
A limitation of this study comes from the fact that the ground-truth labels of test instances were not available to the participating teams, which makes interpretation of the results difficult. Based on our cross-validation analysis, we can however conjecture that most errors come from misclassified voxels at the boundaries of GM and WM regions, which have low contrast. Confidence maps show that regions with lowest confidence typically correspond to the borders between these two tissues. Moreover, as mentioned on the iSEG website, manually correcting the data of a single subject took approximately one week. Taking into account that nearly 500 typically developing children will be scanned for the Baby Connectome Project, the adoption of an automatic segmentation tool is highly desirable. As reported in our results, the proposed network can segment the data of a subject in 1-2 minutes on a single GPU, or in a few seconds on several GPUs. In addition to its efficiency, the proposed approach can identify potential errors in the automatic segmentation, which could be corrected with limited interaction from the user. A potential extension of this study would be to evaluate our approach in a real clinical setting, in which segmentation time and accuracy are measured for multiple manual raters. \section{Conclusion} We presented a novel method based on an ensemble of deep CNNs to segment isointense infant brains in multi-modal MRI images. Our fully convolutional neural network (FCNN) considers the 3D spatial context of volumetric data and models both local and global information in the segmentation. We investigated a semi-dense architecture, where the features of each convolutional layer are aggregated as input to the first fully-connected layer. Moreover, two different strategies were investigated to combine multiple input image modalities, using either an early or a late fusion of these modalities.
An ensemble learning technique, in which the predictions of 10 CNNs are combined using majority voting, was employed to improve the generalization performance of our method. This ensemble also makes it possible to measure segmentation confidence, using the number of CNNs voting for a particular label. While ensembles were used in the past to suggest images for annotation, to our knowledge, this is the first work to investigate the problem at a voxel level. The performance of the proposed method was evaluated in the MICCAI iSEG-2017 Grand Challenge on 6-month infant brain MRI segmentation. The results show that our method is very competitive, ranking first or second among 21 competing teams for most of the metrics. Our experiments also demonstrate the benefit of combining the predictions of multiple CNNs, with performance improvements over using a single CNN, and show the link between ensemble agreement and segmentation error. This suggests that our method has the potential to achieve expert-level performance with limited user interactions. \section*{Acknowledgments} This work is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), discovery grant program, and by the ETS Research Chair on Artificial Intelligence in Medical Imaging. \section*{References}
\section{Introduction} The key notion of this paper, \emph{comparison}, originates in the theory of \mbox{$C^*$-al}geb\-ras, but the version most important for us, the ``dynamical'' one, concerns group actions on compact spaces. In this setup it was defined by J.~Cuntz (see \cite{Cu}) and further investigated by M.~R\o rdam in \cite{MR1,MR2} and by W.~Winter in \cite{W}. As in the case of many other properties and notions in dynamical systems, the most fundamental form of comparison occurs in actions of the additive group $\mathbb Z$ of the integers. In this context comparison is guaranteed for any action on a zero-dimensional\ compact metric space, which follows from the classical marker property of such actions (see \cite{Bo}). See also \cite{B} for more on comparison in $\mathbb Z$-actions. For a wider generality, we refer the reader to a recent paper by David Kerr \cite{K}, where the notion is defined for other actions including topological\ and measure-preserving ones. We will focus on a particular case where a countable amenable group acts on a zero-dimensional\ compact metric space. In fact, this case also plays one of the leading roles in \cite{K}. The main motivation for this paper is the fact that, unlike for $\mathbb Z$-actions, in the case of a general countable amenable group acting on a zero-dimensional\ compact metric space, it is unknown whether comparison necessarily occurs. There is neither a proof nor a counterexample, although the problem has been attacked by several specialists for several years. Only a few partial results have been obtained; for instance, it is known (but never published, see \cite{Ph} and also \cite{SG}) that finitely generated groups with a symmetric F\o lner sequence\ satisfying Tempelman's condition (this includes all nilpotent, in particular Abelian, groups) have the comparison property, but beyond this case not much was known.
On the other hand, comparison is a very desirable property with many important consequences (see further in this introduction), thus any progress in understanding which actions enjoy comparison (or which groups have comparison for all actions) is valuable. Our main invention introduced in this paper is a new notion of a \emph{correction chain}---a kind of pseudoorbit which allows one to improve a partially defined map and extend its domain. Using this tool, in Section \ref{cztery} we succeed in identifying a large class of groups whose every action on a zero-dimensional\ compact metric space has comparison. Namely, it is the class of groups whose every finitely generated subgroup has subexponential growth (we call them \emph{subexponential groups} for short). This covers all nilpotent and in fact virtually nilpotent groups (which have polynomial growth), but also other groups, of intermediate growth, the best known example of which is the Grigorchuk group (\cite{G}). By a recent result of Breuillard, Green and Tao \cite{BGT}, our result also covers the above-mentioned ``Tempelman groups''; they turn out to be virtually nilpotent. Of course, there exist also amenable groups with exponential growth, and for these the problem remains a challenge. \smallskip The last section of the paper is devoted to the connection between comparison and the existence of what we call \emph{dynamical tilings} with arbitrarily good ``F\o lner properties''. Such dynamical tilings, which exist in any aperiodic action of $\mathbb Z$, have numerous applications in ergodic theory and occur under various names (as Kakutani--Rokhlin partitions or clopen tower partitions, etc.) for example in the study of full groups and orbit equivalence of minimal Cantor systems (see \cite{Sl} for an exposition on this subject).
For amenable group actions, for a long time, quasitilings of Ornstein and Weiss (see \cite{OW1}) have played a crucial role, mainly due to their universal existence in all countable amenable groups, and Lindenstrauss' Pointwise Ergodic Theorem (\cite{L}) is one of the most important applications. There are many more such applications, see for example \cite{DZ,FT,HYZ,PS}. However, the Ornstein--Weiss quasitilings are ``algebraic'' (i.e., unrelated to any \emph{a priori} given action). In \cite{DHZ}, it has been proved that in the algebraic case the Ornstein--Weiss quasitilings (with arbitrarily good F\o lner properties) can be improved to become tilings. Such tilings have already found numerous applications, see e.g. \cite{D,S,Z,ZCY}. In fact, the advantage of (algebraic) tilings over (algebraic) quasitilings is visible also in this paper: these tilings are used in the proof of the key Lemma \ref{key}, where quasitilings would not work. As mentioned above, it is often desired to have a tiling (or at least a quasitiling) which depends on the \emph{a priori} given action. In \cite{DH} it is proved that dynamical quasitilings with arbitrarily good F\o lner properties exist \emph{as factors} in any free action of any countable amenable group. In this version the result has already been used in \cite{FH, FH1}. In a recent paper \cite{C-T} we find a different approach: a dynamical tiling (in place of quasitiling) is obtained as \emph{a factor of an extension} of a given free minimal action. In the same paper, these tilings are applied to establish some kind of stability for generic actions, using the language of $C^*$-algebras (see also the survey \cite{W1}). But for many other purposes all the above discussed quasitilings and tilings are insufficient. 
Dynamical tilings which are factors of a given action are needed for instance to build the theory of symbolic extensions, and neither dynamical quasitilings nor tilings which are factors of some \emph{extended} action seem to be sufficient. This problem will be discussed in detail in our forthcoming paper \cite{DoZ}. As we have already mentioned, in the general case the existence of a dynamical tiling which is a factor of a given free action (on a zero-dimensional\ compact metric space) is unknown. In the last section, we tie the existence of such dynamical tilings to the comparison property. In particular, using the result concerning the comparison property of subexponential groups, we prove that if the group is subexponential, then any free action on a zero-dimensional\ compact metric space factors to dynamical tilings with arbitrarily good F\o lner properties. We also prove the reverse implication: an action (not necessarily free), which factors to dynamical tilings with arbitrarily good F\o lner properties, admits comparison. \medskip The authors thank Gabor Szabo for valuable information on the current state of the art in the subject matter. \section{Preliminaries} In this paper, whenever we say ``a finite set'' we mean a nonempty finite set, and whenever we say ``a countable set'' we mean either a finite or an infinite countable set. Since we will often consider pairs of subsets of a group $G$ as well as pairs of subsets of a compact space $X$, for easier distinction we will use the convention that in the first case these sets will be denoted using slanted font: $A,B\subset G$, and in the second using straight sans serif font: $\mathsf A,\mathsf B\subset X$. Boldface letters $\mathbf A,\mathbf B$ are reserved for blocks in symbolic systems, while script $\mathcal A,\mathcal B$ will be used for families of blocks. \subsection{Amenable groups and their actions} Let $G$ be a countable group.
\begin{defn} A sequence\ $(F_n)_{n\in\mathbb N}$ of finite subsets of $G$ is called a (left) \emph{F\o lner sequence} if for any $g\in G$ one has \[ \lim_{n\to\infty}\frac{|gF_n\cap F_n|}{|F_n|}=1, \] where $|\cdot|$ denotes the cardinality of a set. \end{defn} Equivalently, a sequence of finite sets $(F_n)$ is F\o lner if and only if for every finite set $K\subset G$ and every $\varepsilon>0$ the sets $F_n$ are eventually \emph{$(K,\varepsilon)$-invariant}, i.e., satisfy $$ \frac{|KF_n\triangle F_n|}{|F_n|}<\varepsilon $$ ($\triangle$ stands for the symmetric difference of sets). \begin{defn} A countable group possessing a F\o lner sequence\ is called \emph{amenable}. \end{defn} The above is just one of many equivalent definitions of amenability, applicable to countable groups. For more general definitions and properties see for example \cite{P}. In particular, it is known that a subgroup of an amenable group is amenable. \medskip Let $X$ be a topological\ space. We say that a group $G$ \emph{acts on $X$} if there is a group homomorphism $\tau:G\to\mathsf{HOMEO}(X,X)$ of $G$ into the group of self-homeomorphisms of $X$. If an action of $G$ on $X$ is understood, it is customary to write $g(x)$ instead of $\tau(g)(x)$. By the \emph{orbit} of a point $x\in X$ we will mean the set $G(x)=\{g(x):g\in G\}$. It is a basic property of amenability (and in fact a condition equivalent to it) that if $G$ is amenable then for any action of $G$ on any compact metric space there exists a Borel probability measure $\mu$ on $X$ invariant under the action in the following sense: if $\mathsf A\subset X$ is a Borel set then $\mu(\mathsf A)=\mu(g(\mathsf A))$ for every $g\in G$. We will briefly call such $\mu$ an \emph{invariant measure} (skipping the adjectives ``Borel'' and ``probability''). 
The collection of all invariant measures $\mathcal M_G(X)$ endowed with the weak-star topology is a compact convex set whose extreme points are precisely the ergodic measures (i.e., such that each Borel-measurable invariant set has measure either zero or one). If $(F_n)$ is any F\o lner sequence\ in $G$ then, for every point $x\in X$, the sequence of atomic measures $$ \frac1{|F_n|}\sum_{g\in F_n}\delta_{g(x)} $$ (where $\delta_x$ denotes the point-mass at $x$) accumulates at the set of invariant measures. If this sequence\ converges to some $\mu$ then $\mu$ is necessarily invariant and we call $x$ \emph{a generic point for $\mu$}. If $(F_n)$ is \emph{tempered}, then for every ergodic measure $\mu$ the set of all points generic for $\mu$ has full measure (see \cite{L} for the definition of the notion ``tempered'' and for the theorem). \smallskip The most basic example of an action is the \emph{full shift on a finite alphabet}. Let $\Lambda$ be a finite set (called \emph{the alphabet}) and let $\Lambda^G$ be endowed with the product topology, where the set $\Lambda$ is considered discrete. The group $G$ acts on $\Lambda^G$ by the \emph{shifts} defined as follows: if $x=(x_g)_{g\in G}$ and $h\in G$ then $h(x)=y=(y_g)_{g\in G}$, where, for each $g\in G$, $y_g = x_{gh}$. By a \emph{subshift} we will understand the action of $G$ on any nonempty closed shift-invariant\ subset $X$ of $\Lambda^G$. If $F\subset G$ is finite then any function $\mathbf B:F\to\Lambda$ is called a \emph{block} over $F$. With each block $\mathbf B$ (over any finite $F$) we associate the \emph{cylinder} $$ [\mathbf B]=\{x\in \Lambda^G: x|_F=\mathbf B\}. $$ If $F=\{e\}$ (where $e$ denotes the unity of $G$) and $\mathbf B(e) = \alpha\in\Lambda$ ($\mathbf B(g)$ denotes the entry of $\mathbf B$ at the coordinate $g$) then the cylinder $[\mathbf B]$ will be denoted by $[\alpha]$.
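To fix the notation, it may help to spell it out in the familiar case $G=\mathbb Z$ (a routine specialization, added here for illustration):

```latex
For $G=\mathbb Z$ (written additively), the shift formula $y_g=x_{gh}$
becomes $h(x)_g=x_{g+h}$, i.e., $h$ acts as the classical shift by $h$
coordinates. Likewise, a block $\mathbf B$ over $F=\{0,1\}$ with
$\mathbf B(0)=a$ and $\mathbf B(1)=b$ determines the cylinder
$$[\mathbf B]=\{x\in\Lambda^{\mathbb Z}:\ x_0=a,\ x_1=b\}.$$
```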
We say that a block $\mathbf B$ (over some $F$) \emph{occurs} in some $x\in \Lambda^G$ \emph{at the position $g$} if $g(x)\in[\mathbf B]$, equivalently, if $x_{fg}=\mathbf B(f)$ for every $f\in F$. If we restrict our attention to a subshift $X\subset \Lambda^G$, by the cylinder $[\mathbf B]$ we will understand what should be formally denoted as $[\mathbf B]\cap X$. The collection of all cylinders (corresponding to all blocks over all finite sets) is a clopen base of the topology in $X$. If $G$ acts on two compact metric spaces, $X$ and $Y$, we will say that the action on $Y$ is a \emph{topological\ factor} of the action on $X$ if there exists a continuous surjection $\pi:X\to Y$ such that, for every $g\in G$, $g\circ\pi = \pi\circ g$ (where $g$ is understood as a transformation of either $X$ or $Y$). \smallskip The property of a group action on a topological\ space which generalizes that of aperiodicity for $\mathbb Z$-actions (i.e., lack of periodic points) is freeness. There are several (not equivalent) definitions of a free group action. We will use the strongest: \begin{defn} An action $\tau:G\to\mathsf{HOMEO}(X,X)$ is called \emph{free} if for every $g\in G$ $$ (\exists\,x\in X : g(x)=x) \text{ \ implies \ }g=e. $$ \end{defn} \subsection{Subexponential groups} \begin{defn} In a group $G$, a set $R$ such that $\bigcup_{n=1}^\infty (R\cup R^{-1})^n=G$ is called a \emph{generator} of $G$. A group having a finite generator is called \emph{finitely generated}. \end{defn} \begin{defn} A finitely generated group $G$ with a generator $R$ has \emph{subexponential growth} if $|(R\cup R^{-1})^n|$ grows subexponentially, i.e., $$ \lim_{n\to\infty}\frac1n\log|(R\cup R^{-1})^n|=0. $$ \end{defn} It is very easy to see that subexponential growth of a finitely generated group $G$ implies subexponential growth of $|K^n|$ for any finite set $K\subset G$, and thus the property does not depend on the choice of a finite generator.
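As a basic illustration of the definition, $\mathbb Z^d$ has subexponential (indeed polynomial) growth:

```latex
For $G=\mathbb Z^d$ with the standard generator $R=\{e_1,\dots,e_d\}$,
every element of $(R\cup R^{-1})^n$ is a sum of $n$ signed basis
vectors, so
$$(R\cup R^{-1})^n\subset\{-n,\dots,n\}^d,\qquad\text{whence}\quad
|(R\cup R^{-1})^n|\le(2n+1)^d.$$
Consequently,
$$\lim_{n\to\infty}\frac1n\log|(R\cup R^{-1})^n|
\le\lim_{n\to\infty}\frac{d\log(2n+1)}{n}=0.$$
```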
\begin{defn} A countable group $G$ (not necessarily finitely generated) is called \emph{subexponential} if each of its finitely generated subgroups has subexponential growth. \end{defn} It is a standard fact that a group $G$ is amenable if and only if so is every finitely generated subgroup of $G$. It is also known that finitely generated groups with subexponential growth are amenable \cite{AS}, hence every subexponential group is amenable. This is why we can omit the amenability assumption when dealing with subexponential groups. Examples of subexponential groups are: Abelian, nilpotent and virtually nilpotent groups. These examples have polynomial growth, but there are also examples of countable groups with intermediate growth rates (see \cite{G}). By a recent result \cite{BGT}, all finitely generated groups which admit an increasing sequence\ of sets $(A_n)_{n\in\mathbb N}$ with $G=\bigcup_{n=1}^\infty A_n$ and $|A_n^2|<C|A_n|$ for some constant $C>0$ are virtually nilpotent and hence subexponential. In particular, this applies to finitely generated groups possessing a symmetric F\o lner sequence\ $(F_n)$ satisfying Tempelman's condition $|F_n^{-1}F_n|\le C|F_n|$. \subsection{Upper and lower Banach densities, Banach density advantage} \begin{defn}\label{1.7} For a subset $B\subset G$ and a finite set $F\subset G$ denote \[ \underline D_F(B)=\inf_{g\in G} \frac{|B\cap Fg|}{|F|}\text{ \ and \ } \overline D_F(B)=\sup_{g\in G} \frac{|B\cap Fg|}{|F|}. \] If $(F_n)$ is a F\o lner sequence then define \[ \underline D(B)=\limsup_{n\to\infty} \underline D_{F_n}(B)\text{ \ and \ } \overline D(B)=\liminf_{n\to\infty} \overline D_{F_n}(B), \] which we call the \emph{lower} and \emph{upper Banach density} of $B$, respectively. \begin{rem} The notions of upper and lower Banach density have been studied from several points of view. For example, in \cite{BBF} the reader will find a different definition. It can be shown that that definition is in fact equivalent to ours.
\end{rem} For two sets $A$ and $B$ of $G$ we define the following quantities \[ \underline D_F(B,A)=\inf_{g\in G} \frac1{|F|}(|B\cap Fg|-|A\cap Fg|), \ \ \ \underline D(B,A)=\limsup_{n\to\infty} \underline D_{F_n}(B,A). \] The latter number will be called the \emph{Banach density advantage} of $B$ over $A$ (which can be negative, but we will never consider such a case). \end{defn} We will be using the following elementary fact (see e.g. \cite[Lemma 3.4]{DHZ}, where it is formulated using the language of quasitilings which in this paper will be introduced later). \begin{lem}\label{1.5} Let $(A_k)_{k\ge 1}$ and $(g_k)_{k\ge 1}$ be a sequence of subsets of $G$ and a sequence\ of elements of $G$ such that: \begin{enumerate} \item the union $\bigcup_{k=1}^\infty A_k$ is finite, \item $A=\bigcup_{k=1}^\infty A_kg_k$ is a disjoint union. \end{enumerate} For each $k$ let $B_k\subset A_k$ and let $B=\bigcup_{k=1}^\infty B_kg_k$. Then $$ \underline D(B)\ge\underline D(A)\cdot\inf_k\frac{|B_k|}{|A_k|}. $$ \end{lem} The following lemma will be repeatedly used in many of our considerations. \begin{lem}\label{bdc} Let $F, F_1$ be finite subsets of $G$ and let $A,B$ be some arbitrary subsets of $G$. If $F_1$ is $(F,\varepsilon)$-invariant\ then $\underline D_{F_1}(B,A)\ge \underline D_F(B,A)-4\varepsilon$. \end{lem} \begin{proof} Given $g\in G$, we have $$ |B\cap Fhg|-|A\cap Fhg|\ge \underline D_F(B,A)|F|, $$ for every $h\in F_1$. This implies that \begin{multline*} |\{(f,h): f\in F, h\in F_1, fhg\in B\}| - |\{(f,h): f\in F, h\in F_1, fhg\in A\}| \ge\\ \underline D_F(B,A)|F||F_1|. \end{multline*} This in turn implies that there exists at least one $f\in F$ for which $$ |B\cap fF_1g|-|A\cap fF_1g|\ge \underline D_F(B,A)|F_1|. 
$$ Since $f\in F$ and $F_1$ is $(F,\varepsilon)$-invariant (and hence so is $F_1g$), we have $$ \bigl||B\cap fF_1g|-|B\cap F_1g|\bigr|\le |fF_1\triangle F_1|=2|fF_1\setminus F_1|\le2|FF_1\setminus F_1|\le2\varepsilon|F_1|, $$ and the same for $A$, which yields \begin{equation} |B\cap F_1g|-|A\cap F_1g|\ge (\underline D_F(B,A)-4\varepsilon)|F_1|. \end{equation} To end the proof, it remains to divide both sides by $|F_1|$ and take the infimum over all $g\in G$ on the left. \end{proof} The first two equalities in the lemma below have been proved in \cite{DHZ}. \begin{lem}\label{bd} The values of $\underline D(B)$, $\overline D(B)$ and $\underline D(B,A)$ do not depend on the choice of the F\o lner sequence, the limits superior and inferior in the definition are in fact limits, and moreover \begin{align*} \underline D(B) &= \sup_F\ \underline D_F(B),\\ \overline D(B) &= \,\inf_F\ \overline D_F(B),\\ \underline D(B,A) &= \sup_F\ \underline D_F(B,A), \end{align*} where $F$ ranges over all finite subsets of $G$. \end{lem} \begin{proof} We will prove the third equation. Then, plugging in $A=\emptyset$ we will get the first equation, and passing to the complement $B^c$ we will get the second equation. The inequality $\limsup_{n\to\infty}\underline D_{F_n}(B,A)\le\sup\{\underline D_F(B,A): F\subset G, F \text{ is finite}\}$ is obvious. It remains to show that $$ \liminf_{n\to\infty}\underline D_{F_n}(B,A)\ge\sup\{\underline D_F(B,A): F\subset G, F \text{ is finite}\}. $$ At the same time this will prove the existence of all three limits. Let $F\subset G$ be a finite set. Given $\varepsilon>0$, for any $n$ large enough $F_n$ is $(F,\varepsilon)$-invariant, hence Lemma \ref{bdc} implies that $\liminf_{n\to \infty} \underline D_{F_n}(B,A)\ge \underline D_F(B,A)-4\varepsilon$. Since $\varepsilon>0$ is arbitrary, the desired inequality follows. \end{proof} \begin{cor}\label{coro}We have $$ \underline D(B)-\overline D(A)\le \underline D(B,A).
$$ \end{cor} \begin{proof} Fix a F\o lner sequence\ $(F_n)$. By the above lemma, we can write \begin{multline*} \underline D(B,A)= \lim_{n\to\infty} \ \inf_{g\in G} \frac{|B\cap F_ng|-|A\cap F_ng|}{|F_n|}\ge\\ \lim_{n\to\infty} \ \inf_{g\in G} \frac{|B\cap F_ng|}{|F_n|}- \lim_{n\to\infty} \ \sup_{g\in G}\frac{|A\cap F_ng|}{|F_n|}= \underline D(B)-\overline D(A). \end{multline*} \end{proof} \subsection{Tilings of amenable groups} In this subsection we will briefly recall the notions of a quasitiling and a tiling of a group, and we will quote from \cite{DHZ} the result on the existence of tilings of countable amenable groups with arbitrarily good F\o lner properties. These notions and the result will not be used until Section~\ref{cztery}, but due to their generality we put them in the preliminaries. Later, in Section~\ref{piec}, we will also introduce the notions of dynamical quasitilings and tilings (and present some results on their existence). In contrast to dynamical (quasi)tilings, the (quasi)tilings of this subsection can be regarded as ``static'' or ``algebraic''. \begin{defn}\label{quasi} A \emph{quasitiling} $\mathcal{T}$ of a group $G$ is determined by two objects: \begin{enumerate} \item a finite collection $\mathcal{S}(\mathcal{T})$ of finite subsets of $G$ containing the unity $e$, called \emph{the shapes}; \item a finite collection $\mathcal{C}(\mathcal{T}) = \{C(S):S\in\mathcal{S}(\mathcal{T})\}$ of disjoint subsets of $G$, called \emph{sets of centers} (for the shapes). \end{enumerate} The quasitiling is then the family $\mathcal{T}=\{(S,c):S\in\mathcal{S}(\mathcal{T}),\ c\in C(S)\}$. We require the map $(S,c)\mapsto Sc$ to be injective. Hence, by the \emph{tiles} of \,$\mathcal{T}$ (denoted by the letter $T$) we will mean either the sets $Sc$ or the pairs $(S,c)$ (i.e., the tiles with defined centers), depending on the context.
\end{defn} Note that every quasitiling $\mathcal{T}$ can be represented in a symbolic form, as a point $\mathcal{T}\in\Delta^G$, with the alphabet $\Delta = \mathcal{S}(\mathcal{T})\cup\{0\}$, as follows: $\mathcal{T}_g=S \iff g\in C(S)$, $\mathcal{T}_g=0$ otherwise. \begin{defn}\label{qt} Let $\varepsilon\in[0,1)$ and $\alpha\in(0,1]$ and let $K\subset G$ be a finite set. A quasitiling $\mathcal{T}$ is called \begin{enumerate} \item \emph{$(K,\varepsilon)$-invariant} if all shapes of $\mathcal{T}$ are $(K,\varepsilon)$-invariant; \item \emph{$\varepsilon$-disjoint} if there exists a mapping $T\mapsto T^\circ$ (\,$T\in\mathcal{T}$) such that \begin{itemize} \item $T^\circ\subset T$, $\frac{|T^\circ|}{|T|}>1-\varepsilon$ and \item $T\neq T'\implies T^\circ\cap {T'}^\circ=\emptyset$; \end{itemize} \item \emph{disjoint} if the tiles of $\mathcal{T}$ are pairwise disjoint; \item \emph{$\alpha$-covering} if $\underline D(\bigcup\mathcal{T})\ge\alpha$; \item a \emph{tiling} if it is a partition of $G$. \end{enumerate} \end{defn} One of the most fruitful facts in ergodic theory (as well as topological\ dynamics) of amenable group actions is the following result of Ornstein and Weiss (\cite[Proposition 4]{OW}), which can be reformulated as follows: \begin{thm} Let $G$ be a countable amenable group. Then, for any finite set $K$ and any $\varepsilon,\delta,\gamma>0$, there exists a $(K,\varepsilon)$-invariant, $(1-\delta)$-covering, $\gamma$-disjoint quasitiling of $G$. \end{thm} The authors of \cite{OW} also indicate that it is possible to create disjoint quasitilings as above. In \cite{DHZ} we were able to improve the above and replace the quasitilings by tilings (this seemingly small improvement will become crucial in Section \ref{cztery}). \begin{thm}{\cite[Theorem 4.3]{DHZ}}\label{ourtilings} Let $G$ be a countable amenable group. Then, for any finite set $K$ and any $\varepsilon>0$, there exists a $(K,\varepsilon)$-invariant\ tiling of $G$.
\end{thm} \section{The comparison property} The key notions of this paper are given below (see also \cite{K}). \begin{defn}\label{defcom} Let $G$ be a countable amenable group. \begin{enumerate} \item Let $G$ act on a zero-dimensional\ compact metric space $X$. For two clopen sets $\mathsf A,\mathsf B\subset X$, we say that $\mathsf A$ is \emph{subequivalent} to $\mathsf B$ (and write $\mathsf A\preccurlyeq \mathsf B$), if there exists a finite partition $\mathsf A=\bigcup_{i=1}^k \mathsf A_i$ of $\mathsf A$ into clopen sets and there are elements $g_1,g_2,\dots,g_k$ of $G$ such that $g_1(\mathsf A_1), g_2(\mathsf A_2),\dots,g_k(\mathsf A_k)$ are disjoint subsets of $\mathsf B$. We say that the action \emph{admits comparison} if, for any pair of clopen subsets $\mathsf A,\mathsf B$ of $X$, the condition $\mu(\mathsf A)<\mu(\mathsf B)$ for each invariant measure\ $\mu$ on $X$ implies $\mathsf A\preccurlyeq \mathsf B$. \item If every action of $G$ on any zero-dimensional\ compact metric space admits comparison then we will say that $G$ has the \emph{comparison property}. \end{enumerate} \end{defn} Clearly, $\mathsf A\preccurlyeq \mathsf B$ implies $\mu(\mathsf A)\le\mu(\mathsf B)$ for every invariant measure $\mu$, so comparison is nearly an equivalence between subequivalence and the corresponding inequality for all invariant measures. \begin{rem}\label{from0} Let two clopen sets $\mathsf A,\mathsf B$ satisfy $\mu(\mathsf A)<\mu(\mathsf B)$ for every invariant measure\ $\mu$. Because the sets $\mathsf A,\mathsf B$ are clopen, the function $\mu\mapsto\mu(\mathsf B)-\mu(\mathsf A)$ is continuous, and since it is positive on the compact set $\mathcal M_G(X)$ of invariant measures, it is separated from zero, i.e., $$ \inf_{\mu\in\mathcal M_G(X)}(\mu(\mathsf B)-\mu(\mathsf A))>0.
$$ \end{rem} Consider also the following seemingly weaker property: \begin{defn}\label{defwcom} The action of a countable amenable group $G$ on a zero-dimensional\ compact metric space $X$ admits \emph{weak comparison} if there exists a constant $C\ge 1$ such that for any clopen sets $\mathsf A,\mathsf B\subset X$, the condition $\sup_\mu\mu(\mathsf A)<\frac1C\inf_\mu\mu(\mathsf B)$ (where $\mu$ ranges over all invariant measures) implies $\mathsf A\preccurlyeq \mathsf B$. \end{defn} Clearly, comparison implies weak comparison. We will show that these properties are in fact equivalent. \begin{lem}\label{compisweakcomp} Weak comparison implies comparison. \end{lem} \begin{proof} Suppose the action of a countable amenable group $G$ on a zero-dimensional\ compact metric space $X$ admits weak comparison with a constant $C$. Let two clopen sets $\mathsf A,\mathsf B$ satisfy $\mu(\mathsf A)<\mu(\mathsf B)$ for every invariant measure\ $\mu$. By Remark \ref{from0}, $\inf_{\mu\in\mathcal M_G(X)}(\mu(\mathsf B)-\mu(\mathsf A))>\varepsilon$ for some positive $\varepsilon$. We order the group (arbitrarily) by natural numbers, as $G=\{g_1,g_2,\dots\}$ (or $G=\{g_1,\dots,g_n\}$ in case $G$ is finite). We let $\mathsf A_1=\mathsf A\cap g_1^{-1}(\mathsf B)$, and $\mathsf B_1=g_1(\mathsf A_1)$. For each $k>1$ (or $1<k\le n$ in the finite case) we set inductively $$ \mathsf A_k = \Bigl(\mathsf A\setminus\bigcup_{i=1}^{k-1} \mathsf A_i\Bigr)\cap g_k^{-1}\left(\mathsf B\setminus\bigcup_{i=1}^{k-1} \mathsf B_i\right), $$ and $\mathsf B_k=g_k(\mathsf A_k)$. It is not hard to see that the sets $\mathsf A_k$ and $\mathsf B_k$ are clopen (some of them possibly empty), pairwise disjoint subsets of $\mathsf A$ and $\mathsf B$, respectively, and that $\mu(\mathsf A_k)=\mu(\mathsf B_k)$ for each $k$ and every invariant measure\ $\mu$ (since $\mathsf B_k=g_k(\mathsf A_k)$ and $\mu$ is invariant).
Consider the remainder sets $$ \mathsf A_0=\mathsf A\setminus\Bigl(\bigcup_{k=1}^{\infty} \mathsf A_k\Bigr)\text{\ \ and \ \ }\mathsf B_0=\mathsf B\setminus\Bigl(\bigcup_{k=1}^{\infty} \mathsf B_k\Bigr), $$ or in the finite case $$ \mathsf A_0=\mathsf A\setminus\Bigl(\bigcup_{k=1}^{n} \mathsf A_k\Bigr)\text{\ \ and \ \ }\mathsf B_0=\mathsf B\setminus\Bigl(\bigcup_{k=1}^{n} \mathsf B_k\Bigr). $$ Clearly, for each invariant measure\ $\mu$ we have $\mu(\mathsf B_0)\ge\varepsilon$. We claim that $\mu(\mathsf A_0)=0$. It suffices to consider an ergodic measure. But if $\mu(\mathsf A_0)$ were positive, then, by ergodicity, there would exist an $x\in \mathsf A_0$ and $g=g_k$ (for some $k$) such that $g_k(x)\in \mathsf B_0$. This is a contradiction, as, by construction, no orbit starting in $\mathsf A_0$ visits the set $\mathsf B_0$. Now, by countable additivity of the measures, we obtain, for each invariant measure\ $\mu$, $$ \lim_{k\to\infty} \mu\!\left(\mathsf A\setminus\Bigl(\bigcup_{i=1}^k \mathsf A_i\Bigr)\right) =0. $$ Clearly, the sequence under the limit is decreasing. Since the sets involved are clopen, the above measure values viewed as functions on the set of invariant measures are continuous, and thus, by Dini's theorem, the convergence is uniform. Let $\delta>0$ be strictly smaller than $\frac\varepsilon C$. Then, for $k$ large enough we have, simultaneously for all invariant measures $\mu$, $$ \mu\!\left(\mathsf A\setminus\Bigl(\bigcup_{i=1}^k \mathsf A_i\Bigr)\right)\le\delta<\frac\varepsilon C\le \frac1C\,\mu\!\left(\mathsf B\setminus\Bigl(\bigcup_{i=1}^k \mathsf B_i\Bigr)\right). $$ By the weak comparison assumption, we get $$ \mathsf A\setminus\Bigl(\bigcup_{i=1}^k \mathsf A_i\Bigr)\preccurlyeq \mathsf B\setminus\Bigl(\bigcup_{i=1}^k \mathsf B_i\Bigr), $$ which, together with the obvious fact that $\bigcup_{i=1}^k \mathsf A_i\preccurlyeq \bigcup_{i=1}^k \mathsf B_i$, completes the proof of $\mathsf A\preccurlyeq \mathsf B$.
\end{proof} \begin{rem}\label{finitecomp} The above proof shows also that every finite group $G=\{g_1,g_2,\dots,g_n\}$ has the comparison property. For such a group we have $\mathsf A_0=\mathsf A\setminus(\bigcup_{i=1}^n\mathsf A_i)$. The fact that $\mathsf A_0$ has measure $0$ for all invariant measures implies that it is empty (indeed, if $x\in\mathsf A_0$ then the uniform measure on the finite orbit of $x$ is invariant and gives $\mathsf A_0$ positive measure). \end{rem} \begin{rem}\label{disjoint}In the definition of comparison, it suffices to consider only disjoint clopen sets $\mathsf A, \mathsf B$. Indeed, $\{\mathsf A\cap\mathsf B, \mathsf A\setminus\mathsf B\}$ is a clopen partition of $\mathsf A$, and $g_0=e$ sends $\mathsf A\cap\mathsf B$ inside $\mathsf B$, so if $(\mathsf A\setminus\mathsf B)\preccurlyeq(\mathsf B\setminus\mathsf A)$ then also $\mathsf A\preccurlyeq\mathsf B$. Also note that, for any measure~$\mu$, $\mu(\mathsf A)<\mu(\mathsf B)$ if and only if $\mu(\mathsf A\setminus\mathsf B)<\mu(\mathsf B\setminus\mathsf A)$. \end{rem} It is known that many important countable amenable groups, for instance $\mathbb Z$ and $\mathbb Z^d$, have the comparison property. However, the following question remains open: \begin{ques}\label{3.7} Does every countable amenable group have the comparison property? \end{ques} In this paper we will provide a positive answer for a large class of groups. \section{Banach density interpretation of the comparison property}\label{two} Now we provide a characterization of the comparison property of a countable amenable group in terms of the Banach density advantage for subsets of the group. \subsection{Passing between clopen subsets of $X$ and subsets of $G$} This subsection contains fairly standard tools, often exploited in symbolic dynamics. We include them for completeness and as an opportunity to introduce our notation. We continue to assume that $G$ is a countable amenable group. \subsubsection{}\label{211} First suppose that $G$ acts on a zero-dimensional\ compact metric space in which we have two disjoint clopen sets $\mathsf A$ and $\mathsf B$.
Define a map $\pi_{\mathsf A\mathsf B}:X\to\{\mathsf 0,\mathsf 1,\mathsf 2\}^G$ by the formula $$ (\pi_{\mathsf A\mathsf B}(x))_g=\begin{cases}\mathsf 1\\\mathsf 2\\\mathsf 0\end{cases} \iff g(x)\in \begin{cases} \mathsf A\\\mathsf B\\(\mathsf A\cup \mathsf B)^c,\end{cases} $$ respectively ($g\in G$). As easily verified, $\pi_{\mathsf A\mathsf B}$ is continuous and intertwines the action on $X$ with the shift action, in other words, it is a topological\ factor map onto its image $Y_{\mathsf A\mathsf B}=\pi_{\mathsf A\mathsf B}(X)$, which is a subshift, in which we can distinguish two natural clopen sets, the cylinders $[\mathsf 1]$ and $[\mathsf 2]$. Notice that $\pi_{\mathsf A\mathsf B}^{-1}([\mathsf 1])=\mathsf A$ and $\pi_{\mathsf A\mathsf B}^{-1}([\mathsf 2])=\mathsf B$, hence for every invariant measure\ $\mu$ on $X$ we have $\mu(\mathsf A)=\nu([\mathsf 1])$ and $\mu(\mathsf B)=\nu([\mathsf 2])$, where $\nu=\pi_{\mathsf A\mathsf B}^*(\mu)$ is the ``pushdown'' of $\mu$ onto $Y_{\mathsf A\mathsf B}$ given by $\nu(\cdot) = \mu(\pi_{\mathsf A\mathsf B}^{-1}(\cdot))$. It is well-known that $\pi_{\mathsf A\mathsf B}^*$ is a surjection onto the set $\mathcal M_G(Y_{\mathsf A\mathsf B})$ (from now on abbreviated as $\mathcal M_{\mathsf A\mathsf B}$) of all shift-invariant measure s on $Y_{\mathsf A\mathsf B}$. For each $x\in X$ we define two subsets of $G$, \begin{align} A_x &= \{g:g(x)\in \mathsf A\}=\{g:(\pi_{\mathsf A\mathsf B}(x))_g=\mathsf 1\}=\{g:g(\pi_{\mathsf A\mathsf B}(x))\in[\mathsf 1]\},\label{kiki}\\ B_x &= \{g:g(x)\in \mathsf B\}=\{g:(\pi_{\mathsf A\mathsf B}(x))_g=\mathsf 2\}=\{g:g(\pi_{\mathsf A\mathsf B}(x))\in[\mathsf 2]\}.\label{kiko} \end{align} In the above context we can define new notions: \begin{defn}\label{dbar} We fix in $G$ a F\o lner sequence\ $(F_n)$. 
The terms \begin{align*} \underline D(\mathsf B)&=\limsup_{n\to\infty}\ \inf_{x\in X}\underline D_{F_n}(B_x),\\ \overline D(\mathsf B)&=\liminf_{n\to\infty}\ \sup_{x\in X}\overline D_{F_n}(B_x),\\ \underline D(\mathsf B,\mathsf A)&=\limsup_{n\to\infty}\ \inf_{x\in X}\underline D_{F_n}(B_x,A_x), \end{align*} will be called the \emph{uniform lower Banach density of (visits of the orbits in) $\mathsf B$}, \emph{uniform upper Banach density of $\mathsf B$} and \emph{uniform Banach density advantage of $\mathsf B$ over $\mathsf A$}. \end{defn} A statement analogous to Lemma \ref{bd} holds: \begin{lem}\label{bbb} The values of $\underline D(\mathsf B)$, $\overline D(\mathsf B)$ and $\underline D(\mathsf B,\mathsf A)$ do not depend on the choice of the F\o lner sequence, the limits superior and inferior in the definition are in fact limits, and moreover \begin{align*} \underline D(\mathsf B)&=\sup_F\ \inf_{x\in X}\underline D_F(B_x),\\ \overline D(\mathsf B)&=\inf_F\ \sup_{x\in X}\overline D_F(B_x),\\ \underline D(\mathsf B,\mathsf A)&=\sup_F\ \inf_{x\in X}\underline D_F(B_x,A_x), \end{align*} where $F$ ranges over all finite subsets of $G$. \end{lem} \begin{proof} The proof is identical to the proof of Lemma \ref{bd}, with the only difference that Lemma \ref{bdc} applies to the sets $A_x,B_x$ whenever $F_n$ is $(F,\varepsilon)$-invariant, simultaneously for all $x\in X$. \end{proof} In a moment we will connect the above notions with the values assumed by the invariant measures on $X$ on the sets $\mathsf A$ and $\mathsf B$. \subsubsection{}\label{212} We will now describe the opposite passage: from subsets of $G$ to clopen subsets of some zero-dimensional\ compact metric space on which we have a $G$-action. Suppose we have two disjoint subsets $A$ and $B$ of $G$. 
Then they determine an element $y^{AB}$ of the symbolic space $\{\mathsf 0,\mathsf 1,\mathsf 2\}^G$, given by the rule $$ y^{AB}_g=\begin{cases}\mathsf 1\\\mathsf 2\\\mathsf 0\end{cases} \iff g\in \begin{cases} A\\ B\\(A\cup B)^c,\end{cases} $$ respectively ($g\in G$). The shift-orbit closure of $y^{AB}$, i.e., the set $$ Y^{AB}=\overline{\{g(y^{AB}):g\in G\}} $$ is a subshift, which we will call the \emph{subshift associated with the sets $A, B$}. The set of its invariant measures, $\mathcal M_G(Y^{AB})$, will be abbreviated as $\mathcal M^{AB}$. In this subshift we will distinguish two clopen sets, $\mathsf A=[\mathsf 1]$ and $\mathsf B=[\mathsf 2]$. It is almost immediate to see that if we apply the definitions of the preceding paragraph to the shift action on $Y^{AB}$ and the above sets $\mathsf A,\mathsf B$ then the factor map $\pi_{\mathsf A\mathsf B}$ is the identity, and $A_{y^{AB}}=\{g:y^{AB}_g=\mathsf 1\} = A$ and $B_{y^{AB}}=\{g:y^{AB}_g=\mathsf 2\}=B$. \begin{prop}\label{prop} \begin{enumerate} \item Suppose $G$ acts on a zero-dimensional\ compact metric space $X$ in which we are given two disjoint clopen sets, $\mathsf A,\mathsf B$. Then \begin{align*} \inf_{\mu\in\mathcal M_G(X)}\mu(\mathsf B)&=\underline D(\mathsf B)=\inf_{x\in X}\underline D(B_x), \\ \sup_{\mu\in\mathcal M_G(X)}\mu(\mathsf B)&=\overline D(\mathsf B)=\sup_{x\in X}\overline D(B_x),\\ \inf_{\mu\in\mathcal M_G(X)}(\mu(\mathsf B)-\mu(\mathsf A))&=\underline D(\mathsf B,\mathsf A)=\inf_{x\in X}\underline D(B_x,A_x). \end{align*} \item Next suppose that $A$ and $B$ are disjoint subsets of $G$. Consider the cylinders $[\mathsf 1]$ and $[\mathsf 2]$ in the subshift $Y^{AB}$ associated with these sets. Then \begin{gather*} \inf_{\mu\in\mathcal M^{AB}}\mu([\mathsf 2])=\underline D(B),\\ \sup_{\mu\in\mathcal M^{AB}}\mu([\mathsf 2])=\overline D(B),\\ \inf_{\mu\in\mathcal M^{AB}}(\mu([\mathsf 2])-\mu([\mathsf 1]))=\underline D(B,A).
\end{gather*} \end{enumerate} \end{prop} \begin{proof} In (1) we will only show the last line of equalities. The first line will then follow by plugging in $\mathsf A=\emptyset$, and the second one by considering the complement of $\mathsf B$. First suppose that we have the sharp inequality $\inf_{\mu\in\mathcal M_G(X)}(\mu(\mathsf B)-\mu(\mathsf A))>\underline D(\mathsf B,\mathsf A)$. By Lemma \ref{bbb}, there exists an $\varepsilon>0$ such that for every finite set $F$, $\inf_{\mu\in\mathcal M_G(X)} (\mu(\mathsf B)-\mu(\mathsf A))-\varepsilon > \inf_{x\in X} \underline D_F(B_x,A_x)$. In particular, for every set $F_n$ in an \emph{a priori} selected F\o lner sequence, there exist some $x_n\in X$ and $g_n\in G$ with $$ \inf_{\mu\in\mathcal M_G(X)}(\mu(\mathsf B)-\mu(\mathsf A))-\varepsilon > \frac1{|F_n|}(|B_{x_n}\cap F_ng_n|-|A_{x_n}\cap F_ng_n|). $$ Note that $|B_{x_n}\cap F_ng_n|=|\{f\in F_n:fg_n(x_n)\in \mathsf B\}|$ (and analogously for $\mathsf A$), thus the right hand side takes on the form $$ \frac1{|F_n|}(|\{f\in F_n:fg_n(x_n)\in \mathsf B\}|-|\{f\in F_n:fg_n(x_n)\in \mathsf A\}|). $$ The function $\mathsf W\mapsto \frac1{|F_n|}|\{f\in F_n:fg_n(x_n)\in \mathsf W\}|$ defined on Borel subsets of $X$ is equal to the probability measure $\frac1{|F_n|}\sum_{f\in F_n}\delta_{fg_n(x_n)}$. This sequence\ of measures has a subsequence\ convergent in the weak-star topology to some $\mu_0\in\mathcal M_G(X)$. Since the characteristic functions of the clopen sets $\mathsf A, \mathsf B$ are continuous, we have $$ \inf_{\mu\in\mathcal M_G(X)}(\mu(\mathsf B)-\mu(\mathsf A))-\varepsilon\ge \mu_0(\mathsf B)-\mu_0(\mathsf A), $$ which is a contradiction. We have proved that $\inf_{\mu\in\mathcal M_G(X)}(\mu(\mathsf B)-\mu(\mathsf A))\le\underline D(\mathsf B,\mathsf A)$.
The inequality $\underline D(\mathsf B,\mathsf A)\le\inf_{x\in X}\underline D(B_x,A_x)$ is trivial: the two sides differ only in the order of $\limsup_n$ and $\inf_x$, and on the left the infimum over $x$ is applied first (inside the limit), which yields a value that is not larger. For the last missing inequality, notice that, given $\varepsilon>0$, there exists an ergodic measure $\mu_0\in\mathcal M_G(X)$ with $\mu_0(\mathsf B)-\mu_0(\mathsf A) < \inf_{\mu\in\mathcal M_G(X)}(\mu(\mathsf B)-\mu(\mathsf A))+\varepsilon$. There exists a point $x\in X$ generic for $\mu_0$. Then $y=\pi_{\mathsf A\mathsf B}(x)$ is generic for $\nu_0=\pi^*_{\mathsf A\mathsf B}(\mu_0)$. This implies that $\frac1{|F_n|}|\{f\in F_n:f(y)\in[\mathsf 1]\}|\to \nu_0([\mathsf 1])=\mu_0(\mathsf A)$ (and analogously for $[\mathsf 2]$ and $\mathsf B$). Thus, for each sufficiently large $n$ we have $$ \frac1{|F_n|}(|\{f\in F_n:f(y)\in[\mathsf 2]\}|-|\{f\in F_n:f(y)\in[\mathsf 1]\}|)< \mu_0(\mathsf B)-\mu_0(\mathsf A)+\varepsilon. $$ But $f(y)\in[\mathsf 1]\iff(\pi_{\mathsf A\mathsf B}(x))_f=\mathsf 1\iff f(x)\in \mathsf A\iff f\in A_x$ (and analogously for $\mathsf B$), so we have shown that $$ \frac1{|F_n|}(|B_x\cap F_n|-|A_x\cap F_n|)< \inf_{\mu\in\mathcal M_G(X)}(\mu(\mathsf B)-\mu(\mathsf A))+2\varepsilon. $$ Clearly, the left hand side is not smaller than $$ \inf_{g\in G}\frac1{|F_n|}(|B_x\cap F_ng|-|A_x\cap F_ng|)=\underline D_{F_n}(B_x,A_x). $$ Passing to the limit over $n$ and then taking the infimum over all $x\in X$, we obtain $\inf_{x\in X}\underline D(B_x,A_x)\le \inf_{\mu\in\mathcal M_G(X)}(\mu(\mathsf B)-\mu(\mathsf A))+2\varepsilon$. Since this is true for every $\varepsilon>0$, (1) is proved. \smallskip We pass to proving (2). As before, the last equality suffices. From (1) applied to the cylinders $\mathsf A=[\mathsf 1]$ and $\mathsf B=[\mathsf 2]$ we get $$ \inf_{\mu\in\mathcal M^{AB}}(\mu([\mathsf 2])-\mu([\mathsf 1]))=\underline D([\mathsf 2],[\mathsf 1]) =\lim_{n\to\infty}\ \,\inf_{y\in Y^{AB}}\ \inf_{g\in G} \frac1{|F_n|} (|B_y\cap F_ng|-|A_y\cap F_ng|).
$$ The above difference $|B_y\cap F_ng|-|A_y\cap F_ng|$ depends only on the block $y|_{F_ng}$. Notice that we are considering a transitive subshift with the transitive point $y^{AB}$ (i.e., a point whose orbit is dense in the subshift), so every block $y|_{F_ng}$ (for any $y\in Y^{AB}$ and any $g\in G$) occurs also in $y^{AB}$ as a block $y^{AB}|_{F_ng'}$ for some $g'$ (the converse need not be true, unless $y$ is another transitive point). Thus, for any $n$, the infimum over $y\in Y^{AB}$ on the right hand side of the formula displayed above is attained at $y=y^{AB}$. Recall that $A_{y^{AB}}=A$ and $B_{y^{AB}}=B$. We have proved that $$ \inf_{\mu\in\mathcal M^{AB}}(\mu([\mathsf 2])-\mu([\mathsf 1])) = \lim_{n\to\infty}\ \inf_{g\in G}\frac1{|F_n|} (|B\cap F_ng|-|A\cap F_ng|). $$ The right hand side is precisely $\underline D(B,A)$. \end{proof} The following notions are standard in symbolic dynamics. We assume that $G$ is a countable group (in the remainder of this subsection amenability is inessential). \begin{defn} Let $\Lambda$ and $\Delta$ be some finite sets (alphabets). By a \emph{block code} we will mean any function $\Xi:\Lambda^F\to\Delta$, where $F$ is a finite subset of $G$ (called the \emph{coding horizon} of \,$\Xi$). \end{defn} The Curtis--Hedlund--Lyndon Theorem \cite{H} (which holds for actions of any countable group) states: \begin{thm}\label{CHL} Let $X\subset\Lambda^G$ be a subshift (over some finite alphabet $\Lambda$). Let $\Delta$ be a finite set. Then $\xi:X\to\Delta^G$ is a topological factor map (i.e., a continuous and shift-equivariant map, the image is then a subshift over $\Delta$) if and only if there exists a finite set $F\subset G$ and a block code $\Xi:\Lambda^F\to\Delta$, such that, for all $x\in X$ and $g\in G$ we have the equality $$ (\xi(x))_g = \Xi(g(x)|_F). $$ \end{thm} The term ``block code'' refers to both $\Xi$ and $\xi$, depending on the context, and $F$ is called a coding horizon of $\xi$ (and of $\Xi$).
Clearly, if $F$ is a coding horizon of $\xi$ (and of $\Xi$), so is any finite set containing $F$. \smallskip \begin{defn}\label{local rule} Let $X\subset\Lambda^G$ be a subshift. For each $x\in X$ let $A_x\subset G$ and let $\tilde\varphi_x: A_x\to G$ be some function. For $X'\subset X$, we will say that the family $\{\tilde\varphi_x\}_{x\in X'}$ is \emph{determined by a block code} if there exists a block code $\Xi:\Lambda^F\to E$, where $E$ is a finite subset of $G$ (and so is $F$), such that if we denote $$ \varphi_x(g) = \Xi(g(x)|_F), $$ ($x\in X, g\in G$), then, for each $x\in X'$, the mapping from $A_x$ to $G$, defined by $$ a\mapsto \varphi_x(a)a, $$ ($a\in A_x$), coincides with $\tilde\varphi_x$. The elements $\varphi_x(a)$ (belonging to $E$) will be called the \emph{multipliers} of $\tilde\varphi_x$. \end{defn} A simple way of checking that a family $\{\tilde\varphi_x\}_{x\in X'}$ is determined by a block code is to find a finite set $F$ such that, for any $x_1,x_2\in X'$ and $a_1\in A_{x_1}, a_2\in A_{x_2}$, \begin{equation}\label{clr} a_1(x_1)|_F = a_2(x_2)|_F \ \implies \ \tilde\varphi_{x_1}(a_1)a_1^{-1} = \tilde\varphi_{x_2}(a_2){a_2}^{-1}. \end{equation} The following theorem connects the above definition with the relation of subequivalence. \begin{thm}\label{tutka} \begin{enumerate} \item Let $X\subset\Lambda^G$ be a subshift. Consider the pair of disjoint clopen subsets $\mathsf A, \mathsf B\subset X$. Then $\mathsf A\preccurlyeq\mathsf B$ if and only if there exists a family of functions $\tilde\varphi_x:G\to G$ determined by a block code, such that for all $x\in X$, $\tilde\varphi_x$ restricted to $A_x=\{g:g(x)\in\mathsf A\}$ is an injection to $B_x=\{g:g(x)\in\mathsf B\}$.
\item If, moreover, $X$ is transitive with a transitive point $x^*$, then the above condition $\mathsf A\preccurlyeq\mathsf B$ is equivalent to the existence of just one function $\tilde\varphi_{x^*}$ determined by a block code, whose restriction to $A_{x^*}$ is an injection to $B_{x^*}$. \end{enumerate} \end{thm} \begin{proof} (1) First suppose that $\mathsf A\preccurlyeq\mathsf B$. Let $\{\mathsf A_1,\mathsf A_2,\dots,\mathsf A_k\}$ be the clopen partition of $\mathsf A$ and let $g_1,g_2,\dots, g_k$ be the elements of $G$ such that the sets $\mathsf B_i = g_i(\mathsf A_i)$ are disjoint subsets of $\mathsf B$. Let $E = \{g_1,g_2,\dots,g_k\}$. Consider the mapping $\xi:X\to E^G$ given by the following rule $$ (\xi(x))_g = \begin{cases} g_i &\text{ if \ }g(x)\in\mathsf A_i, \ i=1,2,\dots,k,\\ g_1 &\text{ otherwise}, \end{cases} $$ ($g\in G$). Since the sets $\mathsf A_i$ and $X\setminus \mathsf A$ are clopen in $X$, the above map is continuous and, as easily verified, it is shift-equivariant. Thus, it is a topological\ factor map from $X$ into $E^G$. By Theorem \ref{CHL}, there exists a block code $\Xi:\Lambda^F\to E$ (with some finite coding horizon $F$) satisfying, for all $x\in X$ and $g\in G$, the equality $$ (\xi(x))_g = \Xi(g(x)|_F). $$ For each $x\in X$ we define $\varphi_x:G\to E$ by $\varphi_x(g)= (\xi(x))_g$ and $\tilde\varphi_x:G\to G$ by $\tilde\varphi_x(g)=\varphi_x(g)g$, i.e., the family of maps $\{\tilde\varphi_x\}_x$ is determined by the block code $\Xi$. We need to show that, for every $x\in X$, $\tilde\varphi_x$ restricted to $A_x$ is an injection to $B_x$. Throughout this paragraph we fix some $x\in X$ and drop the subscript $x$ from $A_x$, $B_x$, $\varphi_x$ and $\tilde\varphi_x$. For $i=1,2,\dots,k$ let $A_i = A\cap\varphi^{-1}(g_i)$. Clearly, $\{A_1,A_2,\dots,A_k\}$ is a partition of $A$ and for every $a\in A$ we have: $$ a\in A_i \iff \varphi(a)=g_i \iff (\xi(x))_a = g_i \iff a(x) \in \mathsf A_i, \ \ (i=1,2,\dots,k).
$$ Further, $a(x)\in\mathsf A_i$ yields $g_ia(x)\in\mathsf B_i\subset \mathsf B$, which implies that $g_ia\in B$. Since $g_ia = \varphi(a)a = \tilde\varphi(a)$, we have shown that $\tilde\varphi$ sends $A$ into $B$. For injectivity of the restriction $\tilde\varphi|_A$, observe that if $a_1\neq a_2$ and both elements belong to the same set $A_i$ then their images by $\tilde\varphi$, equal to $g_ia_1$ and $g_ia_2$, respectively, are different by cancellativity. If $a_1\in A_i$ and $a_2\in A_j$ with $i\neq j$, then $\tilde\varphi(a_1)(x)=g_ia_1(x)\in\mathsf B_i$ and $\tilde\varphi(a_2)(x)=g_ja_2(x)\in\mathsf B_j$. Since $\mathsf B_i$ and $\mathsf B_j$ are disjoint, the elements $\tilde\varphi(a_1)$ and $\tilde\varphi(a_2)$ must be different. \smallskip Now suppose that there exist injections $\tilde\varphi_x:A_x\to B_x$ (for all $x\in X$) determined by a block code $\Xi:\Lambda^F\to E=\{g_1,g_2,\dots,g_k\}\subset G$, where the elements $g_i$ are written without repetitions, i.e., are different for different indices $i=1,2,\dots,k$. That is, denoting, for each $g\in G$, $$ \varphi_x(g) = \Xi(g(x)|_F), $$ we obtain maps $\varphi_x$ such that $g\mapsto\varphi_x(g)g$ restricted to $A_x$ coincides with $\tilde\varphi_x$. Now, for each $i=1,2,\dots,k$ we define $$ \mathsf A_i=\mathsf A\cap [\Xi^{-1}(g_i)]=\{x\in\mathsf A:\Xi(x|_F)=g_i\}. $$ Clearly, $\{\mathsf A_1,\mathsf A_2,\dots,\mathsf A_k\}$ is a clopen partition of $\mathsf A$. Let $x\in \mathsf A_i$ (for some $i=1,2,\dots, k$). Then $e\in A_x$ and thus $\tilde\varphi_x(e)\in B_x$, i.e., $\tilde\varphi_x(e)(x)\in\mathsf B$. But $\tilde\varphi_x(e)=\varphi_x(e)=\Xi(x|_F)=g_i$. We have shown that $g_i(\mathsf A_i)\subset\mathsf B$. It remains to show that the sets $g_i(\mathsf A_i)$ are disjoint. Suppose that for some $i\neq j$ there exists $x\in X$ belonging to both $g_i(\mathsf A_i)$ and $g_j(\mathsf A_j)$. 
This implies that $g_i^{-1}$ and $g_j^{-1}$ both belong to $A_x$, and $\varphi_x(g_i^{-1}) = g_i$, $\varphi_x(g_j^{-1}) = g_j$. But then $$ \tilde\varphi_x(g_i^{-1})= \varphi_x(g_i^{-1}) g^{-1}_i= g_ig^{-1}_i=e\text{ \ and \ }\tilde\varphi_x(g_j^{-1})=\varphi_x(g_j^{-1}) g^{-1}_j= g_jg^{-1}_j=e, $$ which contradicts the injectivity of $\tilde\varphi_x$ on $A_x$. \smallskip (2) In view of (1), it suffices to show that if a block code $\Xi:\Lambda^F\to E$ determines an injection $\tilde\varphi_{x^*}:A_{x^*}\to B_{x^*}$ then it also determines (as usual, by the formulas $\varphi_x(g)=\Xi(g(x)|_F)$ and $\tilde\varphi_x(a)=\varphi_x(a)a$\,) injections $\tilde\varphi_x:A_x\to B_x$ for all $x\in X$. Fix some $x\in X$ and let $a_1\neq a_2$ belong to $A_x$, i.e., $a_1(x), a_2(x)\in\mathsf A$. Since $x^*$ is a transitive point, a point $g(x^*)$ (for some $g\in G$) is so close to $x$ that: \begin{enumerate} \item[(a)] $a_1g(x^*),\ a_2g(x^*)\in\mathsf A$, \item[(b)] the blocks $g(x^*)|_{Fa_1\cup Fa_2}$ and $x|_{Fa_1\cup Fa_2}$ are equal, \item[(c)] $(\forall f\in Ea_1\cup Ea_2) \ \ fg(x^*)\in\mathsf B \iff f(x)\in\mathsf B$. \end{enumerate} By (a), both $a_1g$ and $a_2g$ belong to $A_{x^*}$. Thus $\tilde\varphi_{x^*}(a_1g)$ and $\tilde\varphi_{x^*}(a_2g)$ are \emph{different} elements of $B_{x^*}$. But $$ \tilde\varphi_{x^*}(a_1g)=\varphi_{x^*}(a_1g)a_1g\text{ \ \ and \ \ }\tilde\varphi_{x^*}(a_2g)=\varphi_{x^*}(a_2g)a_2g, $$ which, after canceling $g$, yields $$ \varphi_{x^*}(a_1g)a_1\neq\varphi_{x^*}(a_2g)a_2. $$ On the other hand, by (b), $x|_{Fa_1} = g(x^*)|_{Fa_1}$, whence $a_1(x)|_F = a_1g(x^*)|_F$, and $$ \varphi_{x}(a_1)=\Xi(a_1(x)|_F) = \Xi(a_1g(x^*)|_F) = \varphi_{x^*}(a_1g), $$ which means that $\tilde\varphi_{x}(a_1)=\varphi_{x}(a_1)a_1=\varphi_{x^*}(a_1g)a_1$. Analogously, $\tilde\varphi_{x}(a_2)=\varphi_{x^*}(a_2g)a_2$. We have shown that $\tilde\varphi_x(a_1)\neq \tilde\varphi_x(a_2)$, i.e., $\tilde\varphi_x$ restricted to $A_x$ is injective. 
Further, the fact that $\tilde\varphi_{x^*}(a_1g)\in B_{x^*}$ yields $$ \mathsf B\ni\tilde\varphi_{x^*}(a_1g)(x^*)=\varphi_{x^*}(a_1g)a_1g(x^*)= \varphi_{x}(a_1)a_1g(x^*). $$ Since $\varphi_x(a_1)a_1\in Ea_1$, by (c) we get $$ \mathsf B\ni\varphi_{x}(a_1)a_1(x)=\tilde\varphi_x(a_1)(x), $$ and hence $\tilde\varphi_x(a_1)\in B_x$. We have shown that $\tilde\varphi_x$ sends $A_x$ injectively to $B_x$. \end{proof} \subsection{Banach density comparison property of a group} \begin{defn} We say that $G$ has the \emph{Banach density comparison property} if, whenever $A\subset G$ and $B\subset G$ are disjoint and satisfy $\underline D(B,A)>0$, then, in the subshift $Y^{AB}$, there exists an injection $\tilde\varphi:A\to B$ determined by a block code (recall that $y^{AB}$ is a transitive point in $Y^{AB}$ and $A=A_{y^{AB}},\ B=B_{y^{AB}}$, so the above condition is the same as that in Theorem \ref{tutka} (2)). \end{defn} \begin{rem} It is easy to see that every finite group has the Banach density comparison property. \end{rem} We can now completely characterize the comparison property of a countable amenable group in terms of the Banach density comparison property. \begin{thm}\label{ujowe} A countable amenable group $G$ has the comparison property if and only if it has the Banach density comparison property. \end{thm} \begin{proof} The theorem holds trivially for finite groups, so we can restrict to infinite groups $G$. Assume that $G$ has the comparison property and let $A,B\subset G$ be disjoint and satisfy $\underline D(B,A)>0$. Then, by Proposition \ref{prop} (2), taking in the subshift $Y^{AB}$ the clopen sets $\mathsf A=[\mathsf 1]$ and $\mathsf B=[\mathsf 2]$, we have $\inf_{\mu\in\mathcal M^{AB}}(\mu(\mathsf B)-\mu(\mathsf A))>0$. By the assumption, $\mathsf A\preccurlyeq\mathsf B$. Now, a direct application of Theorem \ref{tutka} (2) completes the proof of the Banach density comparison property. Let us pass to the proof of the opposite implication.
Suppose that a countable amenable group $G$ having the Banach density comparison property acts on a zero-dimensional\ compact metric space $X$, in which we have selected two clopen sets $\mathsf A$ and $\mathsf B$ satisfying, for each invariant measure\ $\mu$ on $X$, the inequality $\mu(\mathsf A)<\mu(\mathsf B)$. By Remark \ref{disjoint}, we can assume that $\mathsf A$ and $\mathsf B$ are disjoint; and by Remark \ref{from0}, we have $\inf_{\mu\in\mathcal M_G(X)}(\mu(\mathsf B)-\mu(\mathsf A))>0$. This translates to $\inf_{\nu\in\mathcal M_{\mathsf A\mathsf B}}(\nu([\mathsf 2])-\nu([\mathsf 1]))>0$ in the factor subshift $Y_{\mathsf A\mathsf B}$. By Proposition \ref{prop} (1) applied to this subshift, we get $\underline D([\mathsf 2],[\mathsf 1])>0$. Since we intend to use the Banach density comparison property and Theorem~\ref{tutka}~(2), we need to embed $Y_{\mathsf A\mathsf B}$ in a transitive subshift $Y$ (over the alphabet $\{\mathsf0,\mathsf1,\mathsf2\}$). We also desire a transitive point $y^*$ which satisfies $\underline D(B_{y^*},A_{y^*})>0$. Below we present the construction of such a transitive subshift. Choose some positive $\gamma<\underline D([\mathsf 2],[\mathsf 1])$. Fix an increasing (w.r.t. set inclusion) F\o lner sequence\ $(F_n)$ such that $\bigcup_{n=1}^\infty F_n=G$. By choosing a subsequence\ we can assume that $\sum_{i=1}^{n-1}|F_i|<\frac{1-\gamma}2|F_n|$ for every $n$ (in this place we use the assumption that $G$ is infinite). Next, we need to find a sequence\ of blocks $\mathbf B_n\in\{\mathsf0,\mathsf1,\mathsf2\}^{F_n}$ each appearing as $y_n|_{F_n}$ in some $y_n\in Y_{\mathsf A\mathsf B}$, such that every $y\in Y_{\mathsf A\mathsf B}$ is a coordinatewise limit of a subsequence $\mathbf B_{n_k}$ of the selected blocks. Finally, we need to find a sequence\ $g_n$ of elements of $G$ such that the sets $F_nF_n^{-1}F_ng_n$ are disjoint. All the above steps are possible and easy. 
Once they are completed, $y^*$ is defined by the rule: for each $n$ and $f\in F_n$ we put $y^*_{fg_n}=\mathbf B_n (f)$, and for all $g$ outside the union $\bigcup_{n=1}^\infty F_ng_n$, we put $y^*_g = \mathsf 2$. We let $Y$ be the closure of the orbit of $y^*$. The following properties hold: \begin{itemize} \item $Y\supset Y_{\mathsf A\mathsf B}$, \item $\underline D(B_{y^*},A_{y^*})\ge \gamma>0$. \end{itemize} The first property is obvious by construction: each $y\in Y_{\mathsf A\mathsf B}$ is the limit of a sequence of blocks $\mathbf B_{n_k}$, hence it is also the limit of the sequence\ of elements $g_{n_k}(y^*)$, and thus it belongs to $Y$. We need to prove the latter property. By the definition of $\underline D([\mathsf 2],[\mathsf 1])$ in the subshift $Y_{\mathsf A\mathsf B}$, there exist arbitrarily large indices $n_k$ such that \begin{equation}\label{noco} |\{f\in F_{n_k}:y_{fg}=\mathsf 2\}|-|\{f\in F_{n_k}:y_{fg}=\mathsf 1\}|\ge\gamma|F_{n_k}|, \end{equation} for all $y\in Y_{\mathsf A\mathsf B}$ and $g\in G$. It suffices to show an analogous property for $y^*$. Fix some $g\in G$ and observe the block $y^*|_{F_{n_k}g}$. The set $F_{n_k}g$ either does not intersect any of the sets $F_mg_m$ with $m\ge n_k$ or intersects one of them (say $F_{m_0}g_{m_0}$ with $m_0\ge n_k$). In the first case, the block $y^*|_{F_{n_k}g}$ consists mostly of symbols $\mathsf 2$; as all symbols different from $\mathsf 2$ appear in $y^*$ only over the intersection of $F_{n_k}g$ with the union of the sets $F_ig_i$ with $i<n_k$, the percentage of such symbols in $y^*|_{F_{n_k}g}$ is at most $$ \frac1{|F_{n_k}g|}\sum_{i=1}^{n_k-1}|F_ig_i|=\frac1{|F_{n_k}|}\sum_{i=1}^{n_k-1}|F_i|<\frac{1-\gamma}2. $$ Thus, in this case we have \begin{equation}\label{noco1} |\{f\in F_{n_k}:y^*_{fg}=\mathsf 2\}|-|\{f\in F_{n_k}:y^*_{fg}=\mathsf 1\}|\ge\gamma|F_{n_k}|. 
\end{equation} In the latter case, we have $g\in F_{n_k}^{-1}F_{m_0}g_{m_0}$, hence $F_{n_k}g\subset F_{n_k}F_{n_k}^{-1}F_{m_0}g_{m_0}\subset F_{m_0}F_{m_0}^{-1}F_{m_0}g_{m_0}$. By disjointness of the sets $F_nF_n^{-1}F_ng_n$, $F_{n_k}g$ does not intersect any set $F_nF_n^{-1}F_ng_n$ (and hence also $F_ng_n$) with $n\neq m_0$. We will compare the block $y^*|_{F_{n_k}g}$ with the block $y_{m_0}|_{F_{n_k}gg_{m_0}^{-1}}$. We can write $$ F_{n_k}g = (F_{n_k}g\cap F_{m_0}g_{m_0}) \cup (F_{n_k}g\setminus F_{m_0}g_{m_0}), $$ and likewise $$ F_{n_k}gg_{m_0}^{-1} = (F_{n_k}gg_{m_0}^{-1}\cap F_{m_0})\cup (F_{n_k}gg_{m_0}^{-1}\setminus F_{m_0}). $$ By the definition of $y^*$, the block $y^*|_{F_{n_k}g\cap F_{m_0}g_{m_0}}$ is identical to $y_{m_0}|_{F_{n_k}gg_{m_0}^{-1}\cap F_{m_0}}$, while $y^*|_{F_{n_k}g\setminus F_{m_0}g_{m_0}}$ contains just the symbols $\mathsf 2$. Thus the difference $$ |\{f\in F_{n_k}:y^*_{fg}=\mathsf 2\}|-|\{f\in F_{n_k}:y^*_{fg}=\mathsf 1\}| $$ is not smaller than $$ |\{f\in F_{n_k}: (y_{m_0})_{fgg_{m_0}^{-1}}=\mathsf 2\}|-|\{f\in F_{n_k}:(y_{m_0})_{fgg_{m_0}^{-1}}=\mathsf 1\}|. $$ Since $y_{m_0}\in Y_{\mathsf A\mathsf B}$, \eqref{noco} implies that the latter expression is at least $\gamma|F_{n_k}|$. We have proved \eqref{noco1} also in this case. We have proved that $\underline D(B_{y^*},A_{y^*})\ge\gamma>0$. Now, the Banach density comparison property of $G$ implies that there exists an injection $\tilde\varphi$ from $A_{y^*}$ to $B_{y^*}$ determined by a block code. Thus, by Theorem~\ref{tutka} (2), we get $[\mathsf 1]\preccurlyeq[\mathsf 2]$ in the transitive subshift $Y$, and by restriction to a closed invariant\ set the same holds in $Y_{\mathsf A\mathsf B}$, which, by an application of $\pi_{\mathsf A\mathsf B}^{-1}$, translates to $\mathsf A\preccurlyeq\mathsf B$ in~$X$. \end{proof} \subsection{Comparison property via finitely generated subgroups} \begin{lem}\label{supinf} Let $G$ act on a zero-dimensional\ compact metric space $X$. 
Let $\mathsf A,\mathsf B\subset X$ be two disjoint clopen sets. Then $$ \sup_H \inf_{\mu\in\mathcal M_H(X)}(\mu(\mathsf B)-\mu(\mathsf A))=\sup_{H'} \inf_{\mu\in\mathcal M_{H'}(X)}(\mu(\mathsf B)-\mu(\mathsf A))=\inf_{\mu\in\mathcal M_G(X)}(\mu(\mathsf B)-\mu(\mathsf A)) $$ where $H$ ranges over all finitely generated subgroups of $G$ and $H'$ ranges over all subgroups of $G$. \end{lem} \begin{proof} The inequality $\le$ on the left hand side is trivial, while the second inequality $\le$ follows easily from the fact that every measure invariant under the action of $G$ is invariant under the action of $H'$ for any subgroup $H'$ of $G$. We need to prove the last missing inequality. By Proposition \ref{prop} (1), we have $\inf_{\mu\in\mathcal M_G(X)}(\mu(\mathsf B)-\mu(\mathsf A))=\underline D(\mathsf B,\mathsf A)$. Then, for any positive $\delta$, there exists a finite set $F$ such that $$ \frac1{|F|}(|B_x\cap Fg|-|A_x\cap Fg|)>\underline D(\mathsf B,\mathsf A)-\delta $$ for every $x\in X$ and all $g\in G$, in particular for all $g\in H$, where $H$ is the subgroup generated by $F$. Thus, for every $x\in X$, we have $$ \inf_{g\in H}\frac1{|F|}(|B_x\cap Fg|-|A_x\cap Fg|)\ge\underline D(\mathsf B,\mathsf A)-\delta. $$ Since $F\subset H$ and $g\in H$, we have $A_x\cap Fg = (A_x\cap H)\cap Fg$. Note that $A_x\cap H$ equals the set $A_x$ defined for the induced action of $H$ on $X$ (and analogously for $B_x$). Thus, the expression on the left hand side above equals $\underline D_F(B_x,A_x)$ evaluated for the action of $H$ on $X$. 
Now, Lemma \ref{bd} implies $\underline D(B_x,A_x)\ge\underline D(\mathsf B,\mathsf A)-\delta$ for every $x\in X$ (where $\underline D(B_x,A_x)$ is evaluated for the action of $H$ on $X$, and $\underline D(\mathsf B,\mathsf A)$ is evaluated for the action of $G$ on $X$), and Proposition \ref{prop} (1) yields $$ \inf_{\mu\in\mathcal M_H(X)}(\mu(\mathsf B)-\mu(\mathsf A))\ge\underline D(\mathsf B,\mathsf A)-\delta=\inf_{\mu\in\mathcal M_G(X)}(\mu(\mathsf B)-\mu(\mathsf A))-\delta. $$ After applying the supremum over $H$ on the left we can ignore $\delta$ on the right. \end{proof} \begin{prop}\label{44} A countable amenable group $G$ has the comparison property if every finitely generated subgroup $H$ of $G$ has it. \end{prop} \begin{proof} Let $G$ act on a zero-dimensional\ compact metric space $X$ and let $\mathsf A,\mathsf B\subset X$ be two disjoint clopen sets satisfying $\underline D(\mathsf B,\mathsf A)>0$. By the preceding lemma (and by Proposition \ref{prop} (1) used twice), there exists a finitely generated subgroup $H$ of $G$ such that the inequality $\underline D(\mathsf B,\mathsf A)>0$ holds also if $\underline D$ is evaluated for the action of $H$. By the comparison property of $H$, we get that $\mathsf A\preccurlyeq\mathsf B$ in this latter action. But this clearly implies the same subequivalence in the action by $G$. \end{proof} \begin{rem} By the proof of Lemma \ref{supinf}, if $(H_n)$ is an increasing sequence\ of subgroups of $G$ such that $G=\bigcup_{n=1}^\infty H_n$ then $$ \inf_{\mu\in\mathcal M_G(X)}(\mu(\mathsf B)-\mu(\mathsf A))=\lim_{n\to\infty}\inf_{\mu\in\mathcal M_{H_n}(X)}(\mu(\mathsf B)-\mu(\mathsf A)). $$ Thus, in Proposition \ref{44}, the assumption can be weakened to the existence of an increasing sequence\ $(H_n)$ of subgroups of $G$ such that $G=\bigcup_{n=1}^\infty H_n$, and every $H_n$ has the comparison property. \end{rem} \begin{rem} The converse implication in Proposition \ref{44} is a bit mysterious. 
On the one hand, since there are no examples of countable amenable groups without the comparison property, clearly, there is no counterexample for the implication in question. On the other hand, we failed to deduce the comparison property of a subgroup of $G$ from the comparison property of the group $G$. \end{rem} \section{Comparison property of subexponential groups}\label{cztery} This section contains our main result: every subexponential group has the comparison property. The theorem is preceded by a few key definitions and lemmas. \subsection{Correction chains} We now introduce the key tool in the proof of the main result. The term $(\phi,E)$-chain reflects a remote analogy to $(f,\varepsilon)$-chains in topological\ dynamics. Throughout this subsection, we let $A,B$ denote two disjoint subsets of a countable group $G$. \begin{defn}Given a partially defined bijection $\phi:A'\to B'$, where $A'\subset A$ and $B'\subset B$, such that all multipliers $\phi(a)a^{-1}$ belong to a finite set $E\subset G$, by a \emph{$(\phi,E)$-chain of length $2n$} (or briefly just \emph{a chain}) we will mean a sequence\ $\mathbf C=(a_1,b_1,a_2,b_2,\dots,a_n,b_n)$ of \,$2n$ \emph{different} elements alternately belonging to $A$ and $B$, such that $$ \text{for each }i=1,2,\dots,n,\ \ b_i\in Ea_i, $$ and $$ \text{for each }i=1,2,\dots,n-1, \ \ b_i\in B',\ \ a_{i+1}\in A' \text{ \ and \ } b_i = \phi(a_{i+1}) $$ (in particular, $b_i\in Ea_{i+1}$). \end{defn} The $(\phi,E)$-chains starting at a point $a_1\in A\setminus A'$ and ending at a point $b_n \in B\setminus B'$ are of special importance, as they allow one to ``correct'' the mapping and include $a_1$ in the domain and $b_n$ in the range. \begin{defn} A $(\phi,E)$-chain $\mathbf C=(a_1,b_1,a_2,b_2,\dots,a_n,b_n)$ will be called a \emph{$\phi$-correction chain} if $a_1\in A\setminus A'$ and $b_n\in B\setminus B'$. With each $\phi$-correction chain $\mathbf C$ we associate the \emph{correction of $\phi$ along $\mathbf C$}. 
The corrected map, denoted by $\phi^{\mathbf C}$, maps $A'\cup\{a_1\}$ onto $B'\cup\{b_{n}\}$ and is defined as follows: for each $i=1,2,\dots,n$ we let $$ \phi^{\mathbf C}(a_i) = b_i, $$ and for all other points $a\in A'$ we let $\phi^{\mathbf C}(a)=\phi(a)$. \end{defn} The correction may be visualized as follows (solid arrows in the top row represent the map $\phi$ and in the bottom row they represent $\phi^\mathbf C$; the dashed arrows represent the ``$E$-proximity relation'' $b\in Ea$): \begin{gather*} a_1\dashrightarrow b_1\longleftarrow a_2\dashrightarrow b_2\longleftarrow a_3\ \dots\ b_{n-1}\longleftarrow a_n\dashrightarrow b_n\\ \Downarrow\\ a_1\longrightarrow b_1\dashleftarrow a_2\longrightarrow b_2\dashleftarrow a_3\ \dots\ b_{n-1}\dashleftarrow a_n\longrightarrow b_n \end{gather*} (the dashed arrows become solid, the solid arrows are removed from the map). Notice that $\phi^{\mathbf C}$ still has all its multipliers $\phi^{\mathbf C}(a)a^{-1}$ in the set $E$. \smallskip The problem with the correction chains is that the corresponding corrections of $\phi$ usually cannot be applied simultaneously. The correction chains may collide with each other, i.e., pass through common points, in which case the corresponding corrections rule each other out. To manage this problem we need to learn more about the possible collisions and then carefully select a family of mutually non-colliding correction chains. The details of this selection are given below. \begin{defn} Two $\phi$-correction chains \emph{collide} if they have a common point.
\end{defn} Since the starting points of $\phi$-correction chains belong to $A\setminus A'$, the ending points belong to $B\setminus B'$, other odd points (counting along the chain) belong to $A'$, other even points belong to $B'$, where the above four sets are disjoint, and each even point is tied to the following odd point by the inverse map $\phi^{-1}$, each collision between two $\phi$-correction chains, say $\mathbf C=(a_1,b_1,a_2,b_2,\dots,a_n,b_n)$ and $\mathbf C'=(a'_1,b'_1,a'_2,b'_2,\dots,a'_m,b'_m)$, is of one of the following three types: \begin{itemize} \item \emph{common start}: $a_1=a'_1$, \item \emph{common end}: $b_n=b'_m$, \item all other collisions occur in pairs $(b_i,a_{i+1})=(b'_j,a'_{j+1})$ for some $1\le i<n$ and $1\le j<m$. \end{itemize} Of course, two chains may have more than one collision. Note that the definition of a $(\phi,E)$-chain eliminates the possibility of ``self-collisions'' in one chain. \begin{defn} Given a $(\phi,E)$-chain $\mathbf C=(a_1,b_1,a_2,b_2,a_3,\dots,a_n,b_n)$, the sequence\ $\mathbf n(\mathbf C)=(p_1,q_1,p_2,q_2,\dots,p_{n-1},q_{n-1},p_n)$, where $p_i=b_ia_i^{-1}$ $(i=1,2,\dots,n)$ and $q_i=b_ia_{i+1}^{-1}$ $(i=1,2,\dots,n-1)$, will be called the \emph{name} of $\mathbf C$. \end{defn} Notice that the name is always a sequence\ of elements of $E$, of length $2n-1$. \begin{lem}\label{shorter} If two different $\phi$-correction chains have the same name (note that their lengths are then equal) and collide with each other then each of them collides also with a strictly shorter $\phi$-correction chain. \end{lem} \begin{proof} It is obvious that if two $\phi$-correction chains with the same name, say $$ \mathbf C=(a_1,b_1,a_2,b_2,a_3,\dots,a_n,b_n), \ \ \mathbf C'=(a'_1,b'_1,a'_2,b'_2,a'_3,\dots,a'_n,b'_n), $$ have the common start $a_1=a_1'$ or the common end $b_n=b_n'$, or a common pair $(b_i, a_{i+1})=(b'_i,a'_{i+1})$ with the same index $i=1,2,\dots,n-1$, then the chains are equal. 
The only possible collision between two different $\phi$-correction chains with the same name is that they have a common pair $(b_i,a_{i+1})=(b'_j,a'_{j+1})$ with $i\neq j$. Let $i_0$ be the smallest index appearing in the role of $i$ or $j$ in the collisions of $\mathbf C$ with $\mathbf C'$ and assume that it plays the role of $i$ (with some corresponding $j$). Then $$ (a_1,b_1,a_2,b_2,a_3,\dots,a_{i_0},b_{i_0},a_{i_0+1},b'_{j+1},a'_{j+2},\dots,a'_n,b'_n) $$ is a $\phi$-correction chain (it has no self-collisions) of length strictly smaller than $2n$, and clearly it collides with both $\mathbf C$ and $\mathbf C'$. \end{proof} We enumerate $E$ (arbitrarily) as $\{g_1,g_2,\dots,g_k\}$. We define $$ \mathbf N=\bigcup_{n=1}^\infty E^{\times2n-1}, $$ which means the disjoint union of the $(2n-1)$-fold Cartesian products of copies of $E$. This set can be interpreted as the collection of all ``potential'' names of the correction chains of any partially defined bijection from $A$ to $B$ with the multipliers in $E$. The enumeration of $E$ induces the following linear order on $\mathbf N$: $$ \mathbf n<\mathbf n'\ \ \iff\ \ |\mathbf n|<|\mathbf n'| \ \vee \ (\,|\mathbf n|=|\mathbf n'| \ \wedge\ \mathbf n<\mathbf n'\,), $$ where $|\mathbf n|$ denotes the length of $\mathbf n$ and the last inequality is with respect to the lexicographical order on $E^{\times|\mathbf n|}$. \begin{defn} A $\phi$-correction chain $\mathbf C$ is \emph{minimal} if it does not collide with any other $\phi$-correction chain whose name precedes $\mathbf n(\mathbf C)$ in the above defined order on~$\mathbf N$. \end{defn} \begin{lem}\label{minnox} Minimal $\phi$-correction chains do not collide with each other. \end{lem} \begin{proof} If two $\phi$-correction chains with different names collide, one of them is not minimal. If two $\phi$-correction chains with the same name collide, by Lemma \ref{shorter} none of them is minimal. 
\end{proof} \begin{lem}\label{minch} Assume that $E$ is a symmetric set containing the unity $e$ and let $a_1\in A\setminus A'$. If there is a $\phi$-correction chain $\mathbf C$ of length $2n$, starting at $a_1$, then there exists a minimal $\phi$-correction chain of length at most $2n$ contained in the finite set $E^{s(n)}a_1$ (where $s(n)$ depends only on $|E|$ and $n$). \end{lem} \begin{proof} If $\mathbf C$ itself is not minimal then it collides with a $\phi$-correction chain $\mathbf C_1$ with $\mathbf n(\mathbf C_1)<\mathbf n(\mathbf C)$ in $\mathbf N$. Clearly, $\mathbf C_1$ is entirely contained in $E^{4n}a_1$. If $\mathbf C_1$ is not minimal, then it collides with some $\mathbf C_2$, whose name precedes that of $\mathbf C_1$ (and hence also that of $\mathbf C$). Now, $\mathbf C_2$ is contained in $E^{6n}a_1$. This recursion may be repeated at most $\sigma_n-1$ times, where $\sigma_n=\sum_{i=1}^n|E|^{2i-1}$, because $\sigma_n-1$ bounds the number of names preceding $\mathbf n(\mathbf C)$. So, before $\sigma_n$ steps are performed, a minimal $\phi$-correction chain must occur. Its length is at most $2n$ and it is entirely contained in $E^{2n\sigma_n}a_1$. \end{proof} It is in the following lemma that subexponentiality of the group comes into play. \begin{lem}\label{key} Let $G$ be a subexponential group. Let $\mathcal{T}$ be a tiling of $G$ and let $\mathcal S$ denote the set of all shapes of $\mathcal{T}$. Denote $E=\bigcup_{S\in\mathcal S}SS^{-1}$. Let $A,B$ be disjoint subsets of $G$ satisfying, for some $\varepsilon>0$ and every tile $T$ of $\mathcal{T}$, the inequality $$ |B\cap T|-|A\cap T|>\varepsilon|T|. $$ Let $N\ge 1$ be such that for any $n\ge N$, $$ \frac1n\log|(E^2)^n|<\log(1+\varepsilon) $$ (by the subexponentiality assumption, since $E^2$ is finite, such an $N$ exists).
Then, for any partially defined bijection $\phi:A'\to B'$ with $A'\subset A,\ B'\subset B$, such that all multipliers $\phi(a)a^{-1}$ are in $E$, for every point $a_1\in A\setminus A'$, there exists a $\phi$-correction chain of length at most $2N$, starting at $a_1$ (and ending in $B\setminus B'$). \end{lem} \begin{proof} For each tile $T$ of $\mathcal{T}$ we have $$ \frac{|B\cap T|}{|A\cap T|}\ge\frac{\varepsilon|T|}{|A\cap T|}+1\ge 1+\varepsilon $$ (including the case when the denominator equals $0$). Clearly, any \emph{$\mathcal{T}$-saturated} finite set $Q$, i.e., a union of tiles of $\mathcal{T}$, also satisfies $$ \frac{|B\cap Q|}{|A\cap Q|}\ge1+\varepsilon. $$ For a set $P\subset G$, we define the \emph{$\mathcal{T}$-saturation} $P^\mathcal{T}$ of $P$ as the union of all tiles intersecting $P$: $$ P^\mathcal{T}= \bigcup\{T\in\mathcal{T}:P\cap T\neq\emptyset\}. $$ Obviously, $P^\mathcal{T}\subset EP$. Consider a point $a_1\in A\setminus A'$ (if $A\setminus A'=\emptyset$ then the statement of the lemma holds trivially). Let $T$ be the tile of $\mathcal{T}$ containing $a_1$, i.e., $T=\{a_1\}^\mathcal{T}$. Since $T$ contains $a_1$ (and thus $|A\cap T|\ge 1$), we have $|B\cap T|\ge 1+\varepsilon$. There exist $(\phi,E)$-chains of length $2$ from $a_1$ to every $b\in B\cap T$. Now, there are two options: \begin{itemize} \item either at least one of these chains is a $\phi$-correction chain (and then the construction is finished), \item or none of these chains is a $\phi$-correction chain, i.e., $B'\cap T=B\cap T$. \end{itemize} In the latter option we have $|B'\cap T|=|B\cap T|\ge1+\varepsilon$, i.e., denoting $$ P_1=\{a_1\} \text{ \ and \ } Q_1=T=P_1^{\mathcal{T}}, $$ we have $$ |B'\cap Q_1|\ge1+\varepsilon. $$ From now on we continue by induction.
Suppose that for some $n\ge 1$ we have defined a $\mathcal{T}$-saturated set $Q_n$ such that \begin{enumerate} \item for every $b\in B\cap Q_n$ there exists a $(\phi,E)$-chain of length at most $2n$ from $a_1$ to $b$, \item $B\cap Q_n= B'\cap Q_n$ (i.e., there are no $\phi$-correction chains starting at $a_1$ and ending in $Q_n$), and \item $|B'\cap Q_n|\ge(1+\varepsilon)^n$. \end{enumerate} Then we define $P_{n+1}=\phi^{-1}(Q_n)=\phi^{-1}(B'\cap Q_n)$. Bijectivity of $\phi$ implies that $|P_{n+1}|\ge(1+\varepsilon)^n$. Let $Q_{n+1}$ denote the $\mathcal{T}$-saturation $P_{n+1}^{\mathcal{T}}$. Every point $b\in B\cap Q_{n+1}$ is of the form $g\phi^{-1}(b')$ with $g\in E$ and $b'\in B'\cap Q_n$, and, by (1), $b'$ can be reached from $a_1$ by a $(\phi,E)$-chain of length at most $2n$. Thus there exists a $(\phi,E)$-chain of length at most $2(n+1)$ from $a_1$ to every $b\in B\cap Q_{n+1}$. There are two options: \begin{itemize} \item either at least one of these chains is a $\phi$-correction chain (then the construction is finished), \item or $B\cap Q_{n+1}=B'\cap Q_{n+1}$. \end{itemize} Suppose the latter option occurs. Since $Q_{n+1}$ is $\mathcal{T}$-saturated, we have $$ |B'\cap Q_{n+1}|=|B\cap Q_{n+1}|\ge(1+\varepsilon)|A\cap Q_{n+1}|\ge(1+\varepsilon)|P_{n+1}|\ge(1+\varepsilon)^{n+1}. $$ Now, (1)--(3) are fulfilled for $n+1$, so the induction can be continued. Notice that for each $n$, $Q_n\subset EP_n$ and, by symmetry of the set $E$, $P_{n+1}\subset EQ_n$. As a consequence, we have $Q_{n+1}\subset E^{2n+1} a_1\subset (E^2)^{n+1} a_1$, and if the latter of the above options occurs, we have $$ |(E^2)^{n+1}|\ge|Q_{n+1}|\ge|B'\cap Q_{n+1}|\ge(1+\varepsilon)^{n+1}, $$ which implies that $n+1<N$ by the assumption. So, $n=N-2$ is the last integer for which nonexistence of $\phi$-correction chains of length $2(n+1)$ is possible. In the worst-case scenario, a correction chain of length $2N$ must already exist.
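The role of the growth condition defining $N$ can be illustrated numerically. The following toy computation is our own sketch, not part of the argument: we take $G=\mathbb Z$ with $E^2=\{-2,\dots,2\}$, so that $|(E^2)^n|=4n+1$ grows linearly and a horizon $N$ exists for every $\varepsilon>0$, whereas an exponentially growing "growth function" (the case excluded by subexponentiality) admits no horizon at all.

```python
import math

# Toy illustration (not part of the proof): search for the horizon N of
# the lemma, i.e., the smallest N with (1/n) log growth(n) < log(1+eps)
# for all n >= N (checked up to a cutoff n_max).  The default growth
# function 4n + 1 models |(E^2)^n| for G = Z with E^2 = {-2,...,2}.
def horizon(eps, growth=lambda n: 4 * n + 1, n_max=10_000):
    bad = [n for n in range(1, n_max + 1)
           if math.log(growth(n)) / n >= math.log(1 + eps)]
    return (max(bad) + 1) if bad else 1

print(horizon(0.5))                                      # N = 9 for eps = 1/2
print(horizon(0.5, growth=lambda n: 2 ** n, n_max=50))   # exponential growth:
                                                         # every n up to n_max fails
```

For linear growth, $\frac1n\log(4n+1)$ is decreasing, so the returned value is a genuine horizon; for the exponential growth function $2^n$ the quantity stays at $\log 2>\log 1.5$ and the search runs past the cutoff, which is exactly why subexponentiality of $G$ is essential.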
\end{proof} \begin{rem}It is absolutely crucial in the proof that we are using a tiling, not a quasitiling leaving some part of $G$ uncovered by the tiles. In such a case, $a_1$ might be uncovered by the tiles; moreover, we would have no control over how many elements of $P_{n+1}=\phi^{-1}(Q_n)$ are ``lost'' in the untiled part of $G$. \end{rem} \subsection{The main result} \begin{thm}\label{main} Every subexponential group $G$ has the comparison property. \end{thm} \begin{proof} By Proposition \ref{44}, it suffices to prove the theorem for finitely generated groups $G$ with subexponential growth, and Theorem \ref{ujowe} allows us to focus on the Banach density comparison property. So, let $G$ be a finitely generated group with subexponential growth. Let $A,B\subset G$ be disjoint and satisfy $\underline D(B,A)>0$. All we need is to construct, in the subshift $Y^{AB}$, an injection $\tilde\varphi:A\to B$ determined by a block code. By Lemma \ref{bd}, there exists a finite set $F\subset G$ such that $\underline D_F(B,A)>5\varepsilon$ for some positive $\varepsilon$. By Theorem~\ref{ourtilings}, there exists an $(F,\varepsilon)$-invariant\ tiling $\mathcal{T}$ of $G$. We let $\mathcal S$ denote the set of all shapes of $\mathcal{T}$. By Lemma \ref{bdc}, for every shape $S$ of $\mathcal{T}$ we have $\underline D_S(B,A)>\varepsilon$, in particular, $$ |B\cap T|-|A\cap T|>\varepsilon|T|, $$ for every tile $T$ of $\mathcal{T}$. Let $E=\bigcup_{S\in\mathcal S}SS^{-1}$ and say $E=\{g_1, g_2, \dots, g_k\}$. We will build the desired injection $\tilde\varphi:A\to B$ in a series of steps.
The first approximation of $\tilde\varphi$ is the map $\phi_1$ defined on a subset of $A$ by a procedure similar to that used in the proof of Lemma \ref{compisweakcomp}: we let $A_1=A\cap g_1^{-1}(B)$ and $B_1=g_1(A_1)\subset B$, and then, for each $j=2,3,\dots,k$, we define inductively $$ A_j = \Bigl(A\setminus\bigcup_{i=1}^{j-1}A_i\Bigr)\cap g_j^{-1}\left(B\setminus\bigcup_{i=1}^{j-1} B_i\right)\ \text{and}\ B_j= g_j A_j\subset B. $$ On each set $A_j$ (with $j=1,2,\dots,k$), $\phi_1$ is defined as the multiplication on the left by $g_j$. We let $A'_1=\bigcup_{i=1}^k A_i\subset A$ and $B'_1=\bigcup_{i=1}^k B_i\subset B$ denote the domain and range of $\phi_1$, respectively. The rule behind the construction of $\phi_1$ is as follows: for each $a\in A$ we first check whether $g_1a\in B$ and for those $a$ for which this is true, we assign $\phi_1(a)=g_1 a$. For other points $a$ we check whether $g_2a\in B$ and, unless $g_2a$ has already been assigned as $\phi_1(a')$ (for some $a'\in A$) in the previous step, we assign $\phi_1(a)=g_2 a$. And so on: in step $i$ we assign $\phi_1(a)=g_i a$ if $g_ia\in B$, unless $g_ia$ has already been assigned as $\phi_1(a')$ (for some $a'\in A$) in steps $1,2,\dots,i-1$. We stop when $i=k$. From this description it is easy to see that $\phi_1$ is an injection from $A_1'$ into $B_1'\subset B$. In fact, one also sees that if $a_1,a_2\in A$ and $$ a_1(y^{AB})|_{E^k} = a_2(y^{AB})|_{E^k}, $$ then either $\phi_1(a_1)a_1^{-1}=\phi_1(a_2)a_2^{-1}$ or both values $\phi_1(a_1)$ and $\phi_1(a_2)$ are undefined. Using the criterion \eqref{clr} (for a one-element family $\mathcal A$), we conclude that $\phi_1$ restricted to its domain $A_1'$ is determined by a block code (with the coding horizon $E^k$). We remark that the block code determines some extension of $\phi_1$ to the whole group, but we do not care about the values of the code outside $A_1'$ and we still treat $\phi_1$ as undefined outside $A_1'$.
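The greedy construction of $\phi_1$ just described is completely finitary, and it may help to see it in code. The following is our own rough sketch, not part of the proof: the group $G$ is replaced by $\mathbb Z$ written additively, $A,B$ are finite disjoint subsets, and the enumeration $E=[g_1,\dots,g_k]$ is processed pass by pass exactly as in the rule above.

```python
# A finitary sketch (our illustration only) of the first approximation
# phi_1, with G replaced by Z (written additively) and A, B finite
# disjoint subsets of Z.  E is the fixed enumeration g_1, ..., g_k of
# the multipliers.
def greedy_phi1(A, B, E):
    """In pass j, map every still-unmatched a in A to g_j + a whenever
    that point lies in B and has not yet been used as an image."""
    phi, used = {}, set()
    for g in E:                       # passes j = 1, 2, ..., k
        for a in sorted(A):
            if a in phi:
                continue              # a was matched in an earlier pass
            b = g + a                 # "g_j a" in additive notation
            if b in B and b not in used:
                phi[a] = b
                used.add(b)
    return phi

A = {0, 3, 6, 9}
B = {1, 2, 4, 7, 10, 11}
phi1 = greedy_phi1(A, B, E=[1, 2])
# phi1 == {0: 1, 3: 4, 6: 7, 9: 10}: an injection with all multipliers in E
```

Injectivity is immediate because every image is marked as used; what the proof additionally needs, and what this toy code does not capture, is that the decision for each $a$ depends only on the window $E^k a$ of the point $y^{AB}$, i.e., that $\phi_1$ is determined by a block code.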
If $A'_1=A$ (which is rather unlikely in infinite groups), then the proof is finished. Otherwise we continue the construction involving the correction chains and the associated corrections. By Lemma \ref{key}, for an appropriate $N$, every element $a_1\in A\setminus A_1'$ is the start of a $\phi_1$-correction chain of length at most $2N$. Next, by Lemma \ref{minch}, within $E^{s(N)}a_1$ there is a minimal $\phi_1$-correction chain of length at most $2N$. Finally, by Lemma \ref{minnox}, minimal $\phi_1$-correction chains do not collide with each other. Thus we can perform simultaneous corrections along all minimal $\phi_1$-correction chains of lengths at most $2N$. The corrected map will be denoted by $\phi_2$. For each $a\in A\setminus A'_1$, the point $a$ itself has perhaps not been included in the domain $A'_2$ of $\phi_2$, but at least one new point from $E^{s(N)}a\cap (A\setminus A_1')$ has been included in $A'_2$. Clearly, $\phi_2$ sends $A'_2$ into $B$ and the multipliers of $\phi_2$ are contained in $E$. We will now argue why $\phi_2$ is determined by a block code. Notice that given $a\in A$, finding all $\phi_1$-correction chains of lengths bounded by $2N$ starting at or passing through $a$ requires examining the values of $\phi_1$ at most in the set $E^{2N}a$. Then, given such a chain, we can decide whether it is minimal or not by examining all $\phi_1$-correction chains of lengths bounded by $2N$ which collide with it. For this, viewing the values of $\phi_1$ on the set $E^{4N}a$ suffices. Now suppose that $a_1,a_2\in A$ and $$ a_1(y^{AB})|_{E^{k+4N}}=a_2(y^{AB})|_{E^{k+4N}}. $$ Since $E^k$ is the coding horizon for $\phi_1$, we have $$ a_1(\bar\phi_1)|_{E^{4N}}=a_2(\bar\phi_1)|_{E^{4N}}, $$ where $\bar\phi_1$ is defined as the symbolic element over the alphabet $E\cup\{\emptyset\}$ by the rule $$ (\bar\phi_1)_g=\begin{cases}\phi_1(g)g^{-1}& \text{if }g\in A'_1,\\ \emptyset &\text{otherwise,}\end{cases} $$ ($g\in G$).
This implies that $(r_1 a_1, s_1 a_1, r_2 a_1, s_2 a_1, \dots, r_n a_1, s_n a_1)$ is a (minimal) $\phi_1$-correction chain if and only if $(r_1 a_2, s_1 a_2, r_2 a_2, s_2 a_2, \dots, r_n a_2, s_n a_2)$ is a (minimal) $\phi_1$-correction chain, whenever $n\le N$ and all $r_i$ and $s_i$ belong to $E^{2N}$. Hence either both $a_1$ and $a_2$ lie on minimal $\phi_1$-correction chains of length at most $2N$, or both do not. In the latter case, since $a_1(y^{AB})|_{E^{k}}=a_2(y^{AB})|_{E^{k}}$, either $\phi_2(a_1)a_1^{-1}=\phi_1(a_1)a_1^{-1}=\phi_1(a_2)a_2^{-1}=\phi_2(a_2)a_2^{-1}$ or both $\phi_2(a_1)$ and $\phi_2(a_2)$ are undefined. In the former case, the lengths and names of the two minimal $\phi_1$-correction chains are the same; moreover, $a_1$ and $a_2$ occupy equal positions in the corresponding chains. This implies that the multipliers $\phi_2(a_1)a_1^{-1}$ and $\phi_2(a_2)a_2^{-1}$ (although different from those for $\phi_1$) will both be defined and equal. So, $\phi_2$ is indeed determined by a block code. The above process can now be repeated: the next map $\phi_3$ is obtained by performing simultaneous corrections along all minimal $\phi_2$-correction chains of lengths not exceeding $2N$. Again, for every $a\in A\setminus A'_2$, at least one point from the set $E^{s(N)}a$ is included in the domain $A'_3$ of $\phi_3$ (the intersection $(A\setminus A'_2)\cap E^{s(N)}a$ is nonempty as it contains $a$, and often $a$ will be the new point included in $A'_3$). By the same arguments as before, the map $\phi_3$ is an injection from $A'_3$ into $B$ determined by a block code (with the coding horizon $E^{k+8N}$), and the multipliers of $\phi_3$ remain in $E$. We claim that after a finite number $m$ of analogous steps all points of $A$ will be included in the domain of $\phi_m$, i.e., $\phi_m$ will be the desired injection $\tilde\varphi$ from $A$ into $B$.
Indeed, a point $a\in A\setminus A_1'$ remains outside the domains of all the maps $\phi_i$ with $i\le m$ only if the number of all other points (except $a$) in $(A\setminus A_1')\cap E^{s(N)}a$ is at least $m-1$ (because in each step at least one new point from this set is included in the domain). This is clearly impossible for $m>|E^{s(N)}|$, hence the desired finite number $m$ exists. By induction, all the maps $\phi_i$ ($i=1,2,\dots,m$) are determined by block codes (the coding horizon for the code which determines $\tilde\varphi=\phi_m$ is contained in the set $E^{k+4Nm}$). This ends the proof. \end{proof} \subsection{Two questions} As we have already mentioned, the problem whether all countable amenable groups have the comparison property is rather difficult. On the other hand, based on the experience with subexponential groups, one might hope that other additional assumptions might help as well. We formulate two relaxed, yet still open, versions of Question \ref{3.7}. \begin{ques} \begin{enumerate} \item Do all countable amenable residually finite groups have the comparison property? \item Do all countable amenable left (right) orderable groups have the comparison property? \end{enumerate} \end{ques} \section{Free actions and tilings}\label{piec} In this section we provide an application of comparison to the existence of so-called \emph{dynamical tilings} with good F\o lner properties in free actions on zero-dimensional\ compact metric spaces. At the beginning of the paper, we have explained that the existence of such tilings is very important in the study of some areas, for example, in building the theory of symbolic extensions for actions of countable amenable groups. Such tilings are guaranteed to exist in $\mathbb Z$-actions, which follows from various versions of marker theorems (see e.g. \cite{Bo}). But for actions of general countable amenable groups, just like comparison, the existence of dynamical tilings remains an open problem.
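For $\mathbb Z$-actions the marker-based construction can be made quite concrete: reading off the successive visits of an orbit to a fixed clopen marker set cuts $\mathbb Z$ into tiles, and the assignment is automatically equivariant. The following sketch is our own illustration on finite data (the function names are hypothetical, not from the literature); in additive notation the equivariance below matches the requirement $\mathcal{T}_{g(x)}=\{Tg^{-1}:T\in\mathcal{T}_x\}$ appearing in the definition that follows.

```python
# A sketch (our illustration only) of a marker-based tiling for Z-actions:
# given the set M of times at which an orbit visits a fixed "marker"
# clopen set, Z is cut into half-open tiles between consecutive visits.
def tiles_from_markers(markers):
    """Tiles [m_i, m_{i+1}) between consecutive marker times."""
    m = sorted(markers)
    return [tuple(range(m[i], m[i + 1])) for i in range(len(m) - 1)]

def shift(markers, g):
    """Marker times of the orbit shifted by g (the orbit of g(x))."""
    return {m - g for m in markers}

M = {0, 3, 5, 9}
T = tiles_from_markers(M)                      # tiles (0,1,2), (3,4), (5,6,7,8)
T_shifted = tiles_from_markers(shift(M, 2))
# equivariance: the tiles of the shifted orbit are the original tiles shifted by -2
assert T_shifted == [tuple(t - 2 for t in tile) for tile in T]
```

The toy computation only exhibits the mechanism; the actual difficulty for general amenable groups, addressed in this section, is producing markers whose return patterns yield tiles with prescribed F\o lner properties.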
\begin{defn}\label{dqt} Let a countable amenable group $G$ act on a zero-dimensional\ compact metric space $X$ and let $\mathcal{S}$ be a finite family of finite subsets of $G$ (containing the unity $e$). We say that the action \emph{admits a dynamical quasitiling with shapes in $\mathcal{S}$} if there exists a map $x\mapsto \mathcal{T}_x$, which assigns to every $x\in X$ a quasitiling $\mathcal{T}_x$ of $G$ with shapes in $\mathcal{S}$ (see Definition \ref{quasi}), and $x\mapsto \mathcal{T}_x$ is a factor map from $X$ onto a symbolic dynamical system over the alphabet $\Delta=\mathcal{S}\cup\{\mathsf 0\}$, where $\mathcal{T}_x$ is viewed as a point in $\Delta^G$ (see the comments below Definition \ref{quasi}). We say that a dynamical quasitiling is $(K,\varepsilon)$-invariant, $\varepsilon$-disjoint, disjoint, $\alpha$-covering, or that it is a \emph{dynamical tiling} if $\mathcal{T}_x$ has the respective property for every $x$. We will say that the action has \emph{the tiling property} if, for every finite set $K\subset G$ and every $\varepsilon>0$, it admits a $(K,\varepsilon)$-invariant dynamical tiling. \end{defn} The fact that the dynamical quasitiling $x\mapsto\mathcal{T}_x$ is a topological\ factor of the action of $G$ on $X$ is equivalent to the conjunction of the following two statements: \begin{enumerate} \item for any finite set $F$ of $G$, if $x$ and $x'$ are sufficiently close to each other in $X$, then the set $F$ is tiled by $\mathcal{T}_x$ and by $\mathcal{T}_{x'}$ in the same way, \item for each $g\in G$ we have $\mathcal{T}_{g(x)}=\{Tg^{-1}:T\in\mathcal{T}_x\}$. \end{enumerate} In \cite{DH} the following result is proved: \begin{thm}{\cite[Corollary 3.5]{DH}}\label{quasitilings} Let a countable amenable group $G$ act freely on a zero-dimensional\ compact metric space $X$. 
For any finite set $K\subset G$ and any $\varepsilon>0,\ \delta>0$ the action admits a $(K,\varepsilon)$-invariant, disjoint, $(1\!-\!\delta)$-covering dynamical quasitiling $x\mapsto\mathcal{T}_x$. \end{thm} We will now demonstrate a strong connection between comparison and the tiling property of actions. \begin{thm}\label{dt} Let a countable amenable group $G$ act freely on a zero-dimensional\ compact metric space $X$. Then the action admits comparison if and only if it has the tiling property. The backward implication holds without assuming that the action is free. \end{thm} \begin{proof} We need to consider only infinite groups $G$. First, we will show that for any finite $K\subset G$ and $1>\varepsilon>0$, the free action admits a $(K, \varepsilon)$-invariant dynamical tiling. By Theorem \ref{quasitilings}, the free action admits a $(K,\frac\eps2)$-invariant, disjoint, $(1\!-\!\delta)$-covering dynamical quasitiling $x\mapsto \mathcal{T}'_x$, where $\delta> 0$ is so small that $\frac{2\delta}{1-\delta}<\frac\varepsilon{2|K|}$. We denote by $\mathcal{S}'$ the collection of all shapes used by this quasitiling. We can assume that each shape $S\in\mathcal{S}'$ has cardinality so large that the interval $(\frac{2\delta}{1-\delta}|S|,\frac\varepsilon{2|K|}|S|)$ contains an integer $i_S$ (if this fails, we can choose a $(K',\frac\eps2)$-invariant, disjoint, $(1\!-\!\delta)$-covering dynamical quasitiling, where $K'\supset K$, and clearly this quasitiling is also $(K,\frac\eps2)$-invariant, while its shapes have cardinalities at least $\frac{|K'|}2$, as large as we wish; here we use infiniteness of $G$). In each shape $S\in\mathcal S'$ we select (arbitrarily) a subset $B_S$ of cardinality $i_S$. Given $x\in X$, we now consider two subsets of $G$: $$ A_x= G\setminus \bigcup\mathcal{T}'_x\text{ \ \ and \ \ }B_x = \bigcup_{(S,c)\in\mathcal{T}'_x}B_Sc. $$ Clearly, $\overline D(A_x)=1-\underline D(\bigcup\mathcal{T}'_x)\le\delta$. 
Using Lemma~\ref{1.5} we easily get $\underline D(B_x)>(1-\delta)\cdot \frac{2\delta}{1-\delta} =2\delta$. By Corollary \ref{coro}, $\underline D(B_x,A_x)>\delta$. Define two subsets of $X$: $$ \mathsf A=\{x:e\in A_x\}\text{ \ \ and \ \ }\mathsf B=\{x:e\in B_x\}. $$ By continuity of the assignment $x\mapsto \mathcal{T}'_x$, and since one can determine whether $e\in A_x$ (and likewise, whether $e\in B_x$) from the symbolic representation of $\mathcal{T}'_x$ (which is a subshift over the alphabet $\Delta'=\mathcal S'\cup\{0\}$) by viewing the symbols in a bounded horizon $\bigcup_{S\in\mathcal{S}'}S^{- 1}$ (independent of $x$) around $e$, both sets $\mathsf A$ and $\mathsf B$ are clopen (and obviously disjoint). The notation $A_x,\,B_x$ is now consistent with \eqref{kiki} and \eqref{kiko} for the sets $\mathsf A,\,\mathsf B$, respectively, hence, by Proposition \ref{prop} (1) (the last equality) we obtain $\underline D(\mathsf B,\mathsf A)\ge\delta>0$. The comparison property of the action on $X$ implies that $\mathsf A\preccurlyeq\mathsf B$. Since we prefer to work with a symbolic system in place of the zero-dimensional\ system $X$, we will now build a symbolic factor $\hat X$ of $X$ carrying the minimum information needed to restore both the dynamical quasitiling $x\mapsto \mathcal{T}_x'$ and the subequivalence $\mathsf A\preccurlyeq\mathsf B$. Let $\{\mathsf A_1,\mathsf A_2,\dots,\mathsf A_k\}$ and $g_1,g_2,\dots, g_k$ be, respectively, the clopen partition of $\mathsf A$ and the associated elements of $G$ as in the definition of subequivalence. 
We define a factor map $\pi:X\to\hat X\subset{\hat\Delta}^G$, where $\hat\Delta=\Delta'\times\{\mathsf 0,\mathsf 1,\dots,\mathsf k,\mathsf{k+1}\}$, as follows: $$ (\pi(x))_g = \begin{cases} ((\mathcal{T}_x')_g,\mathsf i)&\text{ \ \ if \ }g(x)\in \mathsf A_i, \ \ i=1,2,\dots,k\\ ((\mathcal{T}_x')_g,\mathsf{k+1})&\text{ \ \ if \ }g(x)\in\mathsf B\\ ((\mathcal{T}_x')_g,\mathsf{0})&\text{ \ \ if \ }g(x)\notin\mathsf A\cup\mathsf B.\\ \end{cases} $$ Denote $\hat{\mathsf A}_i=[\cdot,\mathsf i]$, $\hat{\mathsf A}=\bigcup_{i=1}^k[\cdot,\mathsf i]$ and $\hat{\mathsf B}=[\cdot,\mathsf{k+1}]$. Clearly, $\pi^{-1}(\hat{\mathsf A})=\mathsf A$, $\pi^{-1}(\hat{\mathsf A}_i)=\mathsf A_i$ ($i=1,2,\dots,k$) and $\pi^{-1}(\hat{\mathsf B})=\mathsf B$, which easily implies that $\hat{\mathsf A}\preccurlyeq\hat{\mathsf B}$ in the subshift $\hat X$, and the subequivalence involves the same elements $g_1,g_2,\dots, g_k$. Also, for any $\hat x\in \hat X$ all quasitilings $\mathcal{T}'_x$ with $x\in\pi^{-1}(\hat x)$ coincide. Hence, the subshift $\hat X$ admits a dynamical quasitiling $\hat x\mapsto\mathcal{T}'_{\hat x}$, where $\mathcal{T}'_{\hat x}=\mathcal{T}'_x$ for any $x\in\pi^{-1}(\hat x)$. By Theorem \ref{tutka} (1) (and its proof), there exists a family of injections $\tilde\varphi_{\hat x}:\hat A_{\hat x}\to \hat B_{\hat x}$ indexed by $\hat x\in \hat X$ (according to our convention, $\hat A_{\hat x}=\{g:g(\hat x)\in\hat{\mathsf A}\}$, $\hat B_{\hat x}=\{g:g(\hat x)\in\hat{\mathsf B}\}$), determined by a block code $\Xi:\hat\Delta^F\to E$, where $E=\{g_1,g_2,\dots, g_k\}$ (and $F$ is a finite coding horizon). As is easily verified, if $\hat x=\pi(x)$ then $\hat A_{\hat x} = A_x$ and $\hat B_{\hat x}=B_x$; thus, for any $x\in X$ we can define injections $\tilde\varphi_x:A_x\to B_x$ simply as $\tilde\varphi_{\hat x}$. Now we are in a position to modify the quasitilings $\mathcal{T}'_x$. 
Given $x\in X$, we define a transformation of the tiles $Sc\in\mathcal{T}'_x$ as follows: $$ \Phi_x(Sc)=Sc\cup\tilde\varphi_x^{-1}(B_Sc)\subset Sc\cup A_x $$ (recall that $B_Sc$ is a part of the set $B_x$, so its preimage by $\tilde\varphi_x$ is a part of $A_x$). We will call the set $\tilde\varphi_x^{-1}(B_Sc)$ the \emph{added set}. We define the center of the new tile $\Phi_x(Sc)$ as $c$. The shape of the new tile equals $$ \Phi_x(Sc)c^{-1}=S\cup\tilde\varphi_x^{-1}(B_Sc)c^{-1}. $$ Note that $$ \tilde\varphi_x^{-1}(B_Sc)c^{-1}\subset E^{-1} (B_Sc)c^{-1}\subset E^{-1}S, $$ which is a finite set. Since $\mathcal S'$ is finite, the set $\mathcal S$ of all new shapes is also finite. As the quasitiling $\mathcal{T}'_x$ is disjoint, $\tilde\varphi_x$ restricted to $A_x$ is injective, and the image of $A_x$ is contained in $B_x=\bigcup_{Sc\in\mathcal{T}'_x}B_Sc$, it is clear that the new quasitiling $$ \mathcal{T}_x=\{\Phi_x(Sc):Sc\in\mathcal{T}'_x\} $$ is a tiling (disjoint and covering $G$ completely). Further, for any tile $Sc$ of $\mathcal{T}'_x$ the added set $\tilde\varphi_x^{-1}(B_Sc)$ has cardinality at most $|B_S|=i_S<\frac{\varepsilon}{2|K|}|S|$. Thus $$ |K\Phi_x(Sc)|\le |KSc| + |K|\cdot \frac{\varepsilon}{2|K|}|S| = |KS|+\frac\eps2|S|. $$ We can assume (at the beginning of the proof) that $e\in K$, and then $(K,\frac\eps2)$-invariance of $S$ is equivalent to the inequality $|KS|<(1+\frac\eps2)|S|$. Thus $$ |K\Phi_x(Sc)|< (1+\varepsilon)|S|\le (1+\varepsilon)|\Phi_x(Sc)|, $$ and so $\Phi_x(Sc)$ is $(K,\varepsilon)$-invariant. Summarizing, we have constructed a mapping $x\mapsto\mathcal{T}_x$ into tilings with a finite set $\mathcal S$ of $(K,\varepsilon)$-invariant shapes. \smallskip We need to show that the assignment $x\mapsto\mathcal{T}_x$ is a dynamical tiling, i.e., a topological\ factor map from $X$ to a subshift over the alphabet $\Delta = \mathcal S\cup\{0\}$. 
Of course, it suffices to show that $x\mapsto\mathcal{T}_x$ ``factors through'' $\hat X$, i.e., that $\mathcal{T}_x$ depends in fact on $\hat x=\pi(x)$ and the dependence is via a block code. To do so, we can use the criterion \eqref{clr}, i.e., we need to indicate a finite set $J\subset G$, such that for any $x_1,x_2\in X$ and $g\in G$, \begin{equation}\label{equ} \hat x_1|_{Jg}=\hat x_2|_{Jg} \implies (\mathcal{T}_{x_1})_g=(\mathcal{T}_{x_2})_g, \end{equation} where $\hat x_1=\pi(x_1)$ and $\hat x_2=\pi(x_2)$. We claim that the set $J=\{e\}\cup FE^{-1}R$ is good, where $F$ is the finite coding horizon of $\Xi$ and $R=\bigcup_{S\in\mathcal{S}'}S$. In order to verify this claim assume that with $J$ so defined the left hand side of \eqref{equ} holds for some $x_1,x_2\in X$ and $g\in G$. Since $g\in Jg$, and the first entries of the pairs which constitute the symbols $(\hat x_1)_g$ and $(\hat x_2)_g$ equal $(\mathcal{T}'_{x_1})_g$ and $(\mathcal{T}'_{x_2})_g$, respectively, we have $(\mathcal{T}'_{x_1})_g=(\mathcal{T}'_{x_2})_g$. If this common entry is $0$ then $g$ is not a center of any tile in either $\mathcal{T}'_{x_1}$ or $\mathcal{T}'_{x_2}$, and hence $g$ is not a center of any tile in either $\mathcal{T}_{x_1}$ or $\mathcal{T}_{x_2}$, i.e., $(\mathcal{T}_{x_1})_g=(\mathcal{T}_{x_2})_g=0$. If the common entry is some $S\in\mathcal S'$ then we know that $g=c$ is a center of some tile in both $\mathcal{T}_{x_1}$ and $\mathcal{T}_{x_2}$; moreover, the shapes of these tiles have the common part $S$ and may differ only in having different added sets. The added sets equal $\tilde\varphi_{x_1}^{-1}(B_Sc)c^{-1}$ and $\tilde\varphi_{x_2}^{-1}(B_Sc)c^{-1}$, respectively. Since we can replace the subscripts $x_1,x_2$ correspondingly by $\hat x_1,\hat x_2$, we just need to show that $$ \tilde\varphi_{\hat x_1}^{-1}(B_Sc)=\tilde\varphi_{\hat x_2}^{-1}(B_Sc). 
$$ Since $FE^{-1}Rc\subset Jg$, the left hand side of \eqref{equ} implies $\hat x_1|_{FE^{-1}Rc}=\hat x_2|_{FE^{-1}Rc}$. Recall that the family $\{\tilde\varphi_{\hat x}\}_{\hat x\in\hat X}$ is determined by a block code with coding horizon $F$. We deduce that $\tilde\varphi_{\hat x_1}$ agrees with $\tilde\varphi_{\hat x_2}$ on the set $E^{-1}Rc$, which contains $E^{-1}Sc$, which contains $E^{-1}B_Sc$. But $E^{-1}B_Sc$ contains the union $\tilde\varphi_{\hat x_1}^{-1}(B_Sc)\cup \tilde\varphi_{\hat x_2}^{-1}(B_Sc)$. Since $\tilde\varphi_{\hat x_1}$ and $\tilde\varphi_{\hat x_2}$ agree on this union, we conclude that $\tilde\varphi_{\hat x_1}^{-1}(B_Sc)=\tilde\varphi_{\hat x_2}^{-1}(B_Sc)$, which ends the proof of the first implication. \medskip Now we shall show how dynamical tilings can be used to prove comparison. Assume that $G$ acts on a zero-dimensional\ compact metric space $X$ (we do not assume freeness of the action) and that for every finite $K\subset G$ and $\varepsilon>0$ this action admits a dynamical tiling with $(K,\varepsilon)$-invariant shapes. Let $\mathsf A,\mathsf B$ be disjoint clopen subsets of $X$ such that $\mu(\mathsf B)>\mu(\mathsf A)$ for all invariant measures $\mu$ on $X$. We need to show that $\mathsf A\preccurlyeq\mathsf B$. As we have observed in Remark \ref{from0}, the infimum $\inf_{\mu\in\mathcal M_G(X)}(\mu(\mathsf B)-\mu(\mathsf A))$ is positive. Proposition \ref{prop} (1) implies that $$ \underline D(\mathsf B,\mathsf A)\ge6\varepsilon, $$ for some $\varepsilon>0$. By Lemma \ref{bbb}, there exists a finite set $F\subset G$ satisfying, for every $x\in X$, $\underline D_F(B_x,A_x)\ge5\varepsilon$. By the tiling property, there exists a dynamical tiling $x\mapsto\mathcal{T}_x$ with some set of shapes $\mathcal S$ such that each shape $S\in\mathcal S$ is $(F,\varepsilon)$-invariant. 
Lemma~\ref{bdc} implies that for every $S\in\mathcal S$ and $x\in X$, we have $$ \underline D_S(B_x,A_x)\ge \underline D_F(B_x,A_x)-4\varepsilon>0, $$ which yields $|A_xg^{-1}\cap S|<|B_xg^{-1}\cap S|$ for every $g\in G$. Similarly as in the preceding proof, we will build a symbolic factor $\hat X$ of $X$ carrying the minimum information about both the sets $\mathsf A,\mathsf B$ and the dynamical tiling. Namely, we define a factor map $\pi:X\to \hat X\subset {\hat\Delta}^G$, where this time $\hat\Delta = \{\mathsf 0,\mathsf 1,\mathsf 2\}\times\Delta$ (as usual, $\Delta=\mathcal S\cup\{0\}$ is the alphabet in the symbolic representation of the dynamical tiling), as follows $$ (\pi(x))_g= \begin{cases} (\mathsf1,S)& \ \text{ if \ \ }g\in A_x, Sg\in\mathcal{T}_x\\ (\mathsf2,S)& \ \text{ if \ \ }g\in B_x, Sg\in\mathcal{T}_x\\ (\mathsf0,S)& \ \text{ if \ \ }g\notin A_x\cup B_x, Sg\in\mathcal{T}_x\\ (\mathsf1,0)& \ \text{ if \ \ }g\in A_x, Sg\notin\mathcal{T}_x\\ (\mathsf2,0)& \ \text{ if \ \ }g\in B_x, Sg\notin\mathcal{T}_x\\ (\mathsf0,0)& \ \text{ if \ \ }g\notin A_x\cup B_x, Sg\notin\mathcal{T}_x. \end{cases} $$ As before, the subshift $\hat X$ admits a dynamical tiling $\hat x\mapsto\mathcal{T}_{\hat x}$, where $\mathcal{T}_{\hat x}=\mathcal{T}_x$ for any $x\in\pi^{-1}(\hat x)$. Denote $\hat{\mathsf A} = [\mathsf 1,\cdot]$ and $\hat{\mathsf B} = [\mathsf 2,\cdot]$. We have $\mathsf A=\pi^{-1}(\hat{\mathsf A})$ and $\mathsf B=\pi^{-1}(\hat{\mathsf B})$. Thus it suffices to show that $\hat{\mathsf A}\preccurlyeq\hat{\mathsf B}$ in $\hat X$. By Theorem \ref{tutka} (1), the proof will be complete once we have constructed a family of injections $\tilde\varphi_{\hat x}:\hat A_{\hat x}\to \hat B_{\hat x}$ indexed by $\hat x\in\hat X$ and determined by a block code. 
By the definition of $\pi$ we have that if $\hat x=\pi(x)$ then $A_x=\hat A_{\hat x}$ and $B_x=\hat B_{\hat x}$, and the inequality $|A_xg^{-1}\cap S|<|B_xg^{-1}\cap S|$ translates to $|\hat A_{\hat x}g^{-1}\cap S|<|\hat B_{\hat x}g^{-1}\cap S|$ (for each $\hat x\in \hat X$, $S\in\mathcal S$ and $g\in G$). In other words, in every block $g(\hat x)|_S$ there are more symbols $\mathsf 2$ than $\mathsf 1$ (we just consider the first entries in the pairs which constitute the symbols). Since $\mathcal S$ is finite and for each $S\in\mathcal S$ there are only finitely many blocks $\mathbf B\in{\hat\Delta}^S$, we have globally a finite number of possible blocks $\mathbf B$ appearing in the role of $g(\hat x)|_S$ (with $\hat x\in \hat X$, $g\in G$ and $S\in\mathcal S$). For every block $\mathbf B$ in this finite collection we select arbitrarily an injection $\varphi_{\mathbf B}:\{s\in S:\mathbf B(s)=(\mathsf 1,\cdot)\}\to\{s\in S:\mathbf B(s)=(\mathsf 2,\cdot)\}$, where $S$ is the domain of $\mathbf B$. Fix some $\hat x\in \hat X$ and $a\in\hat A_{\hat x}$. Let $Sc$ be the tile of $\mathcal{T}_{\hat x}$ containing $a$ and let $\mathbf B = c(\hat x)|_S$. We define $$ \tilde\varphi_{\hat x}(a) = \varphi_{\mathbf B}(ac^{-1})c. $$ Since $\mathbf B(ac^{-1})=\hat x_a =(\mathsf 1,\cdot)$, $\varphi_{\mathbf B}(ac^{-1})$ is defined and satisfies $\mathbf B (\varphi_{\mathbf B}(ac^{-1}))=(\mathsf 2,\cdot)$, and thus $\hat x_{\varphi_{\mathbf B}(ac^{-1})c}=(\mathsf 2,\cdot)$, i.e., $\tilde\varphi_{\hat x}(a)\in\hat B_{\hat x}$. Notice that $\tilde\varphi_{\hat x}(a)$ belongs to the same tile of $\mathcal{T}_{\hat x}$ as $a$. Injectivity of $\tilde\varphi_{\hat x}$ so defined is easy to verify. Consider $a_1\neq a_2\in \hat A_{\hat x}$. If both elements belong to the same tile of $\mathcal{T}_{\hat x}$, then their images are different by injectivity of $\varphi_{\mathbf B}$, where $\mathbf B= c(\hat x)|_S$. 
If they belong to different tiles, their images also belong to different tiles, hence are different. The last thing to check is the condition \eqref{clr}, which will establish that the family $\{\tilde\varphi_{\hat x}\}_{\hat x\in\hat X}$ is determined by a block code. We claim that the horizon $E=\bigcup_{S\in\mathcal S}SS^{-1}$ is good. Indeed, suppose, for some $\hat x_1,\hat x_2\in \hat X$ and $a_1\in\hat A_{\hat x_1},a_2\in\hat A_{\hat x_2}$, that \begin{equation} \label{add} a_1(\hat x_1)|_E = a_2(\hat x_2)|_E. \end{equation} Let $S_1c$ be the (unique) tile of $\mathcal{T}_{a_1(\hat x_1)}$ containing the unity $e$. Then the second entry of the pair constituting the symbol $(a_1 (\hat x_1))_c$ equals $S_1$. Since $c\in\bigcup_{S\in\mathcal S}S^{-1}\subset E$, by \eqref{add} we obtain that the second entry of the symbol $(a_2 (\hat x_2))_c$ also equals $S_1$, so that $S_1c$ is the (unique) tile of $\mathcal{T}_{a_2(\hat x_2)}$ containing $e$. Further, since $S_1c\subset E$, by \eqref{add} we have $a_1(\hat x_1)|_{S_1c}= a_2(\hat x_2)|_{S_1c}$ and hence $ca_1(\hat x_1)|_{S_1}= ca_2(\hat x_2)|_{S_1}$. That is, these two restrictions define the same block $\mathbf B\in {\hat\Delta}^{S_1}$. This implies that both $\tilde\varphi_{\hat x_1}(a_1)$ and $\tilde\varphi_{\hat x_2}(a_2)$ are defined with the help of the same injection $\varphi_\mathbf B$, and $$ \tilde\varphi_{\hat x_1}(a_1) = \varphi_{\mathbf B}(a_1c_1^{-1})c_1, \ \ \ \ \tilde\varphi_{\hat x_2}(a_2) = \varphi_{\mathbf B}(a_2c_2^{-1})c_2, $$ where $c_1$ is the center of the tile of $\mathcal{T}_{\hat x_1}$ containing $a_1$ and $c_2$ is the center of the tile of $\mathcal{T}_{\hat x_2}$ containing $a_2$. By shift equivariance of the dynamical tiling, we easily see that $c_1=ca_1$ and $c_2=ca_2$, which yields $$ \tilde\varphi_{\hat x_1}(a_1)a_1^{-1} = \varphi_{\mathbf B}(c^{-1})c=\tilde\varphi_{\hat x_2}(a_2)a_2^{-1}. $$ This is exactly the condition $\eqref{clr}$ and the proof is finished. 
\end{proof} Combining Theorem \ref{main} with Theorem \ref{dt} we obtain: \begin{cor} If $G$ is a subexponential group then every action of $G$ on a zero-dimensional\ compact metric space has the tiling property. \end{cor} We conclude the paper with a question. Let us say that a countable amenable group $G$ has the \emph{tiling property} if any free action of $G$ on a zero-dimensional\ compact metric space has the tiling property as in Definition \ref{dqt}. In that case, by Theorem~\ref{dt} any free action on a zero-dimensional\ compact metric space admits comparison. It is easy to see that the tiling property cannot be extended (without modifying the definition) to non-free actions. However, there are \emph{a priori} no obvious reasons why the comparison property could not be extended. Thus the following question is very natural: \begin{ques} Is it true that if $G$ has the tiling property (which depends on free actions only) then it also has the comparison property (which depends on all actions; of course, we restrict our attention to zero-dimensional\ compact metric spaces)? \end{ques}
\section{Introduction} \dropcap{C}ell motility is an integral part of biological processes such as morphogenesis \cite{Friedl2004}, wound healing \cite{Bindschadler2007}, and cancer invasion \cite{Kraning-Rush2013}. But what are the rules that govern how cells move? Cell migration involves a multitude of organelles and signaling pathways \cite{Lauffenburger1996}, and therefore a fruitful, bottom-up approach studies correlations between cell motion and sub-cellular processes that govern motility, including surface interactions \cite{Zaman2006}, integrin signaling pathways \cite{Yamada1995}, or formation of focal adhesions \cite{Schneider2008}. An alternate approach with recent successes is to develop simple models at the cellular scale that can help identify a coarse-grained set of rules that govern cell migration in specific cell types. One such class of models, composed of self-propelled (SPP) or active Brownian particles~\cite{Vicsek1995}, has been used to make predictions about the motion of biological cells in many contexts, including density fluctuations \cite{Zhang2010}, formation of bacterial colonies \cite{Czirok1996}, \updated{and} both confined~\cite{Henkes2011} and expanding monolayers~\cite{Poujade2007}. These SPP models represent each cell as a particle that moves by generating an active force on a substrate, which acts along a specified vector $\hat{\theta}$. Therefore, the parameters for the model specify both the magnitude of the force as well as how the orientation of the force changes with time. Given the ubiquity and usefulness of these models, one would like to have a standard framework for extracting these parameters from experimental cell trajectories. In the past this has often been accomplished by analyzing ensemble-averaged features of cell trajectories. 
One such quantity is the time-averaged mean-squared displacement (MSD), which is the squared displacement between positions $\vec{r}(t)$ and $\vec{r}(t + dt)$ averaged over all starting times $t$ and the ensemble of trajectories. This yields the MSD as a function of timescale, $\langle(\vec{r}(t+dt)-\vec{r}(t))^2\rangle \propto dt^{\alpha}$. Ballistic motion, in which a cell moves in a straight line at constant speed, corresponds to $\alpha = 2$. Diffusive motion, in which a cell executes a random walk with no time correlation in orientation, corresponds to $\alpha = 1$. In non-active matter at low densities, thermal fluctuations generically induce diffusive behavior at long timescales. In contrast, many cell types, including T-cells \cite{Harris2012}, Hydra cells \cite{Upadhyaya2000}, breast carcinoma cells \cite{Metzner2015}, and swarming bacteria \cite{Ariel}, display super-diffusive dynamics, defined as trajectories with an MSD exponent $1 < \alpha < 2$. Several authors have proposed explanations for why super-diffusive migration might be beneficial in biological systems. For example, super-diffusive trajectories are well known to be the optimal search strategy for randomly placed sparse targets \cite{Raposo2009,Viswanathan1999}, and have been found in animal foraging and migration patterns in jellyfish \cite{Reynolds2014}, albatross, and bumblebees \cite{Edwards2007}. In the context of cell biology, superdiffusive migration implies that cells are covering new areas more quickly than they would if they were executing a simple random walk. Although super-diffusive dynamics are commonly observed in \textit{in vitro} experiments, the fundamental mechanism that generates anomalous diffusion in cell trajectories remains unclear. Pinpointing the mechanism would allow biology researchers to better isolate the signaling pathways that govern these processes. 
Although one might think that simply including the effects of persistent active forces generated by cells would change the long-time behavior, it turns out that standard self-propelled particle models exhibit a fairly sharp crossover from ballistic to diffusive motion, with no extended superdiffusive regime. Since SPP models are commonly used to model cells and superdiffusive dynamics are commonly observed in experiments, we would like to identify the mechanism generating superdiffusivity to improve the ability of these models to capture cellular phenomena. Standard SPP models include smoothly varying persistent random walkers and standard run-and-tumble particles (RTP)~\cite{Marchetti2013}. Persistent random walkers obey the following equations of motion for the cell center of mass $\vec{r}_i$ and the orientation angle $\theta_i$: \begin{equation} \label{spp_v} \partial_t \vec{r}_i = v_0 \hat{\theta}_i, \end{equation} \begin{equation} \label{spp_t} \partial_t \theta_i = \eta(t), \end{equation} where $\eta(t)$ is a Gaussian white noise ($\langle\eta(t)\eta(t')\rangle = 2D_r \delta(t-t')$). In a standard persistent random walk, the speed $v_0$ and the rotational diffusion coefficient $D_r$, which controls the strength of fluctuations in orientation, are constant. In a standard run-and-tumble model, particles are ballistic during runs, $\partial_t \theta_i = 0$, followed by tumbling events where large changes in orientation occur. Variations of run-and-tumble models are characterized by the distribution of times particles remain in the run state. Two different classes of modifications to SPP models have been highlighted as being able to generate super-diffusive behavior on long timescales. The first modification is a heterogeneous speed model, which draws rotational diffusion coefficients and particle speeds from distributions \cite{Metzner2015,Zaburdaev2008}. 
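The sharp ballistic-to-diffusive crossover of the standard persistent random walker, Eqs. 1 and 2, is easy to verify numerically. The following minimal sketch (an illustration with arbitrary parameter values, not the simulation code used for the figures) integrates the equations of motion with an Euler scheme for an ensemble of independent walkers and estimates the local MSD exponent at short and long lag times:

```python
import numpy as np

def simulate_prw(n_particles=500, v0=1.0, Dr=0.5, dt=0.01, n_steps=5000, seed=0):
    """Euler integration of Eqs. (1)-(2): constant speed v0, orientation
    angle diffusing with rotational diffusion coefficient Dr."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n_particles)
    pos = np.zeros((n_steps + 1, n_particles, 2))
    for i in range(n_steps):
        # advance positions along the current orientation (Eq. 1)
        pos[i + 1, :, 0] = pos[i, :, 0] + v0 * dt * np.cos(theta)
        pos[i + 1, :, 1] = pos[i, :, 1] + v0 * dt * np.sin(theta)
        # rotational diffusion of the orientation (Eq. 2)
        theta = theta + np.sqrt(2.0 * Dr * dt) * rng.normal(size=n_particles)
    return pos

def ensemble_msd(pos, lags):
    """Ensemble-averaged mean-squared displacement at the given lags (in steps)."""
    return np.array([np.mean(np.sum((pos[lag] - pos[0]) ** 2, axis=1))
                     for lag in lags])

pos = simulate_prw()
msd = ensemble_msd(pos, [5, 10, 2000, 4000])
alpha_short = np.log2(msd[1] / msd[0])  # well below 1/Dr: close to 2 (ballistic)
alpha_long = np.log2(msd[3] / msd[2])   # well above 1/Dr: close to 1 (diffusive)
```

With these (arbitrary) parameters the persistence time is $1/D_r = 2$ time units, and the estimated exponent drops from near $2$ to near $1$ across that single timescale, with no extended intermediate regime.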
While persistent random walk models transition from ballistic to diffusive behavior at one characteristic timescale, heterogeneous speed models possess a heterogeneous distribution of crossover timescales, which generates an MSD with a broad superdiffusive regime, though the system becomes diffusive on timescales longer than $1/D_r^{min}$. The second modification is a L\'{e}vy walk model, which is a run-and-tumble model where particles have power-law distributed run times: \begin{equation} \label{P_tau} P(\tau) = \frac{\mu}{\tau_o(1+\tau/\tau_o)^{1+\mu}}, \end{equation} \begin{equation} \label{tau} \left<\tau\right> = \frac{\tau_o}{\mu-1}, \end{equation} with $P(\tau)$ the distribution of run times with mean $\left<\tau\right>$ for $\mu > 1$ \cite{Vasily}. In contrast to the heterogeneous SPP model, super-diffusivity generated by L\'{e}vy walks is not transient, so that the long-time MSD scaling exponent is constant: $\mathrm{MSD} \propto dt^{3-\mu}$. So which of these models is the ``right'' one for a given cell type? By analyzing ensemble-averaged statistics such as the MSD and the velocity autocorrelation function (VACF), one group of researchers was able to show that heterogeneous motility models matched data from breast cancer carcinoma cells \cite{Metzner2015}. Another group evaluated a different ensemble-averaged quantity, the probability displacement distribution, and used that data to suggest that T-cells were undergoing L\'{e}vy walks \cite{Harris2012}. We would like to better understand whether these ensemble-averaged quantities are in fact a unique identifier of the underlying mechanism for superdiffusivity. Moreover, we also seek to develop a systematic procedure for using experimental data to constrain both the appropriate mechanism and the optimal model parameters for a specific subtype. To this end, we use automated tracking software to analyze over 1000 mouse fibroblast trajectories. 
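Run times obeying Eq. 3 can be drawn by inverse-transform sampling, since the distribution integrates to the closed-form CDF $F(\tau) = 1 - (1+\tau/\tau_o)^{-\mu}$. A short sketch (ours, with illustrative parameter values):

```python
import numpy as np

def sample_run_times(mu, tau0, size, rng=None):
    """Draw run times from Eq. (3), P(tau) = mu / (tau0*(1 + tau/tau0)**(1+mu)),
    by inverting its CDF F(tau) = 1 - (1 + tau/tau0)**(-mu)."""
    rng = np.random.default_rng(0) if rng is None else rng
    u = rng.uniform(size=size)
    # F^{-1}(u): solve u = 1 - (1 + tau/tau0)^(-mu) for tau
    return tau0 * ((1.0 - u) ** (-1.0 / mu) - 1.0)

# For mu = 3, tau0 = 1 the sample mean should approach tau0/(mu-1) = 0.5 (Eq. 4).
taus = sample_run_times(mu=3.0, tau0=1.0, size=1_000_000)
```

For $1 < \mu < 2$ the variance of the run times diverges, which is the regime where the L\'{e}vy walk produces the persistent superdiffusive scaling $\mathrm{MSD} \propto dt^{3-\mu}$.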
We demonstrate that some ensemble-averaged statistics, such as the MSD and VACF, cannot distinguish between mechanisms for superdiffusivity. In order to better distinguish them, we begin with a very general model for cell dynamics. Although standard SPP models have only two fit parameters, average cell speed $v_0$ and average rotational noise $D_r$, in principle a generalized SPP model could have arbitrary distributions for cell speed $P(v_0)$ and rotational diffusion $P(D_r)$ with arbitrary correlations between them. The heterogeneous motility model from \cite{Metzner2015} is the limit of such a model with Gaussian-distributed $P(v_0)$ and $P(D_r)$, while a L\'{e}vy walk is the limit with a constant $v_0$ and a specialized bimodal $P(D_r)$. Because this is such a large parameter space, we first constrain the functional form of these distributions using specific features of single cell trajectory statistics. We find that the mouse fibroblast data are consistent with run-and-tumble dynamics but the run times are not power-law distributed, confirming that in mouse fibroblasts the mechanism for superdiffusivity is heterogeneous dynamics and not L\'{e}vy walk statistics. The toolkit we have developed here should be useful for pinpointing the origin of superdiffusivity in many other cell types. \section{Methods} \subsection{Mouse fibroblast cell culture} Cell motility data was collected from C3H10T1/2 mouse fibroblast cells (ATCC) cultured on a flat, gold-coated polymer substrate, prepared as previously described \cite{Yang2013}. Cell nuclei were labeled with Hoechst dye and cell motility imaged by time-lapse microscopy under two different temperature conditions, 4 hrs (48 frames) at 30$^{\circ}$C and then 20 hrs (240 frames) at 37$^{\circ}$C (Supplemental Method 1). The resultant motility image stacks were analyzed using the ACT\textit{IV}E image analysis package to track nuclei centers-of-mass~\cite{Baker2014}. 
\begin{figure} \centering \includegraphics[width=0.5\textwidth]{mouse_fibroblast} \caption{\label{mf_example}(A) An example image of nuclei stained with Hoechst dye. The scale bar is 500 $\mu m$. These images were processed using the ACT\textit{IV}E image analysis package to track nuclei centers-of-mass~\cite{Baker2014}, with overlaid best-fit ellipses. (B) A typical cell trajectory with tumbling events indicated by red circles, as identified by the 2D Canny edge detection algorithm.} \end{figure} \subsection{Cell trajectory analysis and particle simulations} Cell motility was characterized using statistical analysis of cell nuclei trajectories, including MSD, VACF and displacement probability distributions. Tumbling events were identified with a Canny edge detection algorithm. Additional details on cell trajectory analysis can be found in Supplemental Method 2. \subsection{Active particle simulations} This manuscript focuses on two different models for non-interacting active particles. The first model is a L\'{e}vy walk with constant particle speed $v_0$ at all timesteps. Particles execute ballistic runs with zero rotational noise for times $\tau$ drawn from the distribution in Eq. 3 and a mean run time $\left<\tau\right>$ given by Eq. 4. The generalized SPP model has particles which follow the equations of motion in Eqs. 1 and 2; however, the parameters for the model are not constant in time. A particle is initialized with a random orientation and assigned an initial speed $v_0$ and rotational diffusion $D_r$ drawn from distributions $P(v_0) = \frac{|v_0|}{\sigma_v^2} e^{-\frac{(v_0-\mu_v)^2}{\sigma_v^2}}$ and $P(D_r) = \frac{1}{\sqrt{\pi\sigma_D^2}} e^{-\frac{(D_r-\mu_D)^2}{\sigma_D^2}}$. We evolve the particle position and orientation for a time $\tau$ drawn from $P(\tau) = \frac{1}{\tau_0} e^{-\tau/\tau_0}$, where $\tau_0$ is the mean run time determined by experimental data. 
The particle then undergoes a tumbling event across one timestep with $D_r = 2\pi$, where the value of rotational diffusion is chosen to approximate an event where the orientation is completely randomized. After the tumble, a new $v_0$ and $D_r$ are assigned until the next tumbling event. In contrast to a L\'{e}vy walk or standard SPP model, motility parameters are varied in time to replicate the variations and changes in a biological environment. For both models, particle trajectories are constructed by numerically integrating the equations of motion using a simple Euler scheme with a timestep $dt = 0.1$. To compare these results to experimental data, we equate the simulation time unit to 4 minutes. Finally, we note that the VACF for experimental data shows a sharp dropoff across one frame due to errors in reconstructing the nuclei centers caused by imaging noise and fluctuations in dye intensity. To replicate this feature we incorporate positional noise into both models through small Gaussian fluctuations. After particle trajectories are constructed, each position is changed by a vector $\delta \vec{r} = dr \hat{\phi}$, where $dr$ is drawn from a Gaussian distribution of variable width $\Delta$ and the direction $\hat{\phi}$ is chosen randomly from the unit circle. This replicates experimental error in reconstructing cell positions, and allows our model trajectories to match the mouse fibroblast data. \section{Results} \subsection{Experimentally observed ensemble-averaged quantities are well fit by several existing models} Previous reports have compared models to experimental data using ensemble-averaged statistics such as the MSD and the VACF. Therefore, our first goal is to determine whether one of the existing models for explaining superdiffusive cell trajectories is a better fit to the experimental MSD and VACF data, shown by the red lines in Fig.~\ref{msd_compare}. 
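The generalized run-and-tumble update described in Methods can be sketched as follows. This is a schematic re-implementation under stated simplifications, not the code used for the figures: folded Gaussians stand in for the exact $P(v_0)$ and $P(D_r)$ given above, the one-timestep $D_r = 2\pi$ tumble is replaced by outright randomization of the orientation, and all parameter values are illustrative rather than fitted.

```python
import numpy as np

def heterogeneous_rtp(total_time=1000.0, dt=0.1, tau0=10.0,
                      mu_v=1.0, sigma_v=0.5, mu_D=0.1, sigma_D=0.05, seed=0):
    """Single-trajectory sketch of the generalized run-and-tumble model:
    between tumbles the particle follows Eqs. (1)-(2) with fixed (v0, Dr);
    at exponentially distributed tumble times the orientation is
    randomized and new motility parameters are drawn."""
    rng = np.random.default_rng(seed)
    n_steps = int(round(total_time / dt))
    pos = np.zeros((n_steps + 1, 2))
    theta = rng.uniform(0.0, 2.0 * np.pi)
    v0 = abs(rng.normal(mu_v, sigma_v))   # stand-in for P(v0) (illustrative)
    Dr = abs(rng.normal(mu_D, sigma_D))   # stand-in for P(Dr) (illustrative)
    next_tumble = rng.exponential(tau0)
    for i in range(n_steps):
        pos[i + 1] = pos[i] + v0 * dt * np.array([np.cos(theta), np.sin(theta)])
        t = (i + 1) * dt
        if t >= next_tumble:
            # tumble: orientation randomized, new motility parameters drawn
            theta = rng.uniform(0.0, 2.0 * np.pi)
            v0 = abs(rng.normal(mu_v, sigma_v))
            Dr = abs(rng.normal(mu_D, sigma_D))
            next_tumble = t + rng.exponential(tau0)
        else:
            # run: rotational diffusion with the current Dr (Eq. 2)
            theta += np.sqrt(2.0 * Dr * dt) * rng.normal()
    return pos

traj = heterogeneous_rtp()
```

Because each run carries its own $(v_0, D_r)$ pair, an ensemble of such trajectories mixes crossover timescales, which is the ingredient that broadens the superdiffusive regime in the heterogeneous model.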
\begin{figure} \centering \includegraphics[width=0.45\textwidth]{msd_vacf1} \caption{\label{msd_compare}All of the proposed superdiffusive models are capable of capturing ensemble-averaged mouse fibroblast statistics. (A) Mean-squared displacements for mouse fibroblast cells, shown in red, as well as a L\'{e}vy walk and a generalized SPP model. Both models are able to match the mouse fibroblast MSD within the margin of error. (B) The velocity auto-correlation function $C_{vv}$ as a function of time $dt$. There is a sharp decrease in the VACF across the first frame, due to error in reconstructing the nuclei centers-of-mass generated by imaging noise and fluctuations in dye intensity. At the largest timescales, each bin corresponds to fewer events and so error bars become large. In addition, adding positional error to simulation trajectories to match the initial dropoff in the VACF causes significant fluctuations at larger timescales.} \end{figure} For comparison, we simulate a L\'{e}vy walk model with dynamics given by Eqs. 3 and 4, as well as a generalized SPP model with no L\'{e}vy-walk behavior, described in more detail below. With the best-fit parameters, we find that both models match the data equally well. As shown in Fig. \ref{msd_compare} (B), the velocity autocorrelation function exhibits a sharp decrease after the first frame window, due to errors that we make in reconstructing the nuclei center of mass caused by imaging noise and fluctuations in the dye intensity. Therefore, we add an additional term to the model that shifts the particle position by a Gaussian distributed variable with zero-mean and variance $\Delta^2$ to account for this effect. While the mean-squared displacement and velocity auto-correlation function are standard metrics for characterizing ensembles of trajectories, they may not be ideal for studying systems with superdiffusion. In an investigation of the L\'{e}vy walk properties of T-cells, Harris et al.
study a quantity that reveals structures on shorter timescales: the probability for a cell to be at a displacement $r(t)$ at time $t$ \cite{Harris2012}. They suggest that L\'{e}vy walks can be distinguished by collapsing these probability distributions with rescaled displacements $\rho(t) = \frac{r(t)}{t^{\gamma}}$, with $\gamma$ significantly larger than the value of $1/2$ expected for a persistent random walk. As seen in Fig. \ref{collapse}, we find that the mouse-fibroblast data does collapse, with the best fit exponent $\gamma = 0.69 \pm 0.02$ as shown in Fig. \ref{collapse2}. The best-fit L\'{e}vy walk model collapses at $\gamma = 0.59 \pm 0.03$, which is above the value expected for a persistent random walk but still lower than $\gamma$ for mouse fibroblast cells. Importantly, the best-fit generalized SPP model also collapses at a similar value of $\gamma$, suggesting that such a collapse is not sufficient to uniquely identify L\'{e}vy walks as a mechanism for superdiffusivity. Moreover, the functional form of the displacement probability distribution $P(r(t))$ provides additional information. Fig. \ref{collapse} shows that $P(r(t))$ for the best-fit L\'{e}vy walk model has a significantly different functional form from mouse fibroblast trajectories at small displacements, due to ballistic runs over relatively large distances. In contrast, a non-L\'{e}vy version of the generalized SPP model matches the shape of mouse fibroblast $P(r(t))$ very well, providing an indication that a non-L\'{e}vy model might be better for describing mouse fibroblast data.
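The collapse procedure described above can be sketched as follows: displacement distributions at several timescales are rescaled by $t^{\gamma}$, and the exponent minimising a $\chi^2$ measure of their mutual disagreement is selected. This is an illustrative sketch of the method, not the authors' code; the binning choices are assumptions.

```python
import numpy as np

def collapse_chi2(traj, lags, gamma, bins=20):
    """Chi^2 measure of how well P(rho) collapses across timescales,
    with rho = r(t) / t**gamma.  Smaller is better."""
    # rescaled displacement magnitudes at each lag (timescale)
    rescaled = [np.linalg.norm(traj[lag:] - traj[:-lag], axis=1) / lag ** gamma
                for lag in lags]
    # common bin edges so the per-lag histograms are directly comparable
    edges = np.histogram_bin_edges(np.concatenate(rescaled), bins=bins)
    hists = [np.histogram(r, bins=edges, density=True)[0] for r in rescaled]
    mean = np.mean(hists, axis=0)
    return sum(np.sum((h - mean) ** 2) for h in hists)

def best_gamma(traj, lags, gammas):
    """Scan candidate exponents and return the one that best collapses P(rho)."""
    chi2 = [collapse_chi2(traj, lags, g) for g in gammas]
    return gammas[int(np.argmin(chi2))]
```

For a ballistic trajectory this procedure recovers $\gamma = 1$, and for a diffusive one $\gamma = 1/2$; values in between signal superdiffusion.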
\begin{figure*} \centering \includegraphics[width=0.8\textwidth]{collapse_3pan} \caption{\label{collapse} Displacement probability distribution $P(\rho)$, where $\rho(t)$ is the scaled displacement $\frac{r(t)}{t^{\gamma}}$, for the value of $\gamma$ that best collapses the data, for (A) Mouse fibroblast cells, (B) generalized SPP model and (C) L\'{e}vy walk, with colors representing 4 binned timescales from blue (small) to yellow (large). Mouse fibroblast $\tilde{P}(\rho)$ is shown as a dashed red line in (B,C) for comparison with each model, showing that only the generalized SPP model is consistent with the observed data. } \end{figure*} \begin{figure} \centering \includegraphics[width=0.4\textwidth]{chi2_compare} \caption{\label{collapse2} Goodness-of-fit ($\chi^2$) as a function of scaling exponent $\gamma$. The value of $\gamma$ that best collapses each data set minimizes the $\chi^2$ goodness of fit between the $P(\rho(t))$ calculated at each timescale. Experimental data all collapse at a value of $0.5 < \gamma < 1$, consistent with a superdiffusive MSD. } \end{figure} \section{Numerical models are better constrained by single-cell trajectory data} We next study single-cell trajectories. A generalized SPP model with arbitrary distributions for $P(v_0)$ and $P(D_r)$ has an infinite number of parameters that we could never hope to constrain. As a first step to simplifying our model we constrain the functional form of these distributions using experimental data. As shown in Fig. \ref{distributions} (A), we first construct a distribution of cell speeds, determined from the magnitudes of the displacement of nuclei centers-of-mass between image capture events. Our experimental data is consistent with a Gaussian distribution of cell velocities, or equivalently, a distribution of cell speeds of the form $P(v_0) = \frac{|v_0|}{\sigma_v^2} e^{-\frac{(v_0-\mu_v)^2}{\sigma_v^2}}$, where $\mu_v$ and $\sigma_v$ are the mean and standard deviation, respectively.
Therefore, we use this functional form in our generalized model. Next we estimate a distribution $P(D_r)$ of rotational diffusion constants ($D_r$) from the distribution of turning angles, shown in Fig. \ref{distributions} (B). Simple active Brownian systems with a single value of $D_r$ will generate a Gaussian distribution of turning angles \cite{Marchetti2013}. A Gaussian distribution of rotational noise broadens this distribution significantly. One can show the expected turning angle distribution in this case is a modified Bessel function of the second kind with an exponential tail, consistent with the numerical simulation data given by the red line in Fig. \ref{distributions} (B). We were unable to match the mouse fibroblast turning angle distribution, which is given by the blue line in Fig. \ref{distributions}(B) and has significant weight at the largest values of $\Delta\theta$, with any Gaussian function for the rotational noise. This suggests that mouse fibroblast cells may have a strongly bimodal distribution of rotational noises, further supported by intermittent run-and-tumble behavior seen in videos (Supplementary Movie 1). We choose to capture this bimodal behavior with a noisy run-and-tumble model, where cells have a distribution $P(D_r) = \frac{1}{\sqrt{\pi\sigma_D^2}} e^{-\frac{(D_r-\mu_D)^2}{\sigma_D^2}}$ during runs, which are punctuated by tumbling events. We use the Canny algorithm described in the methods section to explicitly identify tumbling events, and the data points in Fig. \ref{distributions}(C) show the distribution of times between such events. The red line in Fig.~\ref{distributions}(C) shows this is well-fit by an exponential distribution with $\tau_0 \approx 1$ hour, and so in our model the distribution of run times $\tau$ is given by $P(\tau) = \frac{1}{\tau_0} e^{-\tau/\tau_0}$. We note that this is a strong indication that the mouse fibroblasts are not well-described by a L\'{e}vy walk model.
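Putting the pieces together, the noisy run-and-tumble dynamics can be sketched as below. This is a minimal Euler-scheme sketch, not the paper's simulation code: the draw of $v_0$ is simplified to the absolute value of a Gaussian as a stand-in for the exact $P(v_0)$, the tumble is implemented as a fresh uniform orientation (approximating the one-step $D_r = 2\pi$ randomisation), and the default parameter values follow the best-fit values quoted later in the text.

```python
import numpy as np

def run_and_tumble(n_steps, dt=0.1, mu_v=1.0, sigma_v=0.8,
                   mu_D=0.09, sigma_D=0.002, tau0=10.0, seed=0):
    """Euler integration of a noisy run-and-tumble particle in 2D.

    During a run: dx = v0 (cos th, sin th) dt,  dth = sqrt(2 Dr dt) N(0, 1).
    At the end of each exponentially distributed run the orientation is
    randomised and v0, Dr are resampled from their distributions.
    """
    rng = np.random.default_rng(seed)
    pos = np.zeros((n_steps, 2))
    th = rng.uniform(0, 2 * np.pi)
    v0 = abs(rng.normal(mu_v, sigma_v))       # stand-in draw from P(v0)
    Dr = abs(rng.normal(mu_D, sigma_D))       # stand-in draw from P(Dr)
    next_tumble = rng.exponential(tau0)
    t = 0.0
    for i in range(1, n_steps):
        pos[i] = pos[i - 1] + v0 * dt * np.array([np.cos(th), np.sin(th)])
        th += np.sqrt(2 * Dr * dt) * rng.normal()
        t += dt
        if t >= next_tumble:                  # tumble: randomise and resample
            th = rng.uniform(0, 2 * np.pi)
            v0 = abs(rng.normal(mu_v, sigma_v))
            Dr = abs(rng.normal(mu_D, sigma_D))
            next_tumble = t + rng.exponential(tau0)
    return pos
```

With one simulation time unit equated to 4 minutes, trajectories generated this way can be passed directly through the same MSD, VACF and $P(r(t))$ analysis as the experimental data.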
The magenta line in Fig.~\ref{distributions}(B) shows the distribution of turning angles for a noisy run-and-tumble model with the parameters identified above. \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{dist_fig} \caption{\label{distributions}(A) Distribution of mouse fibroblast instantaneous speeds calculated from cell nuclei center-of-mass displacement between image capture events (blue). The red line is a fit to the form $P(v_0) = \frac{|v_0|}{\sigma_v^2} e^{-\frac{(v_0-\mu_v)^2}{\sigma_v^2}}$, which is the distribution of speeds expected for a Gaussian distribution of velocities. (B) Distribution of turning angles of mouse fibroblast trajectories (blue), SPP models with constant $D_r$ (green), Gaussian distributed $D_r$ (red), and a run-and-tumble model with Gaussian distributed $D_r$ during runs and exponentially distributed tumbling events (magenta). The distribution of rotational diffusion constants is the same in both heterogeneous cases to highlight the effect of incorporating tumbling events into the system. (C) Run-time distribution for mouse fibroblast cells (blue) is well fit by an exponential distribution (red). } \end{figure*} To confirm that the model parameters we have identified are robust, and to quantify their sensitivity, we vary model parameters around the microscopically determined values and quantify how much this changes their displacement probability distributions. Specifically, we use the linear regression goodness-of-fit parameter ($R^2$) between $P(r(t))$ for mouse fibroblast and generalized model trajectories to characterize each parameter configuration and identify a best-fit between our model and mouse fibroblast statistics \cite{corrCoeff}. Using this method we are able to capture the functional form of $P(r(t))$ very well, as shown in Fig. \ref{collapse}. It should be noted that incorporating a similar distribution of speeds into a L\'{e}vy walk model would improve the fit seen in Fig. \ref{collapse}C.
However, the distribution of run times would remain a power law and not match the distribution of mouse fibroblast run times shown in Fig. \ref{distributions}C, which is fit well by an exponential. \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{phasespacebackup} \caption{\label{goodbad} Sensitivity analysis examining the goodness of fit of the generalized SPP model to displacement distributions in mouse fibroblasts $(1-R^2)$ as a function of model parameters. (A) Contour plot of $\log(1-R^2)$ illustrates that the experimental data tightly constrain a linear combination of the mean velocity $\mu_v$ and mean noise $\mu_D$. (B,C) The goodness of fit as a function of each model parameter while all others are held fixed. (B) $\vec{r_0}$ is the optimal coordinate in parameter space and $\vec{r}- \vec{r_0}$ is the distance of each parameter from its optimal value. (C) A value of $\tau$ smaller than $\approx 10$ is inconsistent with experimental data, but data does not discriminate between larger values of $\tau$. } \end{figure*} Happily, the configuration of parameters that best matches the macroscopic $P(r(t))$, located at $\mu_{D} = 0.09, \sigma_{D} = 0.002, \mu_{v} = 1, \sigma_{v} = 0.8, p = 0, \tau_0= 10$, is very similar to those identified from microscopic statistics, indicating that the model is consistent with experimental results. A construction of the dynamical matrix around this minimum and subsequent analysis of local eigenvectors indicates that our system is most sensitive to perturbations in the mean velocity and mean rotational noise as shown in Fig. \ref{goodbad}A, and relatively insensitive to correlations between $D_r$ and $v_0$ parameterized by $p$ (Fig~\ref{goodbad}B) as well as mean run time $\tau_0$ (Fig~\ref{goodbad}C). \section{Discussion} Both L\'{e}vy walks and heterogeneous SPP models are capable of generating superdiffusive trajectories.
Previous studies have focused on one model or the other in order to identify possible mechanisms for superdiffusive cell trajectories. We show that while both types of model are equally capable of matching the large-scale ensemble averaged statistics of mouse fibroblast cells, an analysis of single cell trajectories demonstrates that L\'{e}vy walks are not consistent with this data set, despite a very good scaling collapse of the displacement probability distribution with scaling exponent $\gamma > 1/2$. Instead, a careful analysis of turning angle distributions suggests these mouse fibroblasts exhibit heterogeneous speeds, with noisy run-and-tumble behavior. Because superdiffusive cells are able to cover distances faster than their diffusive counterparts, it would be useful to adapt the tools developed here to study many more cell types. For example, directed cell motion is known to be a signature of invasiveness in cancer cell lines~\cite{Driscoll2012}, and it would be interesting to know if these cell types are altering the mechanisms or timescales for superdiffusion as they become more malignant. To that end, we have created a MATLAB software package for deploying these analyses on generic data sets~\cite{ManningGroup}, which can be used to quantify superdiffusive dynamics and distinguish between different mechanisms of superdiffusive behavior in cells and active matter. A natural extension of our current work is interacting SPP models. While a non-interacting model can approximate our mouse fibroblast data where cells are not in constant contact, higher density cell populations, and confluent tissues will require models with steric cell-cell interactions. The effect of super-diffusion, whether generated by a L\'{e}vy walk or heterogeneity based model, could potentially alter results obtained with standard SPP models. For example, recent work suggests that groups of cells \cite{Bi2016} and packings of SPPs undergo jamming transitions~\cite{Henkes2011,Fily2012,Berthier2014}.
Could the addition of superdiffusive dynamics have an effect on these types of transitions? Persistent motility can alter the jamming transition -- higher speeds and more persistent trajectories allows particles to explore areas of the energy landscape that were previously inaccessible \cite{Berthier2014}. Similar effects are seen in shape-based models for confluent tissues~\cite{Bi2016}. The inclusion of both run-and-tumble dynamics as well as varying persistence length through broadly distributed rotational diffusion coefficients in a generalized SPP model could offer an interesting mechanism for tuning jamming. Another emergent feature of self-propelled particle models is motility induced phase separation (MIPS). Persistently moving particles create an inward oriented boundary layer that cages interior particles into a solid phase, while other particles are in a lower density gas phase outside of this boundary \cite{Fily2012,Cates2015}, and this effect has recently been implicated in generating colony formation in bacteria \cite{Patch2017}. MIPS relies on persistence length to generate this behavior. Our generalized SPP model could reinforce this effect due to relatively persistent run phases, destroy the effect due to tumbling, or perhaps alter the nature of the transition due to enhanced fluctuations, and this is an interesting direction for future study. Density also plays a critical role in cell interactions. Many cell types exhibit contact inhibition of locomotion (CIL), where contact with another cell will either halt their motion or cause them to immediately recoil and begin moving in the opposite direction. It is possible that the tumbling events we see in mouse fibroblast cells are CIL events. There could also be additional interactions such as alignment between neighbors or between cells and the underlying substrate to generate flocking~\cite{Vicsek1995}.
It would be interesting to explore the effect of alignment in a generalized SPP model, to see if heterogeneity causes any significant differences in the flocking transition. Another benefit of simple SPP models is that they can be relatively easily coarse-grained to predict large scale features of a tissue or colony \cite{Marchetti2013}. We have shown that a generalized SPP model is more consistent with superdiffusive mouse fibroblast cell trajectories than a L\'{e}vy walk, opening the door to a hydrodynamic coarse-graining approach for this system. Until now, mouse fibroblasts have not been highlighted as a system with run-and-tumble behavior and therefore the biomolecular mechanisms responsible for this behavior are unknown. To begin to investigate this question, it would be useful to correlate tumbling events with the dynamics of sub-cellular features such as spatio-temporal distributions of focal adhesions~\cite{Dennis2011}, Golgi bodies~\cite{Deakin2014}, or actin waves~\cite{Driscoll2012}. This would help us to understand which signaling networks and components of motility machinery are involved in generating tumbling behavior or broad distributions of rotational diffusion. Furthermore, it might be useful to study such behavior on structured or controllable substrates \updated{\cite{Riveline}}, to tease apart the influence of environment vs. internal circuitry on controlling these timescales. \begin{acknowledgments} We acknowledge helpful discussions with A. A. Middleton. All authors acknowledge support from NSF-BMMB-1334493. MLM and GP were supported by NSF-DMR-1352184, the Research Corporation and the Alfred P. Sloan Foundation, and GP and MB were supported by the IGERT program (NSF-DGE1068780). Funding for VZ provided by Russian Science Foundation Grant No. 16-12-10496. \end{acknowledgments}
\section{Introduction} As much as one would welcome the production of a light charged Higgs boson from top-quark decay at the LHC, as the event rate would be plentiful, it must be recognised by now that the likelihood of this being a design of Nature is becoming slimmer and slimmer. This is because extensive searches have been carried out by the ATLAS and CMS Collaborations in this mode, assuming a variety of $H^\pm$ decay channels, none of which has been fruitful. Hence, it is becoming more likely than otherwise that, if such a state indeed exists in Nature, it will be heavier than the top quark, i.e., $M_{H^\pm}>m_t$\footnote{We should note however that both the production and decay modes used in all present searches may have a strong dependence on the parameters of the model. In particular versions of the 2-Higgs Doublet Model (2HDM) \cite{Aoki:2011wd}, for example, a very large value of the parameter $\tan \beta$, the ratio of the Vacuum Expectation Values (VEVs) of the two doublets, will render useless any search involving Yukawa couplings. For these scenarios only processes involving the electromagnetic coupling of the charged Higgs would be able to settle the issue of existence of light charged Higgs bosons.}. The call for establishing an $H^\pm$ signal comes from an intriguing theoretical consideration, that the discovery of a (singly) charged Higgs boson would signal the existence of a second Higgs doublet in addition to the Standard Model (SM)-like one already established through the discovery of the $W^\pm$ and $Z$ bosons at the S$p\bar p$S \cite{Arnison:1983rp, Bagnaia:1983zx} in the eighties and of a Higgs boson itself at the LHC only five years ago \cite{Aad:2012tfa, Chatrchyan:2012xdj}. Such a spinless field can naturally be accommodated in 2HDMs, which are the standard theoretical frameworks assumed in experimental analyses. 
Indeed, in their CP-conserving versions, 2HDMs present in their spectra, after spontaneous Electro-Weak Symmetry Breaking (EWSB), five physical Higgs states: the neutral pseudoscalar ($A$), the lightest ($h$) and heaviest ($H$) neutral scalars and two charged ones ($H^\pm$). Of all 2HDM Yukawa types (see~\cite{Branco} for a review), we concentrate here on the 2HDM Type II, Flipped and Type III ones (to be defined later). This is because such Yukawa types of 2HDMs have a preference for heavy charged Higgs bosons. In the 2HDM Type II and Flipped, constraints from $b\to s\gamma$ decays put a lower limit on the $H^\pm$ mass at about 580 GeV, rather independently of $\tan\beta$~\cite{Misiak:2015xwa,Misiak:2017bgg}. In the 2HDM Type III, such constraint is relaxed, yet the combination of all available experimental data places a lower limit on $M_{H^\pm}$ at about 200 GeV or possibly even less~\cite{Arhrib:2017yby}. Hence, both such 2HDM scenarios provide parameter spaces that are suitable to benchmark experimental searches for heavy charged Higgs bosons. Such a heavy mass region is very difficult to access because of the large reducible and irreducible backgrounds associated with the main decay mode $H^+\to t\bar b$, following the dominant production channel $bg\to t H^-$ \cite{bg}. (Notice that the production rate of the latter exceeds by far that of other possible production modes, like those identified in~\cite{bq, BBK, ioekosuke, Aoki:2011wd}, thus rendering it the only accessible production channel at the CERN machine in the heavy mass region.) The analysis of the $H^+\to t\bar b$ signature has been the subject of many early phenomenological studies \cite{roger}--\cite{roy1}, their conclusion being that the LHC discovery potential might be satisfactory, so long as $\tan\beta$ is small ($\leq 1.5$) or large ($\geq30$) enough and the charged Higgs boson mass is below 600 GeV or so.
Such rather positive prospects have recently been revived by an ATLAS analysis of the full Run-I sample \cite{ATLAS}, which searched precisely for the aforementioned $H^\pm$ production and decay modes, by exploring the mass interval from 300 to 600 GeV. In fact, an excess with respect to the SM predictions was observed for $M_{H^\pm}$ hypotheses in the heavy mass region. While CMS does not confirm such an excess \cite{CMS}, the increased sensitivity that the two experiments are accruing with current Run-II data calls for a renewed interest in the search for such elusive Higgs states. In this spirit, and recognising that the $H^+\to t\bar b$ decay channel eventually produces a $W^+b\bar b$ signature, Ref.~\cite{Uppsala} attempted to extend the reach afforded by this channel by exploiting the companion signature $H^+\to h_{\rm SM}W^+$ $\to$ $b\bar b W^+$, where $h_{\rm SM}$ is the SM-like Higgs boson discovered at CERN in 2012 (which is either the $h$ or $H$ state of 2HDMs). The knowledge of its mass now provides in fact an additional handle in the kinematic analysis when reconstructing a Breit-Wigner resonance in the $h_{\rm SM}\to b\bar b$ decay channel, thereby significantly improving the signal-to-background ratio afforded by pre-Higgs-discovery analyses \cite{whroy,me}. Such a study found that significant portions of the parameter spaces of several 2HDMs are testable at Run-II. Spurred by the aforementioned experimental results and building upon Ref.~\cite{Uppsala}, some of us studied in Ref.~\cite{previous} all intermediate decay channels of a heavy $H^\pm$ state also yielding a $W^\pm b\bar b$ signature, i.e., $H^+\to t\bar b$, $hW^\pm, HW^\pm$ and $AW^\pm$, starting from the production mode $bg\to tH^-$ (+ c.c.) (see also \cite{Akeroyd:2016ymd}). 
In doing so, we also took into account interference effects between these four channels, in the calculation of the total $H^\pm$ width as well as of the total yield in the cumulative $W^\pm b\bar b$ final state (wherein the $W^\pm$ decays leptonically), with the aim of maximising the experimental sensitivity of ATLAS and CMS. The outcome of this analysis was that somewhat more inclusive search strategies (historically geared towards extracting the prevalent $H^+\to t\bar b$ signature) ought to be deployed, which also capture $H^+\to W^+$~Higgs $\to W^+b\bar b$ channels. The exercise was performed specifically for a 2HDM Type II, but results therein can easily be extrapolated to other Yukawa types. In~\cite{previous}, only interferences between the four 2HDM channels yielding $H^+\to W^+b\bar b$ decays were taken into account though, i.e., those between the different signal modes. While clearly all of these decay rates cannot be large at the same time, the important role of interferences amongst these decay modes was clearly established. However, in that analysis, the role of interference effects between any of these signals and the irreducible background was not discussed, as illustrative examples of the $H^\pm$ production and decay phenomenology were chosen so as to nullify their impact. Unfortunately, this condition can only be realised in specific regions of the 2HDM parameter space considered, whatever the Yukawa type, not everywhere. It is the purpose of this paper to address this issue, i.e., to assess the impact of interference effects between signal and irreducible background in the $H^+\to W^+ b\bar b$ channel on current phenomenological approaches to extract the latter. We will show that such effects are indeed very large for heavy $H^\pm$ masses over certain regions of the 2HDM parameter space considered, both at the inclusive and exclusive level, i.e., before and after a selection is enforced, respectively.
We will give some quantitative examples of this for the case of the specific $H^+\to W^+A\to W^+ b\bar b$ signal mode in three different Yukawa types of 2HDM (namely Type II, Flipped and Type III), for several $M_{H^\pm}$ choices. The plan of this paper is as follows. In the next section we introduce the 2HDM types considered; in the following one we define their available parameter spaces based on current experimental and theoretical constraints. Then we proceed to describe the relevant diagrams entering both signal and (irreducible) background, as well as illustrate how we computed these. Sect. V is our numerical signal-to-background analysis. Finally, we draw our conclusions based on the results obtained in the last section of the paper. \section{Theoretical framework of 2HDMs} \label{sec:formalism} In this section we define the scalar potential and the Yukawa sector of the 2HDM Type II, Flipped and Type III. The most general scalar potential which is $SU(2)_L\otimes U(1)_Y$ invariant is given by \cite{Gunion:2002zf,Branco} \begin{eqnarray} V(\Phi_1,\Phi_2) &=& m^2_1 \Phi^{\dagger}_1\Phi_1+m^2_2 \Phi^{\dagger}_2\Phi_2 -(m^2_{12} \Phi^{\dagger}_1\Phi_2+{\rm h.c}) +\frac{1}{2} \lambda_1 (\Phi^{\dagger}_1\Phi_1)^2 \nonumber \\ &+& \frac{1}{2} \lambda_2 (\Phi^{\dagger}_2\Phi_2)^2 + \lambda_3 (\Phi^{\dagger}_1\Phi_1)(\Phi^{\dagger}_2\Phi_2) + \lambda_4 (\Phi^{\dagger}_1\Phi_2)(\Phi^{\dagger}_1\Phi_2) \nonumber \\ &+& \left[\frac{\lambda_5}{2}(\Phi^{\dagger}_1\Phi_2)^2 + \left(\lambda_6 \Phi^\dagger_1 \Phi_1 + \lambda_7 \Phi^\dagger_2 \Phi_2 \right) \Phi^\dagger_1 \Phi_2+{\rm h.c.} \right]. \label{higgspot} \end{eqnarray} The scalar doublets $\Phi_i$ ($i=1,2$) can be parametrised as \begin{align} \Phi_i(x) = \begin{pmatrix} \phi_i^+(x) \\ \frac{1}{\sqrt{2}}\left[v_i+\rho_i(x)+i \eta_i(x)\right] \end{pmatrix}, \end{align} with $v_{1,2}\geq 0$ being the VEVs satisfying $v=\sqrt{v_1^2+v_2^2}$, with $v=246.22$~GeV~\cite{Olive:2016xmw}.
Hermiticity of the potential forces $\lambda_{1,2,3,4}$ to be real while $\lambda_{5,6,7}$ and $m^2_{12}$ can be complex. In this work we choose to work in a CP-conserving potential where both VEVs are real and $\lambda_{5,6,7}$ and $m^2_{12}$ are also real. After EWSB, three of the eight degrees of freedom in 2HDMs are the Goldstone bosons ($G^\pm$, $G^0$) and the remaining five degrees of freedom become the aforementioned physical Higgs bosons. After using the minimisation conditions for the potential together with the $W^\pm$ boson mass requirement, we end up with nine independent parameters which will be taken as: \begin{equation} \{ m_{h}\,, m_{H}\,, m_{A}\,, m_{H^\pm}\,, \alpha\,, \beta\,, m^2_{12}, \lambda_6, \lambda_7 \}\,, \label{parameters} \end{equation} where $\tan\beta \equiv v_2/v_1$ and $\beta$ is also the angle that diagonalises the mass matrices of both the CP-odd and charged Higgs sector while the angle $\alpha$ does so in the CP-even Higgs sector. The most commonly used version of a CP-conserving 2HDM is the one where the terms proportional to $\lambda_6$ and $ \lambda_7 $ are absent. This can be achieved by imposing a discrete $Z_2$ symmetry on the model that usually takes the form $\Phi_i \to (-1)^{i+1} \Phi_i \quad i=1,2$. Such a symmetry would also require $m^2_{12} = 0$, unless we allow a soft violation of this discrete symmetry by the dimension two term $m^2_{12}$. When this $Z_2$ symmetry is extended to the Yukawa sector we end up with four possibilities regarding the Higgs bosons couplings to the fermions. The two $Z_2$ symmetric models we will use in this work are the Type II model - where the symmetry is extended in such a way that only $\Phi_2$ couples to up-type quarks while only $\Phi_1$ couples to down-type quarks and leptons - and the Flipped model - where $\Phi_2$ couples to up-type quarks and leptons while $\Phi_1$ couples to the down-type quarks.
Besides the Type II and Flipped scenarios we will also study a version of the more general case of Type III, to be discussed below, where neither the potential nor the Yukawa Lagrangian is $Z_2$ symmetric. Therefore, for this particular case, $\lambda_6 \neq 0$ and $\lambda_7 \neq 0$. Still, in this work we will consider the limit $\lambda_6 \approx \lambda_7 \approx 0$. The reason is basically that of simplicity and it is justified by the fact that: a) the study does not depend on those parameters as there are no Higgs self-couplings present in our analysis; b) it is a tree-level study and $\lambda_6 \approx \lambda_7 \approx 0$ is a tree-level condition; c) the only possible effect on our study would be to enlarge the allowed values of the parameter ranges which would not change our conclusions. In the most general version of the 2HDM, the Yukawa sector is built such that both Higgs doublets couple to quarks and leptons. The model is known as 2HDM Type III~\cite{Cheng:1987rs,Atwood:1996vj} and the Yukawa Lagrangian can be written as \begin{align} - \mathcal L_Y= \bar Q_L( Y_1^d\Phi_1+Y_2^d\Phi_2) d_R +\bar Q_L({Y}^u_1 \tilde\Phi_1+{Y}^u_2 \tilde\Phi_2)u_R + \bar L_L( Y_1^l\Phi_1+Y_2^l\Phi_2) l_R +\text{h.c.}, \end{align} where $Q^T_L = (u_L, d_L)$ and $L^T_L=(\nu_L, l_L)$ are the left-handed quark doublet and lepton doublet, respectively, $Y^f_k$ ($k=1,2$ and $f= u,d,l$) denote the $3\times 3$ Yukawa matrices and $\tilde\Phi_k = i\sigma_2 \Phi^*_k$, $k=1,2$. Since the mass matrices of the quarks and leptons are a linear combination of $Y_{1}^f$ and $Y_2^f$, $Y_{1,2}^{d,l}$ and $Y_{1,2}^u$ cannot be diagonalised simultaneously in general\footnote{Since we are interested in the couplings of a charged Higgs boson to quarks we just consider that the lepton flavour violating couplings are small enough not to show any effect in the measured processes involving leptons.}.
Therefore, neutral Higgs Yukawa couplings with flavour violation appear at tree-level and lead to a tree-level contribution to $\Delta M_{K, B, D}$ as well as to $B_{d,s} \to \mu^+ \mu^-$ mediated by neutral Higgs exchange. This is an important distinction with respect to $Z_2$ symmetric models and can have important repercussions for many different physical quantities. Note that also the charged Higgs coupling to a pair of fermions is modified, which will in turn induce changes in the contribution of the charged Higgs loop in $b \to s \gamma$ at the one-loop level. In order to get naturally small Flavour Changing Neutral Currents (FCNCs), we will use the Cheng-Sher ansatz by taking $Y_k^{i,j} \propto \sqrt{m_i m_j}/v$ \cite{Cheng:1987rs,Atwood:1996vj}. After EWSB, the Yukawa Lagrangian can be expressed in the mass-eigenstate basis as \cite{GomezBock:2005hc,Arhrib:2017yby}: \begin{eqnarray} {\mathcal L}_Y &=& -\sum_{f=u,d,\ell} \frac{m_f}{v} \left(\xi_h^f \bar f fh + \xi_H^f \bar f fH - i \xi_A^f \bar f \gamma_5 f A \right) \nonumber \\ && - \Big(\frac{\sqrt 2 V_{ud}}{v} \bar u \left (m_u \xi_A^u P_L + m_d\xi_A^d P_R \right )d H^+ +\text{h.c.}\Big),\label{Eq:Yukawa} \end{eqnarray} where the couplings $\xi_\Phi^f$ are given in table~\ref{Tab:MixFactor} for Type II, Flipped and Type III. We stress that the parameters $\eta_{ij}^f$ are related to the Yukawa couplings through the relations: $\eta_{ij}^u=U_L^u Y_1^u U_R^{u\dagger}/m_j$ and $\eta_{ij}^d=U_L^d Y_2^d U_R^{d\dagger}/m_j$, where $U_{L,R}^f$ are unitary matrices that diagonalise the fermions mass matrices. Using the Cheng-Sher ansatz, we assume that $\eta_{ij}^f= \sqrt{m_i/m_j} \chi_{ij}^f/v$ where $\chi_{ij}^f$ is a free parameter that will be taken in the range $[-1,1]$.
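As a numerical illustration of the charged-Higgs term in Eq.~(\ref{Eq:Yukawa}), the sketch below evaluates the left- and right-handed $H^+ t\bar b$ vertex factors in the Type II model, where $\xi_A^u = 1/\tan\beta$ and $\xi_A^d = \tan\beta$. The quark mass values are illustrative inputs of our own choosing, not parameters fixed by the text.

```python
import math

V = 246.22  # EW VEV in GeV, as quoted in the text

def charged_higgs_couplings(tan_beta, m_u=173.0, m_d=4.18, V_ud=1.0):
    """Type II H+ t b vertex factors from the charged-Higgs Yukawa term:
    g_L multiplies P_L (up-type mass), g_R multiplies P_R (down-type mass).
    Default m_u, m_d are illustrative top/bottom masses in GeV."""
    xi_A_u = 1.0 / tan_beta          # Type II CP-odd coupling, from the table
    xi_A_d = tan_beta
    pref = math.sqrt(2.0) * V_ud / V
    return pref * m_u * xi_A_u, pref * m_d * xi_A_d

# the P_L piece dominates at small tan(beta), the P_R piece at large tan(beta)
gL_low, gR_low = charged_higgs_couplings(1.0)
gL_high, gR_high = charged_higgs_couplings(40.0)
```

This makes explicit why the $H^+\to t\bar b$ coupling, and hence the search sensitivity, is largest at the small and large ends of the $\tan\beta$ range, with a minimum at intermediate values.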
\begin{table}[!t] \small \begin{center} \renewcommand{\arraystretch}{1.5} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \ \ $\Phi$ \ \ & \multicolumn{2}{c|}{$\xi^u_{\Phi}$} & \multicolumn{2}{c|}{$\xi^d_{\Phi}$} & \multicolumn{2}{c|}{$\xi^\ell_{\Phi}$} \\ \hline & Type II & Type III & Type II & Type III & Type II & Type III \\ \hline $h$ & \ $\; \frac{c_\alpha}{s_\beta} \; $ \ & \ $\; \frac{c_\alpha}{s_\beta}\delta_{ij} - \eta_{ij}^f \frac{c_{\beta-\alpha}}{\sqrt{2}s_\beta} \; $ \ & \ $ \; -\frac{s_\alpha}{c_\beta} \; $ \ & \ $ \; -\frac{s_\alpha}{c_\beta}\delta_{ij} + \eta_{ij}^f \frac{c_{\beta-\alpha}}{\sqrt{2}c_\beta} \; $ \ & \ $ \; -\frac{s_\alpha}{c_\beta} \; $ \ & \ $ \; -\frac{s_\alpha}{c_\beta}\delta_{ij} + \eta_{ij}^f \frac{c_{\beta-\alpha}}{\sqrt{2}c_\beta} \; $ \ \\ $H$ & \ $\; \frac{s_\alpha}{s_\beta} \; $ \ & \ $\; \frac{s_\alpha}{s_\beta}\delta_{ij} + \eta_{ij}^f \frac{s_{\beta-\alpha}}{\sqrt{2}s_\beta} \; $ \ & \ $ \; \frac{c_\alpha}{c_\beta} \; $ \ & \ $ \; \frac{c_\alpha}{c_\beta}\delta_{ij} - \eta_{ij}^f \frac{s_{\beta-\alpha}}{\sqrt{2}c_\beta} \; $ \ & \ $ \; \frac{c_\alpha}{c_\beta} \; $ \ & \ $ \; \frac{c_\alpha}{c_\beta}\delta_{ij} - \eta_{ij}^f \frac{s_{\beta-\alpha}}{\sqrt{2}c_\beta} \; $ \ \\ $A$ & \ $\; \frac{1}{t_\beta} \; $ \ & \ $\; \frac{1}{t_\beta}\delta_{ij} - \eta_{ij}^f \frac{1}{\sqrt{2}s_\beta} \; $ \ & \ $ \; t_\beta \; $ \ & \ $ \; t_\beta \delta_{ij} - \eta_{ij}^f \frac{1}{\sqrt{2}c_\beta} \; $ \ & \ $ \; t_\beta \; $ \ & \ $ \; t_\beta \delta_{ij} - \eta_{ij}^f \frac{1}{\sqrt{2}c_\beta} \; $ \ \\ \hline \end{tabular} \end{center} \caption{Neutral Higgs Yukawa couplings in Type II and Type III relative to the SM Higgs Yukawa couplings with $\eta_{ij}^f = \sqrt{m_i/m_j}\chi_{ij}^f/v $. 
The Yukawa couplings for the Flipped model are easily obtained from the Type II ones with the replacements: $\xi^{u, d, \ell}_{\Phi}$ (Flipped) = $\xi^{u, d, u}_{\Phi}$ (Type II).} \label{Tab:MixFactor} \end{table} As can be seen from table~\ref{Tab:MixFactor}, if the $\chi_{ij}^f$'s are of ${\cal{O}}(1)$, the new effects are dominated by heavy fermions and are comparable with those in the 2HDM Type II and Flipped models. The effect of the $\chi_{ij}^f$'s can significantly modify the limit on the charged Higgs boson mass coming from $b\to s\gamma$. As recently discussed in~\cite{Misiak:2017bgg}, the charged Higgs boson is constrained to be heavier than about 580 GeV for any value of $\tan\beta$ in both the Type II and Flipped models. As shown in~\cite{Arhrib:2017yby}, though, this bound can be weakened to about 200 GeV by judiciously tuning the $\chi_{ij}^f$'s together with the other 2HDM Type III parameters. The couplings of $h$ and $H$ to the gauge bosons $V=W,Z$ are proportional to $\sin(\beta-\alpha)$ and $\cos(\beta-\alpha)$, respectively. Since these are gauge couplings, they are the same for all Yukawa types. As we are considering the scenario where the lightest neutral Higgs state is the 125 GeV scalar, the SM-like Higgs boson $h$ is recovered when $\cos(\beta-\alpha)\approx 0$. For the Type II and Flipped models this is also the limit where the Yukawa couplings of the discovered Higgs boson become SM-like. The limit $\cos(\beta-\alpha)\approx 0$ seems to be favoured by LHC data, except for the possibility of a wrong sign limit \cite{Ferreira:2014naa, Ferreira:2014dya}, where the couplings to down-type quarks can have a sign relative to the gauge boson couplings opposite to that of the SM. Our benchmarks will focus on the SM-like limit, where indeed $\cos(\beta-\alpha)\approx 0$ and consequently the effect of the $\chi_{ij}^f$'s in the $hf\bar f$ couplings is suppressed by the $\cos(\beta-\alpha)$ factor. 
\section{Theoretical and experimental constraints} In order to perform a systematic scan on the 2HDM Type II, Flipped and Type III versions, we use the following theoretical and experimental constraints. \begin{itemize} \item \textbf{\underline{Vacuum stability}}: To ensure that the scalar potential is bounded from below, the quartic couplings should satisfy the relations~\cite{Deshpande:1977rw} \begin{equation} \lambda_{1,2}>0,\qquad \lambda_3>-(\lambda_1 \lambda_2)^{1/2},\qquad \mathrm{and} \qquad \lambda_3+\lambda_4- \vert \lambda_5 \vert >-(\lambda_1 \lambda_2)^{1/2}. \end{equation} We impose that the potential has a minimum that is compatible with EWSB. If this minimum is CP-conserving, any other possible charged or CP-violating stationary point will be a saddle point above the minimum~\cite{Ferreira:2004yd}. However, there is still the possibility of two coexisting CP-conserving minima. In order to ensure that the minimum compatible with EWSB is the global one, one can impose the simple condition~\cite{Barroso:2013awa}: \begin{equation} m_{12}^2 \left(m_{11}^2-m_{22}^2 \sqrt{\lambda_1/\lambda_2} \right) \left( \tan \beta - \sqrt[4]{\lambda_1/\lambda_2}\right) >0. \end{equation} Writing the minimum conditions as \begin{align} m_{11}^2+\dfrac{\lambda_1 v_1^2}{2}+\dfrac{\lambda_3 v_2^2}{2} &= \frac{v_2}{v_1} \left[ m_{12}^2 - (\lambda_4+\lambda_5)\dfrac{v_1 v_2}{2}\right],\\ m_{22}^2+\dfrac{\lambda_2 v_2^2}{2}+\dfrac{\lambda_3 v_1^2}{2} &= \frac{v_1}{v_2} \left[ m_{12}^2 - (\lambda_4+\lambda_5)\dfrac{v_1 v_2}{2}\right], \end{align} allows us to express $m_{11}^2$ and $m_{22}^2$ in terms of the soft $Z_2$ breaking term $m_{12}^2$ and the quartic couplings $\lambda_{1-5}$. \item \textbf{\underline{Perturbative unitarity}}: Another important theoretical constraint on the scalar sector of 2HDMs stems from the perturbative unitarity requirement on the $S$-wave component of the various scalar scattering amplitudes. 
That requirement implies a set of constraints that have to be fulfilled, given by~\cite{Kanemura:1993hm} \begin{equation} |a_\pm|, |b_\pm|, |c_\pm|, |f_\pm|, |e_{1,2}|, |f_1|, |p_1| < 8 \pi, \end{equation} where \begin{align} \begin{split} a_\pm &= \dfrac{3}{2}(\lambda_1+\lambda_2)\pm \sqrt{\dfrac{9}{4}(\lambda_1-\lambda_2)^2+(2\lambda_3+\lambda_4)^2},\\ b_\pm &= \dfrac{1}{2}(\lambda_1+\lambda_2)\pm \dfrac{1}{2} \sqrt{(\lambda_1-\lambda_2)^2+4\lambda_4^2},\\ c_\pm &= \dfrac{1}{2}(\lambda_1+\lambda_2)\pm \dfrac{1}{2} \sqrt{(\lambda_1-\lambda_2)^2+4\lambda_5^2},\\ e_1 &= \lambda_3 + 2 \lambda_4 -3\lambda_5,\hspace*{3cm} e_2 = \lambda_3-\lambda_5,\\ f_+ &= \lambda_3+2 \lambda_4+3\lambda_5, \hspace*{2.9cm} f_- =\lambda_3+\lambda_5,\\ f_1 &= \lambda_3+\lambda_4, \hspace*{4.3cm}p_1 = \lambda_3-\lambda_4. \end{split} \end{align} \item \textbf{\underline{EW Precision Tests}}: The additional neutral and charged scalars contribute to the gauge boson vacuum polarisations through their couplings to gauge bosons. As a result, the updated EW precision data provide important constraints on new physics models. In particular, the oblique parameters $S$, $T$ and $U$ constrain the mass splittings between the heavy states $m_H$, $m_{H^\pm}$ and $m_A$ in the scenario in which $h$ is identified with the SM-like Higgs state. The general expressions for the parameters $S$, $T$ and $U$ in 2HDMs can be found in~\cite{Barbieri:2006bg}. To derive constraints on the scalar spectrum we consider the following updated values of $S$, $T$ and $U$: \begin{equation} \Delta S = 0.05\pm 0.11,\quad \quad \Delta T = 0.09\pm 0.13, \quad \quad \Delta U = 0.01\pm 0.11, \end{equation} and use the corresponding covariance matrix given in \cite{Baak:2014ora}, with an $S$--$T$ correlation factor of $+0.91$. The $\chi^2$ function is then expressed as \begin{equation} \chi^2_{ST}= \sum_{i,j}(X_i - X_i^{\rm SM})(\sigma^2)_{ij}^{-1}(X_j - X_j^{\rm SM}). \end{equation} 
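For concreteness, the vacuum stability and perturbative unitarity conditions above translate directly into code; the following Python sketch checks them for a given set $(\lambda_1,\dots,\lambda_5)$ (the numerical values used below are illustrative, not scan points).

```python
import math
from itertools import chain

def bounded_below(l1, l2, l3, l4, l5):
    """Boundedness-from-below (vacuum stability) conditions on the quartics."""
    return (l1 > 0 and l2 > 0
            and l3 > -math.sqrt(l1 * l2)
            and l3 + l4 - abs(l5) > -math.sqrt(l1 * l2))

def unitarity_ok(l1, l2, l3, l4, l5, limit=8 * math.pi):
    """Tree-level S-wave unitarity: all eigenvalue combinations below 8*pi."""
    s = math.sqrt
    a_pm = [1.5 * (l1 + l2) + sg * s(2.25 * (l1 - l2)**2 + (2*l3 + l4)**2) for sg in (1, -1)]
    b_pm = [0.5 * (l1 + l2) + sg * 0.5 * s((l1 - l2)**2 + 4 * l4**2) for sg in (1, -1)]
    c_pm = [0.5 * (l1 + l2) + sg * 0.5 * s((l1 - l2)**2 + 4 * l5**2) for sg in (1, -1)]
    e = [l3 + 2*l4 - 3*l5, l3 - l5]
    f = [l3 + 2*l4 + 3*l5, l3 + l5, l3 + l4]
    p = [l3 - l4]
    return all(abs(x) < limit for x in chain(a_pm, b_pm, c_pm, e, f, p))
```

In the scan, a parameter point is kept only if both checks pass.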
\item \textbf{\underline{LHC constraints}}: Moreover, we take into account the new experimental data at 13 TeV through the observed cross section times Branching Ratio (BR) divided by the SM prediction, i.e., the so-called `signal strengths' of the Higgs boson, defined by \begin{eqnarray} \mu^f_{i}&=&\frac{\sigma(i \rightarrow h)^{\rm 2HDM} {\rm BR}(h\rightarrow f)^{\rm 2HDM}}{\sigma(i\rightarrow h)^{\rm SM} {\rm BR}(h\rightarrow f)^{\rm SM}}, \quad\quad\quad i=1,2, \end{eqnarray} where $\sigma(i \rightarrow h)$ denotes the Higgs boson production cross section through channel $i$ and ${\rm BR}(h\rightarrow f)$ the BR for the Higgs decay $h\rightarrow f$. Since several Higgs production channels are available at the LHC, they are grouped into $\mu^f_{1} =\mu^f_{{\rm ggF}+tth}$ and $\mu^f_{2} = \mu^f_{{\rm{VBF}}+Vh}$, containing gluon-gluon Fusion (ggF) plus associated Higgs production ($t\bar{t}h$) as well as Vector Boson Fusion (VBF) plus Higgs-strahlung ($Vh$, with $V=W^\pm,Z$). The values of the observed signal strengths are shown with their correlation factors in table~\ref{tab:Higgs data}. According to LHC results, which appear to be in good agreement with the SM predictions~\cite{Corbett:2015ksa}, the data seem to favour the alignment limit, with $\sin(\beta-\alpha)\approx 1$ when $h$ is the SM-like state or $\cos(\beta-\alpha)\approx 1$ when $H$ is. As stated, in our study we identify the lightest CP-even state $h$ with the SM-like scalar observed at the LHC, with mass $m_h=125.09(24)$~GeV~\cite{Olive:2016xmw}, which, because we have discarded the possibility of being in the wrong sign limit, in turn implies that $\sin(\beta-\alpha)\approx 1$. 
\end{itemize} \begin{table}[hpbt] \begin{ruledtabular} \begin{tabular}{ccccc} $f$ & $\widehat{\mu}^{f}_{\rm{1}}$ & $\widehat{\mu}^{f}_{\rm{2}}$ & $\pm\,\,1\widehat{\sigma}_{\rm{1}}$ & $\pm\,\,1\widehat{\sigma}_{\rm{2}}$ \\ \hline $\gamma\gamma$ & 1.09 & 1.14 & 0.23 & 0.25 \\ \hline $ZZ^*$ & 1.31 & 1.25 & 0.24 & 0.28 \\ \hline $WW^*$ & 1.06 & 1.27& 0.18 & 0.21 \\ \hline $\tau^+\tau^-$ & 1.05 & 1.24 & 0.35 & 0.40 \\ \hline $b\bar{b}$ & 3.9 & 3.7 & 2.8 & 2.4 \\ \hline \end{tabular} \caption{Combined best-fit signal strengths $\widehat{\mu}_{\rm{1}}$ and $\widehat{\mu}_{\rm{2}}$ for the corresponding Higgs decay modes, from~\cite{run2}.} \label{tab:Higgs data} \end{ruledtabular} \end{table} \begin{itemize} \item \textbf{\underline{Flavour physics constraints}}: We take into account all the relevant flavour constraints which, as previously discussed, force the charged Higgs mass to be above about 580 GeV from $b \to s \gamma$ at the $2\sigma$ level in Type II and Flipped~\cite{Misiak:2017bgg}. However, we relax this condition to the $3\sigma$ level in order to obtain $H^\pm$ signal rates more easily within the reach of the next run of the LHC. All other flavour constraints were discussed recently for Type III in~\cite{Arhrib:2017yby} and are also taken into account here. We again note that the tuning of the $\chi_{ij}^f$'s together with the other 2HDM Type III parameters allows us to relax the bound on the charged Higgs boson mass significantly in this scenario. \end{itemize} \section{Numerical analysis} The above mentioned constraints are then imposed onto a set of randomly generated points in the ranges: \begin{eqnarray} && 200 \,{\rm GeV} \le m_{H^\pm} \le 1 \,{\rm TeV}, \quad 126 \,{\rm GeV} \le m_{H} \le 1 \,{\rm TeV}, \quad 100 \,{\rm GeV} \le m_{A} \le 1 \,{\rm TeV} \,, \nonumber \\ && -1 \le \sin\alpha \le 1, \quad 2 \le \tan\beta \le 50, \quad -(1000\, {\rm GeV})^2 \le m^2_{12} \le (1000\, {\rm GeV})^2 \,. 
\label{numbers} \end{eqnarray} We note again that we take the $\chi_{ij}^f$'s in the range $[-1,1]$ and that all constraints are imposed at the 2$\sigma$ level, except the ones from the $b\to s\gamma$ measurement, for which we allow compatibility at the 3$\sigma$ level; in Type II and Flipped this means a reduction of the bound from 580 GeV to about 440 GeV. For Type III the scan starts at 200 GeV. \begin{figure}[h!] \includegraphics[width=0.47\textwidth]{Fig_TY2_2A.pdf} \includegraphics[width=0.47\textwidth]{Fig_TY2_2B.pdf} \includegraphics[width=0.47\textwidth]{Fig_TY2_2A_1.pdf} \includegraphics[width=0.47\textwidth]{Fig_TY2_2A_2.pdf} \caption{Upper panels: Total decay widths (in GeV) of the CP-odd $A$ and charged Higgs $H^\pm$ bosons in the plane $(m_A,m_{H^\pm})$. Lower panels: BR($A\to b\bar{b}$) (left) and ${\rm BR}(H^\pm \to W^\pm A)$ (right) in the plane $(m_A,m_{H^\pm})$. All panels are for Type II.} \label{figure:ty2a} \end{figure} \begin{figure}[h!] \includegraphics[width=0.47\textwidth]{Fig_TY3_2A.pdf} \includegraphics[width=0.47\textwidth]{Fig_TY3_2B.pdf} \includegraphics[width=0.47\textwidth]{Fig_TY3_2A_1.pdf} \includegraphics[width=0.47\textwidth]{Fig_TY3_2A_2.pdf} \caption{Same as in figure~\ref{figure:ty2a} but for Type III.} \label{figure:ty3a} \end{figure} In figure~\ref{figure:ty2a} (upper panels) we show the total widths of the CP-odd Higgs $A$ and the charged Higgs boson $H^\pm$ in the plane $(m_A,m_{H^\pm})$. It is clear that the two widths can be simultaneously large. In the case of the charged Higgs boson, the total width is amplified by the opening of the bosonic decay $H^\pm \to W^\pm A$ for $m_A\leq 350$ GeV, while for the CP-odd Higgs the total width gets enhanced after the opening of $A\to t\bar{t}$. In the lower panels of figure~\ref{figure:ty2a} we present the BRs of $A\to b\bar{b}$ (left) and $H^\pm \to W^\pm A$ (right). One can see that ${\rm BR}(A\to b\bar{b})$ can be sizeable, above about 70\% below the $t\bar{t}$ threshold. 
In the case of the charged Higgs boson, since $g_{H^\pm W^\mp A}$ is a gauge coupling with no suppression factor, we expect ${\rm BR}(H^\pm \to W^\pm A)$ to be large when kinematically allowed and able to compete with the $H^+ \to \bar{t}b$ and $H^\pm \to W^\pm H$ decays. The lighter the pseudoscalar, the larger ${\rm BR}(H^\pm \to W^\pm A)$ can be, easily reaching values above 50\%, as can be seen from the figure. Therefore, since we need a charged Higgs boson with a large width as well as large ${\rm BR}(H^\pm \to W^\pm A)$ and ${\rm BR}(A\to b\bar{b})$, a compromise is required: a heavy charged Higgs boson together with a much lighter pseudoscalar. Still, the charged Higgs boson mass should not be too large, so that the signal event rate remains large enough to be seen at the LHC Run-II. We note that the plots for the Flipped model would be very similar and we therefore refrain from presenting them here. In figure~\ref{figure:ty3a} we show the total widths of the CP-odd $A$ and of the charged Higgs boson $H^\pm$ together with ${\rm BR}(A\to b\bar{b})$ and ${\rm BR}(H^\pm \to W^\pm A)$ in Type III. The picture is rather similar to the one for Type II, except that the lower bound on the charged Higgs boson mass in Type III is relaxed down to 200 GeV. The conclusions regarding the possible decays are the same as for Type II and Flipped. 
\section{Monte Carlo Analysis} \begin{figure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag_sig_1.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag_sig_2.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag_sig_3.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag_sig_4.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag_sig_5.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag_sig_6.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag_sig_7.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag_sig_8.pdf} \caption{} \end{subfigure} \caption{\label{figs:sig} Feynman diagrams in a 2HDM contributing to resonant charged Higgs production and corresponding decays leading to the signal $pp\to tW^- b \bar b$ with h- $\equiv H^-$, h1 $\equiv h$, h2 $\equiv H$ and h3 $\equiv A$ (as appropriate).} \end{figure} \begin{figure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag1.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag2.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag7.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag8.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag11.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag14.pdf} \caption{} \end{subfigure} 
\begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag15.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag19.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag21.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag22.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag23.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag28.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag30.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag12.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag24.pdf} \caption{} \end{subfigure} \caption{\label{figs:bckg_a} Non-resonant Feynman diagrams contributing to the background for the process $pp\to tW^- b \bar b$ with h- $\equiv H^-$, h1 $\equiv h$, h2 $\equiv H$, h3 $\equiv A$ and a $\equiv \gamma$ (as appropriate).} \end{figure} \begin{figure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag3.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag4.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag5.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag6.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag9.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag10.pdf} \caption{} \end{subfigure} 
\begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag13.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag16.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag17.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag18.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag20.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag25.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag26.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag27.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag29.pdf} \caption{} \end{subfigure} \caption{\label{figs:bckg_b} Non-resonant Feynman diagrams contributing to the background for the process $pp\to tW^- b \bar b$ with h- $\equiv H^-$, h1 $\equiv h$, h2 $\equiv H$, h3 $\equiv A$ and a$\equiv \gamma$ (as appropriate).} \end{figure} \begin{figure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag31.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag32.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag35.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag36.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag37.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag33.pdf} \caption{} 
\end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.325]{diag34.pdf} \caption{} \end{subfigure} \caption{\label{figs:bckg_c} Non-resonant Feynman diagrams contributing to the background for the process $pp\to tW^- b \bar b$ with h- $\equiv H^-$, h1 $\equiv h$, h2 $\equiv H$, h3 $\equiv A$ and a $\equiv \gamma$ (as appropriate).} \end{figure} We study the process $pp\to tW^- b\bar b$, wherein the interference effects between the charged Higgs resonant diagrams (shown in figure~\ref{figs:sig}) and the non-resonant background graphs (presented in figures~\ref{figs:bckg_a}, \ref{figs:bckg_b} and \ref{figs:bckg_c}) are found to be substantial. The non-resonant diagrams include all possible contributions coming from the SM background as well as from 2HDM contributions. In total, there are 394 background diagrams contributing to the process $pp\to tW^- b \bar b$. SM top-pair production in association with a $b$ quark is the dominant component of the latter. We have calculated the interference effects between the resonant diagrams (from figure~\ref{figs:sig}) and the diagrams which come from SM QCD interactions at the $\alpha_{\rm S}^3 \alpha_{\rm EW}$ order (Feynman diagrams with gluon contributions in figure~\ref{figs:bckg_a} and diagrams 1--5 in figure~\ref{figs:bckg_c}) as well as with the ones that come from SM EW interactions at the $\alpha_{\rm S} \alpha_{\rm EW}^3$ order. We found that, in most cases, the EW contributions produce a small but positive interference with the signal, while the QCD contributions produce a large negative one. Thus, the net result is generally an overall negative interference for the total cross section of the process $pp\to t W^- b\bar b$, and the magnitude of this interference is determined by the widths of the intermediate Higgs particles, i.e., $A$ and $H^\pm$. However, for a minority of the Benchmark Points (BPs) to be studied, the overall effect can be positive. 
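Operationally, the interference contribution is extracted by subtracting the incoherent signal and background cross sections from the full (coherent) total; a one-line Python check using the BP5 numbers quoted in table~\ref{tab:inputsx} reads:

```python
def interference(total, signal, background):
    """Interference cross section (pb): sigma_tot - sigma_sig - sigma_bkg."""
    return total - signal - background

# BP5 of the Flipped model (numbers from table tab:inputsx):
# 10.72 - 0.742 - 10.43 gives about -0.452 pb, i.e. destructive interference
# whose magnitude is roughly 60% of the signal cross section.
bp5_int = interference(10.72, 0.742, 10.43)
```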
As far as the signal is concerned, we focus on the dominant production mode of a heavy charged Higgs boson, i.e., $pp\to tH^-$, followed by its decay via $H^-\to W^-h\to W^- b\bar b$, $H^-\to W^-H\to W^- b\bar b$, $H^-\to W^-A \to W^- b\bar b$ and $H^-\to \bar t b\to W^- b\bar b$. All such decays lead to the same final state, facilitating interference effects amongst the different signal amplitudes. However, as previously shown in~\cite{previous}, interference effects amongst the signal contributions are generally negligible. Moreover, the BPs chosen in this study are such that the $H^-\to W^- A\to W^-b\bar b$ decay mode dominates over all other charged Higgs boson decays. For the BPs of the models that we consider, we will focus on the masses of the scalars involved in the process, $A$ and $H^\pm$, the BRs of the decays $H^\pm\to W^\pm A$ and $A \to b \bar{b}$, as well as the total widths of the two scalars. As previously discussed, in order to have large interference effects between signal and background, the total width of the charged Higgs boson has to be quite large. However, a sufficiently large width of the pseudoscalar is also essential: otherwise the $A$ resonance would be extremely narrow and would not overlap with any background processes. In this analysis we first consider a 2HDM Type II/Flipped, where the pseudoscalar couplings to down-type fermions are proportional to $\tan\beta$, so that, for large values of the latter, the width of the pseudoscalar can be made significantly large. Taking into account the latest searches for $pp \to \Phi \to \tau^+ \tau^-$~\cite{Aaboud:2017sjh}, $\Phi$ being any heavy spin-0 object, very large values of $\tan \beta$ are disallowed. In table~\ref{tab:inputs1} we present our input parameters for the Type II and Flipped models for five chosen BPs. 
The five points have passed all the constraints described before. Furthermore, they are all valid for the Flipped model, since the constraints from $pp \to \Phi \to \tau^+ \tau^-$ searches are negligible in the Flipped case owing to the very small coupling to $\tau$ leptons at high $\tan \beta$. Therefore the five points are valid in the Flipped model, but only the first three are valid in Type II. The latest searches for charged Higgs bosons by ATLAS~\cite{Aad:2015typ, Aaboud:2016dig} and CMS~\cite{Khachatryan:2015qxa, CMScharged} are indeed compatible with the values of the charged Higgs mass and the corresponding values of $\tan \beta$. \begin{table}[hpbt] \begin{ruledtabular} \begin{tabular}{cccccc} & $\tan\beta$ & $\sin(\beta-\alpha)$ & $m_{H^\pm}$ (GeV) & $m_{A}$ (GeV) & $m^2_{12}$ (GeV$^{2})$ \\ \hline BP1 (II) & 10.25 & 0.98 & 509.14 & 248.27 & 52287.83 \\ \hline BP2 (II) & 16.75 & 0.99 & 545.82 & 268.41 & 33622.43 \\ \hline BP3 (II) & 18.80 & 0.99 & 457.71 & 247.22 & 16427.97 \\ \hline BP4 (F) & 37.21 & 0.99 & 469.45 & 258.03 & 9800.68 \\ \hline BP5 (F) & 44.10 & 1.00 & 519.45 & 288.32 & 10200.34 \\ \hline \end{tabular} \caption{Type II and Flipped input parameters for the BPs. } \label{tab:inputs1} \end{ruledtabular} \end{table} \begin{table}[hpbt] \begin{ruledtabular} \begin{tabular}{cccccc} & $\Gamma(A)$ & $\Gamma(H^\pm)$ & ${\rm BR}(A\to b \bar b)$ & ${\rm BR}(H^+ \to b \bar t)$ & $ {\rm BR}(H^+ \to W^+ A)$ \\ \hline BP1 (II) & 0.47 & 72.85 & 0.83 & 0.01 & 0.29 \\ \hline BP2 (II) & 1.29 & 91.97 & 0.86 & 0.02 & 0.29 \\ \hline BP3 (II) & 1.50 & 34.83 & 0.87 & 0.05 & 0.17 \\ \hline BP4 (F) & 5.45 & 50.45 & 0.99 & 0.13 & 0.16 \\ \hline BP5 (F) & 10.46 & 85.45 & 1.00 & 0.18 & 0.26 \\ \hline \end{tabular} \caption{Widths (in GeV) and BRs in Type II and Flipped for the BPs. 
} \label{tab:inputs2} \end{ruledtabular} \end{table} In table~\ref{tab:inputs2} we present the widths $\Gamma(A)$ and $\Gamma(H^\pm)$ as well as ${\rm BR}(A\to b \bar b)$, ${\rm BR}(H^+ \to b \bar t)$ and ${\rm BR}(H^+ \to W^+ A)$ for the five BPs. Note that the major difference between the models is in the ${\rm BR}(A\to b \bar b)$ column, which is always larger in the Flipped model because the decays to $\tau$ leptons become negligible there. For all other columns the differences are extremely small. This in turn means that the results are slightly better for the Flipped model. For the detailed analysis we choose BP5 of the Flipped model. The cross sections for signal, background and total (including interference) for BP5 are 0.74 pb, 10.43 pb and 10.72 pb, respectively. This results in an interference cross section of $-0.45$ pb, whose magnitude is around 60\% of the signal cross section. The results for the cross sections for all BPs of the Type II and Flipped models are presented in table~\ref{tab:inputsx}. \begin{table}[h]\centering \begin{tabular}{|c|c|c|c|c|}\hline BP & Signal (pb) & Background (pb) & Total (pb) & Interference (pb) \\\hline BP1 & 0.031 & 10.03 & 9.96 & -0.101\\\hline BP2 & 0.052 & 9.96 & 10.02 & -0.008\\\hline BP3 & 0.144 & 10.07 & 10.18 & 0.034\\\hline BP4 & 0.469 & 9.94 & 10.31 & -0.102\\\hline BP5 & 0.742 & 10.43 & 10.72 & -0.452\\\hline \end{tabular} \caption{Cross sections (in pb) for signal, background, total and interference for the BPs of Type II and Flipped.} \label{tab:inputsx} \end{table} In the case of the Type III model we choose three BPs which are in agreement with all constraints. These are shown in table~\ref{tab:inputs3} and the corresponding widths and BRs are presented in table~\ref{tab:inputs4}. 
\begin{table}[hpbt] \begin{ruledtabular} \begin{tabular}{ccccccc} & $\tan\beta$ & $\sin(\beta-\alpha)$ & $m_{H^\pm}$ (GeV) & $m_{A}$ (GeV) & $m^2_{12}$ (GeV$^{2})$ & $\chi$ \\ \hline BP1 & 15.84 & 0.99 & 480.75 & 369.89 & 27463.94 & 0.21\\ \hline BP2 & 19.41 & 0.99 & 307.23 & 225.46 & 6045.62 & -0.34 \\ \hline BP3 & 38.11 & 0.99 & 447.45 & 258.33 & 9833.68 & 0.71 \\ \hline \end{tabular} \caption{Type III input parameters for the BPs. } \label{tab:inputs3} \end{ruledtabular} \end{table} \begin{table}[hpbt] \begin{ruledtabular} \begin{tabular}{cccccc} & $\Gamma(A)$ & $\Gamma(H^\pm)$ & ${\rm BR}(A\to b \bar b)$ & ${\rm BR}(H^+ \to b \bar t)$ & $ {\rm BR}(H^+ \to W^+ A)$ \\ \hline BP1 & 2.79 & 60.72 & 0.47 & 0.02 & 0.27 \\ \hline BP2 & 1.69 & 12.78 & 0.88 & 0.05 & 0.21 \\ \hline BP3 & 6.10 & 52.77 & 0.87 & 0.10 & 0.17 \\ \hline \end{tabular} \caption{Widths in units of GeV and BRs in Type III. } \label{tab:inputs4} \end{ruledtabular} \end{table} For Type III, we choose benchmark point BP2, which has the lightest $H^\pm$ mass and the lowest mass splitting between $H^\pm$ and $A$, in order to demonstrate the interference effect over a wide range of mass spectra. For this benchmark point the cross sections for the signal, background and total are 0.978 pb, 9.95 pb and 10.92 pb, respectively. Thus, the resulting interference cross section turns out to be $-0.008$ pb. The small interference effect in this case can be attributed to the small widths of both $H^\pm$ and $A$. The results for the cross sections for all BPs of the Type III model are presented in table~\ref{tab:inputsyy}. 
\begin{table}[h]\centering \begin{tabular}{|c|c|c|c|c|}\hline BP & Signal (pb) & Background (pb) & Total (pb) & Interference (pb) \\\hline BP1 & 0.059 & 10.35 & 10.27 & -0.139\\\hline BP2 & 0.978 & 9.95 & 10.92 & -0.008\\\hline BP3 & 0.291 & 10.02 & 10.34 & 0.029\\\hline \end{tabular} \caption{Cross sections (in pb) for signal, background, total and interference for the BPs in the 2HDM Type III.} \label{tab:inputsyy} \end{table} All the numbers presented above are at parton level. Next we perform a detector level analysis and study whether these interference effects survive after all acceptance and selection cuts. For this purpose, we generate the events using {\tt MadGraph} \cite{madgraph} and then pass them on to {\tt Pythia} \cite{pythia8} for parton showering and hadronisation. After that, all events are finally passed through {\tt Delphes} \cite{delphes} for a realistic detector level analysis. Below we list the basic detector acceptance cuts. \begin{itemize} \item {\bf Acceptance cuts} \begin{enumerate} \item Events must have at least one lepton ($e$ or $\mu$) and at least 5 jets. \item Leptons must have transverse momentum $p_T>20$ GeV and rapidity $|\eta|<2.5$. \item All jets must satisfy the following $p_T$ and $\eta$ requirements: $$p_{Tj}>20~ \mbox{GeV}, |\eta_j|<2.5.$$ \item All pairs of objects must be well separated from each other, $$\Delta R_{jj,jb,bb,\ell j,\ell b}\geq 0.4~~ \mbox{where}~~\Delta R=\sqrt{(\Delta \phi)^2+(\Delta \eta)^2}.$$ \end{enumerate} \end{itemize} \subsection{Event reconstruction} In this section, we describe the procedure which we employ to reconstruct the masses of the top quark, the charged Higgs boson $H^\pm$, the pseudoscalar $A$ and the two $W^\pm$ bosons in each event. For this purpose, we make use of a method based on a $\chi^2$ template. We then discuss the efficiency of the reconstruction. 
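The $\Delta R$ separation used in the acceptance cuts above, and a minimal filter implementing those cuts, can be sketched in Python as follows (representing each object as a simple `(pt, eta, phi)` tuple is an illustrative convention of ours):

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular separation Delta R = sqrt(dphi^2 + deta^2), phi wrapped to (-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(dphi, eta1 - eta2)

def passes_acceptance(leptons, jets):
    """Apply the cuts listed above: >=1 lepton and >=5 jets with pT > 20 GeV,
    |eta| < 2.5, and all object pairs separated by Delta R >= 0.4."""
    leps = [l for l in leptons if l[0] > 20.0 and abs(l[1]) < 2.5]
    js = [j for j in jets if j[0] > 20.0 and abs(j[1]) < 2.5]
    if not leps or len(js) < 5:
        return False
    objs = leps + js
    return all(delta_r(a[1], a[2], b[1], b[2]) >= 0.4
               for i, a in enumerate(objs) for b in objs[i + 1:])
```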
Each event in the analysis is assumed to be a $tH^-$ event decaying to $W^+W^- j j j$, with one of the $W^\pm$ bosons decaying hadronically and the other leptonically. Thus, each event is required to have at least one lepton, 5 jets and missing transverse energy. The $\chi^2$ fit takes as input the four-vectors of the five leading jets, the lepton and the neutrino. The treatment of the neutrino four-vector is as follows. The transverse momentum of the neutrino is determined by balancing the initial- and final-state particle momenta in the event. The longitudinal component of the neutrino momentum is instead determined by imposing the invariant mass constraint $M_{l\nu}^2 = M_{W^\pm}^2$. Since this condition leads to a quadratic equation, there are in general two solutions for $p_\nu^z$: \begin{equation} p_{\nu}^z=\frac{1}{2p_{\ell T}^2}\left(A_W\, p_{\ell}^z \pm E_\ell \sqrt{A_W^2- 4 p_{\ell T}^2 E_{\nu T}^2}\right), \end{equation} where $A_W=M_{W^\pm}^2+2\,\vec p_{\ell T}\cdot \vec p_{\nu T}$. A separate $\chi^2$ is evaluated for each of the $p_\nu^z$ solutions and the one having the minimum $\chi^2$ value is retained to reconstruct the event. 
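A minimal Python implementation of this neutrino reconstruction is given below (our own sketch; clamping a negative discriminant to zero is a common practical choice we assume here, while in the analysis proper the minimum-$\chi^2$ solution is simply retained):

```python
import math

MW = 80.379  # W boson mass in GeV (illustrative input value)

def neutrino_pz(pl, met):
    """Solve M(l,nu)^2 = MW^2 for the neutrino longitudinal momentum.

    pl  = (E, px, py, pz) of the charged lepton,
    met = (ex, ey) missing transverse momentum components.
    Returns the (up to) two real solutions for p_nu^z.
    """
    E, px, py, pz = pl
    ex, ey = met
    ptl2 = px * px + py * py                  # lepton pT^2
    a_w = MW**2 / 2.0 + px * ex + py * ey     # A_W/2, with A_W = MW^2 + 2 p_lT.p_nuT
    disc = a_w * a_w - ptl2 * (ex * ex + ey * ey)
    disc = max(disc, 0.0)                     # assumption: clamp complex solutions
    root = E * math.sqrt(disc)
    return ((a_w * pz + root) / ptl2, (a_w * pz - root) / ptl2)
```

Both returned solutions reconstruct an $\ell\nu$ invariant mass equal to $M_W$ when the discriminant is positive.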
We write two expressions for $\chi^2$, one corresponding to a scenario where $H^\pm$ decays fully hadronically, $\chi^2_{\rm had}$, and the other where it decays semi-leptonically, $\chi^2_{\rm lep}$: \begin{equation}\label{chi1} \chi^2_{\rm had}=\frac{(M_{\ell \nu}-M_W)^2}{\Gamma_W^2}+\frac{(M_{jj}-M_W)^2}{\Gamma_W^2}+\frac{(M_{\ell \nu j}-M_{top})^2}{\Gamma_{top}^2}+\frac{(M_{jj}-M_A)^2}{\Gamma_A^2}+\frac{(M_{jjjj}-M_{H^\pm})^2}{\Gamma_{H^\pm}^2}, \end{equation} \begin{equation}\label{chi2} \chi^2_{\rm lep}=\frac{(M_{\ell \nu}-M_W)^2}{\Gamma_W^2}+\frac{(M_{jj}-M_W)^2}{\Gamma_W^2}+\frac{(M_{j j j}-M_{top})^2}{\Gamma_{top}^2}+\frac{(M_{jj}-M_A)^2}{\Gamma_A^2}+\frac{(M_{\ell \nu jj}-M_{H^\pm})^2}{\Gamma_{H^\pm}^2}, \end{equation} where in the denominators we have the decay widths of the respective particles as calculated for the BPs in the various models. \begin{figure}[h!]\centering \includegraphics[scale=0.4]{Mass_BP5.pdf} \caption{\label{Mass_BP5}Reconstructed masses of $W^\pm$, pseudoscalar $A$, top quark and charged Higgs $H^\pm$ for BP5 in the 2HDM Flipped.} \end{figure} For each event, $\chi^2$ is evaluated for each possible way of assigning the five leading jets to the reconstructed top and charged Higgs four-momenta. The number of such permutations turns out to be 15 for each of $\chi^2_{\rm had}$ and $\chi^2_{\rm lep}$. In addition, there is a twofold ambiguity in assigning the two solutions for $p_\nu^z$. Finally, there are two ways in which two of the jets can be assigned to either a $W^\pm$ boson or to the pseudoscalar. Thus, for each event, the $\chi^2$'s are evaluated for 120 different combinations and the combination with the minimum $\chi^2$ value is kept for mass reconstruction. Using the procedure described above, we now proceed to reconstruct the masses of the various particles involved in the process in order to assess the efficiency of the method.
We present the reconstructed masses of all the intermediate resonant particles in the process, {i.e.}, $W^\pm$, $A$, top and $H^\pm$, in figure~\ref{Mass_BP5} for the Flipped case (BP5) and in figure~\ref{Mass_BP2} for the Type III case (BP2). In each plot the peak is found at the corresponding particle mass, vouching for the effectiveness of our reconstruction procedure. In presenting the plots, we take events after applying all the acceptance cuts discussed above and the selection cuts mentioned in table~\ref{tab:cutflow_BP5}. \begin{figure}[h!]\centering \includegraphics[scale=0.4]{Mass_BP2.pdf} \caption{\label{Mass_BP2}Reconstructed masses of $W^\pm$, pseudoscalar $A$, top quark and charged Higgs $H^\pm$ for BP2 in the 2HDM Type III.} \end{figure} \begin{figure}[h!]\centering \includegraphics[scale=0.5]{Lepton_Vars_BP5.pdf} \includegraphics[scale=0.5]{Lepton_Vars_BP2.pdf} \caption{\label{Lep_BP5}Distributions for transverse momentum, rapidity and energy of a lepton for signal and interference for BP5 of the 2HDM Flipped (top) and BP2 of the 2HDM Type III (bottom).} \end{figure} In order to further investigate interference effects, we look at various distributions, e.g., transverse momentum $p_T$, rapidity $\eta$ and energy $E$ of the lepton, for both the signal and interference contributions. The distributions for the interference are obtained by subtracting those of the signal and background processes (separately) from the total ones. The distributions for BP5 of the 2HDM Flipped (top) and BP2 of the 2HDM Type III (bottom) are shown in figure~\ref{Lep_BP5}. We can clearly see that the shapes of all distributions for the signal alone and the interference are almost the same but with opposite signs, the latter being expected, as we found the overall interference between signal and irreducible background to be destructive for the BPs used. The similarity found between the two contributors to the total signal cross section is instead remarkable.
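At the level of total rates, this subtraction is simply $\sigma_{\rm int} = \sigma_{\rm tot} - \sigma_{\rm sig} - \sigma_{\rm bkg}$. As a minimal numerical check (our own sketch), applying it to the parton-level cross sections of table~\ref{tab:inputsyy} recovers the quoted interference values:

```python
# Interference extracted as sigma_total - sigma_signal - sigma_background,
# using the parton-level cross sections (in pb) from table tab:inputsyy.
bps = {
    "BP1": {"signal": 0.059, "background": 10.35, "total": 10.27},
    "BP2": {"signal": 0.978, "background": 9.95,  "total": 10.92},
    "BP3": {"signal": 0.291, "background": 10.02, "total": 10.34},
}

def interference(bp):
    """Interference contribution for one benchmark point, in pb."""
    return bp["total"] - bp["signal"] - bp["background"]

for name, bp in bps.items():
    print(name, round(interference(bp), 3))
```

The same per-bin subtraction applied to histograms yields the interference distributions shown in figure~\ref{Lep_BP5}.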
Notice that in figure~\ref{Lep_BP5} we have only shown the lepton distributions, though it has been verified that the distributions of all the jets involved in the process present the same behaviour. Finally, we present in table~\ref{tab:cutflow_BP5} the flow of cross section values after each cut for (Flipped) BP5 and in table~\ref{tab:cutflow_BP2} for (Type III) BP2. We observe that the ratio of the interference to the signal cross section increases with each cut for both BPs. For (Flipped) BP5, we see that the ratio rises from 60\% to just above 100\% while, for (Type III) BP2, the increment is from 0.1\% to 17\%. The reason for the smaller interference cross section for the latter with respect to the former is a smaller width for both $A$ and $H^\pm$: this well illustrates the correlation between interference effects and off-shellness of the Higgs bosons involved. \begin{table}[h!] \begin{center} {\renewcommand{\arraystretch}{1.5}} \newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \begin{tabular}{ ||C{0.85cm}C{2.5cm}C{1.75cm}|| C{1.5cm} | C{2.25cm} | C{1.5cm} |C{2.5cm} ||} \hline \hline \multicolumn{3}{||c||}{\multirow{2}{*}{Cuts}}&\multicolumn{4}{c||}{$\sigma$ [fb] } \\\cline{4-7} &&&Signal & Background & Total & Interference \\\cline{1-7} C0:& \multicolumn{2}{c||}{No Cuts} & 740 & 10430 & 10720 & -450 \\ \hline C1:& \multicolumn{2}{c||}{Only one lepton} & 115.0 & 1116.2 & 1151.2 & -80.1 \\ \hline C2:& \multicolumn{2}{c||}{At least 5 light jets} & 91.9 & 680.8 & 703.5 & -69.2 \\ \hline C3:& \multicolumn{2}{c||}{Cut on $H_T>500$ GeV} & 70.8 & 173.8 & 173.6 & -71.1 \\ \hline \hline \end{tabular} \caption{Cut flow of the cross sections for signal (BP5 in the 2HDM Flipped) and irreducible background at the 14 TeV LHC. Conjugate processes are included here. \label{tab:cutflow_BP5}} \end{center} \end{table} \begin{table}[h!]
\begin{center} {\renewcommand{\arraystretch}{1.5}} \newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \begin{tabular}{ ||C{0.85cm}C{2.5cm}C{1.75cm}|| C{1.5cm} | C{2.25cm} | C{1.5cm} |C{2.5cm} ||} \hline \hline \multicolumn{3}{||c||}{\multirow{2}{*}{Cuts}}&\multicolumn{4}{c||}{$\sigma$ [fb] } \\\cline{4-7} &&&Signal & Background & Total & Interference \\\cline{1-7} C0:& \multicolumn{2}{c||}{No Cuts} & 978 & 9950 & 10920 & -8 \\ \hline C1:& \multicolumn{2}{c||}{Only one lepton} & 243.6 & 2040.8 & 1151.2 & -6.4 \\ \hline C2:& \multicolumn{2}{c||}{At least 5 light jets} & 180.3 & 1221.4 & 1398.1 & -3.6 \\ \hline C3:& \multicolumn{2}{c||}{Cut on $H_T>500$ GeV} & 89.8 & 491.2 & 566.9 & -14.1 \\ \hline \hline \end{tabular} \caption{Cut flow of the cross sections for signal (BP2 in the 2HDM Type III) and irreducible background at the 14 TeV LHC. Conjugate processes are included here. \label{tab:cutflow_BP2}} \end{center} \end{table} \section{Conclusions} In this paper, we have assessed whether interference effects involving heavy charged Higgs signals appearing via $W^\pm b\bar b$ final states at the LHC, both amongst themselves and in relation to the irreducible background, can be sizable and thus affect ongoing experimental searches. We have taken as reference models to perform our analysis two $Z_2$ symmetric 2HDMs, the Type II and Flipped versions, as well as the Type III one. We have then prepared the corresponding parameter space regions amenable to phenomenological investigation by enforcing both theoretical (i.e., unitarity, perturbativity, vacuum stability, triviality) and experimental (i.e., from flavour physics, null and successful Higgs boson searches at the Tevatron and LHC, EW precision observables from LEP and SLC) constraints. We have finally proceeded to simulate the relevant signal processes via $bg\to tH^-$ (+ c.c.)
scattering with the charged Higgs state decaying via $H^-\to W^- h,A,H\to W^- b\bar b$ or $H^-\to \bar t b\to W^- b\bar b$ (+ c.c. in all cases) and the irreducible background given by $bg\to t W^- b\bar b$ topologies. The motivation for this is that signals and background are treated separately in current approaches. Indeed, such separate treatments may be invalidated by the fact that, on the one hand, a heavy charged Higgs state can have a large width and, on the other hand, this can also happen for (some of) the neutral Higgs states emerging from its decays. Clearly, a prerequisite for such interference effects to set in is that such widths are large enough, say, 10\% or so, which we have verified here to be the case. While the phenomenology we have investigated could well occur in the other decay chains in suitable regions of the parameter space, we have chosen to single out here $H^-\to W^- A\to W^- b\bar b$, as it is the one that is most subject to interference effects with the irreducible background, at least in the 2HDM Type II, Flipped and Type III setups adopted. In fact, the latter are generally predominant over interference effects amongst the different decay patterns of the $H^\pm$ signal. After performing a sophisticated MC simulation, we have seen that such interference effects can be very large, even of ${\cal O}(100\%)$, both before and after $H^\pm$ selection cuts are enforced, and mostly negative. This appears to be the case for all masses tested, from 300 to 500 GeV or so, in both the 2HDM II and Flipped as well as Type III, the more so the larger the $H^\pm$ and $A$ masses (and, consequently, their widths). Remarkably, after all cuts are applied, the shapes of the analysed signal and interference (with the irreducible background) are essentially identical in all kinematical observables relevant to the signal extraction, as the selection drives these two components of the total cross section to be very similar.
These findings therefore imply that current and, especially, future LHC sensitivities to heavy charged Higgs boson signals in $W^- b\bar b$ final states require an `inclusive' rescaling of the event yields, as the `exclusive' shape of the signal is roughly unchanged after such interference effects are accounted for. \acknowledgments RB was supported in part by the Chinese Academy of Sciences (CAS) President's International Fellowship Initiative (PIFI) program (Grant No. 2017VMB0021). The work of AA, RB, SM and RS is funded through the grant H2020-MSCA-RISE-2014 No. 645722 (NonMinimalHiggs). SM is supported in part through the NExT Institute and the STFC Consolidated Grant ST/P000711/1. PS is supported by the Australian Research Council through the ARC Center of Excellence for Particle Physics (CoEPP) at the Terascale (grant no. \ CE110001004). RS is also supported in part by the National Science Centre, Poland, the HARMONIA project under contract UMO-2015/18/M/ST2/00518. \clearpage
\section{Introduction} Quasars are among the most luminous objects in the observable universe, associated with rapidly accreting supermassive black holes. We classify AGNs depending on how they are viewed: either the detected radiation allows us to ``see'' the nucleus directly (Type-1 AGNs), or the radiation is partially obscured due to the presence of the dusty torus lying between the observer and the source (Type-2 AGNs). In Type-1 AGNs, the continuum emission is dominated by the energy output in the optical-UV band that comes from the accretion disk which surrounds a supermassive black hole \cite[e.g.][]{c87,cap2015}, and the broad emission lines in the optical-UV, including the Fe${\mathrm{II}}$ pseudo-continuum and the Balmer component, are usually considered to be coming from the Broad Line Region (BLR) clouds. The spectral properties of the broad band spectra and the line emissivities are strongly correlated (\citealt{bg92,sul00,sul02,sul07,yip04,sh14,sun15}). The Principal Component Analysis (PCA) is a powerful tool that allows one to extract the dominant correlations, which can be used to identify the quasar main sequence. This sequence is analogous to the stellar main sequence in the Hertzsprung-Russell diagram. The quasar main sequence was suggested to be driven mostly by the Eddington ratio (\citealt{bg92,sul00,sh14}), but also by the additional effects of the black hole mass, viewing angle and intrinsic absorption (\citealt{sh14,sul00,kura09}). Among the significant correlations, the leading component, related to Eigenvector 1 (EV1), is dominated by the anti-correlation between the Fe${\mathrm{II}}$ optical emission and the [OIII] line; EV1 alone contained 30\% of the total variance. The parameter R$_{\mathrm{FeII}}$, which strongly correlates with EV1, is the ${\mathrm{FeII}}$ strength, defined to be the ratio of the equivalent width of ${\mathrm{FeII}}$ to the equivalent width of ${\mathrm{H\beta}}$.
\par We postulate that the true driver behind the R$_{\mathrm{FeII}}$ is the maximum of the temperature in a multicolor accretion disk, which is also the basic parameter determining the broad band shape of the quasar continuum emission. The hypothesis seems natural because the spectral shape determines both broad band spectral indices as well as emission line ratios, and has already been suggested by \cite{b07}. We expect an increase in the maximum of the disk temperature as the R$_{\mathrm{FeII}}$ increases. According to Figure 1 from \cite{sh14}, an increase in R$_{\mathrm{FeII}}$ implies an increase in the Eddington ratio or a decrease in the mass of the black hole. We expect that this maximum temperature depends not only on the Eddington ratio \citep{c06}, but on the ratio of the Eddington ratio to the black hole mass (or, equivalently, on the ratio of the accretion rate to the square of the black hole mass). \section{Basic Theory} The spectral energy distribution (SED) for a typical quasar reveals that most of the quasar radiation comes from the accretion disk and forms the Big Blue Bump (BBB) in the optical-UV (\citealt{c87, rich06}), and this thermal emission is accompanied by an X-ray emission coming from a hot, optically thin, mostly compact plasma, frequently referred to as a corona (\citealt{c87,haa91,Fabian2015}). The ionizing continuum emission thus consists of two physically different spectral components.
We parametrize the BBB component by the maximum of the disk temperature, which, according to the standard Shakura-Sunyaev accretion disk model, is related to the black hole mass and the accretion rate \begin{equation} \mathrm{T}_{\mathrm{BBB}} = \left[\frac{3\mathrm{GM}\dot{\mathrm{M}}}{8\pi \sigma \mathrm{r}^3}\left(1 - \sqrt{\frac{\mathrm{R}_{\mathrm{in}}}{\mathrm{r}}}\right)\right]^{0.25} = 1.732\times 10^{19} \left(\frac{\dot{\mathrm{M}}}{\mathrm{M}^{2}}\right)^{0.25}, \label{eq:01} \end{equation} where $\mathrm{T}_{\mathrm{BBB}}$ - maximum temperature corresponding to the Big Blue Bump; G - gravitational constant; M - black hole mass; $\dot{\mathrm{M}}$ - black hole accretion rate; r - radial distance from the centre; $\mathrm{R_{in}}$ - radius corresponding to the innermost stable circular orbit. $\mathrm{M}$ and $\dot{\mathrm{M}}$ are in cgs units. A similar formalism has been used by \cite{b07}, although the coefficient differs by a factor of 2.6 from Eq. (1). This maximum is achieved not at the innermost stable orbit around a non-rotating black hole (3R$_{\mathrm{Schw}}$) but at 4.08$\bar{3}$ R$_{\mathrm{Schw}}$. The SED component peaks at the frequency \begin{equation} \nu_{\mathrm{max}} \sim \left[\frac{\frac{\mathrm{L}}{{\mathrm{L}_{\mathrm{Edd}}}}}{\mathrm{M}}\right]^{\mathrm{0.25}},\label{eq:02} \end{equation} where $\nu_{\mathrm{max}}$ - frequency corresponding to $\mathrm{T}_{\mathrm{BBB}}$; L - accretion luminosity $\left (=\eta \dot{\mathrm{M}} \mathrm{c}^2 \right )$; $\mathrm{L_{\mathrm{Edd}}}$ - Eddington limit $\left (= \mathrm{\frac{4\pi GMm_{p}c}{\sigma_{T}}}\right )$, where $\mathrm{m_{p}}$ - mass of a proton, $\sigma_{\mathrm{T}}$ - Thomson cross section. \par We use a power law with a fixed slope ($\alpha_{uv}$) for the accretion disk spectrum with an exponential cutoff which is determined by the value of $\mathrm{T}_{\mathrm{BBB}}$.
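The location of this maximum follows from the dimensionless shape of the Shakura-Sunyaev profile, $T^4(r)\propto r^{-3}\,(1-\sqrt{R_{\rm in}/r})$. A brute-force numerical sketch (our own check, not part of the modelling pipeline) recovers the quoted $4.08\bar{3}\,R_{\mathrm{Schw}}$:

```python
# Locate the maximum of the Shakura-Sunyaev temperature profile
# T^4(r) ~ r^-3 * (1 - sqrt(R_in/r)) to check the claim that it sits
# at (49/36) R_in = 4.083... R_Schw for R_in = 3 R_Schw.
def t_profile(x):
    """Dimensionless T^4 shape as a function of x = r / R_in (x > 1)."""
    return x**-3 * (1.0 - x**-0.5)

def argmax_radius(lo=1.0001, hi=10.0, steps=200000):
    """Grid search for the x maximizing t_profile on [lo, hi]."""
    best_x, best_t = lo, t_profile(lo)
    for i in range(steps + 1):
        x = lo + (hi - lo) * i / steps
        t = t_profile(x)
        if t > best_t:
            best_x, best_t = x, t
    return best_x

x_max = argmax_radius()
print(3.0 * x_max)  # in units of R_Schw; analytic value 49/12 = 4.0833...
```

Analytically, $\mathrm{d}/\mathrm{d}x\,[x^{-3}(1-x^{-1/2})]=0$ gives $x=(7/6)^2=49/36$, i.e. $r_{\rm max}=49/12\,R_{\mathrm{Schw}}$.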
The X-ray coronal component shape is defined by the slope ($\mathrm{\alpha_{x}}$) and has an X-ray cut-off at 100 keV (\citealt{fra02} and references therein). The relative contribution is determined by fixing the broad band spectral index $\mathrm{\alpha_{ox}}$, and finally the absolute normalization of the incident spectrum is set assuming the source bolometric luminosity. We fix most of the parameters, and $\mathrm{T}_{\mathrm{BBB}}$ is then the basic parameter of our model. \par Some of this radiation is reprocessed in the BLR, which produces the emission lines. In order to calculate the emissivity, we need to assume the mean hydrogen density ($\mathrm{n_H}$) of the cloud and a limiting column density (N$_\mathrm{H}$) to define the outer edge of the cloud, and we use here a single cloud approximation. The ionization state of the clouds also depends on the distance of the BLR from the nucleus. We fix it using the observational relation by \cite{b13} \begin{equation} \left(\frac{\mathrm{R}_{\mathrm{BLR}}}{1\;\mathrm{lt-day}}\right) = 10^{\left[1.555 + 0.542\; \mathrm{log}\left(\frac{\lambda \mathrm{L}_{\lambda}}{10^{44} \;\mathrm{erg\;s}^{-1}}\right)\right]}.\label{eq:03} \end{equation} The values for the constants considered in Equation 3 are taken from the Clean $\mathrm{H \beta\; R_{BLR} - L}$ model from \citet{b13}, where $\lambda$ = 5100 \AA. \section{Results and Discussions} In \cite{panda17}, we checked the dependence of R$_{\mathrm{FeII}}$ on the maximum of the accretion disk temperature, T$_\mathrm{{BBB}}$, at constant values of L$_\mathrm{bol}$, $\mathrm{\alpha_{uv}}$, $\mathrm{\alpha_{ox}}$, $\mathrm{n_H}$ and $\mathrm{N_H}$. Here, we refer to the approach in \cite{panda17} as Method-1 (M1). \par In M1, the source bolometric luminosity is fixed, $\mathrm{L_{bol}}$ = $\mathrm{10^{45} \;erg\; s^{-1}}$, with accretion efficiency $\epsilon$ = 1/12, since we consider a non-rotating black hole in the Newtonian approximation (see Eq.~1).
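Equation (3) is straightforward to evaluate; a minimal sketch (function name ours), with $\lambda L_\lambda$ in erg s$^{-1}$ and the radius in light-days:

```python
import math

def r_blr_lt_days(l5100_erg_s):
    """Radius-luminosity relation of Eq. (3) (Bentz et al. Clean H-beta
    model): log(R_BLR / 1 lt-day) = 1.555 + 0.542 log(lambda*L_lambda/1e44),
    with lambda*L_lambda at 5100 A in erg/s."""
    return 10.0 ** (1.555 + 0.542 * math.log10(l5100_erg_s / 1e44))

print(round(r_blr_lt_days(1e44), 2))  # ~35.89 lt-day at L5100 = 1e44 erg/s
```

The relation is a pure power law, so a factor $10^2$ in luminosity rescales the radius by $10^{2\times 0.542}\approx 12.1$.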
This determines the accretion rate, $\mathrm{\dot M}$. For a range of disk temperature from $5\times 10^4\;$K to $5\times 10^5\;$K, we computed the range of black hole mass to be between [$2.2\times 10^6\; \mathrm{M_{\odot}}$, $2.2\times 10^8\; \mathrm{M_{\odot}}$] using Eq. (1). From the incident continuum, we estimated the $\mathrm{L_{5100\AA}}$ which we then used as an input to derive the $\mathrm{R_{BLR}}$ using Eq. (3). The optical-to-X-ray spectral index, $\mathrm{\alpha_{ox}}$, was fixed at -1.6, which specifies the optical-UV and X-ray luminosities. This allowed us to determine the normalization of the X-ray bump. The resulting two-power law SED was constructed with an optical-UV slope, $\mathrm{\alpha_{uv}}$ = -0.36, and an X-ray slope, $\mathrm{\alpha_{x}}$ = -0.91 \citep{roz14}, and their corresponding exponential cutoffs (the optical-UV cutoff is determined from the $\mathrm{T_{BBB}}$ while for the X-ray case it was fixed at 100 keV). We tested two cases by changing the mean hydrogen density from (i) $\mathrm{n_H = 10^{10}\; cm^{-3}}$ to (ii) $\mathrm{n_H = 10^{11} \;cm^{-3}}$, keeping the hydrogen column density, $\mathrm{N_H = 10^{24} \;cm^{-2}}$, in accordance with \cite{bv08}. For simplicity, we dropped the X-ray power-law component in the further M1 computations. Knowing the irradiation, we computed the intensities of the broad Fe${\mathrm{II}}$ emission lines using the corresponding levels of transitions present in CLOUDY 13.05 \citep{f13}. We calculated the Fe${\mathrm{II}}$ strength (R$_{\mathrm{FeII}}$ = EW$_{\mathrm{Fe{II}}}$ / EW$_{\mathrm{H\beta}}$), which is the ratio of Fe${\mathrm{II}}$ EW within 4434-4684 \AA $\;$to broad H$\beta$ EW. This prescription is taken from \cite{sh14}. All the simulations have been performed without including microturbulence.
\par In Method-2 (M2), we now make three important modifications: (i) we allow for the presence of the hard X-ray power law, (ii) we fix the Eddington ratio (i.e., $\mathrm{L_{bol}/L_{Edd} = 1}$) instead of a constant bolometric luminosity, (iii) we use the observational relation between the UV and X-rays to obtain $\alpha_{\mathrm{ox}}$. Fixing the Eddington ratio provides us with a relation between the mass accretion rate ($\mathrm{\dot{M}}$) and black hole mass ($\mathrm{M_{BH}}$). For the same range of maximum of the disk temperature as in M1, we calculate the black hole masses using Eq. (1), which lie in the range [$6.06\times 10^5\; \mathrm{M_{\odot}}$, $6.06\times 10^9\; \mathrm{M_{\odot}}$]. We then determine the normalisation factor by matching the integrated optical-UV spectrum to the bolometric luminosity (which we calculate for each case from the range of $\mathrm{M_{BH}}$ as mentioned above). We then compute the values of $\mathrm{L_{2500\AA}}$ and $\mathrm{L_{5100\AA}}$ from the incident continuum. We use the $\mathrm{L_{2500\AA}}$ as the $\mathrm{L_{UV}}$ in the Eq. (1) from \cite{lusso17} to compute the $\mathrm{L_{X}}$ at 2 keV: \begin{equation} \begin{array}{rl} (\log{\mathrm{L_X}}-25) = & (0.610\pm 0.019)(\log{\mathrm{L_{UV}}}-25)\, + \\ & (0.538\pm 0.072)[\log{\mathrm{v_{FWHM}}}-(3 + \log{2})] + (-1.978\pm 0.100) \end{array} \end{equation} Here, $\mathrm{v_{FWHM}}$ is estimated using the corresponding $\mathrm{M_{BH}}$, $\mathrm{R}_{\mathrm{BLR}}$ and assuming a virial factor, f=1. Subsequently, the value of $\mathrm{\alpha_{ox}}$ is determined, i.e., $\mathrm{\alpha_{ox}} = -0.384\,\log\left(\frac{\mathrm{L_{2500}}}{\mathrm{L_{2keV}}}\right)$. \par The results from the photoionization modeling for both the methods (M1 and M2) are shown in Fig.1. \begin{figure}[h!]
\begin{center} \includegraphics[height=13cm, width=8cm, angle=270]{rfe2_cat.eps} \end{center}\caption{Comparison between $\mathrm{R}_{\mathrm{Fe{II}}}$ - T$_{\mathrm{BBB}}$ for four different constant-density single cloud models. The blue ($\mathrm{n_H = 10^{10}\; cm^{-3}}$) and red ($\mathrm{n_H = 10^{11}\; cm^{-3}}$) dotted curves are for M1, the cyan ($\mathrm{n_H = 10^{10}\; cm^{-3}}$) and orange ($\mathrm{n_H = 10^{11}\; cm^{-3}}$) curves are for M2. Vertical red lines indicate the range of disk temperature considered [$5\times 10^4\;$K, $5\times 10^5\;$K] for which the black hole mass varies from $6.06\times 10^5\; \mathrm{M_{\odot}}$ to $6.06\times 10^9\; \mathrm{M_{\odot}}$.}\label{fig:R_Tbb_2} \end{figure} In M1, the dependence of $\mathrm{R_{FeII} - T_{BBB}}$ is monotonic (the red curve with points) for the case (ii), while we see a turnover for the trend in case (i) at $1.45\times 10^4\;\mathrm{K}$ (the blue curve with points), although the monotonic trend reappears after this turnover and continues to the limit of the $\mathrm{T_{BBB}}$ considered for that case, i.e., $10^{7.5}\; \mathrm{K}$. This upper limit for the maximum of the disk temperature is solely considered for the modelling. In \cite{panda17}, we overplotted $\mathrm{R_{FeII}}$ values obtained from two observational data sets on the modelled trends. We found that the modelled trends did not reproduce the observational results. This motivated us to re-evaluate the trends by adopting the prescription of M2. \par In M2, we test the same two cases of changing $\mathrm{n_H}$ from (iii) $10^{10} \;\mathrm{cm^{-3}}$ to (iv) $10^{11} \;\mathrm{cm^{-3}}$, keeping the hydrogen column density at $\mathrm{N_H = 10^{24} \;cm^{-2}}$. For both these cases, we now clearly see the turnover in the trend between $\mathrm{R_{FeII} - T_{BBB}}$ close to $10^6\; \mathrm{K}$, which was absent in case (i) and whose presence was speculative in case (ii).
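The M2 determination of $\mathrm{\alpha_{ox}}$ can be sketched numerically as follows, keeping only the central values of Eq. (4) and assuming base-10 logarithms of luminosities in cgs units; the function names are ours:

```python
import math

def log_lx(log_luv, log_vfwhm):
    """Central values of Eq. (4) (Lusso et al. relation), errors dropped:
    log L_X at 2 keV from log L_UV at 2500 A and log of the FWHM velocity
    (in units such that the pivot is 3 + log10(2))."""
    return (25.0 + 0.610 * (log_luv - 25.0)
            + 0.538 * (log_vfwhm - (3.0 + math.log10(2.0)))
            - 1.978)

def alpha_ox(log_l2500, log_l2kev):
    """alpha_ox = -0.384 * log10(L_2500 / L_2keV)."""
    return -0.384 * (log_l2500 - log_l2kev)
```

For instance, a source at the velocity pivot with $\log L_{\rm UV}=30$ gives $\log L_X = 25 + 0.610\times5 - 1.978 = 26.072$ and hence $\alpha_{\rm ox}\approx -1.51$.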
In the considered range of $\mathrm{T_{BBB}}$, we now have a proportional dependence of $\mathrm{R_{FeII}}$ on $\mathrm{T_{BBB}}$ (the vertical red lines depict the said range). For values of $\mathrm{T_{BBB}} \leq 2.24\times 10^4\; \mathrm{K}$, we again see a rising trend, which we suspect is due to the variation in the value of $\mathrm{\alpha_{ox}}$, which is biased by the observationally-derived Eq.(4), where the authors \citep{lusso17} preferentially selected bluer candidates. For the considered $\mathrm{T_{BBB}}$ range the $\mathrm{R_{FeII}}$ lies within [0.75, 5.53] for $\mathrm{n_H}$ = $10^{10} \;\mathrm{cm^{-3}}$, and [0.66, 3.01] for $\mathrm{n_H}$ = $10^{11} \;\mathrm{cm^{-3}}$. In case (iv) we see that the FeII strength goes down by a factor 2 compared to case (iii), i.e., FeII emission is suppressed with rising mean density. The maximum value of $\mathrm{R_{FeII}}$ obtained in M2 is 6.48 at $\mathrm{T_{BBB}} = 1.26\times 10^6\; \mathrm{K}$, corresponding to $\mathrm{M_{BH}}$ as low as $1.5\times 10^4\; \mathrm{M_{\odot}}$. From a preliminary analysis of the \cite{shen11} SDSS DR7 quasar catalog, when the sample is z-corrected (i.e. $0.1 \leq \mathrm{z} \leq 0.9$) and errors in determining $\mathrm{FeII}$ and $\mathrm{H\beta}$ fluxes are kept within 20$\%$, we do have the maximum value of $\mathrm{R_{FeII}}$ at 6.56. \bibliographystyle{ptapap}
\section{Introduction} After the discovery of the Higgs boson at the LHC in 2012~\cite{Htn,Htn1}, many improved measurements confirmed the consistency of its quantum numbers and couplings with the Standard Model (SM) predictions, including the loop-induced coupling $h\gamma\gamma$~\cite{Khachatryan:2016vau,HiggsExconstrain1}. Meanwhile, another loop-induced coupling, $hZ\gamma$, related to the decay $h\rightarrow Z\gamma$, has not yet been measured, even though the predicted decay rate is of the same order as that of $h\rightarrow \gamma\gamma$ in the SM case~\cite{hzgapredict}. The partial decay width of $h\rightarrow Z\gamma$ was calculated within the SM framework and its supersymmetric extension~\cite{hzga1,hzga2,Bardin:1991dp,hzgamssm,HHunter,Cao:2013ur,Belanger:2014roa,Hammad:2015eca}. From the experimental side, this decay channel is now being searched for at the LHC by both the CMS and ATLAS collaborations~\cite{hzgaex1,hzgaex2, hzgaex3}. This channel is also widely discussed in the context of planned experimental projects, at the LHC as well as at future $e^+e^-$ and even 100~TeV proton-proton colliders~\cite{hzgaexim1, hzgaexim2}. While the effective coupling $h\gamma\gamma$ is now very strictly constrained experimentally, the coupling $hZ\gamma$ might still differ significantly from the SM prediction in certain SM extensions because of the $Z$ boson couplings with new particles. Studies of the decay of the SM-like Higgs boson $h\rightarrow Z\gamma$ affected by the presence of new fermions and charged scalars were performed in several models beyond the SM (BSM) having the same SM gauge group~\cite{hzgamssm, HzgaTHD, GMmodel, hdcay1,Dev:2013ff}.
At the one loop level, the amplitude of the decay $h\rightarrow Z\gamma$ contains also contributions from new gauge boson loops of the BSM models constructed from larger electroweak gauge groups, such as the left-right (LR), 3-3-1, and 3-4-1 models~\cite{lr1, lr2, lr3, lr4, lr5, lr6, g331, g331a, g331b, g331c, g331d, g331e, g331f,g331g, g341,g341a}. Calculating these contributions is rather difficult in the usual 't~Hooft-Feynman gauge, because of the appearance of many unphysical states, namely Goldstone bosons and ghosts, which always exist along with the gauge bosons. They create a very large number of Feynman diagrams. In addition, their couplings are indeed model dependent; therefore it is hard to construct general formulas determining vector loop contributions in the 't~Hooft-Feynman gauge. This problem has been mentioned recently~\cite{GMmodel} in a discussion of the Georgi-Machacek model, where only new Higgs multiplets are added to the SM. The reason is that the new Higgs bosons will change the couplings of unphysical states with the gauge bosons $Z$ and $W^\pm$. In the left-right models predicting new gauge bosons that contribute to the amplitude of the decay $h\rightarrow Z\gamma$, previous calculations in this gauge were also model dependent \cite{hzgalr1,hzgalr2}. An approach introduced recently in Ref. \cite{Goodsell:2017pdq} for calculating the decay $h\rightarrow Z\gamma$, with the help of numerical computation packages, may be more convenient. The technical difficulties caused by unphysical states vanish if calculations are done in the unitary gauge. There the number of Feynman diagrams as well as the number of necessary couplings becomes minimal, namely only those which contain physical states are needed. Then the Lorentz structures of these couplings are well defined, and hence the general analytic formulas of one-loop contributions from gauge boson loops can be constructed.
But in the unitary gauge we face complicated forms of the gauge boson propagators, which generate many dangerous divergent terms. Fortunately, many of them are excluded by the condition of an on-shell photon in the decay $h\rightarrow Z\gamma$. The remaining ones will vanish systematically when loop integrals are written in terms of the Passarino-Veltman (PV) functions \cite{PVfunc}. This situation will be demonstrated in this work explicitly. Moreover, the choice of the unitary gauge allows us to derive general analytic formulas for one-loop contributions involving various gauge bosons to the amplitude of the decay $h\rightarrow Z\gamma$. The formulas will be given in terms of standard PV functions defined by Ref.~\cite{PVDenner} and in the LoopTools library~\cite{LoopTools}. The analytic forms of these PV functions are also presented so that our results can be compared with the earlier results calculated independently in specific cases. In addition, the analytic formulas can be implemented into numerical stand-alone packages without dependence on LoopTools. Our results can be translated into the general analytic form used to calculate the amplitudes of the charged Higgs decay $H^{\pm}\rightarrow W^{\pm}\gamma$, which is also an interesting channel predicted in many BSM models. Our results can also be easily compared with those given recently in~\cite{GMmodel}, which were calculated in the 't~Hooft-Feynman gauge. Moreover, our results can be cross-checked with another one-loop formula expressing new gauge boson contributions in the gauge-Higgs unification (GHU) model~\cite{hzgaGHU}. The decay $H\rightarrow Z\gamma$ of the new heavy neutral Higgs boson $H$ in the SM supersymmetric model was also mentioned in \cite{Belanger:2014roa}. The signal strength of this decay was shown to be very sensitive to the parameters of the model, hence it may give interesting information on the parameters once it is detected.
Many other BSM models also contain heavy neutral Higgs bosons $H$, and the one-loop amplitudes of their decays $H\rightarrow Z\gamma$ may include many significant contributions that do not appear in the case of the SM-like Higgs boson. Some of the complicated contributions are usually ignored on the basis of qualitative estimates. The analytic formulas introduced in this work are sufficient to assess these approximations more quantitatively. Apart from the above BSM models with non-Abelian gauge group extensions, there are BSM models with additional Abelian gauge groups \cite{Holdom:1985ag,Babu:1996vt}. These models predict new kinetic mixing parameters between Abelian gauge bosons, which appear in the couplings of the neutral physical gauge bosons including the SM-like one, for example see \cite{Cassel:2009pu}. Our calculations in the unitary gauge are also applicable, with the only condition that the couplings of the physical states are determined. Our paper is organized as follows. Section~\ref{Feyrule} will give the general notations and Feynman rules necessary for calculation of the width of the decay $h\rightarrow Z\gamma$ in the unitary gauge. In section~\ref{analytic} we present important steps of the derivation of the analytic formula for the total contribution of gauge boson loops. We also introduce all other one-loop contributions from possible new scalars and fermions appearing in BSM models. In section~\ref{comparision}, the comparison of our results with previous ones will be discussed, including the case of charged Higgs decays. We will emphasize the contributions from gauge boson loops both in decays of neutral CP-even and charged Higgs bosons. Our result will be applied to discuss two particular models in section~\ref{application}. In the Conclusions we will highlight important points obtained in this work. In the first Appendix, we review notations of the PV functions given by LoopTools and their analytic forms used in other popular numerical packages.
Two other Appendices contain detailed calculations of the one-loop fermion contributions to the amplitude $h\to Z\gamma$ and the relevant couplings in the LR models discussed in our work. \section{\label{Feyrule} Feynman diagrams and rules} The amplitude of the decay $h\rightarrow Z\gamma$ is generally defined as \begin{align} \mathcal{M}(h\rightarrow Z\gamma)&\equiv \mathcal{M}\left(Z_{\mu}(p_1),\gamma_{\nu}(p_2),h(p_3)\right) \varepsilon^{\mu*}_1(p_1) \varepsilon^{\nu*}_2 (p_2)\nonumber \\ &\equiv \mathcal{M}_{\mu\nu}\varepsilon^{\mu*}_1 \varepsilon^{\nu*}_2, \label{GenAm} \end{align} where $\varepsilon^{\mu}_1$ and $\varepsilon^{\nu}_2$ are the polarization vectors of the $Z$ boson and the photon $\gamma$, respectively. The external momenta $p_{1}$, $p_{2}$, and $p_{3}$ satisfy the condition $p_3=p_1+p_2$ with the directions denoted in figure~\ref{loopHd} where one-loop Feynman diagrams contributing to the decay are presented. Only diagrams which are relevant in the unitary gauge are mentioned. The on-shell conditions are $p_1^2=m_Z^2$, $p_2^2=0$, and $p_3^2=m_h^2$. \begin{figure}[t] \centering \includegraphics[width=15cm]{Fig_hZgamma}\\ \caption{One-loop diagrams contributing to the decay $h\rightarrow Z\gamma$, where $f_{i,j}$, $S_{i,j}$ and $V_{i,j}$ are fermions, Higgs, and gauge bosons, respectively.}\label{loopHd} \end{figure} The decay amplitude is generally written in the following form~\cite{hzgamssm}: \begin{align} \mathcal{M}_{\mu\nu}\equiv F_{00}\, g_{\mu\nu}+ \sum_{i,j=1}^2F_{ij} p_{i\mu}p_{j\nu}+ F_{5}\times i\epsilon_{\mu\nu\alpha\beta} p_{1}^{\alpha}p_{2}^{\beta}, \label{mmunu} \end{align} where $\epsilon_{\mu\nu\alpha\beta}$ is the totally antisymmetric tensor with $\epsilon_{0123}=-1$ and $\epsilon^{0123}=+1$~\cite{peskin}. The equality $\varepsilon^{\nu*}_2 p_{2\nu}=0$ for the external photon implies that $F_{12,22}$ do not contribute to the total amplitude (\ref{GenAm}). 
In addition, the $\mathcal{M}_{\mu\nu}$ in eq.~(\ref{mmunu}) satisfies the Ward identity, $p_{2}^{\nu}\mathcal{M}_{\mu\nu}=0$, resulting in $F_{11}=0$ and~\cite{hzgamssm} % \begin{equation} F_{00}=- (p_1.p_2) F_{21}=\frac{(m_Z^2-m_h^2)}{2}F_{21}. \label{Wardconsequence} \end{equation} Hence the amplitude (\ref{GenAm}) can be calculated through the form~(\ref{mmunu}) via the following relations \begin{align} \mathcal{M}(h\rightarrow Z\gamma)&= \mathcal{M}_{\mu\nu}\varepsilon^{\mu*}_1 \varepsilon^{\nu*}_2,\nonumber \\ % \mathcal{M}_{\mu\nu}&= F_{21}\left[-(p_2.p_1) g_{\mu\nu} +p_{2\mu}p_{1\nu}\right]+ F_{5}\times i\epsilon_{\mu\nu\alpha\beta} p_{1}^{\alpha}p_{2}^{\beta}.\label{amp1} \end{align} The partial decay width can then be presented in the form~\cite{HHunter,GMmodel} \begin{equation} \Gamma(h\rightarrow Z\gamma)=\frac{m_h^3}{32\pi} \times \left(1-\frac{m_Z^2}{m_h^2}\right)^3\left(|F_{21}|^2 +|F_5|^2\right). \label{GaHZga1} \end{equation} The above formula shows that we only need to find the two scalar coefficients $F_{21}$ and $F_5$ in eq.~(\ref{amp1}). Because $F_5$ arises only from chiral fermion loops, for gauge boson loops it is enough to pay attention to terms proportional to $F_{21}p_{2\mu}p_{1\nu}$. Therefore the calculations are simplified, especially in the unitary gauge. Using the notations of the PV functions~\cite{PVfunc}, we will determine explicitly which terms contribute to $F_{21}p_{2\mu}p_{1\nu}$, and hence exclude irrelevant terms step by step throughout our calculations. The calculation of the factor $F_{21}$ is particularly convenient because it does not receive contributions from diagrams containing counterterm vertices. The Lorentz structures of the counterterm vertices are shown in figure~\ref{counterVer}.
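The kinematics and the width formula above can be checked with a short numerical sketch. The helper below builds the external momenta in the $h$ rest frame, verifies the relation $p_1\cdot p_2=(m_h^2-m_Z^2)/2$ used in eq.~(\ref{Wardconsequence}), and evaluates eq.~(\ref{GaHZga1}); the form-factor values fed to it are arbitrary placeholders, not model predictions.

```python
import math

# Masses in GeV (illustrative values).
m_h, m_Z = 125.1, 91.1876

# h rest frame: p3 = p1 + p2, with p1^2 = m_Z^2 and p2^2 = 0.
E_Z = (m_h**2 + m_Z**2) / (2 * m_h)   # Z energy
p   = (m_h**2 - m_Z**2) / (2 * m_h)   # |p1| = photon energy
p1  = (E_Z, 0.0, 0.0,  p)
p2  = (p,   0.0, 0.0, -p)

def dot(a, b):
    """Minkowski product with metric (+,-,-,-)."""
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

# Kinematic relation behind eq. (Wardconsequence): p1.p2 = (m_h^2 - m_Z^2)/2
assert abs(dot(p1, p2) - (m_h**2 - m_Z**2) / 2) < 1e-6

def width_h_Zgamma(F21, F5, m_h=m_h, m_Z=m_Z):
    """Partial width of eq. (GaHZga1); F21, F5 in GeV^-1, result in GeV."""
    return m_h**3 / (32 * math.pi) * (1 - m_Z**2 / m_h**2)**3 \
        * (abs(F21)**2 + abs(F5)**2)
```

The width scales quadratically in each form factor, so doubling $F_{21}$ (with $F_5=0$) quadruples $\Gamma$, which the assertions below use as a consistency check.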
\begin{figure}[t] \centering \includegraphics[width=12cm]{Fig_Counterterm}\\ \caption{Counterterm vertices and related one-loop diagrams contributing to the one-loop amplitude of the decay $h\rightarrow Z\gamma$.}\label{counterVer} \end{figure} The first line represents three additional counterterm vertices. The second line shows two more diagrams. The total amplitude is the sum of the three diagrams 1, 4, and 5 in figure~\ref{counterVer} and all diagrams shown in figure~\ref{loopHd}. We can see in figure~\ref{counterVer} that the first diagram contributes only to $F_{00}$. In the unitary gauge, the propagator of a gauge boson is \begin{equation} \Delta^{\mu\nu}(k^2,m^2) =\frac{-i}{k^2-m^2}\left(g^{\mu \nu}-\frac{k^{\mu}k^{\nu}}{m^2} \right). \label{prounitary} \end{equation} The Lorentz structures of the two remaining counterterms are \begin{align*} i\mathcal{M}^{\mathrm{CT}}_{(4)\mu\nu}&\sim g_{\mu\alpha}\times \left( g^{\alpha\alpha'}- \frac{p_{2}^{\alpha} p_{2}^{\alpha'}}{m_Z^2}\right) \times \left( g_{\alpha'\nu} C_{1ZA} +p_{2\alpha'}p_{2\nu}C_{2ZA} \right)\nonumber \\ &= g_{\mu\nu} C_{1ZA} + p_{2\mu}p_{2\nu} \left( C_{2ZA}- \frac{C_{1ZA}}{m_Z^2}\right),\nonumber \\ i\mathcal{M}^{\mathrm{CT}}_{(5)\mu\nu}&\sim (p_3 +p_2)_{\mu} \times \left(p_{2\nu}C_{S_iA}\right)= (p_1 +2 p_{2})_{\mu} p_{2\nu} C_{S_iA}, \end{align*} which contribute only to $F_{00}$, $F_{12}$, and $F_{22}$. The result for the Lorentz structures is unchanged if the virtual gauge boson $Z$ in diagram~4 is replaced with the new ones in gauge-extended versions of the SM. As a result, $F_{21}$ is not affected by counterterms, so we do not need to include them in our calculation. In addition, $F_{21}$ is finite without including the related counterterm diagrams. A similar situation in two Higgs doublet models was discussed in~\cite{HzgaTHD}. Examples of the Lorentz structures of counterterms were also given, e.g., in refs.~\cite{PVDenner,grace}.
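As a quick consistency check, the propagator~(\ref{prounitary}) must invert the Proca kinetic operator $(k^2-m^2)g_{\mu\lambda}-k_{\mu}k_{\lambda}$. A minimal numerical sketch (factors of $i$ stripped, arbitrary off-shell test values):

```python
# Metric (+,-,-,-) as nested lists; all values below are arbitrary test inputs.
g = [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]
k_up = [1.0, 0.3, -0.5, 0.7]   # k^mu, off-shell
m2 = 4.0                        # m^2

k_low = [g[i][i] * k_up[i] for i in range(4)]        # k_mu (metric is diagonal)
k2 = sum(k_low[i] * k_up[i] for i in range(4))       # k^2

# Proca kinetic operator with lower indices: P_{mu la} = (k^2 - m^2) g_{mu la} - k_mu k_la
P = [[(k2 - m2) * g[i][j] - k_low[i] * k_low[j] for j in range(4)] for i in range(4)]
# Unitary-gauge propagator with upper indices, factors of i stripped:
D = [[(g[i][j] - k_up[i] * k_up[j] / m2) / (k2 - m2) for j in range(4)] for i in range(4)]

# The contraction P_{mu la} D^{la nu} must give delta_mu^nu.
PD = [[sum(P[i][l] * D[l][j] for l in range(4)) for j in range(4)] for i in range(4)]
assert all(abs(PD[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(4) for j in range(4))
```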
\begin{table}[t] \begin{tabular}{|c|c|} \hline Vertex& Coupling \\ \hline $h\overline{f_i}f_j$& $-i\left(Y_{hf_{ij}L}P_L + Y_{hf_{ij}R}P_R\right)$\\ \hline $hS^{Q}_iS^{-Q}_j$, $hS^{-Q}_iS^{Q}_j$& $-i\lambda_{hS_{ij}}$, $-i\lambda^*_{hS_{ij}}$\\ \hline $h(p_0) S^{-Q}_i(p_-)V^{Q\mu }_j $, $h(p_0)S^{Q}_i(p_+)V^{-Q\mu}_j$ & $ig_{hS_iV_j} (p_0-p_{-})_{\mu}$, $-ig^*_{hS_iV_j} (p_0-p_{+})_{\mu}$ \\ \hline $h V^{-Q\mu }_iV^{Q\nu}_j$, $h Z^{\mu }Z^{\nu}$ & $ig_{hV_{ij}} g_{\mu\nu}$, $ig_{hZZ} g_{\mu\nu}$ \\ \hline $A^{\mu}\overline{f_i}f_i$, $A^{\mu}S^{Q}_iS^{-Q}_i$& $ie\,Q\gamma_{\mu}$, $ie\,Q(p_{+}-p_{-})_{\mu}$\\ \hline $A^{\mu}(p_0) V^{ Q\nu}_i(p_+)V^{-Q\lambda}_i(p_-)$& $-ie Q\Gamma_{\mu\nu\lambda}(p_{0}, p_+,p_-)$ \\ \hline $Z^{\mu}\overline{f_i}f_j$ & $i\left(g_{Zf_{ij}L}\gamma_{\mu}P_L+ g_{Zf_{ij}R}\gamma_{\mu} P_R\right)$\\ \hline $Z^\mu S^{Q}_i(p_+)S^{-Q}_j(p_-)$& $ig_{ZS_{ij}}(p_{+}-p_{-})_{\mu}$\\ \hline $ Z^{\mu }V^{Q\nu }_iS^{-Q}_j$, $Z^{\mu}V^{-Q\nu}_iS^{Q}_j $& $ig_{ZV_iS_j}\, g_{\mu\nu}$, $ig^*_{ZV_iS_j}\, g_{\mu\nu}$ \\ \hline $Z^{\mu}(p_0) V^{Q\nu}_i(p_+)V^{-Q\lambda }_j(p_-)$& $-ig_{Z V_{ij}}\Gamma_{\mu\nu\lambda}(p_{0}, p_+,p_-)$ \\ \hline $Z^{\mu} A^{\nu } V^{Q\alpha}_iV^{-Q\beta}_j$& $-ie\,Q\,g_{ZV_{ij}}\left(2 g_{\mu\nu}g_{\alpha\beta} - g_{\mu\alpha}g_{\nu\beta}- g_{\mu\beta}g_{\nu\alpha}\right)$ \\ \hline \end{tabular} \caption{Couplings involving the decay of CP even neutral Higgs $h\rightarrow Z\gamma$, in the unitary gauge. A new notation is $\Gamma_{\mu\nu\lambda}(p_{0}, p_+,p_-)\equiv (p_0-p_+)_{\lambda} g_{\mu\nu} +(p_+-p_-)_{\mu} g_{\nu\lambda} +(p_--p_0)_{\nu} g_{\lambda\mu}$, where all momenta are incoming, and $p_{0,\pm}$ are respective momenta of $h$ and charged gauge and Higgs bosons with electric charges $\pm Q$, denoted as $V_{i,j}^{\pm Q}$ and $S_{i,j}^{\pm Q}$, respectively. 
The general case of four-gauge-boson coupling is $(2,-1,-1)\rightarrow (a_1,a_2,a_3)$ and $g_{Z\gamma V_{ij}}\neq e\,Q\,g_{ZV_{ij}}$.}\label{tVcoupling} \end{table} The Feynman rules used in our calculations are listed in table~\ref{tVcoupling}. We find that they appear commonly in many gauge extensions of the SM, for example in the models constructed from the following electroweak gauge symmetries: $SU(2)_1\times SU(2)_2\times U(1)_Y$, $SU(2)_L\times SU(2)_R\times U(1)_Y$, and $SU(3)_L\times U(1)_X$~\cite{gaugExtent221, Roitgrund:2014zka, gaugExtent221b, gaugExtent331,gaugExtent331a,C0f}, where an important relation $g_{Z\gamma V_{ij}}=e\,Q\,g_{ZV_{ij}}$ is valid. As a consequence, many complicated terms containing dangerous divergences in the two contributions from diagrams 5 and 6 in figure~\ref{loopHd} cancel each other out. Following LoopTools~\cite{LoopTools}, figure~\ref{loopHd} defines three internal momenta $q,q_1,q_2$ as follows \begin{eqnarray} \label{02} q_1=q+k_1=q-p_1, \quad q_2=q+k_2=q-(p_1+p_2),\quad p_1=-k_1, \quad p_2=k_1-k_2. \end{eqnarray} Our formulas will be written in terms of common well-defined PV functions. Moreover, we can compare our results with previous works and perform numerical estimates with the help of the LoopTools library. Definitions and notations for the PV functions are shown in Appendix~\ref{Looptoolnote}. As discussed above, we only need to calculate the coefficient $F_{21}$. In the next section, we present the important steps in deriving the contributions of pure gauge boson loops to $F_{21}$. \section{\label{analytic} Analytic formulas} \subsection{Total contribution from diagrams with pure gauge boson mediations} Here we consider the calculation of the contribution from pure gauge boson loops to the decay amplitude of $h\rightarrow Z\gamma$. All calculations were performed using the FORM language~\cite{form1,form2}.
Other contributions from diagrams which contain only one or two internal gauge boson lines are computed more easily. The contribution from diagram~5 of figure~\ref{loopHd} reads \begin{align} \label{M5Vij} i \mathcal{M}_{(5)\mu \nu} &= 2 \times \int \frac{d^dq}{(2 \pi)^d} (ig_{hV_{ij}}\,g_{\alpha \beta}) \frac{-i}{D_0} \left( g^{\alpha \alpha'} - \frac{q^\alpha q^{\alpha'}}{m_1^2} \right) \times \left[-ig_{ZV_{ij}}\Gamma_{\mu\alpha'\lambda}(-p_1,q,-q_1)\right] \nonumber \\ &\times \frac{-i}{D_1} \left( g^{\lambda \rho} - \frac{q_1^\lambda q_1^{\rho}}{m_{2}^2} \right)\times \left[-ie\,Q\, \Gamma_{\nu\rho\delta}(-p_2,q_1,-q_2)\right] \times \frac{-i}{D_2} \left(g^{\delta \beta}-\frac{q_2^\delta q_2^\beta}{m_{2}^2} \right) \nonumber \\ &= 2e\,Q\, g_{hV_{ij}}\,g_{ZV_{ij}} \int \frac{d^dq}{(2 \pi)^d} \frac{1}{D_0D_1D_2}V_{1\mu\beta\lambda}V_{2\nu}^{\beta\lambda}, \end{align} where $m_{1,2}\equiv m_{V_{i,j}}$, $D_0=q^2-m_1^2$, $D_{1,2}=q^2_{1,2}-m_2^2$, \begin{align} V_{1\mu\beta\lambda}&=g_{\alpha \beta}\left( g^{\alpha \alpha'} - \frac{q^\alpha q^{\alpha'}}{m_1^2} \right)\Gamma_{\mu\alpha'\lambda}(-p_1,q,-q_1),\nonumber \\ V_{2\nu}^{\beta\lambda}&=\left( g^{\lambda \rho} - \frac{q_1^\lambda q_1^{\rho}}{m_{2}^2} \right)\times \left[\Gamma_{\nu\rho\delta}(-p_2,q_1,-q_2)\right] \left(g^{\delta \beta}-\frac{q_2^\delta q_2^\beta}{m_{2}^2} \right).\label{V12munu} \end{align} We note that the factor of 2 in the first line of eq.~(\ref{M5Vij}) accounts for the two different diagrams with opposite directions of the internal lines in the loop. This is possible because the coupling constants $g_{hV_{ij}}$ and $g_{ZV_{ij}}$ are real numbers in all models that we consider here. Based on the structure of the PV functions, we know that $F_{21}p_{2\mu}p_{1\nu}$ gets contributions from parts having the following factors: $q_{\mu}q_{\nu}$, $q_{\mu}p_{1\nu}$, $p_{2\mu}q_{\nu}$, and $p_{2\mu}p_{1\nu}$.
This means that we can do the following replacements in the calculation: \begin{eqnarray} q_{1\mu}&\rightarrow& q_{\mu},\quad q_{2\mu}\rightarrow q_{\mu}-p_{2\mu},\quad q_{2\nu}\rightarrow q_{\nu}-p_{1\nu}=q_{1\nu},\nonumber \\ k_{1\mu}&\rightarrow& 0, \quad k_{2\mu}\rightarrow -p_{2\mu}, \quad k_{1\nu},\, k_{2\nu}\rightarrow -p_{1\nu},\quad g_{\mu\nu}\rightarrow0. \label{ht}\end{eqnarray} After some intermediate steps shown in Appendix~\ref{detailsAmp}, and using the relations $q^2=D_0+m_1^2$ and $q_{1,2}^2=D_{1,2}+m^2_2$, we have \begin{align} i\mathcal{M}_{(5)\mu\nu}&\rightarrow \left[e\,Q\, g_{hV_{ij}}\,g_{ZV_{ij}}\right]\times \int \frac{d^dq}{(2 \pi)^d} \times\frac{1}{m_1^2m_2^2}\nonumber \\ &\times \left\{q_{\mu}q_{\nu} \left[-\frac{1}{D_2} -\frac{1}{D_0} +\frac{2(m_1^2-m_2^2+m_Z^2)}{D_1D_2} +\frac{m_1^2+m_2^2+m_h^2}{D_0D_2} \right.\right. \nonumber \\ &\left.\left. +\frac{8(d-2)m_1^2m_2^2 +2\left(m_1^2+m_2^2+m_h^2\right)(m_1^2+m_2^2-m_Z^2)}{D_0D_1D_2}\right]\right.\nonumber \\ &+\left. q_{\mu}p_{1\nu}\left[\frac{1}{D_2} +\frac{1}{D_0} -\frac{2(m_1^2-m_2^2+m_Z^2)}{D_1D_2} -\frac{5 m_1^2+3m_2^2 +m_h^2}{D_0D_2} \right.\right. \nonumber \\ &\left.\left.+\frac{2(m_1^2+m_2^2-m_Z^2)}{D_0D_1} -\frac{8(d-2)m_1^2m_2^2 +2\left(m_1^2+m_2^2+m_h^2\right)(m_1^2+m_2^2-m_Z^2)}{D_0D_1D_2}\right]\right.\nonumber \\ &+\left.p_{2\mu}q_{\nu} \left[-\frac{4 m_1^2}{D_1D_2} +\frac{2m_1^2+4m_2^2}{D_0D_2} - \frac{4(m_1^2-m_2^2)(m_1^2+m_2^2-m_Z^2)}{D_0D_1D_2}\right]\right.\nonumber \\ &\left.+p_{2\mu}p_{1\nu} \left[\frac{4m_1^2}{D_1D_2} +\frac{2m_1^2}{D_0D_2} +\frac{4m_1^2(m_1^2 +3m_2^2 -m_Z^2)}{D_0D_1D_2}\right] \right\}. \label{F21_5} \end{align} The needed contribution from diagram 6 in figure~\ref{loopHd} is derived in the same way; see details in Appendix~\ref{detailsAmp}. Diagram 7 does not give any contribution.
We can see that many divergent terms related to $q_{\mu}q_{\nu}$ in the two amplitudes~(\ref{F21_5}) and~(\ref{iM6u1}) of diagram~6 cancel each other out when they are summed. Hence, the pure gauge boson loops give the following total contribution: % \begin{align} \mathcal{M}_{(5+6)\mu\nu}&\rightarrow e\,Q\, g_{hV_{ij}}\,g_{ZV_{ij}} \int \frac{d^dq}{(2 \pi)^d} \times\frac{1}{m_1^2m_2^2}\nonumber \\ &\times \left\{q_{\mu}q_{\nu} \left[\frac{2(m_1^2-m_2^2+m_Z^2)}{D_1D_2} \right.\right. \nonumber \\ &\left.\left. +\frac{8(d-2)m_1^2m_2^2 +2\left(m_1^2+m_2^2+m_h^2\right)(m_1^2+m_2^2-m_Z^2)}{D_0D_1D_2}\right]\right.\nonumber \\ &+\left. q_{\mu}p_{1\nu}\left[\frac{1}{2D_2} +\frac{1}{2D_0} -\frac{2(m_1^2-m_2^2+m_Z^2)}{D_1D_2} -\frac{7(m_1^2+m_2^2) +m_h^2}{2D_0D_2} \right.\right. \nonumber \\ &\left.\left.+\frac{2(m_1^2+m_2^2-m_Z^2)}{D_0D_1} -\frac{8(d-2)m_1^2m_2^2 +2\left(m_1^2+m_2^2+m_h^2\right)(m_1^2+m_2^2-m_Z^2)}{D_0D_1D_2}\right]\right.\nonumber \\ &+\left.p_{2\mu}q_{\nu} \left[ -\frac{1}{2D_2} -\frac{1}{2D_0}-\frac{4 m_1^2}{D_1D_2} +\frac{7(m_1^2+m_2^2) +m_h^2}{2D_0D_2}\right.\right. \nonumber \\ &\left.\left. - \frac{4(m_1^2-m_2^2)(m_1^2+m_2^2-m_Z^2)}{D_0D_1D_2}\right]\right.\nonumber \\ &\left.+p_{2\mu}p_{1\nu} \left[\frac{4m_1^2}{D_1D_2} +\frac{4m_1^2(m_1^2 +3m_2^2 -m_Z^2)}{D_0D_1D_2}\right] \right\}. \label{F21_56} \end{align} % Based on Appendix~\ref{Looptoolnote}, expression~(\ref{F21_56}) can be presented explicitly in terms of the PV functions, $\mathcal{M}_{(5+6)\mu\nu}= \mathcal{M}_{(5+6)\mu\nu}(B_{0,\mu,\nu,\mu\nu}, C_{0,\mu,\nu,\mu\nu})\times 1/(16\pi^2)$.
In addition, to keep only the parts with factor $p_{2\mu}p_{1\nu}$ we can use the following replacements: \begin{align} &A^{(0)}_{\mu,\nu},\,A^{(1)}_{\mu},\,B^{(1)}_{\mu,\mu\nu} \rightarrow 0,\quad \left\{ A^{(2)}_{\mu},B^{(2)}_{\mu},\,B^{(12)}_{\mu}\right\} \rightarrow \left\{ A^{(2)}_0,-B^{(2)}_1,\frac{B^{(12)}_0}{2} \right\}p_{2\mu},\nonumber \\ % & A^{(1,2)}_{\nu},\,B^{(1,2)}_{\nu},\,B^{(12)}_{\nu}\rightarrow\left\{A^{(1,2)}_0,\,-B^{(1,2)}_1,\, B^{(12)}_0\right\}p_{1\nu},\quad B^{(12)}_{\mu\nu}\rightarrow\frac{B^{(12)}_0}{2}p_{2\mu}p_{1\nu},\nonumber \\ % &C_{\mu}\rightarrow -C_2\,p_{2\mu},\, C_{\nu}\rightarrow -(C_1+C_2)p_{1\nu}, \, C_{\mu\nu}\rightarrow (C_{12}+C_{22}) p_{2\mu} p_{1\nu}. \end{align} Then, the total contribution from $V_i-V_{j}-V_j$ gauge boson loops is \begin{align} F_{21,V_{ijj}}&=\frac{2e\,Q\,g_{hV_{ij}}\,g_{ZV_{ij}}}{16\pi^2} \nonumber \\ &\times\left\{ \left[8+\frac{(m_1^2+m_2^2+m_h^2)(m_1^2+m_2^2-m_Z^2)}{m_1^2m_2^2}\right] \left(C_{12}+C_{22}+C_{2}\right)\right. \nonumber \\ &\left. +\frac{2(m_1^2-m_2^2)(m_1^2+m_2^2-m_Z^2)}{m_1^2m_2^2}(C_1+C_2) +\frac{2(m_1^2+ 3m_2^2-m_Z^2)C_0}{m_2^2} \right\},\label{F21Vijj} \end{align} where all divergent PV functions have cancelled completely, and therefore we can set $d=4$. We would like to emphasize now that formula~(\ref{F21Vijj}) is written in terms of PV functions which are contained in LoopTools, and hence it can easily be evaluated numerically. Moreover, analytic expressions for the relevant PV functions have been constructed~\cite{hzgamssm,C0f}, which is enough to implement our results in existing numerical programs or to write a new stand-alone code. We would like to comment here on the more general case in which the couplings of the gauge bosons and the photon do not obey the relation $g_{Z\gamma V_{ij}}=e\,Q\,g_{ZV_{ij}}$, which helped us to cancel many divergent terms in $\mathcal{M}_{(5+6)\mu\nu}$.
The key point here is that the condition of an on-shell photon always cancels the most dangerous divergent terms in the last line of~(\ref{V2_i}). As a by-product, the final form of $\mathcal{M}_{(5+6)\mu\nu}$ can contain more PV functions with divergent parts. Fortunately, all of them are well determined and widely used in numerical computations. Before comparing our result with well-known expressions computed in specific models, we introduce for completeness the analytic formulas for the contributions from the remaining diagrams listed in figure~\ref{loopHd}. \subsection{Contributions from other diagrams in figure~\ref{loopHd}} The contributions to $F_{21}$ from the first four diagrams in figure~\ref{loopHd} are \begin{align} F_{21,f_{ijj}}&=F^{(1)}_{21}=-\frac{e\,Q\,N_c}{16\pi^2}\left[ 4\left(K^+_{LL,RR} +K^+_{LR,RL} +\mathrm{c.c.}\right) \left(C_{12} +C_{22} +C_2\right) \right. \nonumber \\ &\left.\quad\quad\quad \quad+ 2\left(K^+_{LL,RR} -K^+_{LR,RL} +\mathrm{c.c.} \right) \left(C_1 +C_2\right) +2(K^+_{LL,RR} +\mathrm{c.c.})C_0\right],\nonumber \\ F_{5,f_{ijj}}&= -\frac{e\,Q\,N_c}{16\pi^2}\left[ 2\left(K^-_{LL,RR} -K^-_{LR,RL}- \mathrm{c.c.}\right) \left(C_1 +C_2\right)-2(K^-_{LL,RR}- \mathrm{c.c.}) C_0\right], \label{F21fff}\\ F_{21,S_{ijj}}&=F^{(2)}_{21} =\frac{e\,Q\left(\lambda^*_{hS_{ij}}g_{ZS_{ij}} +\mathrm{c.c.}\right)}{16\pi^2} \left[ 4(C_{12}+C_{22} +C_2)\right],\label{F21sss}\\ F_{21,VSS}&=F^{(3)}_{21}=\frac{e\,Q\,(g^*_{hV_iS_j}g_{ZV_iS_j} +\mathrm{c.c.})}{16\pi^2}\nonumber \\ &\quad\quad\quad \quad\times \left[2 \left(1+\frac{-m_2^2+m_h^2}{m_1^2}\right)(C_{12}+C_{22} +C_2) +4(C_1+C_2 +C_0)\right],\label{F21VSS}\\ F_{21,SVV}&=F^{(4)}_{21}=\frac{e\,Q\, (g_{hV_jS_i}g^*_{ZV_jS_i} +\mathrm{c.c.})}{16\pi^2} \nonumber \\ & \quad\quad\quad \quad\times \left[2 \left(1+\frac{-m_1^2+m_h^2}{m_2^2}\right)(C_{12}+C_{22} +C_2)-4(C_1+C_2)\right], \label{F21SVV} \end{align} where $m_{1,2}\equiv m_{X,Y}$ in the loop of $F_{21,XYY}$, $N_c$ is the colour factor
coming from the $SU(3)_C$ symmetry, and the abbreviation $\mathrm{c.c.}$ stands for the complex conjugated parts. The latter are the contributions coming from diagrams with opposite directions of the internal lines with respect to the ones given in figure~\ref{loopHd}. Other relevant notations are \begin{align} \label{KLR} K^{\pm}_{LL,RR}&=m_1\left(Y_{hf_{ij}L}\, g^*_{Zf_{ij}L}\pm Y_{hf_{ij}R}\, g^*_{Zf_{ij}R}\right),\nonumber \\ K^{\pm}_{LR,RL}&=m_2\left(\pm Y_{hf_{ij}L}\, g^*_{Zf_{ij}R} + Y_{hf_{ij}R}\, g^*_{Zf_{ij}L}\right). \end{align} Details of the calculation of the fermion loop contributions $F_{21,f_{ijj}}$ are shown in Appendix~\ref{detailsAmp}. The formulas for $F_{21,S_{ijj}}$ and $F_{21,VSS}$ are easily derived. The $F_{21,SVV}$ part was computed based on the result for $V_{2\nu}^{\beta\lambda}$ in eq.~(\ref{V1_12}). All steps presented here were performed using the FORM language~\cite{form1,form2}. The formulas for $F_{21,f_{ijj}}$, $F_{5,f_{ijj}}$, and $F_{21,S_{ijj}}$ are not specific to the gauge boson mediation discussed in this work. Similar general forms can be found in many previous works, e.g., in~\cite{GMmodel, hdcay1, HzgaTHD}. All of them are easily checked to be consistent with our results, so we do not present the comparison here. We focus instead on the most important formula, $F_{21,V_{ijj}}$. \section{\label{comparision}Comparison with previous results} \subsection{The Standard Model} The contribution of $W$ bosons corresponds to $( g_{hV_{ij}},\,g_{ZV_{ij}},\,Q ) \rightarrow (g\,m_W,\, g\, c_W,\,1)$ with $m_1=m_2=m_W$, where $m_W$ is the $W$ boson mass, $g$ is the gauge coupling of the $SU(2)_L$ group, and $s_W\equiv \sin\theta_W$, $c_W\equiv \cos\theta_W$, with $\theta_W$ being the Weinberg angle.
Then formula~(\ref{F21Vijj}) reduces to the simpler form: \begin{align} F^{\mathrm{SM}}_{21,W}&=\frac{e\,g^2 m_W \,c_W}{16\pi^2} \left\{ 2\left[ 8+ \left(2 +\frac{m_h^2}{m_W^2}\right)\left(2 -\frac{m_Z^2}{m_W^2}\right) \right] \left( C_{12} +C_{22} +C_2\right) + 4\left(4- \frac{m_Z^2}{m_W^2}\right) C_0\right\} \label{SMF21Wa}\\ &= \frac{\alpha_{\mathrm{em}}\,g\,c_W}{4\pi m_W\,s_W} \left\{ \left[ 5+\frac{2}{t_2}- \left(1+ \frac{2}{t_2}\right)t^2_W \right]I_{1}(t_2,t_1)- 4(3-t_W^2)I_2(t_2,t_1)\right\}, \label{SMF21W} \end{align} where we have used $\alpha_{\mathrm{em}}=e^2/(4\pi)$, $e=g\,s_W$, $m_h^2/m_W^2=4/t_2$, $m_Z^2/m_W^2=4/t_1$, $m_Z^2/m_W^2=1/c^2_W=1+t_W^2$, and $t_W=s_W/c_W$. We also used the well-known functions $I_{1,2}(t_2,t_1)$ given in ref.~\cite{HHunter} to identify $C_{12}+C_{22} +C_2= I_{1}(t_2,t_1)/(4m_W^2)$ and $C_0=-I_2(t_2,t_1)/m_W^2$\footnote{The function $C_0$ in this special case is consistent with the one from~\cite{HzgaTHD, GMmodel}, but differs from the one in~\cite{hzgamssm} by an opposite sign.}. These identifications are proved in Appendix~\ref{specialf}. Formula~(\ref{SMF21W}) is consistent with the well-known result for the SM case given in~\cite{HHunter,GMmodel}, which has even been confirmed using various approaches \cite{Boradjiev:2017khm}. The right-hand side of eq.~(\ref{SMF21Wa}) can be proved to be completely consistent with the $W$ contribution to the amplitude of the decay $h\rightarrow \gamma\gamma$ with $g_{ZWW}\rightarrow g_{\gamma WW}=e$, in the limit $m_Z\rightarrow0$, equivalently $t_1=4m_W^2/m_Z^2\rightarrow \infty$. The analytic form of this contribution is known~\cite{HHunter, h2ga1}, namely \begin{align} F_{21,W}^{h\gamma\gamma,\mathrm{SM}}&= \frac{\alpha_{\mathrm{em}}\,g}{4\pi m_W}\left[ 2+3t_2 + 3(2t_2-t_2^2) f(t_2)\right], \label{F21wh2gaSM} \end{align} where $t_2=4m_W^2/m_h^2$ and $f(x)$ is the well-known function given in Appendix~\ref{specialf}.
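For numerical exploration, the functions $f$ and $I_{1,2}$ can be coded directly. The explicit forms below are our transcription of the standard conventions of ref.~\cite{HHunter}, restricted to real arguments $t\ge 1$ (no absorptive parts), and should be treated as a sketch rather than a validated implementation; the assertions verify the photon limit $t_1\to\infty$, in which $I_2\to t_2 f(t_2)/2$ and $I_1\to[-t_2+t_2^2 f(t_2)]/2$.

```python
import math

# Loop functions in Higgs-Hunter's-Guide-style conventions (assumed forms),
# valid for real arguments t >= 1, which covers the on-shell W-loop case.
def f(t):
    assert t >= 1.0
    return math.asin(1.0 / math.sqrt(t)) ** 2

def g(t):
    assert t >= 1.0
    return math.sqrt(t - 1.0) * math.asin(1.0 / math.sqrt(t))

def I1(a, b):
    return (a * b / (2 * (a - b))
            + a**2 * b**2 / (2 * (a - b)**2) * (f(a) - f(b))
            + a**2 * b / (a - b)**2 * (g(a) - g(b)))

def I2(a, b):
    return -a * b / (2 * (a - b)) * (f(a) - f(b))

# Photon limit t1 = 4 m_W^2 / m_Z^2 -> infinity (m_Z -> 0):
mW, mh = 80.379, 125.1
t2, t1 = 4 * mW**2 / mh**2, 1.0e8
assert abs(I2(t2, t1) - t2 * f(t2) / 2) < 1e-6
assert abs(I1(t2, t1) - (-t2 + t2**2 * f(t2)) / 2) < 1e-6
```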
The partial decay width is $\Gamma(h\rightarrow\gamma\gamma)=m_h^3/(64\pi)|F^{h\gamma\gamma,\mathrm{SM}}_{21}|^2$, where $F_{21}^{h\gamma\gamma,\mathrm{SM}}$ contains $F_{21,W}^{h\gamma\gamma,\mathrm{SM}}$. The above determination of $F_{21,W}$ depends only on diagrams with internal $W$ bosons, hence it has the same structure in both the photon and $Z$ boson cases, up to their masses and couplings with the $W$ boson. For the case of the photon we have \begin{align} C_0&=-\frac{1}{m_W^2} \lim_{t_1\rightarrow \infty}I_2(t_2,t_1)=-\frac{t_2f(t_2)}{2m_W^2}, \nonumber \\ % C_{12}+C_{22}+C_2&=\frac{1}{4m_W^2} \lim_{t_1\rightarrow \infty}I_1(t_2,t_1)=\frac{1}{8m_W^2}\left[ -t_2 +t_2^2 f(t_2)\right], \label{I12photon} \end{align} where the expression for $C_0$ is the same as the one in~\cite{C0h2ga}. Inserting the two equalities~(\ref{I12photon}) into the right-hand side of~(\ref{SMF21Wa}) with $m_Z=0$, we obtain exactly eq.~(\ref{F21wh2gaSM}). Regarding the fermionic contribution in the SM, we verify here the simple case of a single fermion without mixing and colour factors, where $m_1=m_2=m_f$ and $Y_{hf_{ij}L}=Y_{hf_{ij}R}= e\, m_f/(2m_W\,s_W)$, leading to $$K^{+}_{LL,RR}=K^{+}_{LR,RL}=K^{+*}_{LL,RR}=K^{+*}_{LR,RL}= \frac{e}{2m_Ws_W}\times m^2_f (g_{ZfL} +g_{ZfR})$$ and $ K^{-}_{LL,RR}=K^{-}_{LR,RL}=K^{-*}_{LL,RR}=K^{-*}_{LR,RL}= \frac{e}{2m_Ws_W}\times m^2_f(g_{ZfL} -g_{ZfR})$. The two formulas in (\ref{F21fff}) for the fermionic contributions become \begin{align} F^{\mathrm{SM}}_{21,f}&=-\frac{e^2\,Q}{16\pi^2m_W s_W}\times m^2_f(g_{ZfL} +g_{ZfR})\left[ 8 \left(C_{12} +C_{22} +C_2\right) +2C_0\right]\nonumber \\ &=\frac{\alpha_{\mathrm{em}}\,g}{4\pi m_W}\times\left[\frac{-2Q\left(T^{3L}_f-2 Q s^2_W\right)}{s_W\,c_W} \right] \left[ I_{1}(t_2,t_1) -I_2(t_2,t_1)\right], \nonumber \\ F_{5,f_{ijj}}&=0, \label{F21fffSM} \end{align} where $g_{ZfL}+g_{ZfR}=\left(T^{3L}_f-2 Q s^2_W\right)\times g/c_W$, and $T^{3L}_f$ is the fermion weak isospin. Formula~(\ref{F21fffSM}) coincides with the result given in~\cite{HHunter}.
At the one-loop level, the effective coupling $h\gamma\gamma$ can be calculated in the 't~Hooft-Feynman gauge \cite{Lavoura:2003xp}, which is useful for cross-checking our result when the decay $h\rightarrow \gamma\gamma$ in a particular BSM model is investigated. \subsection{Recent results} The one-loop contribution from new gauge bosons in the GHU model was given in ref.~\cite{hzgaGHU}, where the unitary gauge was used without detailed explanations. We see that the triple and quartic gauge boson couplings in this model also obey the Feynman rules listed in table~\ref{tVcoupling}, hence our formula in eq.~(\ref{F21Vijj}) is also valid. Because the final result in ref.~\cite{hzgaGHU} was written in terms of only $B_0$ and $C_0$ functions, which are independent of the choice of integration variable, it can be compared with our result. Translated into our notation, the most important relevant part in ref.~\cite{hzgaGHU} is \begin{align} F^{\mathrm{GHU}}_{21,V}&= \left(m_1^4 +m_2^4 +10 m_1^2 m_2^2\right) E_+(m_1,m_2) + \left[ (m_1^2 +m_2^2) (m_h^2-m_Z^2) -m_h^2m_Z^2\right] E_{-}(m_1,m_2) \nonumber \\ &- \left[ 4 m_1^2m_2^2 (m_h^2-m_Z^2) +2m_Z^4(m_1^2+m_2^2)\right] \left(C_0 +C'_0\right),\label{FHGU21V} \end{align} where the function $C'_0$ is obtained by exchanging the roles of $m_1$ and $m_2$, and \begin{align} E_{\pm}(m_1,m_2)=1+\frac{m_Z^2}{m_h^2-m_Z^2}\left( B^{(2)}_0- B^{(1)}_0\right) \pm (m_2^2 C_0 +m_1^2 C_0'). \label{epm} \end{align} Formula~(\ref{FHGU21V}) should be equivalent to our result, namely to the sum $F_{21,V_{ijj}}+ F_{21,V_{jii}}$. In the special case where $V_i\equiv V_j$, corresponding to $m_1=m_2=m$, we have $C_0'=C_0= -I_2(t_2,t_1)/m^2$ and $C_{12} +C_{22}+C_2=I_1(t_2,t_1)/(4 m^2)$. In this case we indeed find agreement between eq.~(3.18) of ref.~\cite{hzgaGHU} and our result, namely \begin{align} \delta F_{21}=\left.
F^{\mathrm{GHU}}_{21,V}- \left[ \frac{16\pi^2}{2e\,Q\,g_{hV_{ij}}\,g_{ZV_{ij}}}( F_{21,V_{ijj}} +F_{21,V_{jii}})\right]\times \left[- m_1^2m_2^2(m_h^2-m_Z^2)\right] \right|_{m_1=m_2}=0. \nonumber \end{align} However, the two general results are not the same, {\it i.e.}, they differ by $\delta F_{21}= -2 \left(m_1^2 C_0 + m_2^2 C_0'\right) m_Z^4$. Apart from $F_{21,V_{ijj}}$ in eq.~(\ref{F21Vijj}), our formulas are consistent with the results given in ref.~\cite{GMmodel}, which were obtained by calculating the amplitude of the charged Higgs boson decay $H^{\pm}\rightarrow W^{\pm}\gamma$ in the 't~Hooft-Feynman gauge for the Georgi-Machacek model. In our notation, $F_{21,S_{ijj}}$, $F_{21,VSS}$, and $F_{21,SVV}$ correspond to the scalar, vector-scalar-scalar, and scalar-vector-vector loop diagrams mentioned in ref.~\cite{GMmodel}. When written with the same LoopTools notations, our results and those of ref.~\cite{GMmodel} have the same form. The consistency between our results and those in ref.~\cite{GMmodel} is explained by the same Lorentz structures in the couplings of the gauge bosons $Z$ and $W^{\pm}$. An important difference is that the $W^{\pm}$ carry electric charges while the $Z$ does not. For a given diagram with $W^+$ or $W^-$ in the final state, the directions of the internal lines are fixed, hence the complex conjugated terms are allowed in the amplitude of the decay $h\rightarrow Z\gamma$, but not in that of $H^{\pm} \rightarrow W^{\pm}\gamma$. Hence, except for the pure gauge boson loop diagrams, the contributions to $h\rightarrow Z\gamma$ can be translated into those to $H^\pm \rightarrow W^{\pm}\gamma$ by excluding all complex conjugated parts. Of course, the mass $m_Z$ and the couplings of the $Z$ boson must be replaced with those of the $W^{\pm}$ bosons. This explanation can be checked directly based on our calculations given above.
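The reduction of eq.~(\ref{F21Vijj}) to the SM form~(\ref{SMF21Wa}) at $m_1=m_2=m_W$ can also be verified numerically. The sketch below transcribes both formulas, with the PV values supplied externally; the numbers used here are arbitrary placeholders standing in for LoopTools output, not physical evaluations:

```python
import math

# Braces of eq. (F21Vijj) with external PV inputs (e.g. from LoopTools).
def f21_vijj(e, Q, g_hV, g_ZV, m1, m2, mh, mZ, C0, C1, C2, C12, C22):
    S = C12 + C22 + C2
    brace = ((8 + (m1**2 + m2**2 + mh**2) * (m1**2 + m2**2 - mZ**2)
              / (m1**2 * m2**2)) * S
             + 2 * (m1**2 - m2**2) * (m1**2 + m2**2 - mZ**2)
             / (m1**2 * m2**2) * (C1 + C2)
             + 2 * (m1**2 + 3 * m2**2 - mZ**2) / m2**2 * C0)
    return 2 * e * Q * g_hV * g_ZV / (16 * math.pi**2) * brace

# SM W-loop form, eq. (SMF21Wa), used as the m1 = m2 = mW cross-check.
def f21_W_SM(e, gg, mW, cW, mh, mZ, C0, C2, C12, C22):
    S = C12 + C22 + C2
    return (e * gg**2 * mW * cW / (16 * math.pi**2)
            * (2 * (8 + (2 + mh**2 / mW**2) * (2 - mZ**2 / mW**2)) * S
               + 4 * (4 - mZ**2 / mW**2) * C0))

mW, mZ, mh = 80.379, 91.1876, 125.1
cW = mW / mZ
sW = math.sqrt(1 - cW**2)
gg = 0.65
e = gg * sW
C0, C1, C2, C12, C22 = 0.11, 0.23, 0.31, 0.41, 0.53   # placeholder PV values

lhs = f21_vijj(e, 1, gg * mW, gg * cW, mW, mW, mh, mZ, C0, C1, C2, C12, C22)
rhs = f21_W_SM(e, gg, mW, cW, mh, mZ, C0, C2, C12, C22)
assert abs(lhs - rhs) < 1e-12 * abs(rhs)
```

Note that at $m_1=m_2$ the $(C_1+C_2)$ term drops out automatically, since its coefficient is proportional to $m_1^2-m_2^2$.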
Regarding $F_{21,V_{ijj}}$, which represents the total vector loop contribution to the decay amplitude of $H^\pm\rightarrow W^{\pm}\gamma$, the explicit expression derived from eq.~(\ref{F21Vijj}) reads \begin{align} F_{21,V_{ijj}}^{H^{\pm}W^{\pm}\gamma} &=\frac{e\,Q\,g_{hV_{ij}}\,g_{WV_{ij}}}{16\pi^2} \nonumber \\ & \times \left\{ \left[8+\frac{(m_1^2+m_2^2+m_{H^{\pm}}^2)(m_1^2+m_2^2-m_W^2)}{m_1^2m_2^2}\right] \left(C_{12}+C_{22}+C_{2}\right)\right. \nonumber \\ &\left. +\frac{2(m_1^2-m_2^2)(m_1^2+m_2^2-m_W^2)}{m_1^2m_2^2}(C_1+C_2) +\frac{2(m_1^2+ 3m_2^2-m_W^2)C_0}{m_2^2} \right\},\label{F21Vijjw} \end{align} where $m_{H^\pm}$ is the charged Higgs boson mass, $g_{WV_{ij}}$ is the triple gauge coupling of the $W$ boson, and $Q$ is always the electric charge of the gauge boson $V_j$ coupling with the photon. We note that the factor of $2$ in eq.~(\ref{F21Vijj}) is no longer included. We now focus only on the part generated by the loop structures, in order to compare with the specific result given in~\cite{GMmodel}. This case corresponds to $m_1=m_Z$, $m_2=m_W=m_Zc_W$, and $m_{H^{\pm}}=m_5$ for the decay $h^{+}_5\rightarrow W^{+} \gamma$. Formula~(\ref{F21Vijjw}) now takes the following form \begin{align} F_{21,V_{ijj}}^{H_5^{\pm} W^{\pm}\gamma} &\sim \left(9 +\frac{1}{c_W^2} +\frac{m_5^2}{m_W^2}\right) \left(C_{12}+C_{22}+C_{2}\right) +2\left(\frac{1}{c_W^2}-1\right)(C_1+C_2) +2\left(\frac{1}{c_W^2} +2 \right)C_0 \nonumber \\ &= 10(C_{12}+C_{22}+C_2) +6 C_0 + \frac{m_5^2}{m_W^2}(C_{12}+C_{22}+C_2)\nonumber \\ &+ \frac{s^2_W}{c^2_W}(C_{12} +C_{22} +2C_1 +3C_{2} +2 C_0),\label{F21Vijjwa} \end{align} which differs from the result given in ref.~\cite{GMmodel} by the coefficient $10$ instead of $12$ in front of the sum $(C_{12}+C_{22} +C_2)$. We see that the two parts in our result with coefficients $m^2_5/m_W^2$ and $s^2_W/c^2_W$ are consistent with $S_{GGG}$ and $S_{XGG}$ in ref.~\cite{GMmodel}, respectively.
The difference in the remaining part might arise from a missing sign in the ghost contribution $S_{\mathrm{ghost}}$. An approach using the Feynman gauge was introduced in ref.~\cite{Goodsell:2017pdq}, where the results are implemented in numerical packages. They could be used to cross-check ours for consistency, but this is left for future work. \section{\label{application}Heavy charged boson effects on Higgs decays $h\rightarrow Z\gamma$ in BSM} Because new heavy charged gauge bosons $V^{\pm}$ and Higgs bosons $S^{\pm}$ appear in non-trivial gauge extensions of the SM, they may contribute to the loop-induced SM-like Higgs decays $h\rightarrow \gamma\gamma$ and $h\rightarrow Z\gamma$. While the couplings $hVV$ and $hSS$, involving a pair of identical virtual charged particles, always contribute to both decay amplitudes, the couplings $hWV$ and $hWS$ of the SM-like Higgs boson contribute only to the latter. These couplings may cause significant effects on Br$(h\rightarrow Z\gamma)$ in light of the very strict experimental constraints on Br$(h\rightarrow\gamma\gamma)$ \cite{Khachatryan:2016vau}. When $m^2_{X}\gg m^2_{W}$ with $X=S,V$, the loop structures of the form factors with at least one virtual $W$ boson have the interesting property that $$F'_{WX}\equiv \left|\frac{F_{21,{WXX}} +F_{21,{XWW}}}{ eQg_{hXW}g_{ZXW}/(16\pi^2)}\right|\sim F'_{W}\equiv \left|\frac{F_{21,W} }{ eg_{hWW}g_{ZWW}/(16\pi^2)}\right| \sim \mathcal{O} \left(\frac{1}{m_W^2}\right),$$ i.e., they are of the same order as the $W$ loop contribution. In contrast, the loop structure of a heavy gauge boson $F_{21,VVV}$ is $$F'_{V}\equiv \frac{F_{21,VVV}}{g_{hVV}g_{ZVV}/(16\pi^2)} \sim \mathcal{O}(m_V^{-2}),$$ which differs from the SM contribution of the $W$ boson by a factor of $m^2_{W}/m_V^2$. Numerical illustrations are shown in figure~\ref{fWX}, where $f_{W,X}\equiv F'_{WX}/F'_{W}$, $f_{V}\equiv F'_{V}/F'_W$, and $m_S=m_V$.
\begin{figure}[ht] \centering \includegraphics[width=15cm]{LWX}\\ \caption{$f_V m_V^2/m_W^2$, $f_{W,S}$ and $f_{W,V}$ as functions of the $SU(2)_R$ scale $m_V$. }\label{fWX} \end{figure} Hence, a large coupling product $g_{hWX} g_{ZWX}$ may give significant effects on the total amplitude of the decay $h\rightarrow Z\gamma$. However, the contributions arising from this part have been omitted in the literature, even for well-known models such as the left-right (LR) models and the Higgs Triplet Models (HTM). In the original LR models reviewed in \cite{gaugExtent221}, $g_{ZWW'}\sim (m_W/m_{W'})^2$, and lower bounds of a few TeV on the heavy gauge boson mass $m_{W'}$ follow from recent experiments at the LHC \cite{Aaboud:2018vgh}. As a result, these contributions may be small. In contrast, recent versions introducing different assignments of fermion representations to explain the latest experimental data on anomalies in $B$ meson decays allow lower values of $m_{W'}$, near 1 TeV \cite{lowmwp,Boucenna:2016qad}. Interesting studies on new charged gauge bosons $W'$ in left-right models \cite{Dobrescu:2015qna,Dobrescu:2015jvn,Dobrescu:2015yba} indicated that the couplings $W'Wh,\, W'WZ,\, W'H^\pm Z$ result in important decays of the $W'^{\pm}$, which are being hunted at the LHC. These couplings also contribute to the decay $h\rightarrow Z\gamma$. The gauge bosons of the gauge groups $SU(2)_{L,R}$ and $U(1)_{B-L}$ are $W^a_{L,R\mu}$ ($a=1,2,3$) and $A_{B-L\mu}$~\cite{Dobrescu:2015jvn}, respectively. The Higgs sector consists of one bidoublet $\Sigma$, which breaks the electroweak symmetry, and an $SU(2)_R$ multiplet, which breaks the $SU(2)_R\times U(1)_{B-L}$ symmetry at a higher scale. Apart from the SM-like gauge bosons $Z_{\mu}$, $W^{\pm}_{\mu}$, and the photon $A_{\mu}$, the left-right models predict new heavy gauge bosons $W'^{\pm}$ and $Z'$ with masses $m_{W'}$ and $m_{Z'}$, respectively.
The bidoublet contributes mainly to the SM-like Higgs boson, the Goldstone bosons of $Z$ and $W^{\pm}$, and a pair of singly charged Higgs bosons $H^{\pm}$ that couple to the SM-like Higgs boson. Relevant vertex factors are summarized in Table~\ref{Wpcoup}. The details of the model and the calculations are given in Appendix \ref{LRmodel}.
\begin{table}[ht]
\begin{tabular}{ccc}
\hline Vertex& SM & LR~\cite{Dobrescu:2015qna,Dobrescu:2015yba}\\
\hline $g_{hWW}\, g_{ZWW}$&$g^2\,m_{W}\,c_W$& $g_L^2\,m_{W}c_W\sin(\beta -\alpha)$ \\
\hline $g_{hW'W}g_{ZWW'}$& $-$& $g_Lg_R\, m_W \cos(\beta +\alpha) \frac{s_{\theta_+}}{c_W}$ \\
\hline $g_{hW'W'}g_{ZW'W'}$& $-$& $-g^2_R\,m_{W}\,\sin(\beta -\alpha) \frac{ s^2_W}{c_W}$\\
\hline $g_{hW^+H^-}g_{ZW^-H^+}$& $-$& $-\frac{g^2_R}{2}\,m_{W}\,c_W\sin(\beta +\alpha)\cos(2\beta) s^2_{\theta_+}$\\
\hline $g_{hW'^+H^-}g_{ZW'^-H^+}$& $-$& $-\frac{g^2_R}{2}\,m_{W}\,c_W\sin(\beta +\alpha)\cos(2\beta)$\\
\hline
\end{tabular}
\caption{Vertex factors involving charged gauge and Higgs bosons that contribute to the one-loop amplitude of the SM-like Higgs decay $h\rightarrow Z \gamma$ in the LR model, with $g\equiv g_L$ and $s_{\theta_+}\simeq \tan{\theta_+}= \frac{g_R}{g_L}\times \sin(2\beta)\epsilon^2$, where $\epsilon=m_W/m_{W'}$.}\label{Wpcoup}
\end{table}
We have used the condition $\alpha=\beta-\pi/2$ to guarantee that the coupling $hWW$ is the same as in the SM. We ignore all suppressed terms with factors of order higher than $\mathcal{O}(\epsilon^2)$, where $\epsilon=m_W/m_{W'}$ and $m_{W'}$ is the new heavy gauge boson mass, which can be regarded as the breaking scale of the $SU(2)_R$ group. The couplings of the SM-like Higgs boson discussed here are consistent with those in Refs.~\cite{Dev:2016dja, Dobrescu:2015jvn, Dobrescu:2015yba, Jinaru:2013eya}. The triple gauge couplings are also consistent with Refs.~\cite{gaugExtent221, Vijcouple}.
Because they are not affected by the fermion assignments, they can be considered in the general case, independently of the recent experimental limits. With the above assumptions, the couplings of the SM-like Higgs boson are nearly the same as those in the SM. The decay $h\rightarrow Z\gamma$ receives contributions associated with charged gauge bosons estimated as follows:
\begin{align} \label{WBoson_con} \frac{F^{\mathrm{LR}}_{21,WWW}}{F^{\mathrm{SM}}_{21,W}}&\simeq 1,\quad \frac{F^{\mathrm{LR}}_{21,W'W'W'}}{F^{\mathrm{SM}}_{21,W}}\sim -\frac{g_R^2s^2_W}{g_L^2c^2_W}\epsilon^2,\nonumber \\ \frac{F^{\mathrm{LR}}_{21,WW'W'} +F^{\mathrm{LR}}_{21,W'WW}}{F^{\mathrm{SM}}_{21,W}}&\sim \frac{g_R^2\sin^2(2\beta)}{2g_L^2c^2_W}\epsilon^2, \quad \frac{F^{\mathrm{LR}}_{21,HW'W'} +F^{\mathrm{LR}}_{21,W'HH}}{F^{\mathrm{SM}}_{21,W}} \sim \frac{g_R^2\cos^2(2\beta)}{2g_L^2}\epsilon^2, \end{align}
where $\epsilon\equiv m_W/m_{W'}$ and $\alpha\simeq \beta-\pi/2$. We can see that all quantities listed in (\ref{WBoson_con}) are of the same order, although some of them involve the tiny mixing parameter $s_{\theta_+}= \mathcal{O}(\epsilon^2)$ between the two charged gauge bosons. Hence, all of them must be taken into account. This argument differs from previous treatments, where only $F^{\mathrm{LR}}_{21,W'W'W'}$ was considered \cite{hzgalr1, hzgalr2, Maiezza:2016bzp}. The recent lower bounds on the $SU(2)_R$ scale give $\epsilon^2\le \mathcal{O}(10^{-3})$, implying that the heavy charged Higgs and gauge boson contributions discussed here are suppressed. Nevertheless, the calculation is very useful for further investigation of many other gauge extensions allowing lower new breaking scales, for example, the models belonging to the class of breaking pattern I mentioned in Ref. \cite{Vijcouple}, or recent models with breaking pattern II \cite{Boucenna:2016qad, lowmwp}.
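For orientation, the leading-order ratios in Eq.~(\ref{WBoson_con}) can be evaluated numerically. The sketch below is our own illustration (the helper name and the input values for $g_R/g_L$, $\beta$, and $m_{W'}$ are assumptions chosen for demonstration); it confirms that all of the new-boson ratios are of order $\epsilon^2$:

```python
import math

# Numerical estimate of the relative one-loop contributions in Eq. (WBoson_con),
# taken at face value from the quoted leading-order expressions.
# Input values (g_R/g_L, beta, m_W') are illustrative assumptions, not fits.

M_W = 80.4          # GeV, W boson mass
s2W = 0.231         # sin^2(theta_W)
c2W = 1.0 - s2W     # cos^2(theta_W)


def lr_ratios(gR_over_gL, beta, m_Wp):
    """Ratios of the new-boson form factors to F^SM_{21,W}, per Eq. (WBoson_con)."""
    eps2 = (M_W / m_Wp) ** 2
    r = gR_over_gL ** 2
    return {
        "W'W'W'":        -r * s2W / c2W * eps2,
        "WW'W' + W'WW":   r * math.sin(2 * beta) ** 2 / (2 * c2W) * eps2,
        "HW'W' + W'HH":   r * math.cos(2 * beta) ** 2 / 2 * eps2,
    }


# Example: g_R = g_L, tan(beta) = 1 (beta = pi/4), m_W' = 1 TeV
for name, val in lr_ratios(1.0, math.pi / 4, 1000.0).items():
    print(f"{name:>14s}: {val:+.2e}")
```

With these inputs every ratio is bounded by $\epsilon^2 \approx 6.5\times 10^{-3}$, matching the statement that all terms are of the same order and must be kept together.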
The effects of a heavy charged Higgs boson of mass $m_{H}$ from $F_{21,WSS}$ and $F_{21,SWW}$ appear in simple models like the HTM; for a review see \cite{Accomando:2006ga}. They even appear in the simplest HTM, obtained from the SM by adding only one Higgs triplet $\Delta$~\cite{Konetschny:1977bn,Cheng:1980qt,Schechter:1980gr}, which contains one singly charged and one doubly charged scalar component, as well as a neutral one with a non-zero vacuum expectation value (vev) denoted by $v_{\Delta}$. As a result, apart from the SM particles, the HTM predicts only new Higgs bosons. The factors $g_{hSW}$ and $g_{ZWS}$ arise from couplings of the singly charged Higgs bosons $S^{\pm}$ with all gauge and neutral bosons. The correlation of the two decays $h\rightarrow\gamma\gamma$ and $h\rightarrow Z\gamma$ was investigated previously, but the contributions $F_{21,WSS}$ and $F_{21,SWW}$ mentioned here were ignored in \cite{Dev:2013ff} because of the small product $g_{hSW}g_{ZWS}$, which is proportional to the small ratio $(v_{\Delta}/v)^2$ \cite{Arhrib:2011uy}, where $v=246$ GeV. The requirement that the parameter $\rho=m_W^2/(m_Z^2c_W^2)$ be close to 1 at tree level forces $v_{\Delta}$ to be small, with largest values of a few GeV~\cite{Dev:2013ff, Aoki:2011pz,Blunier:2016peh}. However, the tree-level deviation $\Delta\rho=\rho-1$ predicted by this model is negative, in contrast with the recent experimental results \cite{Patrignani:2016xqp}. Hence, loop corrections should be included in this parameter, implying that a small $v_{\Delta}$ is no longer necessary \cite{Gunion:1990dt, Accomando:2006ga}. Theoretical predictions for $v_{\Delta}\sim \mathcal{O}(10)$ GeV are still allowed \cite{Kanemura:2012rs}. The recent experimental upper bound is $v_{\Delta}<25$ GeV \cite{Agrawal:2018pci}.
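The size of this suppression is simple arithmetic: since $g_{hSW}g_{ZWS} \propto (v_{\Delta}/v)^2$, the experimental upper bound $v_{\Delta}<25$ GeV with $v=246$ GeV gives a factor of order $10^{-2}$. A one-line check (values taken from the text above):

```python
# Back-of-envelope check of the (v_Delta/v)^2 suppression of g_{hSW} g_{ZWS}
# quoted in the text, using the stated upper bound on the triplet vev.

v = 246.0        # GeV, SM vev
v_Delta = 25.0   # GeV, experimental upper bound on the triplet vev

suppression = (v_Delta / v) ** 2
print(f"(v_Delta/v)^2 = {suppression:.4f}")   # of order 10^{-2}
```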
As a result, contributions from $F_{21,SWW}$ and $F_{21,WSS}$ to the SM-like Higgs boson decay $h\rightarrow Z\gamma$ can reach values of $F_{21,W}\times \mathcal{O}(10^{-2})$, which is still far below the sensitivity of recent experiments. Hence, previous investigations~\cite{Dev:2013ff, Arbabifar:2012bd, Aoki:2011pz} that ignored $F_{21,SWW}$ and $F_{21,WSS}$ in the one-loop amplitude of the SM-like Higgs decay $h\rightarrow Z\gamma$ remain valid. On the other hand, heavy neutral bosons $H$ predicted by many BSM models may have a large product $g_{HWS}\,g_{ZWS}$, for example in the HTM \cite{Arhrib:2011uy}. In this case, the contributions of $F_{21,SWW}$ and $F_{21,WSS}$ can reach the significant value of $F_{21,WWW}\times \mathcal{O}(v_{\Delta}/v)=F_{21,WWW}\times \mathcal{O}(10^{-1})$ in the decay rate Br$(H\rightarrow Z\gamma)$, but they were ignored in previous works \cite{Arbabifar:2012bd,Chabab:2014ara, Blunier:2016peh}. The formulas introduced in this work should be used for improved calculations of the mentioned decay rates. \section{Conclusions} The decay $h\rightarrow Z\gamma$ currently attracts great interest from both the theoretical and experimental sides. It should be observed and studied soon by the LHC experiments. If a deviation from the SM prediction is found, it will be associated with new physics, implying additional contributions from exotic particles in many BSM models. In this paper, we have introduced general analytic formulas expressing the one-loop contributions from scalars, fermions, and gauge bosons to the amplitude of the decay $h\rightarrow Z\gamma$. In addition, we have shown that our results can be used to calculate the amplitude of the charged Higgs decays $H^{\pm} \rightarrow W^{\pm}\gamma$, which exist in many BSM models. Although some of these formulas were derived earlier by other groups, the general forms were not considered, in particular the contributions related to new gauge boson loops.
Our formulas are applicable to many well-known gauge-extended versions of the SM, as we discussed in detail. We stress that all one-loop contributions involving gauge bosons are calculated explicitly in the unitary gauge, so that readers can cross-check our results. Our final results are written in a convenient form: they are presented in terms of the standard Passarino-Veltman functions, which can be evaluated numerically with the help of the LoopTools library. The analytic forms of these PV functions were also discussed, so that our results can be matched to well-known formulas in several special cases as well as implemented in other numerical packages. Our results were checked to be largely consistent with several recent calculations in specific BSM models, except for the contributions from diagrams containing two different virtual gauge bosons. We believe that our results will be useful for further studies of the loop-induced decays of neutral and charged Higgs bosons, $H\rightarrow Z\gamma, W\gamma$, which have not yet been treated in many well-known BSM models. \section*{Acknowledgments} L.T. Hue thanks Dr. LE Duc Ninh for enlightening discussions and comments about divergences and counterterms. He also thanks the BLTP, JINR for financial support and hospitality during his stay, where this work was performed. The authors thank Prof. Roberto Enrique Martinez and Dr. Bhupal Dev for communications and for recommending Refs. \cite{Dev:2013ff,hzgalr1,hzgalr2}. This research is funded by the Vietnam National Foundation for Science and Technology Development (NAFOSTED) under grant number 103.01-2017.29.